Kubernetes and OpenShift | Brad Topol, Jake Kitchener & Michael Elder

This is a podcast episode titled "Kubernetes and OpenShift | Brad Topol, Jake Kitchener & Michael Elder."

In this episode of In the Open we bring you a conversation with Brad Topol, Jake Kitchener & Michael Elder. We will be discussing their new O'Reilly book, Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include the fundamental concepts of Kubernetes, as well as more advanced practices such as continuous delivery and multi-cluster management.

Dr. Brad Topol, Open Tech & Dev Advocacy CTO, @bradtopol
Jake Kitchener, Senior Technical Staff Member @ IBM, @kitch
Michael Elder, Senior Distinguished Engineer @ Red Hat, @mdelder
Joe Sepi, Open Source Engineer & Advocate, @joe_sepi
Luke Schantz, Quantum Ambassador, @IBMDeveloper, @lukeschantz

Hybrid Cloud Apps with OpenShift and Kubernetes: ibm.biz/hybridappbook

Key Takeaways:
- [00:05 - 00:24] Intro to the episode
- [05:06 - 14:45] Intro to Michael, Jake, and Brad
- [14:55 - 19:53] What their first book was about
- [26:21 - 27:53] What is the most important aspect of the book?
- [34:06 - 40:03] The concept of roles in continuous integration and delivery
- [41:40 - 46:10] Problem stories that the authors wanted to cover in the book
- [48:24 - 50:54] How do I make the choice between running open source Kubernetes myself, a Kubernetes service, or do I need OpenShift?
Chapters:
- Intro to the episode (00:19)
- Intro to Michael, Jake, and Brad (09:39)
- What their first book was about (04:57)
- What is the most important aspect of the book? (01:32)
- The concept of roles in continuous integration and delivery (05:56)
- Problem stories that the authors wanted to cover in the book (04:30)
- How do I make the choice between running open source Kubernetes myself, a Kubernetes service, or do I need OpenShift? (02:29)

Luke: In this episode, we bring you a conversation with Brad Topol, Jake Kitchener, and Michael Elder. We will be discussing their new O'Reilly book, Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include Kubernetes fundamentals, as well as more advanced practices such as continuous delivery and multi-cluster management. Before we welcome our guests, let's say hello to our co-host, Joe Sepi.

Joe Sepi: Hey, Luke. How are you?

Luke: Good. How are you doing, Joe?

Joe Sepi: I'm doing all right. I'm doing all right. It's some nice day out. We got blue skies. What's great about this is then my camera looks good. I got good color, good lights, good things all around.

Luke: Excellent lighting, I agree. It's very well lit.

Joe Sepi: Thanks. Yeah, how are things by you?

Luke: I would say I'm enjoying this weather. This is like the weather I was hoping to have in the spring. Just it's temperate. It's a cool morning, we're going to have a nice warm afternoon. Love it.

Joe Sepi: Excellent, excellent. Cool. Without further ado, let's bring in our friends.

Luke: Yeah, let's bring in our friends. Hello, Brad. Hello, Michael. Hello, Jake.

Jake Kitchener: Howdy, gents. Happy Friday.

Joe Sepi: Hello.

Brad Topol: Joe. It's good to see you again.

Joe Sepi: Nice to see you as well. Where are you all at? How's the weather by you? How about Brad, how are things where you're at?

Brad Topol: There's going to be real diversity in these three answers, I can tell. I am in basically Cary, North Carolina, a suburb of Raleigh. And the weather looks pretty nice today.

Joe Sepi: Nice. Very nice.

Brad Topol: Michael and Jake, tell him the diversity of your answers.

Michael Elder: Oh, it's great. I'm in Durham, which is completely different than Raleigh. So we get [inaudible], but we do it with more style. That's just how we roll.

Brad Topol: Yeah.

Jake Kitchener: And I'm in Raleigh. Which is just Raleigh, so yes.

Joe Sepi: This is funny, because Luke and I are both in Connecticut. I'm actually in New York right now, but we're usually like, "Oh..." But we're on the opposite side of Connecticut. He's close to the water, I'm in the mountains, so we do actually have different weather. But you guys are just all right together, huh?

Michael Elder: Exactly, exactly. We came together, working together in the same labs with IBM, which are right in Research Triangle Park. So that's how we met over the years and where the collaboration started.

Brad Topol: Yep.

Joe Sepi: Yeah, interesting. How long ago was that? And what all were you working on?

Michael Elder: I would say probably going back to 2012, I think some of the early work that Jake and I did with OpenStack, and then Brad with OpenStack as well. And then that evolved between a couple of different projects and culminated with some of the things with the book and other things.

Jake Kitchener: Yeah. It's funny, as the industry has evolved, we still occupy the same places in the industry respectively, it's just the technology that has moved on. Like everybody on the entire planet of cloud and open source, we started in OpenStack, and we all do Kube and OpenShift now. It took a couple years to complete the transition, but I'm just going to conferences and seeing the exact same people that I saw 10 years ago. It hasn't changed.

Joe Sepi: Yeah. You all evolved together, the whole platform and the technology. But that the people are there as well moving things forward.

Jake Kitchener: Yeah.

Brad Topol: Yeah. And we also got to evolve, because the three of us have now done two books together. But you can go back to that snapshot in time when we first got the three of us together and said... I came to Michael and Jake and said, "Hey, I think we got this opportunity to write a book on Kubernetes. Let's go for it." And so it's fun to go back to thinking of that first time where we were like, "Hey, can we actually write a book? Can we do this?" And then go through it once with one book, and then get the opportunity to go through it again with a second book. And then everybody's leveled up and ready for the second book, and they're not quite as fearful of the unknown as they were maybe in the first book.

Michael Elder: Now we fear all of the known in the second go around.

Jake Kitchener: Also, we're at least 50% smarter for book two than we were for book one.

Joe Sepi: That helps. That helps a lot. It's interesting too, because having worked at IBM for a few years, a lot of the stuff that I do is remote and async, and collaborating and stuff. But you three are all in the same town, so I'm curious how much... I imagine there's a bunch of it that was done asynchronously, but how much did you get together over a meal or a drink or something and kind of work through some of it?

Jake Kitchener: I mean, book two was a COVID publication. So I think we had the initial brainstorming meeting in person, I remember, in Building 502 at RTP on IBM's campus last January or February, or something?

Michael Elder: Maybe late February. Because it was [inaudible] to Red Hat.

Jake Kitchener: Yeah. And that was the last we ever got to see each other, I guess.

Brad Topol: That was right when Michael broke the news and said his whole team was moving to Red Hat. And we were like, "Oh, we got this book to do."

Joe Sepi: Well, that actually might be a good segue. I mean, we can go around the horn with a couple of guest introductions here. I know, Michael, you're at Red Hat now. Maybe you can start? And we'll just hear where you are and what you've done.

Michael Elder: Sure. I've had basically a two-decade history with IBM. I started as an intern in college with their Extreme Blue program. Which if you're watching this and you're an intern and you're looking for something fun to do, the Extreme Blue program really is a great opportunity to put your wits together with other team members and put together a great project on a fun topic. Just to highlight for anyone who's interested in that. But I started working at RTP, and then continued coming up in the areas around agile development, software development methodology, worked on the Rational dev tools. And in 2007, 2008, I was focused on how can you make it easier to deliver software to distributed servers and run applications on that middleware? And then that problem has stayed the same, as we kind of highlighted, it's just a bunch of different technologies to do it. And moved into DevOps, worked in areas around UrbanCode, which was a company that IBM acquired, helped lead that portfolio for a time. Worked a little bit with IBM Cloud and the continuous delivery service. And then started working on Kubernetes and containers, and using that as a methodology to streamline how you deliver and run middleware and applications. And then after the Red Hat acquisition, the team that I was a part of was transitioned into Red Hat because we wanted to contribute that part of the technology. So for maybe people who are outside of the IBM neighborhood, Red Hat still very much is a separate, distinct, independent entity. But there were about two or three very strategic areas where IBM contributed technology into Red Hat, that we've then been working to take open source. So we've started a project, Open Cluster Management, which is focused on making it easier to run Kubernetes and OpenShift clusters. And have been doing that now about a year and a half.

Joe Sepi: Cool. Yeah, and it's interesting too how much we do and then take it to the open source space, and kind of build there and dedicate the resources and the time to make that happen. So it's [inaudible].

Michael Elder: And really just one more quick point. When I started within IBM, I had the opportunity to be an Eclipse committer. And so from an early time in my career as a software engineer, we were contributing and working in the open source, at that time around Eclipse. But certainly that's been a long-running theme for IBM technology through what I've been able to see.

Joe Sepi: Yeah. I feel like the people who are deep in open source know IBM's commitment to open source, but I think the majority of folks doing development have no idea. And so I always use any opportunity to highlight how active we are in the open source space, and how much we open source and build on top of open source. So, that's great.

Michael Elder: Absolutely.

Joe Sepi: I'll switch over to Jake.

Jake Kitchener: Yeah. I am also a two-decade IBM-er now. I also interned with IBM in college and have been here since. My path is definitely a little bit different than Michael's. I've always been an infrastructure platform kind of person. I started off doing systems management software for IBM server hardware and things like that, which when I look back on it was the precursor to cloud. It was like, "Hey, how do we centrally manage a lot of infrastructure from one place?" Which is what cloud is doing, it's just doing it in an API-driven way where the end user gets to control some of that. And so I did that for a number of years doing development at the beginning of my career. And then moved into the IBM Systems CTO office working on OpenStack technologies, as we started our journey into cloud-based technologies. And during that tenure, I built a ridiculous Frankenstein prototype with IBM Research, of how we could build our own container orchestration engine based on OpenStack and Docker sewn together in some very mutant ways. And we all thought it was cool enough that we turned it into the real public cloud service. So that was the Bluemix container service back in 2015. And then that was where I really cut my teeth on public cloud and containers, et cetera. And from there, we evolved with the industry and started getting involved in the Kubernetes project in mid to late 2016. And now I'm the lead architect for our public cloud Kubernetes and OpenShift services, and also IBM Cloud Satellite, which is a hybrid cloud solution that is like cloud managed services: technologies where customers have the ability to bring their own infrastructure from on-premises or other cloud providers, and use IBM Cloud services on top of that platform just like they would in IBM Cloud. So you get the consistent platform services experience regardless of what infrastructure you're bringing to the table.
But the cool thing is all of this technology is rooted back to Kubernetes and now open over the past four or five years of my team's evolution. So pretty cool stuff.

Joe Sepi: Yeah, it's interesting. All sorts of product kind of stuff there, but it's all built primarily on open source technologies that you all have been working on for years, right?

Jake Kitchener: Yep. Yeah. And it's been nice from an open source perspective. During our sort of evolution as a cloud services delivery team, we've had open source projects that we've spun out of our own operations team, like some of the continuous delivery tools, like Razee.io, and some other things that we've been contributing back into the community. Into Kubernetes proper as well as other projects outside of that.

Joe Sepi: Yeah, yeah. Fascinating. There's so much stuff to dig into there, but we'll maybe come back to some of it. And let Brad do his intro as well.

Brad Topol: Yeah. So I as well have multiple decades at IBM. I've got 23 years, so is that... Michael or Jake, did you have more than 23 years, or?

Jake Kitchener: No. I think you're the most senior, Brad.

Brad Topol: Have I got the most mileage on these tires?

Jake Kitchener: I think so.

Brad Topol: So I came to IBM straight from finishing my PhD in distributed computing at Georgia Tech. And jumped right into a team that was focused on moving IBM research technologies into product. And the first product I helped get started, pulling some technologies from Watson Labs, Almaden Labs, and Tokyo Research, was a product called WebSphere Transcoding Publisher. And back in the day, we're talking 1999, the phones didn't have real browsers. They had these weird browsers and you had to convert and shrink images, and convert from HTML to WML. And so it was a lot of fun, and I did that for a while. And then I moved in and did some autonomic computing. And then started doing some automated problem determination and cross product serviceability. And I made DE, distinguished engineer, doing that. And then it was time for a new something, I really had to do something new. It was time to go find a new opportunity. It was like 2011. And IBM Software Group at the time, they did not have a lot of open source contributors to a project called OpenStack. And I was given the opportunity to take three new folks, who had never done any open source contributions, and turn them into significant open source contributors. And I wasn't one myself, I hadn't done any open source. And I just looked at the team and said, "Yeah, we're all scared to death. We have no idea what we're doing. We're going to all roll up our sleeves and we're going to figure this out." And I just told the team, "This is going to be the best thing ever for our careers. This is going to be amazing. I know you're scared, I'm scared, we're going to roll in." And that's what we did. Working on proprietary IBM software for so many years, we didn't use... I had never used Internet Relay Chat. We weren't at the time using Jenkins, or Git, or Gerrit, all these wonderful open source tools. And it was just throw people in the water and see if we could swim.
And I was real fortunate, I started with three folks. I helped get them comfortable. And then I ended up having a team of 20 contributing to OpenStack. And we all found ways to contribute to the open source project and make contributions to the community. So that went really well, that was a great ride. I ran into Michael there, right? Michael, you were doing a little work that related to an OpenStack project called Heat. And I ran into Jake as well. He was, as he said, we were on the infrastructure side. And then things started to cool down a little with OpenStack. And the next thing you know, it was 2016 and I'm on a plane heading to my first Kubernetes conference. And I looked to my left and totally at random, there's Jake. I don't know if Jake remembers? We were sitting together going, 2016, first Kubernetes conference, I think Seattle. And we both looked at each other and we said, "This is going to be really cool. This stuff looks really cool." I think Jake knew more about it than I did at the time. He was looking at me going... Probably trying to explain it to me as usual. And we had the best time going to that conference. And we looked at it and just said, "Oh, this is going to be bigger than OpenStack. We got to get into this." And then I got to do the same thing again, which was lead a large team of contributors to Kubernetes and other similar cloud-native projects like Istio and etcd. And it's been a fun ride ever since. And then, of course, along the way we got an opportunity to write our first book. I can't remember how, but through IBM and O'Reilly, we had a deal. And I said, "If you're going to write a book, Brad, find people smarter than you." And there was Michael. Michael had world-class expertise at the time in continuous delivery and agile development. He was working on IBM Cloud Private and continuous delivery. And there was Jake in public cloud, and there was me working more in the community.
I was like, "Oh, this is the perfect combination. We got the public cloud guy, we got the IBM Cloud Private on-premises person. We're going to somehow write a book and it's going to be really cool."

Joe Sepi: That's great. You got the best from all the different angles, right?

Brad Topol: Yep.

Joe Sepi: Maybe Michael, what was the first book about? And then we'll get into the second one.

Michael Elder: So at the time of the first book, we were really trying to introduce a lot more of the basics. You've got to think, that was maybe 2017. Certainly there were a lot of people in the community that were very active, but we wanted to also make it tractable for people that were maybe in larger enterprises that hadn't begun adopting Kubernetes as the replacement for how they ran and delivered software. So in that world we were used to, "I build an application, I run a document of steps to install it. Maybe I'm a little bit more advanced, I've got a little bit of automation. Maybe a little bit more advanced, I've actually got a continuous delivery pipeline that can deploy applications into a running host operating system on an ongoing basis." And now we're coming at you with this idea that, "Well, maybe the right way to do this is containers." And in 2017, everyone was still playing with Docker. Developers like Docker on their desktop, their laptop. It made it easy to pull in databases and other things that developers didn't know how to run and didn't want to know how to run. And just, "Give me a database endpoint, let me use it in my app." And Kubernetes really was an opportunity for IBM to think about, "Hey, how can we bring a lot of these concepts of what Jake and team are doing in public cloud to run applications in containers, how can we make that accessible to what our enterprise customers are doing in their data center? Can we make it more cloud-like?" And that was the idea. So Cloud Private was a distribution of Kubernetes built by IBM as a precursor or a catalyst so that IBM could containerize the entire middleware portfolio. Which was a gargantuan effort. Products like MQ that go back decades and decades, that are critical infrastructure for a lot of large applications, large customers, banks and other financial transactions.
And say, "Okay, now we're going to try and experiment with running these in containers, running databases in containers. What works, what breaks?" And so in the first book, just to bring it back around, we introduce a lot more fundamentals of, "Here's what Kubernetes is about. Here's how it takes that thing that you know as Docker and makes it more about running a highly available replicated microservice application. Here's how you can leverage it to consume IBM middleware that we've containerized, or open source middleware, leverage that from a development perspective, but also as part of your staging and QA environment, and also even out to full production."
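As a concrete illustration of that shift from "a container on my laptop" to "a replicated microservice" (an editorial sketch, not an excerpt from the book; all names like `my-app` are placeholders): a Kubernetes Deployment declares how many copies of a container should run and Kubernetes keeps that many alive, while a Service gives those copies one stable endpoint. The manifests are shown as plain Python dicts mirroring the YAML you would apply with `kubectl`.

```python
import json

# A minimal apps/v1 Deployment: ask for 3 replicas of one container image,
# and Kubernetes reschedules pods that die to keep the count at 3.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "my-app"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "my-app"}},
        "template": {
            "metadata": {"labels": {"app": "my-app"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "my-app:1.0",  # placeholder image
                        "ports": [{"containerPort": 8080}],
                    }
                ]
            },
        },
    },
}

# A Service load-balances across whichever pods carry the matching label,
# so callers use one stable name instead of chasing individual containers.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "my-app"},
    "spec": {
        "selector": {"app": "my-app"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

print(json.dumps(deployment, indent=2))
print(json.dumps(service, indent=2))
```

The key design point Michael describes is visible in the label plumbing: the Service's selector matches the Deployment's pod template labels, which is how replication and stable addressing compose without either object knowing about the other directly.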

Joe Sepi: That's interesting. Because I went through this as a consumer front end and backend developer, and was really excited to see the evolution. And I was lucky to work on some progressive teams that were really adopting stuff as it was evolving. So it sounds like that book is getting the first ones, getting people going. And early journey to moving to containerized cloud sort of stuff?

Michael Elder: Very much. And I think a lot of what's in the first book is still relevant to a lot of people who are new to it. But certainly a lot of the information around the introduction of the concepts, and how to consume content from a catalog, Helm charts as a methodology to deploy and run containerized workloads, maybe more of that's more well known. So with the second book, what we really wanted to focus on is, "Okay, Kubernetes as a term, as a community is, I think, very well understood. Clearly it's become an industry de facto standard in how you orchestrate and run containerized workloads." IBM had a managed cloud service for Kubernetes way back. We saw other clouds come along, including Amazon and Azure, and obviously Google before that. But [inaudible] may be the first pieces of community middleware that I think we've seen where multiple cloud vendors have independently run managed services around Kubernetes. And that's great. There's a lot of great things there. With the Red Hat acquisition, that gave IBM a way to think about not only how to run Kubernetes in Amazon, or in IBM Cloud, or in Azure, but how can you use one distribution of Kubernetes that's supported by the backing of all of Red Hat and the long history of Linux and Linux expertise there, and take that distribution of Kubernetes and make it applicable to any hyperscaler environment? So obviously you've got a managed cloud service of OpenShift on IBM Cloud. Red Hat as an independent partner has worked with Amazon and Azure to offer cloud services for OpenShift in those environments. And so Brad and Jake and myself wanted to take that idea: how do we help people understand the impact of having this type of portability, of having this model of open hybrid cloud that they can use in public cloud, they can use in a private data center?
What are some of the gotchas that they may not recognize as they're wading into that and they're really trying to apply it to the entire organization, to the way that they run software? In fact, maybe Jake, if you want to highlight, there's a lot of great capability content in the book you captured around the availability models and how to think about recovery times and scheduling.

Jake Kitchener: Yeah, sure. Having run public cloud services at a pretty massive scale over the past five years on top of the Kubernetes platform, I wanted to share some of the learnings and expertise that I've gained, and that I've gained through working with our clients, about what are some of the challenges that the smallest customer to the largest enterprise will encounter when trying to move to a Kubernetes platform? Kubernetes, I think, and OpenShift along with it, provide a huge amount of value in tools and capabilities that allow you to deploy and build modern applications and services that are cloud native and highly available. But I think there's a lot of things that maybe folks aren't necessarily aware of, from managing tenancy to optimizing for resource consumption. And I really tried to take some of the key learnings that I've gained within my team and with our customers over the past four years on this platform and try to share that with them in the book. Things like a lot of customers in their journey to cloud are thinking about having real measured SLAs and SLOs for their applications and services that they're really going to be held to. And IBM, for certain, has huge models that allow you to put together modeling for SLAs and projected uptime for our mainframe and similar systems. And those things are all based on hardware redundancy and MTTRs and things like that, mean time to recovery, mean time between failures. All these traditional hardware concepts that mainframe technology really brought to the forefront of the industry. And those same concepts still apply in cloud native with Kubernetes and OpenShift. And I was really trying to focus on helping people to understand, "Hey, what does it even mean to provide something that's highly available? How do you go about doing that? What are the catches that you may encounter? What are some of the best practices?
How do you make decisions about architecting applications in cloud platforms with Kube and OpenShift?" To deliver on those expectations that I think every enterprise, and every small and medium business on the planet now, is really beginning to expect. Obviously the explosion of the internet and online everything means that you can't just say, "Oh, we don't really have any customers in our shops from 8:00 PM to 8:00 AM, it doesn't matter." That's just not how it works anymore. I want to be able to jump on my iPhone at 2:30 in the morning and go order whatever dog food I need or something. And so CTOs expect something completely different from an availability perspective than we did even a decade ago. And I think there's a lot of things that we can do to help users in the community understand how do you dive into those areas and get off on the right start.
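The MTBF/MTTR arithmetic Jake refers to can be sketched in a few lines. This is a generic reliability-engineering illustration, not material from the book: steady-state availability of one component is MTBF / (MTBF + MTTR), and running N independent replicas (where the service is down only if every replica is down at once) compounds that availability quickly, which is exactly the leverage Kubernetes-style replication buys you.

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of a single component: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def replicated_availability(single: float, replicas: int) -> float:
    """Availability of N independent replicas: the service is unavailable
    only when all replicas have failed simultaneously."""
    return 1 - (1 - single) ** replicas

# Example numbers (hypothetical): a node that fails about every 1,000 hours
# and takes 2 hours to recover.
single = availability(1000, 2)                 # roughly "two nines"
three_replicas = replicated_availability(single, 3)

print(f"single replica:  {single:.6f}")
print(f"three replicas:  {three_replicas:.9f}")
```

The caveat built into the formula is the word "independent": replicas sharing a node, a zone, or a bad deploy fail together, which is why the scheduling and topology-spread discussion Jake mentions matters as much as the replica count.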

Brad Topol: And I remember when we were first getting together for that second book. And I could see right away, both Michael and Jake had this tremendous desire. They wanted to do a book that covered very advanced topics, and they both wanted to cover a lot of aspects of the operations piece that they felt hadn't been addressed in a lot of the books that cover basic Kube, right? "Oh, you do some basic Kube and here's your deployment, here's your service." But they really wanted to get in. And that's within this book, whether it's Jake covering the high availability and the resource management, or Michael doing the advanced cluster management running on multiple clusters. You saw they were both really driven to make sure that they covered those advanced topics really well, because they just felt like it hadn't been covered in other places. And then for me, what I was getting into, we were seeing this transition. We had the Red Hat acquisition, and we were moving from IBM Cloud Private to OpenShift. And what I was seeing in my day job, there wasn't a lot of understanding of what OpenShift brings to the table beyond Kubernetes. So what does it do, and what benefits does it give you, both for developers and for IT operators? And from my perspective, I wanted to make sure the book really covered that. Because OpenShift does amazing things and gives you amazing capabilities beyond Kubernetes, both for developers and IT operators. And somehow we wanted to make sure, in addition to all the other great stuff we were covering, that we cover that. To make sure you get a great feel of, "Oh, here's how I use Kubernetes." You still get Kubernetes fundamentals, a little bit of Kubernetes architecture in the book, but then you also get to contrast it with what OpenShift gives you beyond that. And that gives, I think, the reader a really broad perspective on which way they want to go, which tools they want to use.
If you look at the chapter on continuous delivery, we're covering a large, broad spectrum of continuous delivery tools. And so I think between the three of us, we did a great job of covering a lot of topics that people will find very valuable.

Luke: That is so interesting. Because one of the biggest negatives I hear about Kubernetes when I'm out there on forums is people talk about, "Yeah, it solves all these problems, but it introduces a lot of complexities." And I think there's two ways to answer that: you could go try to make every mistake yourself, or you could go talk to the experts and figure out how to learn from these things. So I really appreciate that you're able to bring the expertise of working on enterprise-level, mission-critical stuff that's run in serious business, and then you're able to share that expertise through this book. Which I want to make sure to extend to our listeners: hey, we're on here now. We'll be on Twitter later. But if you have questions right now while we're live, please feel free to drop them in the chat on whatever platform you're listening on. And if you're catching this as a podcast replay later, hey, tweet at us, drop them in the comments, we'll be able to answer your questions asynchronously.

Brad Topol: Yeah. And just to be very clear here, this is the title of the book, Hybrid Cloud Apps with OpenShift and Kubernetes, the short link there will get you to it. But it's available in all the places you would get these types of books. Definitely check that out. And thanks again... There we go, perfect.

Joe Sepi: It's got the nice animal on the front.

Brad Topol: I love it.

Michael Elder: Brad, if you can't tell, is not only the organizer and orchestrator of this band of amigos, but also the hype man. So he's the one that brings the excitement.

Joe Sepi: Hype man. Love them. Awesome, so I'm curious, I think maybe a good thing to ask is, what's the most important takeaway from the book, do you think? And I'll ask you, Jake. If you said, "If you read this book, only take one thing away..." What is the most important aspect of the book? And we'll get into some other stuff afterwards.

Jake Kitchener: Oh, wow. One thing? I guess it's more of a general thing: I think that the main takeaway is that there are a lot of challenges presented in modern cloud technologies, and Kubernetes and OpenShift can address a lot of those. But, as Luke mentioned, there's still plenty of unknowns that remain. So make sure that you get educated and you put together a plan for how you are going to leverage these technologies and these tools as a platform for accelerating your company's or your team's journey into cloud native. Don't just jump in willy-nilly. Plenty of ground has been covered already by the community, folks like ourselves and others. Learn from them, take advantage of the knowledge that we've gained from some of our mistakes and mishaps, and the evolution of the technology underneath it, before you go running off to production. As exciting as it is, I think that's one of the things... I know my team, we run fast and loose, I guess some people would say, but we've definitely learned over time. It was easy in 2015 to just say, "Oh, all of this stuff is so immature. It is what it is." But I think it's come real far, and so the expectations are much different now from the industry as a whole. And as easy as it can be to get started with Kubernetes and OpenShift, it does take real thought and expertise to make sure that you're going to put together a platform that is resilient, and highly available, and performant, and secure. And take advantage of the knowledge of others before you go shove all that fun stuff into production.

Joe Sepi: Yeah. And I think that's important too. We joke about the CNCF landscape, the actual web view of the landscape, but Kubernetes is complex in all the different aspects of it. So it makes sense to really focus on how you do that. I'm curious too, I'll ask you, Michael, and any of our guests here, feel free to redirect if someone else is the better person to answer this. But thinking about OpenShift and perhaps... I don't know how much you dive into this in the book, but I think of OpenShift as an opinionated way to do the Kubernetes platform. What do you think the book brings to that sort of conversation that people can learn from?

Michael Elder: Certainly. So I think very much in the way that Red Hat Enterprise Linux is a very opinionated view and a very supported view. Every time Red Hat delivers a capability from open source technology, not only, as a culture, as a community, is Red Hat thoroughly engaged in the upstream, contributing, helping to lead, helping to steer, helping to influence on behalf of customers. But there's also a very strong set of opinions on how Red Hat consumes and builds and supports, and puts trust into the supply chain of, that open source software. So OpenShift is very much like all the other capabilities within RHEL, focused on imbuing that trust in the code. Has it been tampered with from what's in the upstream? It's been built in a secure environment. It's been linked to a very definitive list of all the dependencies from all the places it can be pulled. And all of that source was built by Red Hat as well, all the way down to the very bare bones of the kernel. So on top of that, then we look at, okay, we have this supply chain of how we consume software. Then when OpenShift runs, OpenShift very much thinks about security. So a lot of the time, if you hear about a CVE from Kubernetes, that CVE often doesn't apply to OpenShift, because the out-of-the-box configuration disabled some part of that attack vector. There was one that had to do with host ports, host networking. If you were able to send a certain set of signals in a certain way, you might be able to grab hold of a host port or host networking access that you shouldn't have as a pod on a cluster. Particularly, that was a risk in multi-tenant clusters. And OpenShift clusters weren't susceptible to that out-of-the-box, because they'd already disabled that as one of your default options.
So it's those types of things that, particularly if you're getting started, being able to just deploy OpenShift... You can actually run it in your inaudible, you can bring your own hosts. But what's neat is when you look at the model of how OpenShift thinks about the world, it will provision your cloud infrastructure: you bring it credentials for your AWS account, run the installer under your control, and it'll create all the hosts in that cloud account for AWS or Google or Azure. And so then what's neat is if you look at... And I might go off in the weeds here, because there's some fun stuff to me. But there's this concept of operators. At the end of the day, operators and that control loop that Kubernetes runs are what differentiate it from the past generation of run-the-automation-one-time. Because that control loop is constantly reconciling: does the real world look like your desired state? And if it doesn't, it'll go through and change it. And so OpenShift applies that same concept to the way that it configures the Kubernetes platform. There's an operator that understands how to configure authentication, so that your Kubernetes API can determine, "This is Joe, this is Luke, this is Michael," this is whoever. It has an operator that configures storage and networking, so I can just declare, with a Kubernetes YAML object, with that API object, "Apply it to the cluster." And the operator will take care of bringing it into alignment. And, to me, not only is that highly opinionated in terms of how to make it easy and secure, it's also focused on applying the concepts that Kubernetes brings to the workloads and using them to manage the infrastructure. And, to me, that's a really powerful concept overall.
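As a concrete illustration of the operator-managed, declarative configuration Michael describes, OpenShift exposes cluster authentication as a Kubernetes-style API object that an operator continuously reconciles against the real cluster. A minimal sketch (the provider and Secret names are illustrative, and the exact fields may vary by OpenShift version):

```yaml
# Declarative cluster authentication config. The authentication
# operator watches this object and reconciles the running cluster
# to match the desired state it declares.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: local-users              # illustrative provider name
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: htpasswd-secret      # illustrative Secret holding the credentials file
```

Applying this manifest is all an administrator does; the operator handles the rollout, and any manual drift on the cluster is reverted back toward this desired state.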

Joe Sepi: Yeah, that sounds smart. That makes sense. Let's maybe take a moment, Luke, for some housekeeping? And then we'll dive back into the book.

Luke: Absolutely. So... What were you going to say? I think I cut you off again. I know that's my thing, to cut you off. But... Okay. So the first thing I want to mention is, if you're listening live, thanks for tuning in. We do this show twice a month. We're actually about to launch another podcast, a data science podcast, which I'm very excited about. We've got a whole bunch of other special series: App Modernization, the Tech for Good Initiative, Call for Code. We've got some stuff on hybrid cloud and OpenShift coming up, as well as... And this is a little sidebar I want to mention here: ISV builds. So independent software vendors and building is a big part of what we're doing at IBM. And it's really... I think especially the topic we're talking about today is a great example. If you are a smaller company and you want to sell to the enterprise, or you just want to have enterprise-grade systems, this is a great place to tune in. Even if your business isn't normally enterprise, tuning in to the IBM Developer Podcast will really highlight some best practices, and you can borrow from the expertise of the enterprise. You can always find our show at ibm.biz/intheopen, which has all of our past episodes embedded as videos, as well as the most current episode and live episode in the header. You can find all the back episodes of the podcasts at ibm.biz/intheopen. And yeah, please check out more podcasts. Subscribe on your platform of choice.

Joe Sepi: And of course, we should mention developer.ibm.com has all sorts of resources beyond podcasting. So check all that great stuff out. Cool. It seems to me, and I'll ask you, Brad... And I assume that the book touches on this, so maybe you can dig into it a bit. But the concept of roles in this sort of environment, like the whole landscape of continuous integration and delivery: roles are starting to become something that you really want to be mindful of as you're building out your plan and your team and everything. Do you guys touch on that at all in the book?

Brad Topol: Yeah. We certainly cover continuous delivery. And what's nice is we cover a large number of tools, and also sort of the OpenShift opinionated versions of those tools. So you can get a feel for whichever way you want to go. And on the role side, Jake really gets into that... And I'll let him cover a little bit more. But understanding how to break the users into different groups for the tenancy. And I'll let him talk more about that, that's his expertise. But then we also cover a lot of the security, which is very role-related, and what OpenShift does to make the security simpler to manage and handle. But, Jake, do you want to cover the roles part a little bit more?

Jake Kitchener: Yeah, sure. So it is interesting to look at how things have evolved over the last few years in the community, and how much role-based access control in Kube and OpenShift has become central to the technology platform. We talk quite a bit, in the first book actually, about just the fundamentals of RBAC and how it works. And then in this book we get into a little more detail about the types of permissions that users should be given access to within the cluster, based on their role in the business or the enterprise or the team. And I think this is actually one of the places where OpenShift helps out quite a bit versus box-stock Kubernetes: out-of-the-box, OpenShift provides a much more opinionated and curated set of roles that are included by default for you to be able to use for your team, and some nice tools to be able to distribute that access. But I think the big thing that we cover, and I'll let Michael add onto this after I finish, is with the proliferation... It's so funny, when Kube first started getting traction, people always talked about the VM explosion, and how do you keep track of all your VMs? And then we were like, "Oh, Kubernetes is going to make sure that we solve all of those problems for containers before you even get started. Your 10,000 containers, Kube's going to manage and orchestrate all those for you. Don't worry about it." But then a year later we were like, "Oh my gosh. Now look how many clusters we have to manage around the world." And so this is where things like advanced cluster management and some of the content in the book really get into how critical it is to have strong tools, GitOps, best practices, and things like that, to make sure that you are distributing policy and enforcing it in a uniform fashion across your entire estate. For my team, we run something like 140 Kube and OpenShift clusters that belong to us, and tens of thousands for our customers.
And yeah, I forget about how many hundreds of thousands or millions of containers are being run across that entire estate. It boggles my mind that we've made it this far, and I think it's thanks to things like the advent of GitOps as a fundamental best practice. And some of the work that Michael's team and our team have been doing around multi cluster management together really has become super critical. So it's those fundamental simple things of, "Oh, this is how you access control one clusters." Yeah, but now you have to do that for an entire enterprise worth of clusters, and applications, and policies, and different environments moving from dev to staging, and production clusters. And things like getting into geographically enforced compliance requirements, unique to things like EU Cloud or FedRAMP environments. And, man, if I could go back and talk to 2017 Jake, I'd be like, " Get ready, buddy. Because you have no idea how crazy it's about to get."
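The RBAC fundamentals Jake mentions come down to a handful of declarative Kubernetes objects. A minimal sketch of a role and a group binding of the kind discussed (the namespace, role, and group names here are illustrative):

```yaml
# A namespaced Role granting read-only access to Deployments
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a              # illustrative namespace
  name: deployment-viewer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
---
# Bind the Role to a group of users in that namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: dev-team-a-view
subjects:
  - kind: Group
    name: dev-team-a             # illustrative group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-viewer
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding have the same shape, just scoped to the whole cluster rather than a single namespace.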

Joe Sepi: Yeah. I'm sweating just thinking about it.

Michael Elder: So this is something that, when you get your hands on it, the concept is tractable. At the end of the day, we're using computers to manage computers. And we're going to tell computers on one half of the equation what we want those computers to tell the other computers to go and do. That's all it is, at the end of the day. So there's an entire chapter where we focus in on policy governance, and talk about examples where I'm going to create... I've got a subject, a user or a group of users, and I've got a role. And then Kubernetes has a concept of ClusterRole, which is just a different scope, but we'll just call it a role. And the role declares all the permissions of what any user assigned to that role can do. They can create deployments, they can view deployments, they can create but not delete or view secrets, however you're going to slice up their permissions. And in chapter seven, we go into this concept of policy governance across the fleet. Where once you make those decisions, like, "Here's how I want my roles to be crafted. Here are the groups that I want to be declared on every cluster. Here are the bindings between the groups and the roles. So this group called Dev Team A, they get bound to the developer role. And this group called Ops Team 23, they get bound to a different role." And so all of that gets wrapped up in YAML, this "yet another markup language" text. And that's what Jake's highlighting, this concept of GitOps. Because we can put that configuration in a Git repo, make it part of our continuous delivery process and how we suggest changes, through pull requests, through reviews, through testing in earlier stages. And then what we bring into the equation with a project called Open Cluster Management is focusing on how I can then take the configuration that I worked through this process, got it to where I want it to be in my Git repo, and have a hub cluster that can now deliver it out to every single attached cluster that's in my fleet.
And then report back through... Again, not only just apply the change, but report the status back. That is the Kubernetes model of continuous reconciliation. And in the book it shows a concrete example of, "Here's a policy. Here's how we apply it with a placement rule. Here's how it gets distributed out to the fleet at large," to try to make that more tractable and more consumable. So it's not magic. At the end of the day there is no magic, there's only ones and zeros.
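The fleet-wide policy flow Michael outlines takes a shape like the following in the Open Cluster Management policy framework. This is a hedged sketch: the policy, namespace, group, and binding names are illustrative, and the API versions and field layout should be checked against the project documentation for your release:

```yaml
# A hub-side Policy that enforces the presence of a RoleBinding
# on every cluster it is placed on; the hub distributes it and
# each managed cluster reports compliance status back.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-dev-team-binding     # illustrative
  namespace: policies                # illustrative hub namespace
spec:
  remediationAction: enforce         # fix drift, not just report it
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: dev-team-binding
        spec:
          remediationAction: enforce
          object-templates:
            - complianceType: musthave   # the object below must exist
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: RoleBinding
                metadata:
                  name: dev-team-a-view
                  namespace: team-a
                subjects:
                  - kind: Group
                    name: dev-team-a
                    apiGroup: rbac.authorization.k8s.io
                roleRef:
                  kind: ClusterRole
                  name: view            # built-in read-only role
                  apiGroup: rbac.authorization.k8s.io
```

A separate placement rule and placement binding (not shown) select which clusters in the fleet receive the policy, which is the "here's how we apply it with a placement rule" step from the book's example.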

Joe Sepi: Yeah. I think the concept of computers managing computers really makes sense. And having configuration files, YAML files, that's what the computers use to manage the other computers. I think it's a very digestible concept, for sure.

Jake Kitchener: I'm just disappointed there's no magic.

Joe Sepi: There's never really magic.

Brad Topol: I was going to say, also in some of the earlier chapters, Jake goes through the scheduling. And he talks about, "You can change this knob and you can change that knob. And let's see what happens." Because you might think you're making a good decision on how you're mapping priorities for your pods or what have you. And Jake shows you, "Ooh, this one may not be a good idea. And here's why." So yeah, we've got the advanced cluster and policy content, but you still also have the single-cluster availability, resource management, and scheduling that I think people will find very valuable. Just a lot of subtle details in there that maybe people didn't see, or they didn't think about the consequences of their decisions. And I don't know if Jake wants to add more to that or not.

Jake Kitchener: I always think about the consequences of my decisions, Brad. What are you talking about?

Brad Topol: Well, I know you do. Teaching everybody else.

Joe Sepi: Yeah. But it keys up a question that I had, though. I'm wondering if when the three of you working on this book, are there any instances where you saw somebody making... I don't want to say bad decisions, but maybe poorly informed choices that were problematic? And you thought, " Oh, we definitely want to get this into the book." Or any sort of problem stories that you wanted to make sure you essentially covered that you experienced?

Jake Kitchener: Oh, resource management, number one on the list. One of the things that I don't think people realize about how Kube and OpenShift work is that they don't understand anything about the real-world consumption of resources, only what you tell them you think you are going to use and the limits that you tell them to enforce. And on day one, nobody uses any of those advanced scheduling and resource management tools that are in Kubernetes, or builds the policies that Michael is talking about to go distribute and enforce those things. Kubernetes has evolved tremendously to have policy enforcement and quota management, and all these other things that help you do it. But on day one, everybody's just like, "Go run my app, whatever. Just give me all the CPU, all the memory, whatever you want." And the number of phone calls that I've gotten from customers who are like, "All of our worker nodes keep dying. And we don't really understand why." And I'm like, "But you have four cores and 16 gigs of memory, and you're running these Java apps with 32-gig heaps. That's just not going to end well." And so they quickly learn that... There are a lot of things that Kubernetes can do to help protect those nodes from bad actors, but there are limits. And customers and users are extremely good at exceeding those things. So that is far and away number one. Everybody's like, "Why are my apps always getting killed? Why don't they perform well? Why are my nodes dying all the time?" And the answer is always because you were, we'll say-

Michael Elder: You had opportunities to improve your resource consumption that you failed inaudible.

Jake Kitchener: Yeah. It's like I was looking for a nice way to say something like, " You were being lazy or ignorant." I started off exactly the same way, and my team did, and I think everybody does. Because at the beginning you're just like, " I want to see cool stuff happen." And cool stuff is running your app, and you kill pods and they get restarted automatically, and they can auto scale and all that kind of fun stuff. But it's like that resource management is by far the number one thing that I think people really need to get into. And then just it's phenomenal that the community has been able to put together the guardrails, and the tools, and the fundamentals to help put up those guardrails to prevent people from doing crazy and irresponsible things with their compute resources. So yes, very grateful for that.
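The guardrails Jake describes start with per-container resource requests and limits, which are exactly what the scheduler and the kubelet act on. A minimal sketch (the pod name, image, and values are illustrative, not a recommendation):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app                # illustrative
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # illustrative image
      resources:
        requests:               # what the scheduler reserves on a node
          cpu: "500m"
          memory: "1Gi"
        limits:                 # enforced ceilings; a container that
          cpu: "2"              # exceeds its memory limit is OOM-killed
          memory: "2Gi"
```

Without requests, the scheduler packs pods onto nodes blind; without limits, one greedy workload (like the 32-gig Java heap on a 16-gig node) can take the whole node down, which is the failure mode described above.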

Joe Sepi: Yeah. We're all humans, right? We're all kind of at some point in our journey, so it's great to have books like these that's, " I learn from my mistakes and the things that I've figured out."

Jake Kitchener: Yeah.

Joe Sepi: What about you, Michael? You got any good stories?

Michael Elder: Everything in the book comes out of some part of our collective experience between the three of us. I don't know that there's anything specifically where it's, "Let me make sure no one ever does this." But more, "There are lots of partial, mediocre ways to accomplish that goal." And I think more than anything else, I wanted people to see, "Look, here's what we think the best way is, based on all of the knowledge that we've collected." But maybe one of the stories I'll tell, based on Jake's last comment, is about the pod restarting. When we were first beginning to look at Kubernetes as a way to containerize the middleware stack and run software more efficiently in the data center, Jake had already been, along with lots of other great people, running a public cloud service around Kubernetes. But we were looking at, "Okay, can IBM distribute our own? Or ultimately acquire something to distribute our own capability for containers?" And I was doing this demo for a very senior executive who was less technically savvy. And I'm explaining, "Look, here I can see all of the containers that are running on the set of hosts. And here I can click this button and kill this pod. And then let's see if it has an effect on the example application that we have." And the person was like, "No, no, whatever you do, don't kill the pod. Don't kill the pod." I was like, "No, it'll be okay. Do you want to kill the pod?" And they were like, "Oh, I don't want to kill the pod. I do not want to touch that pod. I don't know what's going to happen when I kill that pod. Demos go south all the time when you do them live." But even that concept that Kubernetes introduced of, "Look, let's assume everything will fail. Assume that everything underneath you has an opportunity to cause you a bad day. And then let's put enough automation around it to correct for most of the things, so I don't have to get up in the middle of the night on a pager duty call and address it."
And then that's where I think Kubernetes has a lot of allure and where people get started. And then to Jake's point, " Okay, yes, my pod got restarted. But why did it get killed 48 times in the last two hours? What is going on there?" And that's where you understand a little bit more about how does Kubernetes address scheduling? And how does it address resource management? And things like that. It makes a big difference.

Joe Sepi: Yeah, yeah. No, that's interesting. I know we're running out of time here, but I'm curious, do you at all talk in the book about any sort of AI/machine learning? Taking information from what's happening in your applications and being able to tweak how you're doing any of the continuous delivery work? Is there any sort of AI/ML stuff in there?

Michael Elder: To some extent, I think we ran out of pages. We started out with a goal and then I think we ended up writing about twice as much as we originally intended. And we pulled content that we otherwise would've put in, just to make sure we didn't go above a certain price level or whatever.

Joe Sepi: So inaudible.

Michael Elder: But I think in terms of additional AI optimization, that's probably the next, less well-defined territory. Everyone's looking at that. We're looking at that both within public cloud and within our private offerings: how do we use a collection of data to provide more insights to users? And in fact, that's a capability that we built into the next version of the product, the supported product that I work on with my team, called Advanced Cluster Management. That next release, which comes out in about a month or so, will actually allow you to send some data to cloud.redhat.com and collect insights and suggestions on how to optimize that cluster and improve it. So certainly that is, I think, the next horizon. The next generation of, "Okay, we built a lot of computers that can automate other computers. They can make them recover, but now let's make it more efficient." Nobody knows what the right number is to pop in; it's this magic-number syndrome. So now can we make software to tell us what numbers to put into the other computers that are running the computers that are actually running the workload?

Luke: I have a question. I know we touched a little bit on OpenShift and how it's an opinionated stack that solves certain problems for you, like security and these things. But I guess my question is, how do I make a choice if I'm looking at, " Hey, should I just run open source Kubernetes myself? Should I use Kubernetes service somewhere? Or do I need OpenShift?" Where are the thresholds? And you can answer that question now or tell me also, " Hey, read the book." That's a valid answer.

Michael Elder: Yeah, no. I think the simple thing is, if you're going to rely on this foundational building block to run your business, then I think you want a trusted partner more than anything else. At the end of the day, you want to know that you're not consuming an open source build that somehow was exploited by yet another ransomware attack. Did it come from a place that has a sovereign, protected build environment that produced the binary that you're now running in your production context? Maybe the real choice there is, do I want a managed thing, where I consume a cloud service? Or do I want to run software on my own? And that's probably a decision that each organization has to make based on their own structure and their own level of control that they want to have over it. And certainly that's part of the reason why we've gone into OpenShift as a service running on different cloud environments, so that you have consistency. I think there's an advantage to that consistency versus picking up one distribution of Kubernetes optimized for cloud ecosystem A and a different version of Kubernetes optimized for cloud ecosystem B. The core concept, certainly, is Kubernetes, but the way that it binds into how this cloud ecosystem does identity or that cloud ecosystem does networking, that's a balance you've got to figure out. "Do I want to spend a lot of time papering over those differences? Or do I just want to have the platform give me consistency out-of-the-box?"

Brad Topol: Yeah. And just to add to that, Luke. The nice thing about OpenShift is it keeps you from running with scissors, it keeps you from hurting yourself. One of the insights that I saw is, like the other folks said, you're so happy in Kubernetes that you could get something up and running. But you didn't realize what you got up and running has a lot of attack vectors: it's running as a privileged container, you messed up the role-based security. And as you move from standard Kubernetes to OpenShift, you realize all the wonderful things that OpenShift is giving you, and the default configurations to keep you safe. And if you're an enterprise, a bank or an insurance company, you really don't want your folks experimenting to try and figure out if they've got the security right or not, because it's going to look really bad if all those credit card numbers get out to the world. And so there's just that peace of mind OpenShift gives you with, "We're going to keep you from doing silly things right out-of-the-box. And you're really going to thank us later. It might be a little annoying at first, but boy, you're going to thank us that your stuff has a lot fewer attack vectors."

Luke: That's so interesting. And you had mentioned there are all these ransomware and nation-state-level attacks happening. I just wanted to mention that one of our past guests on the IBM Developer Podcast was a technical executive at the NSA. And I saw just this week on LinkedIn that GitHub hired him as their VP of Security, for his open source expertise, for exactly this reason: these pipelines have to be secured. So it's so interesting that's what's baked into OpenShift.

Michael Elder: And even that continuous inaudible is something that OpenShift provides a lot of tools for out-of-the-box, making it easy to stand up Tekton, if you're going to use Tekton for builds, or stand up Knative if you want serverless on top of it. Those types of things as well.

Joe Sepi: Yeah. That makes sense. There are so many footguns in just diving into Kubernetes and all the things that kind of go along with it, so it makes sense to pick something that's battle-tested and built from lots of experience. So we touched briefly on the AI/ML stuff, and we're just about out of time. I'm curious, is there a third book that y'all are thinking about already? Or what's going on?

Michael Elder: Not any time soon.

Jake Kitchener: Brad, I'm sure, is already scheming what the next one is going to be. And Michael and I are basically still licking our wounds from the last book.

Brad Topol: That's right. It's just a little too soon for me to pitch ideas to my co-authors. We've just got to give them a little more breathing room, Joe. So we're just going to tiptoe through the tulips there. But one day it could happen.

Michael Elder: For sure.

Joe Sepi: Yeah. Inaudible.

Luke: Well, regardless of a new book, we'd definitely like to have you back on in a few months. Maybe you'll have some more stories to tell and things going on, because this was a really fun conversation and I think we only scratched the surface here. There's so much to talk about.

Jake Kitchener: Yeah. I would love to do that.

Michael Elder: Thanks for the opportunity to come and join.

Joe Sepi: Yeah. Thank you all for joining. I'll remind folks watching now, or watching later: hit us up on Twitter. My DMs are open. If you have questions, I'll pass them along and we'll try to surface knowledge. Definitely check the book out, so much good stuff in there: advanced techniques and approaches to Kubernetes and OpenShift. Yeah, I guess we'll call it a day. Thanks, everybody.

Luke: Appreciate it, everyone. Have a good weekend.

Joe Sepi: See you soon. Cheers.

Luke: Cheers.

DESCRIPTION

In this episode of In the Open we bring you a conversation with Brad Topol, Jake Kitchener & Michael Elder. We will be discussing their new O'Reilly book Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include the fundamental concepts of Kubernetes, as well as more advanced practices such as continuous delivery and multi-cluster management.

Dr. Brad Topol, Open Tech & Dev Advocacy CTO, @bradtopol

Jake Kitchener, Senior Technical Staff Member @ IBM, @kitch

Michael Elder, Senior Distinguished Engineer @ Red Hat, @mdelder

Joe Sepi, Open Source Engineer & Advocate, @joe_sepi

Luke Schantz, Quantum Ambassador, @IBMDeveloper, @lukeschantz

Today's Guests

Michael Elder
Distinguished Engineer/Sr. Director

Jake Kitchener
Senior Technical Staff Member, IBM

Brad Topol
IBM Distinguished Engineer, Open Technology and Developer Advocate CTO for AI and Kubernetes Technologies