Kubernetes and OpenShift | Brad Topol, Jake Kitchener & Michael Elder

This is a podcast episode titled, Kubernetes and OpenShift | Brad Topol, Jake Kitchener & Michael Elder. The summary for this episode is: <p>In this episode of In the Open we bring you a conversation with Brad Topol, Jake Kitchener &amp; Michael Elder.&nbsp; We will be discussing their new O'Reilly book Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include the fundamental concepts of Kubernetes, as well as more advanced practices such as continuous delivery and multi-cluster management. </p><p>Dr. Brad Topol, Open Tech &amp; Dev Advocacy CTO, <a href="https://twitter.com/bradtopol" rel="noopener noreferrer" target="_blank">@bradtopol</a></p><p>Jake Kitchener, Senior Technical Staff Member @ IBM, <a href="https://twitter.com/kitch" rel="noopener noreferrer" target="_blank">@kitch</a>&nbsp; </p><p>Michael Elder, Senior Distinguished Engineer @ Red Hat, <a href="https://twitter.com/mdelder" rel="noopener noreferrer" target="_blank">@mdelder</a></p><p>Joe Sepi, Open Source Engineer &amp; Advocate, <a href="https://twitter.com/joe_sepi" rel="noopener noreferrer" target="_blank">@joe_sepi </a></p><p>Luke Schantz, Quantum Ambassador, @IBMDeveloper, <a href="https://twitter.com/lukeschantz" rel="noopener noreferrer" target="_blank">@lukeschantz</a></p><p>Hybrid Cloud Apps with OpenShift and Kubernetes <a href="https://ibm.biz/hybridappbook" rel="noopener noreferrer" target="_blank">ibm.biz/hybridappbook</a></p><p><br></p><p><strong>Key Takeaways:</strong></p><ul><li>[00:05&nbsp;-&nbsp;00:24] Intro to the episode</li><li>[05:06&nbsp;-&nbsp;14:45] Intro to Michael, Jake, and Brad</li><li>[14:55&nbsp;-&nbsp;19:53] What their first book was about</li><li>[26:21&nbsp;-&nbsp;27:53] What is the most important aspect of the book?</li><li>[34:06&nbsp;-&nbsp;40:03] The concept of roles in continuous integration and delivery</li><li>[41:40&nbsp;-&nbsp;46:10] Problem stories that the authors wanted to cover in the book</li><li>[48:24&nbsp;-&nbsp;50:54] How do I make the choice between running open source Kubernetes myself, Kubernetes service, or do I need OpenShift?</li></ul>

Luke: In this episode, we bring you a conversation with Brad Topol, Jake Kitchener, and Michael Elder. We will be discussing their new O'Reilly book, Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include Kubernetes fundamentals, as well as more advanced practices such as continuous delivery and multi-cluster management. Before we welcome our guests, let's say hello to our co-host, Joe Sepi.

Joe Sepi: Hey Luke. How are you?

Luke: Good. How are you doing, Joe?

Joe Sepi: I'm doing all right. I'm doing all right. It's a nice day out. We got blue skies. What's great about this is that my camera looks good. I got good color, good lights, good things all around.

Luke: Excellent lighting. I agree. It's very well lit.

Joe Sepi: Thanks. How are things by you?

Luke: I would say I'm enjoying this weather. This is the weather I was hoping to have in the spring. It is just... it's temperate. It's a cool morning. We're going to have a nice warm afternoon. Love it.

Joe Sepi: Excellent. Excellent. Cool. Without further ado, let's bring in our friends.

Luke: Yeah. Let's bring in our friends. Hello, Brad. Hello, Michael. Hello, Jake.

Jake Kitchener: Howdy, gents. Happy Friday.

Brad Topol: Joe. It's good to see you again.

Joe Sepi: Nice to see you as well. Where are you all at? How's the weather by you? How about you, Brad? How are things where you're at?

Brad Topol: There's going to be real diversity in these three answers, I can tell. I am in Cary, North Carolina, a suburb of Raleigh, and the weather looks pretty nice today.

Joe Sepi: Nice. Very nice.

Brad Topol: Michael and Jake, tell him the diversity of your answers.

Michael Elder: Right. I'm in Durham, which is completely different than Raleigh. crosstalk but we do it with more style. That's just how we roll.

Joe Sepi: Yeah.

Jake Kitchener: I'm in Raleigh, which is just Raleigh.

Joe Sepi: This is funny, because Luke and I are both in Connecticut. I'm actually in New York right now, but we're usually like, "oh," but we're on opposite sides of Connecticut. He's close to the water. I'm in the mountains, so we do actually have different weather. You guys are just all right together, huh?

Michael Elder: Exactly. crosstalk We came together, working together in the same labs with IBM, which are right in Research Triangle Park. That's how we met over the years and where the collaboration started.

Brad Topol: Yep.

Joe Sepi: Yeah, interesting. How long ago was that and what all were you working on?

Michael Elder: I would say probably going back to 2012, I think some of the early work that Jake and I did with OpenStack and then Brad with OpenStack as well, and then that evolved between a couple of different projects and culminated with some of the things with the book and other things.

Jake Kitchener: Yeah, it's funny. As the industry has evolved, we still occupy the same places in the industry respectively. It's just the technology has moved on. As everybody on the entire planet of cloud and open source started in OpenStack, we all do Kube and OpenShift now. It took a couple years to complete the transition, but I'm just going to conferences and seeing the exact same people that I saw 10 years ago. It inaudible changed.

Joe Sepi: We all evolved together, right? The whole platform and the technology, but the people are there as well, moving things forward.

Brad Topol: Yep, and we also got to evolve, because the three of us have now done two books together. You can go back to that snapshot in time when we first got the three of us together and said... I came to Michael and Jake and said, "Hey, I think we got this opportunity to write a book on Kubernetes. Let's go for it." It's fun to go back to that first time where we're like, "Hey, can we actually write a book? Can we do this?," and then go through it once with one book and then get the opportunity to go through it again with a second book. Then everybody's leveled up and ready for the second book, and they don't have quite the fear of the unknown that they maybe had with the first book.

Michael Elder: No. We fear all of the known in the second go around.

Jake Kitchener: Also, we're at least 50% smarter for book two than we were for book one.

Joe Sepi: That helps. That helps a lot. It's interesting too, because, having worked at IBM for a few years, I like a lot of the stuff that I do: remote and async and collaborating and stuff. But you three are all in the same town, so I'm curious how much... I imagine there's a bunch of it that was done asynchronously, but how much did you get together over a meal or a drink or something and work through some of it?

Jake Kitchener: Book two is a COVID publication. crosstalk We had the initial brainstorming meeting in person, I remember, in building 502 in RTP on IBM's campus, last January or February? crosstalk That was the last we ever got to see each other, I guess.

Brad Topol: That was right when Michael broke the news and said his whole team was moving to Red Hat. We were like, "Oh, we got this book to do."

Joe Sepi: That actually might be a good segue. We can go around the horn with a couple of guest introductions here. I know, Michael, you're at Red Hat now. Maybe you can start and we'll just hear

Michael Elder: Sure.

Joe Sepi: Where you are.

Michael Elder: crosstalk I've had basically a two-decade history with IBM. I started as an intern in college with their Extreme Blue program, which, if you're watching this and you're an intern and you're looking for something fun to do, really is a great opportunity to put your wits together with other team members and put together a great project on a fun topic, just to highlight for anyone who's interested in that. I started working in RTP and then continued coming up in the areas around Agile development and software development methodology, and worked on the Rational dev tools. In 2007, 2008, I was focused on how you can make it easier to deliver software to distributed servers and run applications on that middleware. That problem has stayed the same, as we highlighted; it's just a bunch of different technologies to do it. I moved into DevOps, worked in areas around UrbanCode, which was a company that IBM acquired, helped lead that portfolio for a time, worked a little bit with IBM Cloud and the continuous delivery service, and then started working on Kubernetes and containers, using that as a methodology to streamline how you deliver and run middleware and applications. Then after the Red Hat acquisition, the team that I was a part of transitioned into Red Hat because we wanted to contribute that part of the technology. For people who are maybe outside of the IBM neighborhood, Red Hat still very much is a separate, distinct, independent entity, but there were about two or three very strategic areas where IBM contributed technology into Red Hat, which we've then been working to take open source. We've started a project, Open Cluster Management, which is focused on making it easier to run Kubernetes and OpenShift clusters. I have been doing that now for about a year and a half.

Joe Sepi: Cool. It's interesting too, how much we do and then take it to the open source space and build there, and dedicate the resources and the time to make that happen.

Michael Elder: Really, just one more quick point. When I started within IBM, I had the opportunity to be an Eclipse committer. From an early time in my career as a software engineer, we were contributing and working in the open source. At that time, it was Eclipse, but certainly that's been a long-running theme for IBM technology through what I've been able to see.

Joe Sepi: I feel the people who are deep in open source know IBM's commitment to open source, but I think the majority of folks doing development have no idea. I always use any opportunity to highlight how active we are in the open source space and how much we open source and build on top of open source. That's great.

Michael Elder: Absolutely.

Joe Sepi: I'll switch over to Jake.

Jake Kitchener: Yeah, I am also a two-decade IBMer now. I also interned with IBM inaudible in college, and have been here since. My path is definitely a little bit different than Michael's. I've always been an infrastructure platform kind of person. I started off doing systems management software for IBM server hardware and things like that, which when I look back on it was the precursor to cloud. It was like, "Hey, how do we centrally manage a lot of infrastructure from one place?," which is what cloud is doing. It's just doing it in an API-driven way where the end user gets to control some of that. I did that for a number of years doing development at the beginning of my career, and then moved into the IBM Systems CTO office, working on OpenStack technologies as we started our journey into cloud-based technologies. During that tenure, I built a ridiculous Frankenstein prototype with IBM Research of how we could build our own container orchestration engine based on OpenStack and Docker sewn together in some very mutant ways. We all thought it was cool enough that we turned it into a real public cloud service. That was the Bluemix container service back in 2015. That was where I really cut my teeth on public cloud and containers, et cetera. From there, we evolved with the industry and started getting involved in the Kubernetes project in mid to late 2016. Now, I'm the lead architect for our public cloud Kubernetes and OpenShift services, and also IBM Cloud Satellite, which is a hybrid cloud solution of cloud-managed services, where customers have the ability to bring their own infrastructure from on-premises or other cloud providers and use IBM Cloud services on top of that platform, just like they would in IBM Cloud. You get the consistent platform services experience regardless of what infrastructure you're bringing to the table. The cool thing is all of this technology is rooted back to Kubernetes, and now OpenShift, over the past four or five years of my team's evolution. Pretty cool stuff.

Joe Sepi: Yeah, it's interesting, all sorts of product stuff there, but it's all built primarily on open source technologies that you all have been working on for years, right?

Jake Kitchener: Yep. It's been nice from an open source perspective. During our evolution as a cloud services delivery team, we've had open source projects that we've spun out of our own operations team, like some of the continuous delivery tools like Razee.io and some other things that we've been contributing back into the community, into Kubernetes proper as well as other projects outside of that.

Joe Sepi: Yeah. Fascinating. There's so much stuff to dig into there, but we'll maybe come back to some of it and let Brad do his intro as well.

Brad Topol: Yeah. I as well have multiple decades at IBM. I've got 23 years. Did Michael or Jake, did you have more than 23 years?

Michael Elder: No, I think you're the most senior, Brad.

Brad Topol: Do I have the most mileage on these tires? crosstalk I came to IBM straight from finishing my PhD in distributed computing at Georgia Tech, and jumped right into a team that was focused on moving IBM research technologies into product. The first product I helped get started, pulling some technologies from the Watson, Almaden, and Tokyo research labs, was a product called WebSphere Transcoding Publisher. Back in the day, we're talking 1999, the phones didn't have real browsers. They had these weird browsers and you had to convert and shrink images and convert from HTML to WML. It was a lot of fun, and I did that for a while. Then I moved in and did some autonomic computing and then started doing some automated problem determination and cross-product serviceability. Then I made DE, distinguished engineer, doing that. Then it was time for a new something. I really had to do something new. It was time to go find a new opportunity. It was 2011, and it was IBM Software Group at the time. They did not have a lot of open source contributors to a project called OpenStack. I was given the opportunity to take three new folks who had never done any open source contributions and turn them into significant open source contributors. I wasn't one myself. I hadn't done any open source. I just looked at the team and said, "Yeah, we're all scared to death. We have no idea what we're doing. We're going to all roll up our sleeves and we're going to figure this out." I just told the team, "This is going to be the best thing ever for our careers. This is going to be amazing. I know you're scared. I'm scared. We're going to roll in." That's what we did. Having worked on proprietary IBM software for so many years, I had never used Internet Relay Chat. We weren't at the time using Jenkins or Git or Gerrit, all these wonderful open source tools. It was just throw people in the water and see if we could swim. I was real fortunate; I started with three folks.
I helped get them comfortable. Then I ended up having a team of 20 contributing to OpenStack. We all found ways to contribute to the open source project and make contributions to the community. That went really well. That was a great ride. I ran into Michael there, right? Michael, you were doing a little work that related to an OpenStack project called Heat. I ran into Jake as well. He was, as he said, more on the infrastructure side. Then things started to cool down a little with OpenStack. The next thing you know, it was 2016 and I'm on a plane heading to my first Kubernetes conference. I looked to my left and, totally at random, there's Jake. I don't know if Jake remembers. We were sitting together going, 2016, first Kubernetes conference, I think Seattle. We both looked at each other and we said, "This is going to be really cool. This stuff looks really cool." I think Jake knew more about it than I did at the time. He was looking at me, probably trying to explain it to me as usual, and we had the best time going to that conference. We looked at it and just said, "Oh, this is going to be bigger than OpenStack. We got to get into this." Then I got to do the same thing again, which was lead a large team of contributors to Kubernetes and other similar cloud-native projects like Istio and etcd. It's been a fun ride ever since. Then of course along the way, we got an opportunity to write our first book. I can't remember how, but through IBM and O'Reilly, we had a deal. I said, "If you're going to write a book, Brad, find people smarter than you." There was Michael. Michael had world-class expertise at the time in continuous delivery and agile development, and he was on IBM Cloud Private at the time, working on IBM Cloud Private and continuous delivery. Then there was Jake in public cloud, and there was me working more in the community. I was like, "Oh, this is the perfect combination. We got the public cloud guy, we got the IBM Cloud Private on-premises person. We're going to somehow write a book and it's going to be really cool."

Joe Sepi: That's great. We got the best from all the different angles, right? Maybe Michael, what was the first book about? Then we'll get into the second one.

Michael Elder: At the time we wrote the first book, we were really trying to introduce a lot more of the basics. You've got to think, that was maybe 2017. Certainly, there were a lot of people in the community that were very active, but we wanted to also make it tractable for people that were maybe in larger enterprises that hadn't begun adopting Kubernetes as the replacement for how they ran and delivered software. In that world, we were used to: I built an application, I run a document of steps to install it. Maybe I'm a little bit more advanced, I've got a little bit of automation. Maybe a little bit more advanced still, I've actually got a continuous delivery pipeline that can deploy applications into a running host operating system on an ongoing basis. Now, we're coming at you with this idea that, "Well, maybe the right way to do this is containers." In 2017, everyone was still playing with Docker, right? Developers liked Docker on their desktop, their laptop. It made it easy to pull in databases and other things that developers didn't know how to run and didn't want to know how to run. "Just give me a database endpoint; let me use it in my app." Kubernetes really was an opportunity for IBM to think about, "Hey, how can we bring a lot of these concepts of what Jake and team are doing in public cloud to run applications and containers? How can we make that accessible to what our enterprise customers are doing in their data center? Can we make it more cloud-like?" That was the idea. IBM Cloud Private was a distribution of Kubernetes built by IBM, as a precursor or a catalyst so that IBM could containerize the entire middleware portfolio, which was a gargantuan effort. Take products that go back decades and decades, like MQ, that are critical infrastructure for a lot of large applications, large customers, banks and other financial transactions, and say, "Okay, now we're going to try an experiment with running these in containers, running databases in containers. What works, what breaks?" In the first book, just to bring it back around, we introduce a lot more fundamentals of, "Here's what Kubernetes is about. Here's how it takes that thing that you know as Docker and makes it more about running a highly available, replicated microservice application. Here's how you can leverage it to consume IBM middleware that we've containerized, or open source middleware, from a development perspective, but also as part of your staging and QA environment and even out to full production."

Joe Sepi: That's interesting. I went through this as a consumer front-end and back-end developer, and was really excited to see the evolution. I was lucky to work on some progressive teams that were really adopting stuff as it was evolving. It sounds like the first book is getting people going on the early journey to moving to containerized cloud sort of stuff?

Michael Elder: Very much. I think a lot of what's in the first book is still relevant to a lot of people who are new to it, but certainly a lot of the information around the introduction of the concepts and how to consume content from a catalog, Helm charts as a methodology to deploy and run containerized workloads, maybe more of that's more well known. With the second book, what we really wanted to focus on is, Kubernetes as a term and as a community is, I think, very well understood, right? Clearly, it's become an industry de facto standard in how you orchestrate and run containerized workloads. IBM had a managed cloud service for Kubernetes way back. We saw other clouds come along, including Amazon and Azure, and obviously Google before that. inaudible may be the first pieces of community middleware that I think we've seen where multiple cloud vendors have independently run managed services around Kubernetes. That's great. There's a lot of great things there. With the Red Hat acquisition, that gave IBM a way to think about not only how to run Kubernetes on Amazon or IBM Cloud or Azure, but how can you use one distribution of Kubernetes that's supported by the backing of all of Red Hat and the long history of Linux and Linux expertise there, and take that distribution of Kubernetes and make it applicable to any hyperscaler environment? Obviously, you've got a managed cloud service of OpenShift on IBM Cloud. Red Hat, as an independent partner, has worked with Amazon and Azure to offer cloud services for OpenShift in those environments. Brad and Jake and myself wanted to take that idea: "How do we help people understand the impact of having this type of portability, of having this model of open hybrid cloud that they can use in public cloud and in a private data center? What are some of the gotchas that they may not recognize as they're wading into that and they're really trying to apply it to the entire organization, to the way that they run software?" In fact, maybe Jake, if you want to highlight, there's a lot of great content in the book you captured around the availability models and how to think about recovery times and scheduling.

Jake Kitchener: Yeah, sure. Having run public cloud services at pretty massive scale over the past five years on top of the Kubernetes platform, I wanted to share some of the learnings and expertise that I've gained, including through working with our clients, about what are some of the challenges that everyone from the smallest customer to the largest enterprise will encounter when trying to move to a Kubernetes platform. Kubernetes, I think, and OpenShift along with it, provide a huge amount of value and tools and capabilities that allow you to build and deploy modern applications and services that are cloud native and highly available. I think there's a lot of things that maybe folks aren't necessarily aware of, from managing tenancy to optimizing for resource consumption. I really tried to take some of the key learnings that I've gained within my team and with our customers over the past four years on this platform, and share that in the book. A lot of customers in their journey to cloud are thinking about having real, measured SLAs and SLOs for their applications and services that they're really going to be held to. IBM, for certain, has huge models that allow you to put together modeling for SLAs and projected uptime for our mainframe and similar systems. Those things are all based on hardware redundancy and MTTRs and things like that: mean time to recovery, mean time between failures, all these traditional hardware concepts that mainframe technology really brought to the forefront of the industry. Those same concepts still apply in cloud native with Kubernetes and OpenShift. I was really trying to focus on helping people to understand, "Hey, what does it even mean to provide something that's highly available? How do you go about doing that? What are the catches that you may encounter? What are some of the best practices? How do you make decisions about architecting applications and cloud platforms with Kube and OpenShift to deliver on those expectations that I think every enterprise and every small and medium business on the planet is really beginning to expect?" Obviously, the explosion of the internet and online everything means that you can't just say, "Oh, we don't really have any customers in our shops from 8:00 p.m. to 8:00 a.m. It doesn't matter." That's just not how it works anymore. I want to be able to jump on my iPhone at 2:30 in the morning and go order whatever dog food I need or something. CTOs expect something completely different from an availability perspective than we did even a decade ago. I think there's a lot of things that we can do to help users in the community understand how to dive into those areas and get off on the right start.

Brad Topol: I remember when we were first getting together for that second book, and I could see right away, both Michael and Jake had this tremendous desire. They wanted to do a book that covered very advanced topics, and they both wanted to cover a lot of aspects of the operations piece that they felt hadn't been addressed in a lot of the books that cover basic Kube, right? "You do some basic Kube and here's your deployment, here's your service." They really wanted to get in. That's what's in this book, whether it's Jake covering high availability and resource management, or Michael covering advanced cluster management and running on multiple clusters. You saw they were both really driven to make sure that they covered those advanced topics really well, because they just felt like it hadn't been covered in other places. Then for me, what I was getting into: we were seeing this transition. We had the Red Hat acquisition and we were moving from IBM Cloud Private to OpenShift. What I was seeing in my day job was that there wasn't a lot of understanding of what OpenShift brings to the table beyond Kubernetes: what it does and the benefits it gives you, both for developers and for IT operators. From my perspective, I wanted to make sure the book really covered that, because OpenShift does amazing things and gives you amazing capabilities beyond Kubernetes, both for developers and IT operators. We wanted to make sure, in addition to all the other great stuff we were covering, that we covered that, so you get a great feel of, "Oh, here's how I use Kubernetes." You still get Kubernetes fundamentals, a little bit of Kubernetes architecture in the book, but then you also get to contrast it with what OpenShift gives you beyond that. That gives the reader, I think, a really broad perspective on which way they want to go, which tools they want to use.
If you look at the chapter on continuous delivery, we're covering a large, broad spectrum of continuous delivery tools. I think between the three of us, we did a great job of covering a lot of topics that people will find very valuable.

Luke: That is so interesting, because one of the biggest negatives I hear about Kubernetes when I'm out there on forums is people talk about, "Yeah, it solves all these problems, but it introduces a lot of complexities." I think there are two ways to answer that. You could go try to make every mistake yourself, or you could go talk to the experts and figure out how to learn from these things. I really appreciate that you're able to bring the expertise of working on enterprise-level, mission-critical stuff that runs serious business, and then share that expertise through this book. I want to extend that to our listeners too: hey, we're on here now. We'll be on Twitter later, but if you have questions right now while we're live, please feel free to drop them in the chat on whatever platform you're listening on. If you're catching this as a podcast replay later, hey, tweet at us. Drop it in the comments. We'll be able to answer your questions asynchronously.

Joe Sepi: Just to be very clear here, this is the title of the book, Hybrid Cloud Apps with OpenShift and Kubernetes. The short link there will get you to it, but it's available in all the places you would get these types of books. Definitely check that out. Thanks again. There we go. Perfect.

Brad Topol: It's got the nice animal on the front.

Joe Sepi: I love it.

Michael Elder: If you can't tell, not only is Joe the organizer and orchestrator of this band of amigos, but also the hype man. crosstalk brings the excitement.

Joe Sepi: Awesome. I'm curious. I think maybe a good thing to ask is: what's the most important takeaway from the book, do you think? I'll ask you, Jake. If you read this book and only take one thing away, what is the most important aspect of the book? We'll get into some other stuff afterwards.

Jake Kitchener: Oh, wow. One thing? I guess it's more of a general thing: I think the main takeaway is that there are a lot of challenges presented in modern cloud technologies, and Kubernetes and OpenShift can address a lot of those. As Luke mentioned, there's still plenty of unknowns that remain, so make sure that you get educated and you put together a plan for how you are going to leverage these technologies and these tools as a platform for accelerating your company's or your team's journey into cloud native. Don't just jump in willy-nilly. Plenty of ground has been covered already by community folks like ourselves and others, so learn from them. Take advantage of the knowledge that we've gained from some of our mistakes and mishaps and the evolution of the technology underneath it, before you go running off to production. As exciting as it is, I think that's one of the things. I know my team, we run fast and loose, I guess some people would say, but we've definitely learned over time. It was easy in 2015 to just say, "Oh, all of this stuff is so immature, it is what it is," but I think it's come real far, and so the expectations are much different now from the industry as a whole. As easy as it can be to get started with Kubernetes and OpenShift, it does take real thought and expertise to make sure that you're going to put together a platform that is resilient and highly available and performant and secure. Take advantage of the knowledge of others before you go shove all of that fun stuff into production.

Joe Sepi: I think that's important too. We joke about the CNCF landscape, the actual web view of the landscape, but Kubernetes is complex in all its different aspects. It makes sense to really focus on how you do that. I'm curious too. I'll ask you, Michael, and any of our guests here, feel free to redirect if someone else is the better person to answer this. Thinking about OpenShift, and perhaps I don't know how much you dive into this in the book, but I think of OpenShift as an opinionated way to do the Kubernetes platform. What do you think the book brings to that sort of conversation that people can learn from?

Michael Elder: Certainly. I think very much in the way that Red Hat Enterprise Linux is a very opinionated view and a very supported view. Every time that Red Hat delivers a capability from open source technology, not only, as a culture, as a community, is Red Hat thoroughly engaged in the upstream, contributing, helping to lead, helping to steer, helping to influence on behalf of customers. There's also a very strong set of opinions on how Red Hat consumes and builds and supports and puts trust into the supply chain of that open source software. OpenShift is very much like all the other capabilities within inaudible, focused on vetting the trust of the code. Has it been tampered with from what's in the upstream? It's been built in a secure environment, it's been linked to a very definitive list of all the dependencies from all the places it can be pulled. All of that source was built by Red Hat as well, all the way down to the very bare bones of the kernel. On top of that, then we look at, okay, we have this supply chain of how we consume software. Then when OpenShift runs, OpenShift very much thinks about security. A lot of the time, if you hear about a CVE from Kubernetes, there's a lot of times where that CVE doesn't apply to OpenShift, because the out-of-the-box configuration disabled some part of that attack vector. There was one that had to do with host port or host networking. If you were able to send a certain set of signals in a certain way, you might be able to grab hold of a host port or host networking access that you shouldn't have, as a pod on a cluster. Particularly, that was a risk in multi-tenant clusters. OpenShift clusters weren't susceptible to that out of the box because they'd already disabled that as one of your default options. It's those types of things that, particularly if you're getting started, being able to just deploy OpenShift... you can actually run it on your own bare metal, you can bring inaudible.
When you look at the model of how OpenShift thinks about the world, it will provision your cloud infrastructure, or you'll bring it credentials for your AWS account and run the installer under your control. It'll create all the hosts in that cloud account for AWS or Google or Azure. Then what's neat is if you look at... and I might go off in the weeds here 'cause this is some fun stuff to me. There's this concept of operators. At the end of the day, operators and that control loop that Kubernetes runs are what differentiate it from the past generation of run-your-automation-one-time tools. That control loop is constantly reconciling: "Does the real world look like your desired state?" If it doesn't, it'll go through and change it. OpenShift applies that same concept to the way that it configures that Kubernetes platform. There's an operator that understands how to configure authentication, so that your Kubernetes API can determine, "This is Joe, this is Luke, this is Michael, this is whoever." It has an operator that configures storage and networking. I can just declare with a Kubernetes inaudible object, with that API object, apply it to the cluster, and the operator will take care of bringing it into alignment. To me, not only is that highly opinionated in terms of how to make it easy and secure, it's also focused on applying the concepts that Kubernetes brings to the workloads and using them to manage the infrastructure. To me, that's a really powerful concept overall.
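The reconciliation loop Michael describes can be sketched in a few lines of Python. This is a toy illustration of the pattern only, not the real Kubernetes controller machinery; the dict standing in for a cluster and all the names here are invented for the example:

```python
def reconcile(desired, observe, apply):
    """One pass of a Kubernetes-style control loop:
    compare desired state to observed state and correct any drift."""
    actual = observe()
    if actual != desired:
        apply(desired)
        return True   # a change was made to bring the world into alignment
    return False      # already matches the desired state; nothing to do

# Toy "cluster": a dict standing in for real infrastructure.
cluster = {"replicas": 1}
desired = {"replicas": 3}

changed = reconcile(
    desired,
    observe=lambda: dict(cluster),        # read current state
    apply=lambda d: cluster.update(d),    # push desired state
)
```

Real controllers run this loop continuously, so drift introduced at any time (a killed pod, a hand-edited config) is detected and corrected on the next pass.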

Joe Sepi: Yeah, that sounds smart. That makes sense. Let's maybe take a moment, Luke, for some housekeeping, and then we'll dive back into the book?

Luke: Absolutely. What were you going to say? I thought I cut you off again. I know that's my thing, to cut you off. First thing I want to mention is, if you're listening live, thanks for tuning in. We do this show twice a month. We're actually about to launch another podcast, a data science podcast, which I'm very excited about. We've got a whole bunch of other special series: app modernization, the tech for good initiative, Call for Code. We've got some stuff on hybrid cloud and OpenShift coming up, as well as... and this is a little sidebar I want to mention here: ISVs build stuff. Independent software vendors and building is a big part of what we're doing at IBM. I think especially, the topic we're talking about today is a great example, where if you are a smaller company and you want to sell to the enterprise, or you just want to have enterprise-grade systems, this is a great place to tune in, even if your business isn't normally enterprise. Tuning into the IBM Developer podcast will really highlight some best practices, and you can borrow from the expertise of the enterprise. You can always find our show at ibm.biz/intheopen. That page has all of our past episodes embedded as videos, as well as the most current episode and the live episode in the header. You can find all the back episodes of the podcast at ibm.biz/intheopen. Please check out more podcasts. Subscribe on your platform of choice.

Joe Sepi: Of course, and we should mention developer.ibm.com has all sorts of resources beyond podcasting. Check all that great stuff out. Cool. It seems to me, and I'll ask you, Brad, and I assume that the book touches on this, so maybe you can dig into it a bit, but the concept of roles in this sort of environment. In the whole landscape of continuous integration and delivery, roles are starting to become something that you really want to be mindful of as you're building out your plan and your team and everything. Do you guys touch on that at all in the book?

Brad Topol: Yeah, we certainly cover continuous delivery. What's nice is we cover a large amount of tools and also the OpenShift opinionated versions of those tools, so you can get a feel, whichever way you want to go. On the role side, Jake really gets into that and I'll let him cover it a little bit more, but understanding how to break the users into different groups for tenancy; I'll let him talk more about that. That's his expertise. Then we also cover a lot of the security, which is very role-related, and what OpenShift does to make the security simpler to manage and handle. Jake, do you want to cover the roles part a little bit more?

Jake Kitchener: Yeah, sure. It is interesting to look at how things have evolved over the last few years in the community, and how much role-based access control in Kube and OpenShift has become central to the technology platform. We do talk quite a bit, in the first book actually, about just the fundamentals of RBAC and how it works. Then in this book we get into a little bit more detail about the types of permissions that users should be given access to within the cluster, based on their role in the business or the enterprise or the team. I think this is actually one of the places where OpenShift helps out quite a bit versus inaudible Kubernetes, in that out of the box, OpenShift is providing a much more opinionated and curated set of roles that are included by default for you to be able to use for your team, and some nice tools to be able to distribute that access. I think the big thing that we cover, and I'll let Michael add onto this after I finish: with the proliferation... It's so funny. When VMs first started getting traction, people always talked about the VM explosion, and how do you keep track of all your VMs? Then we were like, "Oh, Kubernetes is going to make sure that we solve all of those problems for containers before you even get started. Your 10,000 containers, Kube's going to manage and orchestrate all those for you, don't worry about it." Then a year later we were like, "Oh my gosh, now how many clusters do we have to manage around the world?" This is where things like advanced cluster management and some of the content in the book really gets into how critical it is to have strong tools and GitOps best practices and things like that, to make sure that you are distributing policy and enforcing it in a uniform fashion across your entire estate. For my team, we run something like 140 Kube and OpenShift clusters that belong to us, and tens of thousands for our customers.
I forget how many hundreds of thousands or millions of containers are being run across that entire estate. It boggles my mind that we've made it this far, and I think it's thanks to things like the advent of GitOps as a fundamental best practice. Some of the work that Michael's team and our team have been doing together around multi-cluster management really has become super critical. It's those fundamental simple things of, "Oh, this is how you do access control in one cluster." Yeah, but now you have to do that for an entire enterprise's worth of clusters and applications and policies and different environments, moving from dev to staging and production clusters, and things like getting into geographically enforced compliance requirements unique to things like EU cloud or FedRAMP environments. Man, if I could go back and talk to 2017 Jake, I'd be like, "Get ready buddy, because you have no idea how crazy it's about to get."

Joe Sepi: Yeah, I'm sweating just thinking about it.

Michael Elder: This is something that when you get your hands on it, the concept's tractable. At the end of the day, we're using computers to manage computers, and we're going to tell the computers on one half of the equation what we want those computers to tell the other computers to go and do. That's all it is at the end of the day. There's an entire chapter where we focus in on policy governance and talk about examples where I'm going to create... I've got a subject, a user or a group of users, and I've got a role. Then Kubernetes has a concept of a cluster role, which is just a different scope, but we'll just call it a role. The role declares all the permissions of what any user assigned to that role can do. They can create deployments, they can view deployments, they can create but not delete or view secrets, however you're going to slice up their permissions. In chapter seven, we go into this concept of policy governance across the fleet, where once you make those decisions: "Look, here's how I want my roles to be crafted. Here are the groups that I want to be declared on every cluster, here are the bindings between the groups and the roles." This group called dev team A gets bound to the developer role, and this group called ops team 23 gets bound to a different role. All of that gets wrapped up in YAML, just text, yet another markup language. That's what Jake's highlighting with this concept of GitOps, because we can put that configuration in a Git repo, make it part of our continuous delivery process and how we suggest changes through pull requests, through reviews, through testing in earlier stages.
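The role-and-binding pattern Michael describes looks roughly like this in Kubernetes RBAC YAML. All the names (`developer`, `team-a`, `dev-team-a`) are illustrative, not from the book:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: team-a
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create"]   # view and create, but not delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-team-a-developers
  namespace: team-a
subjects:
  - kind: Group                        # the subject: a group of users
    name: dev-team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                           # the role those users are bound to
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

A `ClusterRole` and `ClusterRoleBinding` follow the same shape, just scoped to the whole cluster instead of one namespace.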
Then what we bring into the equation, with a project called Open Cluster Management, is focusing on how I can take the configuration that I worked through this process, got to where I want it to be in my Git repo, and have a hub cluster that can now deliver it out to every single attached cluster that's in my fleet, and then report back: again, not only just apply the change, but report the status back. That is that Kubernetes model of continuous reconciliation. In the book it shows a concrete example of, "Here's a policy, here's how we apply it with a placement rule, here's how it gets distributed out to the fleet at large," to try to make that more tractable and more consumable, so it's not magic. At the end of the day, there is no magic. There's only ones and zeros.
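A hub-cluster policy of the kind Michael mentions might look something like the sketch below. All names are illustrative, and the field details follow the Open Cluster Management policy API as we understand it; this is not the book's example, and the current project docs should be treated as authoritative:

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: standard-rbac-policy
  namespace: policies
spec:
  remediationAction: enforce   # apply the change, not just report drift
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: dev-team-a-binding
        spec:
          remediationAction: enforce
          object-templates:
            - complianceType: musthave   # every managed cluster must have this
              objectDefinition:
                apiVersion: rbac.authorization.k8s.io/v1
                kind: RoleBinding
                metadata:
                  name: dev-team-a-developers
                  namespace: team-a
                subjects:
                  - kind: Group
                    name: dev-team-a
                    apiGroup: rbac.authorization.k8s.io
                roleRef:
                  kind: Role
                  name: developer
                  apiGroup: rbac.authorization.k8s.io
---
# A placement rule selects which clusters in the fleet receive the policy.
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: all-prod-clusters
  namespace: policies
spec:
  clusterSelector:
    matchLabels:
      environment: prod
```

The hub continuously checks each selected cluster for compliance and reports status back, which is the fleet-wide version of the single-cluster reconcile loop.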

Joe Sepi: Yeah. I think the concept of computers managing computers really makes sense. Having configuration files, YAML files, that's what the computers use to manage the other computers. I think it's a very digestible concept for sure.

Jake Kitchener: I'm just disappointed there's no magic.

Joe Sepi: There's never really magic.

Brad Topol: I was going to say, also in some of the earlier chapters, Jake goes through the scheduling, and he talks about how you can change this knob and you can change that knob and let's see what happens. What you can really do is, you might think you're making a good decision on how you're mapping priorities for your pods or what have you. Jake shows you, "Ooh, this one may not be a good idea, and here's why." Yeah, we've got the advanced cluster management and policy, but you still also have the single-cluster availability and resource management and scheduling that I think people will find very valuable. There are just a lot of subtle details in there that maybe people didn't see, or they didn't think about the consequences of their decisions. I don't know if Jake wants to add more to that or not, but-

Jake Kitchener: I always think about the consequences of my decisions, Brad. What are you talking about?

Brad Topol: Well, I know you do. You're teaching everybody else.

Joe Sepi: It brings up a question that I had, though. I'm wondering, when the three of you were working on this book, were there any instances where you saw somebody making, I don't want to say bad decisions, but maybe poorly informed choices that were problematic, and you thought, "Oh, we definitely want to get this into the book"? Any sort of problem stories that you experienced and wanted to make sure you covered?

Jake Kitchener: Oh, resource management, number one on the list. One of the things that I don't think people realize about how Kube and OpenShift work is that they don't understand anything about the real-world consumption of resources, only what you tell them you think you are going to use and the limits that you tell them to enforce. On day one, nobody uses any of those advanced scheduling and resource management tools that are in Kubernetes, or builds the policies that Michael is talking about to go distribute and enforce those things. Kubernetes has evolved tremendously to have policy enforcement and quota management and all these other things that help you do it, but on day one, everybody's just like, "Go run my app, whatever. Just give me all the CPU, all the memory, whatever you want." It's the number of phone calls that I've gotten from customers who are like, "All of our worker nodes keep dying. We don't really understand why." I'm like, "But you have four cores and 16 gigs of memory and you're running these Java apps with 32-gig heaps. That's just not going to end well." They quickly learn that if... there are a lot of things that Kubernetes can do to help protect those nodes from bad actors, but there are limits, and customers and users are extremely good at exceeding those things. That is by far and away the number one. Everybody's like, "Why are my apps always getting killed? Why don't they perform well? Why are my nodes dying all the time?" The answer is always because you were... we will say-
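Telling the scheduler what a workload expects to use is a few lines of standard Kubernetes YAML; the pod name, image, and numbers below are placeholders for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: java-app          # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/java-app:1.0   # placeholder image
      resources:
        requests:          # what the scheduler reserves on a node
          cpu: "500m"
          memory: "2Gi"
        limits:            # the ceiling the kubelet enforces
          cpu: "2"
          memory: "4Gi"    # exceeding this gets the container OOM-killed
```

Without `requests`, the scheduler packs pods onto nodes with no idea what they will actually consume, which is exactly how a 32-gig Java heap ends up on a 16-gig worker node.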

Michael Elder: "You had opportunities to improve your resource consumption that you-"

Jake Kitchener: Yeah. I was looking for a nice way to say something like, "You were being lazy or ignorant." I started off exactly the same way, and my team did. I think everybody does, because at the beginning you're just like, "I want to see cool stuff happen," and cool stuff is running your app, and you kill pods and they get restarted automatically and they can auto-scale and all that kind of fun stuff. That resource management is by far the number one thing that I think people really need to get into. Then it's phenomenal that the community has been able to put together the guardrails and the tools and the fundamentals to help put up those guardrails to prevent people from doing crazy and irresponsible things with their compute resources. Very grateful for that.

Joe Sepi: Yeah. We're all human, right? We're all at some point in our journey, so it's great to have books like these that say, "Learn from my mistakes and the things that I've figured out." What about you, Michael? You got any good stories?

Michael Elder: Everything in the book comes out of some part of our collective experience between the three of us. I don't know that there's anything specifically where, "Let me make sure no one ever does this," but more, there are lots of partial, mediocre ways to accomplish the goal. I think more than anything else, I wanted people to see, "Look, here's what we think the best way is, based on all of the knowledge that we've collected." Maybe one of the stories that I'll tell, based on Jake's last comment about the pod restarting: when we were first beginning to look at Kubernetes as a way to containerize the middleware stack and run software more efficiently in the data center, Jake had already been, along with lots of other great people, running a public cloud service around Kubernetes. We were looking at, "Can IBM distribute our own, or ultimately acquire something to distribute our own capability for containers?" I was doing this demo for a very senior executive who was less technically savvy. I'm explaining, "Look, here I can see all of the containers that are running on the set of hosts, and here I can click this button and kill this pod, and then let's see if it has an effect on the example application that we have." The person was like, "No, whatever you do, don't kill the pod. Don't kill the pod!" I was like, "No, it'll be okay. Do you want to kill the pod?" They're like, "Oh, I don't want to kill the pod. I do not want to touch that pod. I don't know what's going to happen when I kill that pod. Demos go south all the time when you do them live." Even that concept that Kubernetes introduced of, "Look, let's assume everything will fail. Assume that everything underneath you has an opportunity to cause you a bad day. Then let's put enough automation around it to correct for most of the things, so I don't have to get up in the middle of the night on a pager duty call and address it."
That's where I think Kubernetes has a lot of allure and where people get started. Then to Jake's point, " Yes! My pod got restarted, but why did it get killed 48 times in the last two hours? Why? What is going on there?" That's where you understand a little bit more about how does Kubernetes address scheduling and how does it address resource management and things like that. It makes a big difference.

Joe Sepi: Yeah. That's interesting. I know we're running out of time here, but I'm curious. Do you at all talk in the book about any sort of AI or machine learning, taking information from what's happening in your applications and being able to tweak how you're doing any of the continuous delivery work? Is there any sort of AI/ML stuff in there?

Michael Elder: To some extent; I think we ran out of pages. We started out with a goal, and then I think we ended up writing about twice as much as we originally intended. We pulled content that we otherwise would've put in, just to make sure we didn't go above a certain price level or whatever. crosstalk additional AI optimization. That's probably the next, less well-defined territory. Everyone's looking at that. We're looking at that both within public cloud and within our private offerings. How do we use a collection of data to provide more insights to users? In fact, that's a capability that we built into the next version of the supported product that I work on with my team, called Advanced Cluster Management. In that next realm, in that next release, which comes out in about a month or so, it'll actually allow you to send some data to cloud.redhat.com and collect insights and suggestions on how to optimize that cluster and improve it. Certainly that is, I think, the next horizon, the next generation of, "Okay, we built a lot of computers that can automate other computers. They can make them recover, but now let's make it more efficient. Nobody knows what the right number is to pop in." There's this magic-number syndrome. Now, can we make software to tell us what numbers to put into the other computers that are running our last computers that were actually running the workload?

Luke: I have a question. I know we touched a little bit on OpenShift and how it's an opinionated stack that solves certain problems for you, like security and these things, but I guess my question is: how do I make a choice if I'm looking at, " Hey, should I just run open source Kubernetes myself? Should I use Kubernetes service somewhere or do I need OpenShift?" Where are the thresholds? You can answer that question now or tell me also, " Hey, read the book." That's a valid answer.

Michael Elder: No. I think the simple thing is, if you're going to rely on this foundational building block to run your business, then I think you want a trusted partner more than anything else, right? At the end of the day, you want to know that you're not consuming an open source build that somehow was exploited by yet another ransomware attack. Did it come from a place that has a sovereign, protected build environment that produced the binary that you're now running in your production context? Maybe the real choice there is, do I want a managed thing where I consume a cloud service, or do I want to run software on my own? That's probably a decision that each organization has to make based on their own structure and the level of control that they want to have over it. Certainly, that's part of the reason why we've gone into OpenShift as a service, running on all the different cloud environments, so that you have consistency. I think there's an advantage to that consistency versus picking up one distribution of Kubernetes that's optimized for cloud ecosystem A and a different version of Kubernetes optimized for cloud ecosystem B. Core concepts, certainly, it's Kubernetes, but the way that it binds into how this cloud ecosystem does identity or that cloud ecosystem does networking, that's a balance you've got to figure out. Do I want to spend a lot of time papering over those differences, or do I just want to have the platform give me consistency out of the box?

Brad Topol: Just to add to that, Luke, the nice thing about OpenShift is it keeps you from running with scissors. It keeps you from hurting yourself. One of the insights that I saw is, like the other folks said, you're so happy in Kubernetes to get something up and running, but you didn't realize what you got up and running has a lot of attack vectors. It's running as a privileged container. You messed up the role-based security. As you move from just standard Kubernetes to OpenShift, you realize all the wonderful things that OpenShift is giving you as the default configurations, to keep you safe. If you're an enterprise, a bank or an insurance company, you really don't want your folks experimenting to try and figure out if they've got the security right or not, because it's going to look really bad if all those credit card numbers get out to the world. There's just that peace of mind OpenShift gives you with, "We're going to keep you from doing silly things right out of the box, and you're really going to thank us later." It might be a little annoying at first, but boy, you're going to thank us that your stuff has a lot fewer attack vectors.

Luke: That's so interesting, and it ties into what you had mentioned. There are all these ransomware and nation-state-level attacks happening. I just wanted to mention, one of our past guests on the IBM Developer podcast was a technical executive in the NSA. I saw just this week on LinkedIn, GitHub hired them as their VP of security for their open source expertise, for exactly this, because the pipelines have to be secured. It's so interesting that that's what's baked into OpenShift.

Michael Elder: Even crosstalk is something that OpenShift provides a lot of tools out of the box to make easy: to stand up Tekton, if you're going to use Tekton for builds, or stand up Knative if you want serverless on top of it, those types of things as well.

Joe Sepi: That makes sense. There are so many footguns in just diving into Kubernetes and all the things that go along with it, so it makes sense to pick something that's battle-tested and built from lots of experience. We touched briefly on the AI/ML stuff and we're just about out of time. I'm curious, is there a third book that y'all are thinking about already? What's the plan?

Michael Elder: Not any time soon.

Jake Kitchener: Brad, I'm sure is already scheming what the next one is going to be, and Michael and I are basically still licking our wounds from the last one.

Brad Topol: That's right. It's just a little too soon for me to pitch ideas to my co- authors.

Michael Elder: Thank goodness.

Brad Topol: We've just got to give them a little more breathing room, Joe. We're just going to tread lightly through the tulips there, but one day it could happen.

Joe Sepi: For sure. crosstalk

Luke: Well, regardless of a new book, definitely like to have you back on in a few months after maybe you have some more stories to tell and things going on, because this was a really fun conversation and I think we only scratched the surface here. There's so much to talk about.

Michael Elder: I would love to. Thanks for the opportunity to come and join.

Joe Sepi: Yeah, thank you all for joining. I'll remind folks watching now or watching later: hit us up on Twitter. My DMs are open. If you have questions, I'll pass them along. We'll try to surface knowledge. Definitely check the book out. So much good stuff in there: advanced techniques and approaches to Kubernetes and OpenShift. Yeah, I guess we'll call it a day. Thanks everybody.

Jake Kitchener: Appreciate it. Everyone. Have a good weekend.

Joe Sepi: See you soon. Cheers. crosstalk

DESCRIPTION

In this episode of In the Open we bring you a conversation with Brad Topol, Jake Kitchener & Michael Elder. We discuss their new O'Reilly book, Hybrid Cloud Apps with OpenShift and Kubernetes. Topics include the fundamental concepts of Kubernetes, as well as more advanced practices such as continuous delivery and multi-cluster management.

Dr. Brad Topol, Open Tech & Dev Advocacy CTO, @bradtopol

Jake Kitchener, Senior Technical Staff Member @ IBM, @kitch

Michael Elder, Senior Distinguished Engineer @ Red Hat, @mdelder

Joe Sepi, Open Source Engineer & Advocate, @joe_sepi

Luke Schantz, Quantum Ambassador, @IBMDeveloper, @lukeschantz

Hybrid Cloud Apps with OpenShift and Kubernetes ibm.biz/hybridappbook

Today's Guests

Jake Kitchener

|Senior Technical Staff Member, IBM
Brad Topol

|IBM Distinguished Engineer, Open Technology and Developer Advocate CTO for AI and Kubernetes Technologies
Michael Elder

|Distinguished Engineer/Sr. Director