Dr. Brad Topol | Kubernetes and OpenShift | In the Open with Luke and Joe

This is a podcast episode titled, Dr. Brad Topol | Kubernetes and OpenShift | In the Open with Luke and Joe. The summary for this episode is: <p>What you are about to hear is a new podcast and live stream show entitled, “In the Open with Luke and Joe”.&nbsp; In this series my co-host Joe Sepi and I bring you conversations with community and technical leaders from the world of open source and enterprise tech.&nbsp; We do this live twice a month on Fridays at 12 noon eastern time.&nbsp; You can catch us on a variety of streaming platforms or here as a replay on your favorite podcast app. To find out all the details go to <a href="https://ibm.biz/intheopen" rel="noopener noreferrer" target="_blank">ibm.biz/intheopen</a>. There you will find our show schedule, an embedded live streaming video player as well as embeds of past video episodes.&nbsp; Or you can link directly to the podcast page with <a href="https://ibm.biz/intheopenpodcast" rel="noopener noreferrer" target="_blank">ibm.biz/intheopenpodcast</a></p><p>In this inaugural episode, Luke and Joe are pleased to bring you a conversation with Dr. Brad Topol. Brad is an IBM Distinguished Engineer, developer advocate and CTO for Open Technology. 
We’ll be discussing Kubernetes and OpenShift as well as his upcoming O’Reilly book Hybrid Cloud Apps with OpenShift and Kubernetes.</p><p>Brad has extensive experience in the open source space and we are excited to have him on the show.</p><p><br></p><p><strong>Key Takeaways:</strong></p><ul><li>[00:00&nbsp;-&nbsp;01:29] Intro to the episode</li><li>[03:22&nbsp;-&nbsp;04:50] Intro to Brad Topol</li><li>[08:03&nbsp;-&nbsp;10:45] What we need to know before Brad's book about Kubernetes and OpenShift is released</li><li>[11:03&nbsp;-&nbsp;12:26] What is this new book about?</li><li>[12:38&nbsp;-&nbsp;13:24] Recommended reading before diving into the new book</li><li>[14:01&nbsp;-&nbsp;18:34] What is an ISV, and what is the value?</li><li>[19:06&nbsp;-&nbsp;24:48] Operators: How do they work and what benefits do they bring?</li><li>[27:25&nbsp;-&nbsp;28:29] The OpenShift Marketplace</li><li>[29:10&nbsp;-&nbsp;34:19] A look into history, how did we get here?</li><li>[42:18&nbsp;-&nbsp;47:08] Get engaged with communities, courses, and conferences</li><li>[47:36&nbsp;-&nbsp;52:10] The benefits of microservices</li></ul><p><br></p><p><strong>Resources</strong>:</p><p>Book: Hybrid Cloud Apps with OpenShift and Kubernetes: <a href="https://www.oreilly.com/library/view/hybrid-cloud-apps/9781492083801/ " rel="noopener noreferrer" target="_blank">https://www.oreilly.com/library/view/hybrid-cloud-apps/9781492083801/ </a></p><p>Book:&nbsp;Kubernetes in the Enterprise: <a href="https://www.oreilly.com/library/view/kubernetes-in-the/9781492043270/" rel="noopener noreferrer" target="_blank">https://www.oreilly.com/library/view/kubernetes-in-the/9781492043270/</a></p><p>Open Source Contributor’s Conference: Become a Kubernetes contributor, Kubernetes Operators and OperatorSDK: <a href="https://developer.ibm.com/conferences/oscc_become_a_kubernetes_contributor/kubernetes_operators_and_operatorsdk/" rel="noopener noreferrer" 
target="_blank">https://developer.ibm.com/conferences/oscc_become_a_kubernetes_contributor/kubernetes_operators_and_operatorsdk/</a></p>

Luke Schantz: What you're about to hear is a new podcast and livestream show entitled In the Open with Luke and Joe. In this series, my co-host, Joe Sepi, and I bring you conversations with community and technical leaders from the world of open source and enterprise tech. We do this live twice a month on Fridays at 12:00 noon Eastern Time. You can catch us on a variety of streaming platforms or here as a replay on your favorite podcast app. To find out all the details, go to ibm.biz/intheopen. There, you will find our show schedule, an embedded player of the livestreaming video, as well as embeds of past episodes. Or you can link directly to the podcast page with ibm.biz/intheopenpodcast. Thanks so much. I hope you enjoy our new series, In the Open with Luke and Joe. Welcome to In the Open with Luke and Joe. I'm your host, Luke Schantz, and here's my co-host, Joe Sepi. And a big welcome to our special guest, Brad Topol. Before we get to our show, don't forget to like and subscribe. This is the first episode of our new show, In the Open with Luke and Joe. Whether you're watching live, on video replay, or listening as a podcast, thank you for joining us. Before we welcome our guest, Brad Topol, please meet my esteemed colleague, friend, and co-host, Joe Sepi.

Joe Sepi: How are you doing, Luke? Nice to see you. I love your plants. You've got a great space there. My name is Joe Sepi, as Luke said. I am an open source engineer at IBM. I work primarily in the Node.js space. I was one of a small group who helped merge the JS Foundation and Node.js Foundation into the OpenJS Foundation. And since that merger, I've been leading the [inaudible] technical advisory committee, the Cross Project Council. Yeah, it's a lot of fun working over there.

Luke Schantz: Just to give some context and backstory, while this is a new show, Joe, I feel very comfortable and at home already because we have this great history of doing live events in New York, and we've done conferences all around the world. I think it's been too long, so I'm happy to come back and be able to do this.

Joe Sepi: Me too.

Luke Schantz: Yeah. And I also want to say this is streaming live, going to be available on replay as a podcast, and we have a bunch of great podcasts coming out on IBM Developer. We've got a new security podcast, we've got Z DevOps Talks. And I just released one this week about the Konveyor.io community, which is related to this episode because, really, it's a community about open source solutions and techniques for application modernization and moving to Kube. Very excited about that. But without further ado, why don't we bring in our guest, Brad Topol?

Joe Sepi: Let's do it.

Brad Topol: Hey Luke! Hey Joe!

Joe Sepi: Hey Brad!

Brad Topol: I sure hope you can hear me.

Luke Schantz: I can hear you.

Joe Sepi: I can hear you. You look great. Nice to see you.

Brad Topol: Good to see you!

Joe Sepi: Yeah. How's the weather over there?

Brad Topol: It's hanging in there. It was getting a little warmer, so hopefully we're getting through the winter.

Joe Sepi: Good. Yeah, it's gorgeous here. I say gorgeous. It's still 30 degrees, but the sun is shining and I'm happy to be outside.

Brad Topol: Fantastic.

Luke Schantz: Dr. Brad, just to give our audience context, maybe give a little self introduction.

Brad Topol: I'm IBM's distinguished engineer for open technology and developer advocacy. I lead a large team that contributes upstream to Kubernetes. I also lead a team that builds a lot of content to help developers embrace open source. So that's my role as developer advocate CTO. And I also have a role of ISV enablement, helping to enable ISVs to get onto Red Hat Marketplace. I've been working in open source communities since 2011. Worked my way up through OpenStack, became a core contributor in a couple projects. I led their interoperability challenge. I was an OpenStack board member for a year or so. And then I moved over to Kubernetes. And what's great about being an open source developer is you get to a certain point where you're at the top of the hill, the top of the mountain, know everything, know everything about the community, and then it's time to move on to the next community and you start at the bottom of the hill and you got to work your way up. You got to push that boulder back up. And I'm doing that in Kubernetes. I became a Kubernetes contributor. And then when I joined, they really needed some help with SIG Docs in Kubernetes. I started helping out with the documentation, which was not something I had done really a lot before, but I said, " Hey, let's do something new." I became a Kubedoc maintainer and I am now a chair of the Kubedoc Localization Subgroup. I've spent a large number of years in now multiple open source communities and enjoying life. That's the place to be.

Joe Sepi: That's great. And that's actually a good point to folks who are thinking about getting into open source, just looking at the docs and getting familiar with how things work. A lot of times, I'll dig in and find things like, " Oh, that could be phrased a little better," or there's a mistake or something. And it's great to just fork the repo, start making updates, push some changes up. And next thing you know, you're doing open source. It's a great entry point.

Brad Topol: And we really do a good job of that in the Kube SIG Docs community. When we used to go to live conferences, remember we all used to travel, Luke and Joe, and we'd see each other? And that's coming back soon, I hope. But at the live conferences, we'd do what were called doc sprints, and it was a great way for us to bring in people who wanted to be new contributors, but maybe they were a little intimidated trying to go straight to the Kube Development Community. We could actually, in these doc sprints, teach them how to contribute to the docs. And it uses basically the same processes of Git that if you want to eventually become a contributor to the software, you get to learn it. Plus, by learning the docs, you learn more about the project. And over the years, I've seen a lot of colleagues of mine that have been really successful attaining leadership roles in different communities. They typically start that way, " Let me go try the software. Let me go try the docs. Let me see if there's a problem," and then go add something to it. It's a great way to learn on your way to becoming a strong contributor in the actual open source project. I started doing that just recently, making some contributions to what's called the Operator SDK, which is a layer on top of Kubernetes that helps you, and helps ISVs and others, to be able to extend Kubernetes with new resources and make it seamlessly fit into Kubernetes. It's a great technology and I'm having a lot of fun with it. It's really taken off and I'm really enjoying that new area. So as I keep going, I keep moving on to new interesting areas and having a lot of fun.

Joe Sepi: That's great. I don't want to spend too much time on this bit, but I'm making a note, a mental note, to come back to you after the call to talk more about this stuff because I think, like I said, docs is a great way to get involved. I work in the Node.js space and the barrier to entry's a little high. It's a little technical to get involved and a little daunting. But getting folks into docs is a great way to go, so I'd love to talk to you more about that. And then even the localization work, we have work in Node to do internationalization, localization, and I'd be curious to talk to you more about the tooling that you're using and how that works. I'm familiar with the work that we do on the Node side and I'm curious how they compare and what we can learn from each other, so it'd be great to talk more about that offline.

Brad Topol: Sure. Absolutely.

Joe Sepi: Don't want to get in the weeds.

Luke Schantz: I think maybe we could talk about the right now, and then do a retrospective and move back. Because I think there's a lot of value in looking at how we got where we are today with Kubernetes, coming from a world of Linux and OpenStack to today, so I want to go through that journey. But I also feel maybe let's not bury the lede and talk about what's really important to know today. So maybe a good place to start there is I know you have a new book coming out dealing with Kubernetes and OpenShift, so maybe give us this sort of" What do we need to know right now to be valuable or to help us along?" and then we can go on a journey that says, " Here's how we got there."

Brad Topol: I don't have the book yet, but it's going to be published in June. A few chapters are available online as a pre-release. But basically, I had the opportunity to work with two really sharp co-authors. One is Jake Kitchener. He is a world expert at running Kubernetes and OpenShift in production at extremely large scale. He's the chief architect for IBM Public Cloud, so somebody who knows everything you need to know about running OpenShift and Kubernetes in production, worrying about scale, worrying about keeping failures to a bare minimum because it's an enterprise cloud, it needs to be up and running and have all the wonderful nines of always running. Getting to work with Jake and having all his expertise about running large clusters, 10,000 clusters at scale, that's what you're looking at when you're looking at IBM Public Cloud, so a great co-author. The other co-author is Michael Elder. Michael Elder is another world expert in multi-cloud, multi-cluster Kubernetes and OpenShift. And it wasn't my first time working with them. We actually wrote a previous book on Kubernetes in the Enterprise. That book was really successful. It was an O'Reilly book. That was really fun because I'd already done a previous O'Reilly book on OpenStack. So I was a person that knew how to write a book and knew all the questions that were going to come up, all the publisher detail kind of questions. And then I was able to pull in those two great experts and we did a great Kubernetes in the Enterprise book. And then we moved on to this new book which will come out in June. And it was fun getting the band back together. They now both had one book under their belt so they knew what they were getting into. Writing a book for me is really interesting because it's a lot of work you do at night. You lose a lot of free time. And I like to joke, it can cause a lot of stress. So your family looks at you like... 
Your family, you tell them you're writing another book, they're like, " Really?" They don't see the joy. They're like, " Oh, you're never around. You're not here to help." We'll see if three is the limit for me. I've survived three books. My marriage has survived three books, so that's a good sign. Books are a lot of stress, but they're very rewarding once you get them done. And so this one will be out in June. And we've got some pre-reviews that have happened. We've had some really nice people review the book. There's a guy that goes by Pop, a community leader at [inaudible], and he was one of our reviewers and gave some really nice feedback. So huge thanks to Pop for doing that for us. So that's the process we're in, pretty much written, doing the reviews and getting the proofreading done. It's a much bigger book than our previous books. This one's a biggie, and we'll see how it goes.

Luke Schantz: So just a reminder to our audience listening, if you have questions along the way, please feel free to drop them into the chat or the comments on the platform that you are watching on and we will get to those and filter them up to Brad. So let's get to the content of the book.

Brad Topol: Absolutely. The book covers a variety of topics. There's going to be a couple chapters at the beginning that are going to give you a review of Kubernetes and OpenShift, just in case you haven't seen that somewhere. You've got the overview. And then chapter two is getting started; it shows you how to get started in both environments in a variety of ways. And then we start moving on to other topics that are really interesting. The big one that you'll see after that is single-cluster high availability. So this is something that Jake Kitchener's an expert at. So before you even worry about multi-cluster high availability, get your single-cluster high availability up to where it needs to be. Then we start running into topics like continuous delivery across clusters. So how do you do continuous delivery? What are all the tools that are out there for Kubernetes and OpenShift? We cover a variety of the most popular tools for continuous delivery. And then towards the end of the book we cover a lot of topics about multi-cloud, multi-cluster and how you can take advantage of that. So this is more of an advanced-level book. It's going to touch a couple of different areas, but it really is going to give you some expertise about how to run in production, what types of failures can occur, how you mitigate them, both single-cluster failures and multi-cluster failures, and then how to run in a multi-cluster environment. So the three of us are really proud of the book. It covers a lot of territory. Early reviews have been pretty good. We're excited to see it get published in June.

Joe Sepi: That sounds like some really advanced topics. Do your other books lead up to them or do you have recommendations on how to get to where somebody could come into your book and really dig in?

Brad Topol: Yeah, our first book, Kubernetes in the Enterprise, covers more of the basics, and you can get that one from the O'Reilly site as well, the online learning site. So feel free to check that one out. But yeah, with this one, you can get started. We tried to do enough intro material in the early chapters so that if you wanted to be aggressive and just go look at it, you can see how far you get. But then as you get into those advanced chapters, that's where you're going to really get some deep expertise in some topics that aren't covered typically: the high availability, the multi-cluster, a large number of continuous delivery tools, and continuous delivery across multiple clusters. So it is a little more of an advanced book. So if you've read some of the basic books, this is a good second book that I think people will enjoy.

Luke Schantz: I'd like to contextualize this in something we were actually talking about yesterday, Brad, about this idea of why a lot of this is maybe important to even smaller companies that aren't at enterprise scale yet if they're looking to either sell themselves to an enterprise or become a vendor that the enterprise can use. So I think in our lingo, we call this ISV. And I think we're so used to that term, I feel like we should maybe unpack that a little bit for our audience just to know... Because it's a bit of a Venn diagram that could encompass a number of just medium- sized companies or startups. And then maybe... So what is an ISV, and then we could get to how this is of value to them.

Brad Topol: There's different types of ISVs. The ones we're really focused on are what I'd call our build partner ISVs. These are ISVs that are looking to, say, partner with IBM, build on top of our cloud, and have an offering on top of our cloud. And the advantage to the ISVs is that helps to give them a bigger market. And really what we're focused on is helping them to write what's called an operator. And once they build an operator for their software, we can then move them to what's called Red Hat Marketplace. So that's a great marketplace, and getting them running in the marketplace is what my team does: we help provide assistance and help them reduce the friction that they may encounter, because these ISVs are all at different places in their journey, but the goal is we'll help them get to Red Hat Marketplace. That means they're running on OpenShift. And the advantage to the ISV is once they're in that marketplace running on OpenShift, people are then able to quickly and easily purchase that software from the ISV and run it, not just on IBM Cloud, but anywhere OpenShift runs, so all the different public clouds that OpenShift runs on. Once they get their software to that point, that's a much greater market. And then obviously there's opportunities that if they're an IBM partner, they get the full strength of IBM to help market their stuff as well. So it is a great path for the ISVs, both small ones and big ones. Obviously there's big ones that we have and we know what the big ones are. But it's a great opportunity for the smaller and medium ones to have a way to really increase their reach and their revenue. And so they're motivated to do it. And the challenge is how do we reduce the friction? How well do they know Kubernetes? Where are they at in their Kubernetes journey? And do they already have their stuff in a cloud native form? Are they already running on OpenShift or not? 
So the nice thing about OpenShift is it provides guardrails to make sure that what you are creating is very secure and safe. So if you're a bank or an insurance company, the one thing you have to be careful with in just vanilla Kubernetes is, yeah, it's real, real easy in some ways, but you can do a lot of things that you really shouldn't do. Your containers are privileged, your containers run as root. And OpenShift is that barrier to getting on the turnpike where we check things. Just like you want to check your car before you get on the highway and make sure the tires aren't deflated, the transmission's not slipping, and there's oil in the car before you try and get out on that highway, let's make sure your container-based application is really ready to run in a secure environment. And so it's going to keep you, out of the box, from doing things like, " Oh, I always ran as root as a privileged container." It's going to stop you and say, " No, you can't do that." Out of the box, it's going to make you make some changes to be more secure. So it's pay now or pay later. You could roll your own Kubernetes environment, think you've tweaked the security the way you think it's right, not enforce the container security mechanisms that are best practices, and then hope for the best, because who knows who's going to show up, easily be able to hack what you do, and cause a whole bunch of embarrassment later. So once folks have learned how to build things as containers and run them on Kubernetes, working with them to take a hard look at building those containers so they run securely is really a key thing that we're going to help our ISVs with. Because once you can run it on OpenShift, you've got a couple of different advantages: you've got the advantage that OpenShift runs everywhere, it's the same environment, so you get that benefit. But you also get that extra security and you can really feel comfortable that you're running in a very secure environment. 
And then of course you get the other benefits of OpenShift. That platform is really great with what we call day two operations. So when it's time to upgrade, it has built-in upgrades for both the underlying operating system and for OpenShift itself. If the cluster itself needs to be increased in size and new nodes need to be added, that's all built into OpenShift. So that's where you really see that benefit for folks on OpenShift. Beyond some things for getting developers up and running faster, that day two operations capability, being able to handle those seamless upgrades of both the operating system and the platform and being able to increase the size of the cluster, all these built-in features are things people will really benefit from when it's time to get serious and worry about day two operations.

Joe Sepi: That's great. Something I wanted to dig in with you for a moment if you don't mind. And I see it's a question from YouTube as well. Which by the way, we're streaming to YouTube, Twitch. Anywhere else? Is that all the places?

Luke Schantz: Facebook and Periscope.

Brad Topol: Wow. Cool. Nice. Welcome everyone.

Joe Sepi: I'd love to dig in a little bit more into operators. I hear about it a lot, but I'm not really familiar with how they work and what benefits they provide. And maybe that's even a segue into the marketplace because that's something I think is also interesting.

Brad Topol: Absolutely. So if you look at Kubernetes out of the box, it's really good at scaling your application and making what we call replica sets with multiple copies of the application. And if a few of them go down, it can start up new ones. It can also roll out upgrades of the application. So if you want to roll out a new version, you can roll out slowly or quickly. Out of the box, Kubernetes is really good for what we call stateless applications. But when you get to stateful applications, and think databases, then the vanilla artifacts of Kubernetes, like deployments and replica sets, aren't so great. Kubernetes tried to address this with something called stateful sets, but they aren't quite flexible enough. So it needed an extensibility model so that users could create their own resources and then manage those resources: worry about the lifecycle management, worry about the upgrades, worry about making backups, whatever a human operator might do for software. They wanted a way to make that available and to do it in a way that it was extensible, so that when you created your own stateful resource, if you did it the right way, it felt like it was just part of standard Kubernetes and you could interact with it using kubectl apply just like you did with standard deployments and replica sets. And that's what was really cool about operators. They give you the ability to build a custom resource, and your custom resource can follow the standard Kubernetes model, which is that there is an observed state and a desired state, and the operator will have its own controller and control loop, just like all the standard Kubernetes pieces have. All of the standard Kubernetes controllers, whether it's a deployment or replica set or pod, have a controller with a control loop that looks at the observed state and says, " Is that what I want? 
And if it's not, what do I need to do to get to the desired state?" These were the best practices built a couple years ago in Kubernetes. And so you can actually build essentially your own custom resource in Kubernetes, and you can add all the bells and whistles you need. So if you've got databases and you need to worry about lifecycle management stuff, you can build it all into your own custom controller. And when it's deployed, it fits seamlessly into the rest of the Kubernetes platform. And why it's called operators is a lot of times what the piece of code is doing is what a human operator would normally do, and that's why they call them operators. Because essentially, the control loop is automating what you would've normally had a human operator do. So if you had to worry about doing database maintenance or upgrades, and before you had a human run some script to do it, now the controller can do it. And what's great about the operators is that there's a toolkit called the Operator SDK, and it allows you to build three types of operators. If you already had Helm Charts laying around, it's got a plugin that allows you to wrap the Helm Charts. If you were using Ansible, it's got a plugin that allows you to reuse your Ansible. And then if you don't have either of those or you need to do more advanced capabilities, it allows you to build a custom operator controller in Go. And that's very powerful. Now that's a little bit more work. So if you build the operator from Helm or Ansible, that's much more straightforward. The challenge with the Go operators is there's a little more you need to learn. You need to know some of the basic Kubernetes and you need to know some of the APIs, so that if inside your operator you need to create a deployment or a service, you need to know those v1 APIs to do that. And so that's where the friction occurs and that's where we need to reduce the friction. 
Fortunately, I've got a great team of content developers that are building operator courses and operator workshops. Basically, a lot of great information about operators is out there, but it's out there in, say, 20, 25 different sources and you've got to go pull it all together. So for example, operators build on top of another technology from Kubernetes called Kubebuilder. And if you haven't looked at Kubebuilder and don't know what Kubebuilder gives you, it's a little hard to understand how it works in operators because it reuses some of those technologies. It's got some annotation technologies. So what we're doing is building content that pulls all that information from those, say, 20, 25 different sources to help you have it all in one-stop shopping so that you see some good patterns, repeatable patterns of how you could build those Go operators. So my whole focus is how do I reduce the friction for somebody who's, say, new to Kubernetes? You're an ISV and you've got some folks that have played around with Kubernetes, but maybe they're not somebody that's worked on the internals of Kubernetes for a couple of years, they're just new to using it. So maybe they don't know those APIs, maybe they're not real familiar with Kubebuilder. How can I deliver content and Coursera courses that will help ISVs and others to get their operators up and running? And if I can help them get their operators up and running, I've also got a team that's going to help them get it running on Red Hat Marketplace and get their operators certified. And now we're talking Benjamins. Now we're talking money. Get it to Red Hat Marketplace and instead of Joe having one guitar in the background, now he's going to have six or seven guitars in the background.
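
The observe-and-converge loop Brad describes is the heart of every operator. As a rough sketch only, with made-up resource fields and action names (this is not any real Kubernetes API, and a real operator would build this with the Operator SDK's controller machinery), the pattern looks something like this in Python:

```python
# Illustrative sketch of a controller's reconcile loop: compare the
# observed state of a hypothetical database resource to its desired
# state and emit the actions needed to converge them. The field and
# action names are invented for the example.

def reconcile(observed: dict, desired: dict) -> list[str]:
    actions = []
    # A version mismatch means the controller should perform an upgrade,
    # the kind of task a human operator would otherwise script by hand.
    if observed["version"] != desired["version"]:
        actions.append(f"upgrade {observed['version']} -> {desired['version']}")
    # A replica-count mismatch means the controller should scale.
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        actions.append(f"scale up by {diff}")
    elif diff < 0:
        actions.append(f"scale down by {-diff}")
    return actions

if __name__ == "__main__":
    observed = {"replicas": 2, "version": "1.0"}  # what the cluster reports
    desired = {"replicas": 3, "version": "1.1"}   # what the spec asks for
    print(reconcile(observed, desired))
```

A real operator wraps logic like this in a controller that watches its custom resource and re-runs the loop whenever the observed state drifts from the desired state.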

Joe Sepi: Those ones are out of frame. So if I understand you correctly, the functionality exists in [inaudible] Kubernetes, but if your application gets more mature, more complex, then there are operators that are already built for Helm or Ansible or whatnot, so that would be the next step. But then if you need even more control over the lifecycle of your pods and such, then you would write your own operators in Go? Is that how [inaudible]?

Brad Topol: That's right. If you've got to do some really advanced lifecycle management or if you want to do some analytics and insights where... When they talk about operators, there's five levels of maturity. Level one is just install. And a Helm Chart can typically do that. Level two is upgrade. A Helm Chart can typically do an upgrade. So level one, level two, if you've got Helm Charts, they can typically handle that and you can use the Helm-based plugin for the operator to do that. The advanced levels three, four, and five are more like you're doing insights, you're maybe doing some backups, a lot of advanced features. And to get to levels three through five, you can use either Ansible or you can use Go. So both Ansible and Go will get you those five levels of maturity. And that's where things will get interesting. And like I said, if you've already got the Ansible scripts to do it, that's fine, but Go is another good way too. And whatever the ISVs want, we're going to help them to do that. That's what I do. I'm called in, we meet with the ISVs after they get excited about the market opportunity. Now let's talk to the technical team and let's see where you're at. And so where you're at is do you already have your stuff as containers? Let's hope for that. If not, we've got to worry about getting you to containers. And then it's well, do you have Helm Charts? Do you want to use Helm Charts or do you want to reuse your Helm Charts? Well, let's try and do that. And then it's well, have you tried to run securely before? Did you worry about making sure you were able to run in an environment like OpenShift? And my job is to gauge those ISVs, see where they're at in the process, and then pull in the right resources to, again, reduce the friction of getting them to where they want to be, which is a high-quality operator that's Red Hat Marketplace certified, and then can get them where they want to be, secure and then ready for that whole OpenShift market across all the platforms.
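
The capability-level mapping Brad lays out (Helm Charts typically reach levels one and two; Ansible or Go are needed for three through five) can be summarized in a small lookup. The level names here follow the commonly published operator capability model; treat this as an illustrative aide-memoire, not official tooling:

```python
# The five operator capability levels and which Operator SDK plugin
# types can typically reach each one, per the discussion above.
CAPABILITY_LEVELS = {
    1: ("Basic Install", {"helm", "ansible", "go"}),
    2: ("Seamless Upgrades", {"helm", "ansible", "go"}),
    3: ("Full Lifecycle", {"ansible", "go"}),
    4: ("Deep Insights", {"ansible", "go"}),
    5: ("Auto Pilot", {"ansible", "go"}),
}

def plugins_for_level(level: int) -> set[str]:
    """Return the plugin types that can reach the given maturity level."""
    return CAPABILITY_LEVELS[level][1]
```

So an ISV with existing Helm Charts can start at levels one and two with the Helm plugin, then move to Ansible or Go when they need backups, insights, or full lifecycle automation.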

Joe Sepi: So let me just ask you one more question on this kind of thread that we're working through. Can you just give me an example of this path that we're going? And when you talk about the OpenShift marketplace, what does that look like? What's an example of something that they would put on the marketplace in this path?

Brad Topol: Sure. In fact, we even have IBM software on the marketplace. Cloud Pak for Data is an example that is on the marketplace. And if you want to use all the operations of Cloud Pak for Data, there's some database stuff in there and analytics stuff that's available on the marketplace. But if you just go Google Red Hat Marketplace, you'll see there's a large amount of software there. And again, the beauty of it is that somebody who wants to try your software now has an easy place to try it, and they can easily pick where they want to try it. Do they want to try it on IBM Cloud? Do they want to try it on AWS? Do they want to try it on Azure? It's going to make it real easy for them to do all that: purchase and try in a one-stop shopping location.

Joe Sepi: Cool.

Luke Schantz: This really resonates with me personally. And I have a little story here which maybe some of our listeners may relate to. I was in these exact same shoes. I was a startup ISV, and we got a pilot with one of the biggest media companies in the country, maybe the world. And it went pretty well. I'd like to say we failed forward. It led to a lot of other good things, like me getting recruited by IBM. But I must say, we spent so much time and energy on our traditional infrastructure, solving those problems and keeping those plates spinning, when we weren't focusing on the real value differentiator that we were trying to deliver. And I know it seems, as we talk about these higher-level and more advanced topics... I think I'd like to bring it back to that question of why we are doing this. And it's: we're doing this so we can create the most value for our customers and use our time the best. And we don't have to do it now, but I feel like this also ties into that history of how we got here. And I remember what it was like. Seven years ago, I was still doing training on traditional infrastructure and load balancing and highly available pairs, and so much has happened.

Brad Topol: Yeah. And we should go down that history path real fast, because as we talked about the more advanced stuff, we glossed over the basics. So let's go back in time to, say, 2012. Where were we at? People were delivering software as virtual machines. They'd run it on cloud platforms that supported VMs. And that was okay. And I believe it was 2013 at PyCon, Solomon Hykes came and gave a demo and said, "Hey, most of you all are deploying your stuff as virtual machines, which kind of works and it's faster than just on bare infrastructure. But let me show you how I can take a process and use some advanced features in Linux to isolate the process. And now the process is isolated and I can give it its own little file system and its own little piece of networking. And now when I run my application and I provision it, it starts up just as a process, not as a whole virtual machine. And look how fast my stuff starts up. Look how fast I can snapshot my stuff." And we all started playing with this technology, and what it ended up being called was containers. And Solomon started the company called Docker. So here he had this wonderful new advancement, because containers were like VMs, you could snapshot them, you could provision them, but they were a lot faster and a lot smaller and you could get a lot more on a server. And so that was great. And then there were companies like Google who said, "Oh, we've been playing with containers for a large number of years and we know how to orchestrate and provision them. We've got software called Kubernetes. And with Kubernetes, what's great is we can get multiple copies of your application up and running. We're going to have built-in load balancers. We're going to have built-in autoscalers." A lot of the previous cloud-based virtual machine infrastructures didn't have all that built in. You had to roll your own load balancing, roll your own autoscaling. And that stuff is tough.
And what helped Kubernetes is we started a foundation, the Cloud Native Computing Foundation. So now Kubernetes was for everybody, with multi-vendor open source, a level playing field, and solid governance. Now we had enterprises willing to give Kubernetes a try and see the benefits. And so they could take their application, and it would use a declarative model. I always use the same joke when I give my Kubernetes presentation. I say, "Anybody here worked directly for a vice president?" And a few hands go up. And I say, "When you work directly for a vice president, here's your job. Here's what happens. The vice president says, 'Hey Brad, here's what I need. Go make it happen.' And they just do this wavy hand: 'Go make it happen.'" And that's what happens with Kubernetes. It's a declarative model. You get to declare what you want: I need eight copies of my application running, or, I need eight copies of my application running, but if the CPU gets to 80% utilization or more, scale that up to 12. And so what I like to say is, when you use Kubernetes, you get to be the VP, because you get to say, "Hey, Kubernetes, here's what I need. Go make this happen." And Kubernetes takes care of it. It worries about the copies of your application running across the cluster. If they crash, it recognizes that some of them crashed. So if you say I need eight of them running and three crash, it will realize that and say: there are only five, he needs eight, let me spin up three more. Which is a lot better than me getting paged in the middle of the night with somebody saying, "Hey, the app's running really slow." And that was the beauty of Kubernetes: built into it from the ground up is great support for recognizing when your applications are failing, the ability to restart new copies of the application, and the ability to autoscale in a variety of different ways.
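As a rough sketch (not from the episode, and with all names and images as placeholders), the "eight copies, scale up to 12 at 80% CPU" request Brad describes maps to a Kubernetes Deployment plus a HorizontalPodAutoscaler:

```yaml
# Illustrative manifests for the declarative model described above.
# "my-app" and the image reference are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 8                  # "I need eight copies of my application running"
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0   # placeholder image
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 8
  maxReplicas: 12              # "if CPU gets to 80% or more, scale up to 12"
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

You declare the desired state; the controllers reconcile toward it, restarting crashed copies and adjusting the replica count as load changes.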
And since your application could now be crashing on one node and being restarted on a different node, a standard load balancer like Luke used will not work, because load balancers aren't that smart. Kubernetes built in a capability called the Service resource, which is a built-in proxy load balancer that worries about finding all the copies of your application no matter where they've moved. And so you put all that together: the combination of being very lightweight, being container-based as opposed to VM-based, and then really great technology for provisioning your application, multiple copies, being able to upgrade, you could easily say, "Hey, upgrade it really fast, blue-green deployment," or "upgrade it really slowly," and having multiple copies is what has really sold people like Luke on, "Hey, I don't have to worry about all that stuff anymore. I can worry about my application, build my application, build my value, and reuse all that stuff that we were all writing ourselves over and over again, my own load balancing, my own auto scaling." You don't want to write all that stuff. And nobody else can understand what you did unless they go read your scripts. In the declarative model, it's real obvious. Luke says he wants eight copies, and maybe go up to 15 if the CPU utilization gets high. That's what Luke wants; that's what Kubernetes is going to give him.
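A minimal sketch of the Service resource mentioned above, with illustrative names: because the Service selects pods by label rather than by IP or node, it keeps finding the application's copies wherever they get rescheduled:

```yaml
# Hypothetical Service for the "my-app" pods; all names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # matches the pods' labels, not their addresses
  ports:
  - port: 80           # port clients connect to on the Service
    targetPort: 8080   # port the application container listens on
```

Clients talk to the stable Service name; the proxying behind it tracks the moving pods.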

Joe Sepi: That's great. I like the VP analogy too. And I think we talk a lot about DevOps in this sort of world, but I just wanted to highlight, and this is probably preaching to the choir as well, but from a developer standpoint, it's been amazing. I used to work at the New York Times; I did production and development work before joining IBM. And I remember at the time, everybody had PHP on their machine, essentially a LAMP stack, to do development on nytimes.com, and you had to keep track of who had what version, and things would break and you're like, "I don't know, it works on my machine," that kind of stuff. And I remember my colleagues starting to dig into Chef and Puppet and trying to get us all on a standardized containerized environment. And it was heavyweight at the time, but I got to see that progression to lighter-weight solutions with Docker and Kubernetes, where we are these days. So just to have a consistent environment that you develop on, that everybody's on, that you move to staging and testing and QA and all that stuff, from a developer's standpoint, it's really been a great improvement.

Brad Topol: Yeah, and let's dig into that a little bit. That's a great topic, and it's one that we cover in the book with continuous delivery. If you look at the model with containers, one, you've got, say, a Dockerfile that describes how to build your application. And then you have all the deployment files that are YAML-based and declarative in, say, Kubernetes. So now all your information about your configuration is not in a procedural script; it's in declarative files. So now you can put all of that in source control. You can keep that, and you can start thinking about everything being in source control, so if there's a change to those things, it's in the source code. And now, from the repository, you're able to do your builds. And now you can start thinking about doing your builds and deploys, and you can really start thinking continuous delivery, because you made that switch to everything being either in a Dockerfile or in a declarative YAML deployment file. This is way better than trying to manage a script. So you'll see it in the book; we go into a lot of detail about how wonderful it is to be able to do continuous delivery. And another advantage of containers: you're packaging up everything your environment needs. So we all know the classic one, and I'm sure Joe's used this a hundred times: "Worked on my laptop." I'm sure he said that at the New York Times a hundred times. With Kubernetes and container technology, the environment that you have on your laptop is really darn close now to what production is, because you're carrying everything with you in the container. So there are a lot fewer surprises when you move from development to staging to production. It's really wonderful.

Joe Sepi: Yeah, it really is. And I've got some funny stories from the Times. And I have a lot of friends that are still there, I love them. But the first piece of code I worked on, I finished it, I tested it, and I asked my manager, " Okay, how do I deploy?" And he said, " You just FTP into the server and then you just copy that file." And I'm like, "Oh my God, I'm so scared."

Brad Topol: Because if you don't keep it all in source control, you've got five different people with what they think are five different sources of truth. And so I push mine and mine's fine, and Luke's going to push his and he's already tweaked something or forgot one of the tweaks that I did. The worst thing is people go... And we talk about continuous delivery. You never want to go onto the production machine and tweak things there. You want to make the change in the configuration and then redeploy nice and clean. Because what's going to happen? Say Luke, some four months ago, made some wonderful change that fixed everything, but he didn't put it back in source control. It was late at night and he wanted to go out drinking with his buddies. So he just left it on the production machine. And now guess what? Now we redeploy, and it's gone. It's gone. And now what do we do? So yeah, the books talk about it really well, but following those continuous delivery principles and the DevOps principles will get you really far in life.

Joe Sepi: And just to wrap up that point, too, by the time I left, there was a modern situation there and everything was running pretty smoothly, but I was shocked in the beginning. It was amazing.

Brad Topol: Fantastic.

Luke Schantz: Brad, it's amazing that you said that, because I think that actually happened, that scenario of me going out and forgetting the thing, that definitely happened in my startup days. And I remember back in 2013 exactly, my lead developer coming to me and saying, "Containerization, this is going to be what we're going to do." But looking at it then, it was like, "How?" So we kept doing it the way we were doing it. And I had one other thing here. I have a sci-fi analogy, if there are any sci-fi fans out there, about how I like to think about traditional infrastructure, Kubernetes, and OpenShift's opinionated version. I think of the old way of doing things, and even vanilla Kubernetes, almost like the way Doctor Who navigates his ship. He's just running around throwing dials, and he's a genius, so he knows exactly what he's doing and he does everything perfectly, or he doesn't, but it works out in 45 minutes. But I like to think of OpenShift more like the way Tony Stark does it with Jarvis: that declarative model, where you ask for the thing, you're like, "Hey, run this, do this," and you get the thing you want back without having to be a super genius running around. And I think the whole thing with Doctor Who, too, was he was supposed to have a crew, but he was the last of his kind. But anyway, that's my little analogy.

Brad Topol: Yeah. And to fill in your analogy, what happens with vanilla Kubernetes is, if I want to run something, I've got to be an expert at a whole bunch of things. I need to know how to build my container image, so I've got to go read the Docker commands, understand them, and be able to build my image. And I've got to figure out where I can push it to a registry, so I've got to make sure I have access to Docker Hub or inaudible or whatever. So I've got to figure out that piece as well. And so there's a whole bunch of steps. And then the security: I've got to go learn all the RBAC in Kubernetes, and that RBAC is not easy to learn; there's a lot of complexity there. If I compare that to OpenShift, OpenShift has a model called Source to Image. So literally it's going to be smart enough to build the image for me. It's going to have the registry already set up, so it's going to be able to deploy my image to the registry. It's got a capability called Image Streams, so it's keeping track of whether the images have changed over time and whether things need to be redeployed. And then on the security side, it's got what's called security context constraints, where they give you these profiles and I just need to know what profile I'll fit into. And each profile maps to a whole bunch of RBAC settings. So they're trying to do a lot to accelerate getting, say, even Java developers to be cloud native and run in a cloud native computing environment. Because a lot of those steps, yeah, if you've worked in Kubernetes for the past four years, you can do all those steps and you're an expert at all of them. But what about the person who just showed up? Now they've got to go learn all this Docker stuff. Now they've got to go learn how to publish the image. Whereas in OpenShift, I've got all these universal base images ready to roll for whatever I need. Node, for example, like Joe would love, is ready to roll.
I pick the base image, I've got my code, I just do a push to the Git repository. It knows, through Source to Image, how to build the image and get it deployed for me, and then it worries about security for me and doesn't let me run with scissors. OpenShift is not going to let me run with scissors. The first time I used OpenShift, I tried to install a container that was running privileged, running as root, which would've been a really bad idea. And it said, "Nah, you're not going to do that." And so was that a little bit of an annoyance? Yes, but I can fix that and keep my job, as opposed to pushing something out into production that's not secure, and now the whole world has everybody's credit card numbers or something.
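A hedged sketch of what that Source to Image flow might look like as an OpenShift BuildConfig; the repository URL, image names, and tags here are illustrative assumptions, not details from the episode:

```yaml
# Hypothetical BuildConfig: point at a Git repo and a base image, and
# OpenShift builds and pushes the container image for you (S2I).
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-node-app
spec:
  source:
    git:
      uri: https://github.com/example/my-node-app.git   # placeholder repo
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: nodejs:latest           # assumed Node builder/base image tag
  output:
    to:
      kind: ImageStreamTag
      name: my-node-app:latest        # pushed to the built-in registry
```

The developer just pushes code; the build strategy, registry push, and image stream tracking are handled by the platform.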

Joe Sepi: Yeah, that's a great example of a guardrail kind of thing. That's what you want.

Brad Topol: Yeah, absolutely.

Luke Schantz: And you had mentioned content. I believe there's a conference that you put together, or that you were involved with, that covers a lot of the open source Kube practices. And then you also have sort of a dream team you're working with, actually doing this consulting and helping people get this done, right?

Brad Topol: Yeah. So one of the things we did last year inside of IBM is we embarked on training folks internally to become stronger contributors to open source projects and open source communities. If you want to teach somebody to become an open source contributor, you need to teach them Git, you need to teach them some Go, say, for Kubernetes, you need to teach them all the pieces of Kubernetes, and you need to teach them how to behave in the open source environment. So we started doing the internal training. And the original plan was we were going to travel to all the labs and start doing this in person. And then COVID hit, and that plan went out the window. So what we thought about was, how are we going to train all these internal IBMers? We said, "You know what? We're going to do digital courses, we're going to make things available, and we'll do the courses in a way that there will be some live interaction." It was a really neat model. And so we had a whole bunch of courses that were somehow related to Kubernetes, even the basic stuff. Basic stuff like how to do Git, how to set up your Kubernetes development environment, all the basics. And then we moved on to everything you need to know about Kubernetes. And then we did deep-dive topics on things like scheduling, networking, storage, operators. So we had all the content. And we came up with a really cool model, because we had to keep it to a digital course that you could, say, do in about two days. But here's the problem: nobody can give you two full days. So what do you do? You have to prerecord the content. We had all the content available. But that's not enough. We then actually re-recorded the content in sessions with live interaction: someone actually teaches the digital course, and everything, including the people asking questions, is recorded. So the beauty of it is you get the best of both worlds.
People can show up to the conference, and if they missed a piece because they had to step out for meetings, they can catch up on their own and watch the earlier presentations. But then for the later presentations, they could actually join live, or see the other people who joined live and see what questions they asked. And we were smart enough to run it in sessions: one day, a week break, one day, a week break, one day. And that gave people a week to catch up and really learn the material and be ready for the second and third phases. And when we did all this, we said, "You know what? Hey, we don't have to just keep this internal." We actually invited some customers and invited a few of them to be involved in the sessions. So they got to participate and got some free open source training. And then we said, "We can be even more generous," and we made it all available to the world. So if you go there, all of that great training is available for free. And it covers three days' worth of material. You'll learn everything from the basics of Git, to some Go, to setting up a Kubernetes developer environment, to all the advanced stuff, and learning how to become a committer, a maintainer. What do you need to be doing to be a leader in an open source community? So some of those soft skills, some of those review skills. It's not just learning the code, but learning how to do proper reviews and learning how to behave in the community, where the golden rule is: don't be a jerk. Be nice to people, be helpful to people. Do the things we call chopping wood and carrying water. Go do the non-glamorous stuff. The things that you do in a community, the non-glamorous stuff, it's not necessarily writing all the coolest code that gets the credit, but being willing to do bug triage, being willing to be the person who says, "Oh, these docs are terrible, I'll go fix them," all the little things.
And that's how I got the localization chair role for the documentation team. We have all these wonderful teams working on Kube documentation in 20 different languages, and we needed somebody to herd all the cats: they should all use common tooling, they should all use common best practices. "Hey, who can take ownership of that?" And I just one day said, "Me. I'll do it." And that's one of the things that you learn about in our training. In open source, people won't come tell you what to do. You have to be proactive and say, "I'm willing to do that." And that's a different model. If you work at a company on proprietary software, we're all used to having a technical lead who tells us what to do and what to go work on: "Oh, this is your job, this is your job." In open source, it's much more, "Hey, I'm willing to volunteer to go do these things." And sometimes poor people are just waiting for somebody to tell them what to do, and you've got to shake them and go, "No, that's not what happens." You need to go say, "I think I can do this, and I've done all the little things, so you all trust me to now do something a little bit bigger." So start small, build up that trust in the community, and then try to get the bigger pieces. Level up, if you will.

Luke Schantz: I love that advice, because that is such a great way, even for a developer's career, to make a name for themselves and get known throughout the community. And like you said, if you're working for a company on proprietary software, that's one thing, but working in open source, you really get to know the community, you get to know folks in the industry. We are approaching the top of the hour, so if you have any more questions, please put them in chat. But I believe we actually have a special guest coming in who has some questions and wants to chat with us. Here we go, we've got JJ.

Joe Sepi: Welcome, JJ.

JJ: How is everyone?

Joe Sepi: Great.

JJ: Did you know it's my birthday today?

Joe Sepi: Wow. Wow. That's not where I'd be on my birthday, but hey, welcome. Thank you for coming.

JJ: It's good stuff. But yes, hi, I'm JJ. I'm a developer advocate for the IBM Cloud. And I was listening to your conversation, and there's something you touched on that I wanted to expand on a little bit, if you don't mind. I've been recently working with this one company, and they came to me and they're like, "Hey, JJ, we're going cloud native. We took our inaudible, we turned it into a container and put it on Kubernetes. Aren't you proud?" And I was like, "Wait, what? So hold on." They had a scheduler built into the app. So the first question I had was, "Okay, so you have this Java app, it's a monolithic inaudible file. You have a scheduler built inside of it. You know Kubernetes is designed to do scheduling for you? So you have two schedulers now for your application." They're like, "Wait, what?" "And also, you made this as one pod and you're running on a three-node cluster. Why is only one machine always pegged at a hundred percent CPU and nothing happening on the other ones? It's because your pod is only sitting on that machine." "Wait, it doesn't do that for us?" "No." Okay, let's talk about microservices. "Oh, does that just mean I put more containers in with more of the inaudible file?" I'm like, "Okay, sure, you could. But there's a little bit more complexity to that." They're like, "Oh. So wait, are you telling me I'm not cloud native?" "I'm not saying no. I'm just saying you've got a longer journey to go. You've got to really start thinking about, the term is a strangler pattern, where you start pulling off portions of your application into their own little services, so you can let the scheduler of Kubernetes, or OpenShift for that matter, take care of that for you." And it became a really interesting conversation, because I saw the light bulbs start turning on: "Oh, so I don't have to worry about this anymore. Oh, I don't have to worry about that anymore. Oh wait, hold on. 
You're telling me now, instead of having this monolithic file that I've got to teach everyone how to use, we could write a little small app in Go, with these new developers learning how to use Go or Node or whatever. They can use the language they want to. And as long as they give me that container and there's a REST endpoint we can talk to, passing a inaudible blob or whatever back and forth, they can own that portion." I'm like, "Absolutely." So you're seeing where we're going here. It was that moment that the architects were like, "Oh." So it's a paradigm switch. It's not that you just dump technology on the problem, or give people money and things happen. You look at it in a different way and start learning how to build it that way, and things become more successful. Does that make sense?

Brad Topol: Oh, absolutely. If people don't fully grasp the concept of microservices and the benefits, that's where you've got to have some tough talks: "Listen, if we can break these things into little pieces, now each one of these things can be maintained by itself, and that's a lot easier than trying to maintain that big thing." So splitting it up into little pieces and then helping them to understand. But the other classic is when you ask: okay, I've got a web server and I've got a database. "Well, I'll just put them all in the same pod." But wait a minute. The odds are you're going to want to scale the web server way higher than the database. So you're maybe only going to want three database replicas, but you're probably going to want 10 web servers or application servers. If you shove them all in one pod, they don't scale independently. Whereas if you break them up into separate pods, now they can run on different machines, and you can say, "I want to scale that web server way up, and I don't need to scale the database up so far." Asking them questions like that... That was one of my favorite questions when I do an intro to Kube: "Hey, you think you understand pods. Do you put the database and the web server in a single pod, or do you put them in separate pods?" And then you can feel whether they really are understanding the concepts or not. And yeah, the light bulb goes off when you can finally say, "Look, I can now upgrade this piece independent of that other piece, and scale this piece independently." And fortunately we've got great people like JJ who can have those conversations in a gentle fashion. The JJs of the world are able to do that in a non-threatening way and say, "If you could just follow me on this journey, and it's easy to follow me because I've got this purple beard, I can get you where you want to go." Hey, everybody's saying happy birthday to JJ.
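The web-server-versus-database question above could be sketched as two separate Deployments that scale independently; all names and images here are placeholders:

```yaml
# Two illustrative Deployments: the web tier scales high, the database
# tier stays small, because they live in separate pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 10                 # scale the web tier way up...
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0         # placeholder image
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 3                  # ...and the database tier modestly
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:1.0          # placeholder image
```

In practice a database would more likely be a StatefulSet with persistent storage; the Deployment here is just a sketch of the independent-scaling point.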

JJ: Thank you.

Brad Topol: Can you believe he shared his birthday with us? He just turned 23.

JJ: 24. Thank you.

Brad Topol: 24. 24. All right, cool. What else, JJ?

JJ: Yeah, I just wanted to make sure I brought this to the table. This conversation's been amazing. It really has. And thank you for allowing me to be a part of it.

Joe Sepi: Yeah, thank you for joining. It's great to have you. I saw something similar when I was at Adobe. I was on the inaudible team, and we were really pretty modern in how we were trying to approach inaudible and everything. We had this whole queuing system for Git, and we were pushing out 80 releases a day on average. It was amazing and just felt really empowering. And we got elevated within Adobe as the team that was really doing this right. And one of the first teams that came to us had this Java monolith, and they sent us a script, literally a text file, of all the steps that they had to go through to deploy their Java application. It took them over a day to do it. And it was just amazing to see this night-and-day DevOps situation. It's incredible.

Brad Topol: Fantastic.

Luke Schantz: So Scott, maybe we could bring the link to Brad's book back up on screen. I think we're getting to the end of our time here. If there are any other questions coming through, I'm happy to slip a few of those in. If not, please put them in the comments and we will monitor those and try to answer them after the fact. And does anybody have any closing thoughts? Anything we didn't discuss? I think there was so much more we could discuss. We'd love to have you back on another show to maybe talk more about the history and how we got here; I think there's a lot more we could unpack there. And JJ, actually, I would love to have you on a future show as a special guest as well. But yeah, any other closing thoughts from folks? Anything that we should include that we didn't?

Joe Sepi: No, I'm good. I think this was great. I really appreciate Brad and JJ as well for joining on. It's great.

Brad Topol: Well, thanks for having me. This is a lot of fun. It's always fun to sit and talk and talk about what's happening, old and new, is a lot of fun. I get nostalgic and then get excited about talking about the new stuff.

Joe Sepi: Yeah.

JJ: I just want to say one thing to the viewers. This stuff is hard. It is. If you get frustrated, that's why people like us exist. We are there to be helpful. And please reach out to us. We want you to succeed. So you will get frustrated. This stuff will sometimes not make sense to you. Please, use us as resources. Find us on Twitter, find us wherever you need to.

Brad Topol: Absolutely.

JJ: Yeah. And we are here to help you. That is our job.

Joe Sepi: Yep. My DMs are open. Feel free to hit me up.

Brad Topol: Look for our content. We're going to have a lot of great content, particularly on operators, coming out in the near future. We do our best. My superpower is to just forget a lot of things. And so when I go read the content that's created, I'm like, "But you didn't explain this and you didn't explain that, and how do you expect them to know that?" So we really do try to do a good job of filling things in and not assuming you've got four years of Kubernetes development underneath you as you try to learn what we're doing. Like JJ said, we're all available on Twitter. Please reach out. And go check out the developer conference content if you want to learn how to become a Kube contributor and learn a little bit about operators. And know that more content is on the way. We're trying to build it as quickly as we can.

Joe Sepi: Yeah, I know JJ's got to go. Thank you for joining us, JJ.

Luke Schantz: Happy birthday.

Joe Sepi: And speaking of content, I just want to make sure we mention developer.ibm.com. We've got so much stuff up there, and more and more coming every day.

Luke Schantz: Absolutely. I'd also like to mention, we just dropped a new podcast this week, the konveyor.io community podcast with James Labaki. That's up there on the IBM Developer site; you can check it out. We've also got DevOps talks from the IBM Z team. It's been a pleasure having you, Brad. Thank you for being here, and thanks to my co-host, Joe Sepi. Developer.ibm.com, lots of great stuff there. Thank you so much. Thanks for watching.

Brad Topol: Hey, thanks for hosting me. Appreciate it. Luke, Joe, appreciate it.

Joe Sepi: Yeah, thank you. Talk soon.

DESCRIPTION

What you are about to hear is a new podcast and live stream show entitled, “In the Open with Luke and Joe”. In this series my cohost Joe Sepi and I bring you conversations with community and technical leaders from the world of open source and enterprise tech. We do this live twice a month on Fridays at 12 noon Eastern time. You can catch us on a variety of streaming platforms, or here as a replay on your favorite podcast app. To find out all the details, go to ibm.biz/intheopen. There you will find our show schedule, an embedded live streaming video player, as well as embeds of past video episodes. Or you can link directly to the podcast page at ibm.biz/intheopenpodcast.

In this inaugural episode, Luke and Joe are pleased to bring you a conversation with Dr. Brad Topol. Brad is an IBM Distinguished Engineer, developer advocate and CTO for Open Technology. We’ll be discussing Kubernetes and OpenShift as well as his upcoming O’Reilly book Hybrid Cloud Apps with OpenShift and Kubernetes.

Brad has extensive experience in the open source space and we are excited to have him on the show.

Today's Guests


Brad Topol

Distinguished Engineer, IBM