Andrea Crawford | DevOps & IBM Garage Method | In the Open with Luke and Joe

This is a podcast episode titled, Andrea Crawford | DevOps & IBM Garage Method | In the Open with Luke and Joe. The summary for this episode is: In this episode, we are pleased to bring you a conversation with Distinguished Engineer Andrea Crawford. Andrea serves on the IBM Garage Cloud DE leadership team and is a DevOps expert.

Key Takeaways:
  • [02:53 - 04:13] How real AI and DevOps are related with cloud native
  • [04:28 - 10:02] Why Andrea considers DevOps as the umbrella that everything falls under
  • [14:44 - 19:11] How Andrea begins the approach to moving towards a more modern DevOps approach
  • [19:37 - 22:22] What is the Garage?
  • [22:47 - 26:31] The AI aspect of the modern DevOps Manifesto
  • [26:47 - 29:00] QUESTION: "Does your [Andrea's] idea of static analysis here go beyond continuous integration?"
  • [29:37 - 33:43] How to digest AI in DevOps predictive techniques
  • [38:59 - 41:18] The evolution of breaking silos

Connect with those in this episode:
Andrea Crawford (https://www.linkedin.com/in/andrea-martinez-crawford/), Distinguished Engineer (DevOps), IBM Garage, @acmThinks
Joe Sepi (https://www.linkedin.com/in/joesepi/), Open Source Engineer & Advocate, @joe_sepi
Luke Schantz (https://www.linkedin.com/in/lukeschantz/), Quantum Ambassador, @IBMDeveloper, @lukeschantz

Resources:
The Modern DevOps Manifesto: https://medium.com/@acmThinks/the-modern-devops-manifesto-f06c82964722
IBM Garage: https://www.ibm.com/garage
IBM Garage Methodology: https://www.ibm.com/garage/method
How real AI and DevOps are related with cloud native
01:20 MIN
Why Andrea considers DevOps as the umbrella that everything falls under
05:33 MIN
How Andrea begins the approach to moving towards a more modern DevOps approach
04:26 MIN
What is the Garage?
02:43 MIN
The AI aspect of the modern DevOps Manifesto
03:45 MIN
QUESTION: "Does your [Andrea's] idea of static analysis here go beyond continuous integration?"
02:12 MIN
How to digest AI in DevOps predictive techniques
04:02 MIN
The evolution of breaking silos
02:18 MIN

Luke: We are excited to bring you a conversation with Andrea Crawford. Andrea is a DevOps expert, distinguished engineer, and part of the IBM Garage team. Before we welcome our guest, let's say hello to our co-host Joe Seppi.

Joe Seppi: The weather is a little weird today and we're leaning into a weird rainy, cold weekend, eh, Luke?

Luke: I'm looking forward to not having to water my garden and nature's going to take care of it. And it's interesting, we've had these very hot days here in Connecticut but then very cool nights. A swing of sometimes 25 degrees in a day.

Joe Seppi: My son has started playing cricket, and we have a tournament this weekend, not far from you, but I'm trying to figure out if it's rain or shine, or what's the plan going to be. That's what I'm doing.

Luke: That is quite the extracurricular. That is going to get him into Oxford or something. Lacrosse, water polo.

Joe Seppi: Cricket, scholarship.

Luke: The most popular game in the world, I think I heard it is.

Joe Seppi: It's pretty fun, he's really enjoying it.

Luke: Anyway, we digress. Let's welcome our guest. Hello, Andrea.

Joe Seppi: Hello.

Andrea Crawford: Hello. Coming to you live from sunny and rainy Florida because that's the way we do weather in Florida, sunny and rainy.

Joe Seppi: The rain is intense in Florida. We were lost once trying to get to the airport and all of a sudden it just started pouring like crazy. So not only were we lost, but we couldn't even see five feet in front of the car. I won't digress too much here, but I put the airport into my phone maps and I didn't realize there were multiple airports around, I think it was Orlando, and we went to the wrong airport. It was ridiculous. Anyway.

Andrea Crawford: Nice. Florida, very confusing. We're sunny and rainy all at the same time.

Joe Seppi: It's amazing. Thank you for joining us.

Luke: Before we dig into our discussion I just want to mention to folks who are watching, if you have any questions feel free to drop them into the chat on whichever platform you happen to be watching and listening on and we'll try to get to your questions later in the show.

Joe Seppi: My son watches a lot of YouTube. Don't forget to smash the like and subscribe button. Anyway, I'm sorry.

Luke: Your son. You watch a lot of YouTube too, what are you talking about, Joe?

Joe Seppi: Davie504, it's amazing. Anyway.

Luke: On our agenda today we're going to talk a lot about DevOps. And I was reading a blog that you had posted, and the thing that really struck me and that I wanted to maybe kick this conversation off with is how real AI... What's happening? And DevOps are related with cloud native. I would like to say chocolate peanut butter but maybe it's more like a s'more because there's three ingredients there. It's not really a question. To start the conversation off.

Andrea Crawford: I always love a food analogy, Luke, so right on, man, I'm already smashing the like button here. The modern DevOps manifesto is really DevOps for the ages per se, and AI is no exception to that. The classical sense of DevOps was really geared towards the application layer. And now that we are in the cloud native era, we are really starting to see how we can give the benefits of DevOps not just to the application layer in the stack but also to the platform, to the infrastructure, and even in places where we may not have expected it before. MLOps, or Machine Learning Ops, and data ops, and applying DevOps in the context of AI. It really is something that probably I would say has been refreshed or recast in terms of how DevOps can apply to modern cloud native and even containerized apps.

Joe Seppi: MLOps, that took me a minute to process what you were saying there.

Andrea Crawford: ML for machine learning. It almost sounds like a cool band or an obscure city in Florida. I don't know. That rains and shines all at the same time.

Joe Seppi: Amazing.

Luke: With all these different varieties or flavors, if you will, of DevOps, is it a Venn diagram where we're like, "DevOps is the big circle and all of these things fall within it, they're not really a separate thing"? Is this the right way to approach it?

Andrea Crawford: Now you're going to hear a very opinionated view of DevOps, and this is Andrea's opinion here. But I really see DevOps as the sort of big umbrella. I have alluded before to the 587 flavors of DevOps, and that's sort of tongue in cheek for, what permutation of DevOps have you not heard about? So there's DevSecOps, there's GitOps, there's AlertOps, there's DocOps, there's BizDevOps, there's... Insert your permutation here. I still refer to it as DevOps because it has always included security. I prefer not to say DevSecOps, because we've never not thought about security, so I wouldn't want to propagate any misperception that we are just now thinking of security now that DevSecOps is a buzzword. I really do see DevOps as a very large domain that is continuing to expand. And really, our clients are sometimes really struggling to understand how they can achieve DevOps benefits in the context of cloud native, in the context of containers, Kubernetes-managed containers, and in the context of AI, and platform, and infrastructure, and really trying to understand how those benefits can extend to those other areas.

Joe Seppi: It's interesting. As you're framing this, DevOps is all of that. I've read your blog post and I like the idea of this sort of focusing on different areas of expertise and isolation of sorts, but really security should be a part of all of that. Is that kind of what I'm getting from you?

Andrea Crawford: Spot on, Joe. Totally. Really it's an opportunity for our clients to recast the way that they think about the personas and the roles that they have within their own enterprise. I'll give you an example. In the modern containerized app paradigm, right, you have this notion of a base image that your app would run in. It calls into question, okay, what is this base image thing, right? Really it's an asset that your enterprise is using to host an application. Don't you want to make sure that base image is somewhat trusted and has gone through some sort of vetting, governance, or certification process to ensure that there are no vulnerabilities baked into that image? Wouldn't you want to instrument that base image with the conventions that your enterprise has in terms of security and compliance? So maybe you have conventions around thou shalt not run anything as root. You shall have these certain file systems or directories configured in your base image so that we know on Day-2 where we can find things if we need to. And so that whole notion of having this trusted asset, which is the second pillar in the modern DevOps manifesto, really calls into question, okay, you have this base image, who's responsible for ensuring that base image is trusted, and certified, vetted, scanned, that sort of thing? So enterprises may want to consider creating an image engineer role, where these are people who understand what base images are, where they can and cannot download the enterprise base images from, where these vetted, governed base images are stored within the enterprise. It should be a private repo registry, by the way. And this sort of spurs on a new organizational set of personas, roles, new ways of working, perhaps new digital enablement opportunities. For that image engineer role that is responsible for creating these base images, don't you think they should know about some of these really important scanning tools, maybe OpenSCAP?
Maybe they should understand the way that the private registry is set up within their enterprise. Maybe they should be understanding some of these base conventions that need to be baked into these assets called base images. And don't we need to have digital enablement, or badging perhaps, to certify that these people know what they're doing? It's an opportunity to take this whole cloud native containerized app cloud adoption and really examine new ways of working, new personas, digital enablement, and bringing together what Howard Boville, our cloud BU leader, says is the combination of silicon and carbon, where yes, you're doing the IT transformation, but don't forget the humans, don't forget the carbon factor.
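The base-image conventions Andrea describes (no root user, required directories baked in) could be sketched as a small policy check an image engineer might run before certifying an image. This is an illustrative sketch only, not IBM tooling; the directory list, registry name, and Dockerfile lines are hypothetical examples.

```python
# Sketch: vet a base-image Dockerfile against enterprise conventions.
# The rules here (non-root user, required directories) are illustrative only.

REQUIRED_DIRS = ["/var/log/app", "/etc/app/config"]  # hypothetical conventions

def vet_base_image(dockerfile_text):
    """Return a list of policy violations found in a Dockerfile."""
    violations = []
    lines = [l.strip() for l in dockerfile_text.splitlines() if l.strip()]
    # Convention: thou shalt not run anything as root.
    users = [l.split()[1] for l in lines if l.upper().startswith("USER ")]
    if not users or users[-1] == "root":
        violations.append("image runs as root (no non-root USER directive)")
    # Convention: required directories must exist for Day-2 operations.
    for d in REQUIRED_DIRS:
        if not any(d in l for l in lines if l.upper().startswith("RUN ")):
            violations.append(f"missing required directory: {d}")
    return violations

good = ("FROM registry.example.com/ubi:9\n"
        "RUN mkdir -p /var/log/app /etc/app/config\n"
        "USER appuser\n")
bad = "FROM ubuntu:latest\n"
print(vet_base_image(good))  # → []
print(vet_base_image(bad))   # three violations: root user plus both missing dirs
```

In practice this kind of check would run in the pipeline that publishes images to the private registry, alongside vulnerability scanners such as the OpenSCAP tooling mentioned above.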

Joe Seppi: It's fascinating. I've watched this space grow and grow over the years, for a while now, and it really seems... It's at the point that you need a robust team and people with specific skills to really manage an enterprise-level operations team. And it's not okay to just throw a bunch of engineers at it and assume they'll figure it out; people really need to have specialty skills in these different areas.

Andrea Crawford: You're spot on, Joe. When you really think about it, the heritage apps of yesteryear, in the Clinton era and so forth, really had certain characteristics. They were very, what we call now, monolithic, and these classify as your mainframe monolith programs, but it also includes the sort of typical three-tier dot-com: RDBMS on the backend, app server in the middle, with load-balanced web servers on the front end. But the applications tended to be quite monolithic, in addition to their databases too, by the way. The mechanics of actually building and deploying, the complex release management cycles with all the interlock, and dependencies, and war rooms for production deployment, that is, I would say, a stereotypical archetype of what a heritage app is, right? Now, if you look at containerized apps, you're talking about much smaller pieces that are being coded, that are being delivered, and deployed up through a promotion path. It's the whole pets versus cattle thing, right? It's the difference between a 10-ton gorilla and a hive of bees. A 10-ton gorilla takes a lot to get through the gate. A hive of bees, if you let one bee out at a time, it's not a big deal. But the key is really in the coordination of all those bees together, because they've got to communicate. And so I'm trying to figure out how to change this into a food analogy but I just can't. Maybe wedding cake and cupcakes, I don't know. They don't talk to each other.

Joe Seppi: No, it's interesting, and I've mentioned this on previous live streams too. I worked on a team before IBM, and we were a small team, and we were able to really trial and experiment with modern techniques for delivering our applications. We were really ahead of the game in terms of containerization and orchestration. We were recognized for that work and promoted within the larger company, and then we were having to help these other teams modernize. And they would have a script, a two-page script with check boxes and names, and it was a two-day process to deploy this Java WAR file, and all the checklists and all the people had to be in the room. On the flip side, what we were doing was 50 to 80 deploys a day, because we had that automation, and we had a process, and everybody knew what they needed to do. And we had tests and we had procedures for rolling back, and it was a fairly well-oiled machine. It's interesting to think about taking these modern approaches to these sorts of enterprise monolithic applications. How do you approach that?

Andrea Crawford: And Joe, you've definitely lived through the heartburn of going through those more heritage or vintage deployments. I think you well know, then, that doing things manually, with those checklists, and applying that to the cloud native context, it's not going to work, it doesn't scale. How many fat-fingered commands do you need to go through to screw up deployment 23 of 56 in the day? I feel you on that one.

Joe Seppi: I've lived both sides of that and gone back and forth, to where I've had modern and then I go somewhere else, and I won't name names, again, I have in previous episodes, but I did this critical change and I asked how I deploy it and my manager was like, "You FTP into the server and you"... And I was like, "Wait a minute, what? No." Oh. The pain is real. I imagine that you have experience coming into these sorts of situations where a team is less modern. How do you start to approach that conversation and conversion and get them moving towards a more modern DevOps approach?

Andrea Crawford: How do you eat an elephant?

Joe Seppi: One bite at a time, is that what they say?

Andrea Crawford: One bite at a time, one bite at a time. Really the tried and true, proven way that I've seen really work, and I've seen a number of different approaches... I'm coming at you from the IBM Garage, and we are all about linking arm-in-arm with our clients and tackling bite-size chunks of that elephant in ways where we can move the ball down the field. I'm mixing up all these different analogies here. So we go in and we collaborate with our clients as part of a Garage engagement and understand what the overall outcomes or goals are that they're trying to achieve. We'll talk about the heartburn, the pain points, the challenges that they're having today. The typical things we find are: my IT organization is too slow, or the quality of my applications isn't where the business needs it to be and we're risking damaging the digital reputation of our company. Or it could even be things like, I'm really not quite sure what's going on between the business dictating requirements and what actually gets deployed, and I'm not really quite sure what happens in between. Not an exhaustive list by any means. Those are the kinds of things we typically hear. And what we do in the Garage is we help... To use one of our viewers' phrases here, herd the cats a little bit, and really start to establish with the client: what is the delivery path from source code to something that lands in production runtime? What security requirements do you have? What are your performance requirements? What are your compliance requirements? That sort of dictates the promotion path of dev, test, to performance, QA, stage, or whatever it happens to be, and understanding what the client needs in terms of the outcome of what actually gets delivered in a runtime environment. And if they say, "Look, I really need to be super careful about what I'm deploying out there because I'm in a regulated industry and I can't have just willy-nilly things popping out all over the place."
Then we might take a little bit more time, care, and effort in understanding what the security and compliance requirements are for that software delivery path. That's going to be a little bit different depending upon what client you go to. Clearly, DevOps is not new. DevOps tools typically are already in a client's enterprise. What we try to do is define the governance of what needs to happen from source code commit all the way through to production deployment, okay, and understand what needs to happen in between, so that the business can have trust and confidence in the IT organization that what is being delivered at the end of that pipeline is something that can be relatively well trusted. Okay. And that means identifying the process, the activities, the scanning, the testing, the notary, the attestation, the logging of what's happening in the pipeline. Getting a warm fuzzy about that, then mapping it to tools: leveraging existing tooling investments where we can, augmenting a toolchain with tools that they may not have, okay, and then actually building the darn thing. In the Garage, that's all about what we do. We're not esoteric, we're not about delivering a 157-page document, we build stuff, right? So in the context of DevOps, in the Garage we can build pipelines, we can define the developer experience in terms of what's going to happen in the pipeline. We can define the experience of that new image engineer that our clients' enterprises might need to have. And that's all about what we do in the Garage.
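The governed delivery path Andrea describes, from source commit through scanning, testing, and attestation to a trusted deployment, can be pictured in miniature as an ordered set of gates. The gate names here are hypothetical, and real pipeline engines (Tekton, Jenkins, and the like) are far richer; this sketch only shows the governance idea of logging every step and refusing promotion on any failure.

```python
# Sketch: a governed delivery path as ordered gates from commit to production.
# Gate names are illustrative stand-ins for scanning, testing, and attestation.

GATES = ["static_analysis", "unit_tests", "image_scan", "attestation"]

def run_pipeline(commit, gate_results):
    """Promote a commit only if every gate passes, logging what happened."""
    log = []
    for gate in GATES:
        passed = gate_results.get(gate, False)
        log.append((gate, "pass" if passed else "fail"))
        if not passed:
            # Stop at the first failure; nothing untrusted reaches production.
            return {"commit": commit, "promoted": False, "log": log}
    return {"commit": commit, "promoted": True, "log": log}

result = run_pipeline("abc123", {g: True for g in GATES})
print(result["promoted"])  # → True: trusted enough to land in production
```

The `log` list is the attestation trail: it records exactly what was checked between commit and deployment, which is what lets the business trust what comes out the other end.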

Luke: I was just going to mention, maybe we could take a little tangent. We've definitely discussed the Garage in past episodes, but I think it might be worthwhile for our listeners just to give a quick overview of... People always talk about the full stack developer. I always like to think of the Garage as really a full stack of everything, right? It's got design through building and consulting. So maybe just give us that quick little overview if listeners aren't familiar.

Andrea Crawford: I describe Garage as the modern way of delivering modern apps on modern platforms. And truly what it is, is a combination of Agile, DevOps, Lean Startup, cloud computing, design thinking. And taking the best of all of those things, cherry-picking some of those best practices, and putting them all together. And what we do is we work with our clients in squads, where we actually understand, truly understand, what the business value is that a client is looking for. Because we never do IT for the sake of IT, so we can't just be all nerded out. But we have to understand that whatever it is that we build is going to have value to a client. We use Garage, and our Garage methodology, to understand what the right thing is to build, and then we use our Garage methodology to build the thing. So we have a way of describing what the right thing is, and then we have a methodology for building the thing the right way. And that's really what Garage is all about. It's about Agile, DevOps, Lean Startup, and combining all of those methodologies, the best of, in ways that are going to benefit our clients. And in the case of using Garage through the lens of DevOps, we can help our clients co-create, and co-execute, and co-operate by understanding what it is they're trying to deliver, how they want it to be delivered, and then actually build it. That's what we're all about.

Joe Seppi: That's really interesting. And I like the comment that somebody made, full cycle versus full stack. I think that really rings true. There we go.

Andrea Crawford: I like that. That totally nailed it right there. It is full cycle versus full stack, because full stack is a little bit more myopic, isn't it? And knowing that full cycle does have to include the silicon and the carbon, it also includes the Day-2 operations as well. Once your app lands in its production runtime environment, now what happens? So you've got to make sure it's up and running, and you've got all your health check endpoints put in place, and your monitors put in place. How are you going to take care of it once it lands there? And what are you going to do when you need to redeploy it?

Joe Seppi: Or rollback or what have you. So this is interesting. We've talked about a lot of the DevOps stuff that I'm familiar with, the people and the processes and such, the routine. Something that's fascinated me and I haven't really been able to dig into, and I know it's one of your points on the modern DevOps manifesto, is the AI aspect. Can you talk to me more about that? I find that fascinating and want to know more.

Andrea Crawford: The fifth pillar in there is applying DevOps to everything. And I am by no means proclaiming that I'm an AI expert. AI really is driven off of a bedrock of machine learning. Machine learning models... Again, not an expert here, but they require quite a bit of data engineering, working with data sets, and ultimately training models, inference models, so that they can create the insights and predictions that the business is looking for. The whole notion of training an inference model and doing that data engineering... When you talk to a data scientist, they often work with Jupyter Notebooks or with Python, and that's code, man. Wherever we have code, we have an opportunity to version control in a source code management system. And wherever we have that, we have an opportunity to lint, to do static analysis, and to govern the process of the code going through vetting, and testing, and being pulled into a model that can be trained with these data sets. That's a process too. And so there's no reason why machine learning models shouldn't have the same bona fides as the classical sense of DevOps at the application layer.
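One way to picture giving a model the same bona fides as application code is to record exactly which training code and which data produced it, the way a source commit identifies a build. This is a sketch under the assumption that hashing the inputs is enough to version a model; the function and field names are hypothetical, and real MLOps platforms track far more (environment, hyperparameters, lineage).

```python
# Sketch: a versioned, auditable record for a trained model artifact,
# mirroring the version-control discipline applied to application code.
import hashlib
import json

def register_model(training_code: str, dataset_rows, metrics: dict):
    """Produce a reproducible identity for a model from its inputs."""
    code_hash = hashlib.sha256(training_code.encode()).hexdigest()[:12]
    data_hash = hashlib.sha256(
        json.dumps(dataset_rows, sort_keys=True).encode()).hexdigest()[:12]
    return {
        "model_version": f"{code_hash}-{data_hash}",  # same inputs, same version
        "code_hash": code_hash,   # which training code built it
        "data_hash": data_hash,   # which data trained it
        "metrics": metrics,       # e.g. accuracy on a held-out split
    }

record = register_model("def train(x): ...", [[1, 2], [3, 4]], {"accuracy": 0.91})
print(record["model_version"])
```

Because the version is derived from the inputs, retraining with the same code and data yields the same identity, while any change to either produces a new one; that is what makes the model auditable in a pipeline.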

Joe Seppi: That's interesting. I want to see if I understand you correctly. Are we talking about applying DevOps to machine learning or are we talking about applying machine learning to DevOps? If that question makes sense?

Andrea Crawford: It does, for the latter. That's really about one of the 587 flavors of DevOps called AIOps. And that is really about taking the lens of applying automation to Day-2 operations, where we can actually start to get predictive about incidents that have yet to occur. AIOps. So having a set of predictions where we might gain insight in terms of problematic flags, warning flags, that we might have early insights into, that's about AIOps. And when you understand that there's a problem... It's like a pot of boiling water, right? So you're boiling, and then if you have the lid on it, all of a sudden the lid will start rattling around, and then all of a sudden steam's blowing out and all that. AIOps is all about getting at that roil and toil, getting insights into that, and then being able to automate remediations to simmer it down, right? To simmer it down. And that can mean a whole bunch of different things, okay? So that is DevOps applied in the context of Day-2 operations. But we also have MLOps, or Machine Learning Ops, which actually applies DevOps, the CI/CD nature of DevOps, to automating the building and the training of the inference models, in more of a Day-1 type of context. So that was a good question, because the answer is yes and yes.

Joe Seppi: We have a question here. Let me pull this in. I don't know if this makes sense here. So does your idea of static analysis here go beyond continuous integration?

Andrea Crawford: Okay. So just a level set for those who may not be clear on static analysis. I'm sure you know this, but for the benefit of the others in the audience here. Static analysis testing is really all about taking a look at source code and analyzing it for potential misconfigurations, potentially for vulnerabilities, basic code conventions, so things like, hey, you declared a global variable but you didn't set it, right, it's null. That could be problematic. Just a whole bunch of different things where looking at source code could help you head things off at the pass in terms of errors popping up in a runtime context. Typically, static analysis testing is done in the build, or the CI, process. When we check out code from Git, we tend to run static analysis tools on that source code to understand a little bit more about it, which is why we call it static. And for compiled languages, when it gets compiled into object or executable code, we call it dynamic, because once it's compiled it's not human-readable, and there's a different set of tests that you can run on dynamic code as well. But typically static analysis is done in CI, because that's really where you're guaranteed that your source code is still in its source form. Typically, when you get out of the build process in your pipeline, your code is in an executable form. The exception to this is with runtime languages like JavaScript, Node.js; those are not compiled languages. Python is not a compiled language. So your source code is actually interpreted at runtime, so there's no object or executable code per se. So static analysis is typically done in the earlier stages of your build.
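Andrea's "declared but never set" example can be illustrated with a toy static-analysis pass over Python source, using the standard `ast` module to find names that are read but never assigned. This is a deliberately simple sketch; real tools such as pylint, flake8, or SonarQube handle scoping, classes, and many more rules.

```python
# Sketch: a tiny static-analysis check over source code, in the spirit of
# "you used a variable but never set it". Handles only simple cases.
import ast
import builtins

def suspicious_names(source: str):
    """Names read in the module that are never assigned, imported, or built in."""
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)          # name being written
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)            # name being read
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            assigned.update(a.asname or a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.FunctionDef):
            assigned.add(node.name)
            assigned.update(arg.arg for arg in node.args.args)
    return sorted(loaded - assigned - set(dir(builtins)))

code = "import os\ntotal = count + 1\nprint(os.sep, total)\n"
print(suspicious_names(code))  # → ['count'] — read but never set
```

This is exactly the kind of check a CI stage would run on freshly checked-out source, before the code ever reaches a runtime.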

Joe Seppi: It's interesting too. So I think it gets to the question that I was having, and I'm still trying to wrap my head around this AI and DevOps. To go with what you're saying there: in a typical pipeline, you would do this sort of static analysis and make sure that there aren't, like you said, global variables called but not set, and whatnot. In terms of the predictive stuff, maybe you could give me an example. And I know you're not an expert here, so don't let me put you on the spot, but something that can help me really digest AI in DevOps predictive techniques and stuff.

Andrea Crawford: Clearly, writing code is very humanistic, a very carbon-based thing. And nobody's perfect, nobody writes perfect code, although I'm sure there may be some out there that think they can, but we have automated pipelines that mitigate any risk of fallibility. So I'll give you an example. I'm a developer, and I'm writing some code, and I commit my code, and I get past a peer review, and I merge my code into master, and it kicks off an automated pipeline, and I do some static analysis testing. And some of the tools that I use in static analysis are, let's say, test code coverage, which is basically a way to determine what parts of your code have an associated unit test. So are you testing your code as a developer? You should be writing unit test cases. One of the Garage practices we have is something called TDD, or Test-Driven Development, where you write a test, then you write the code to satisfy it, so it's a whole thing. But anyway. Let's say I commit my code, it runs through the pipeline, my static analysis comes back and it says, "Your unit tests passed at 80% test execution passage." Okay, cool. But let's say you don't have a very high target on your test code coverage. And let's say I'm a developer and I'm in a rush and I'm writing crummy code, and I don't write my unit tests, and maybe my test code coverage comes back at around 30%, which means only 30% of my code is actually being tested. Now, depending upon the thresholds and the parameters in your pipeline, that might be a sign, right? Oh, this developer added a bunch of new features into their code, they didn't write unit test cases to actually test it. That might indicate that this might be a little bit of a risky feature set, right? And so wouldn't it be nice if we could have headlights into that? And that's just one super simple example. We could be examining deployment configurations for fields that have certain values. For example, memory requirements.
And if they're really super high, that might be a red flag. Imagine if you could gain insight into the characteristics of your source code, or the characteristics of your deployment configurations, and be able to head things off at the pass. Hey, maybe you should go take a look at your deployment configuration, because some of the fields you have in there might be problematic when they land in runtime. You could make suggestions to cinch up some of your configs, or to remediate some potentially problematic things in your source code before they become problems. You could potentially make recommendations, right? So maybe you should set your global variable to zero, or maybe you should have a maximum memory of two gigs. That is an example of how you could use AI, or intelligence, to remediate risks before they actually land in production. And then in terms of AIOps, you could actually take a look at the runtime characteristics of workloads that are already running, and you might be able to see we've got a runaway memory situation going on. Let's not let that boil over the top of the pot, right? Let's send out a Slack message, or create a problem or an incident, and notify the right people so that they can get on top of this and head it off at the pass before it becomes a problem.
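The runaway-memory early warning could, in toy form, be a linear trend fit over recent samples that flags a workload before it boils over. The threshold and horizon here are made-up numbers, and production AIOps systems use far more sophisticated models; this sketch only shows the predictive idea.

```python
# Sketch: flag a workload whose memory trend will breach its limit soon,
# before it becomes an incident. Numbers below are illustrative only.

def projected_breach(samples_mb, limit_mb=2048, horizon=10):
    """True if a least-squares linear trend crosses limit_mb within `horizon` steps."""
    n = len(samples_mb)
    if n < 2:
        return False
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    projected = samples_mb[-1] + slope * horizon
    return projected > limit_mb

# Steadily climbing usage: flag it now, e.g. open an incident or send a Slack message.
print(projected_breach([1200, 1350, 1500, 1650, 1800]))  # → True
# Flat usage: no alarm.
print(projected_breach([400, 410, 405, 408, 402]))       # → False
```

The point is the timing: the first series is still under its 2 GB limit when the check fires, which is what lets the right people get on top of it before it boils over.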

Joe Seppi: And that last bit really, I think, brings it home. And in your DevOps manifesto, the heading for the AI for pipelines is everything is observable. And so I guess that's the key there. Everything is observable and you're watching for outliers and things like that that would be potential red flags.

Andrea Crawford: And that becomes more of a challenge in the cloud native context, doesn't it, because we are deploying that hive of bees. How is it that you can keep track of all of these moving parts that need to be coordinated with each other and understand what's going on? So we have health check endpoints, we have monitors we can set up. We've got this notion of ChatOps, where we can monitor some of the runtime conditions and, through ChatOps and Day-2 operations, bring together the right people at the right time around the right problem, with the right tooling data. Joe, you probably remember back in the day of heritage apps where you would have war rooms, where you had a situation where an app went down and you bring together everybody, because you're not sure if it's a hardware problem, a software problem, a network problem, or an application problem. And, of course, who does everybody blame? Everybody blames the hardware guy, the network guy, the software guy, the application guy. Having that type of observability in terms of who deployed what, where, and when, and understanding the context in which an application is running in a production environment, the resources it's using, the APIs it's consuming, the calls that it's serving, the intercommunication between other namespaces and pods, that sort of thing, becomes critical. Not just in the context of Day-2 operations, but also in terms of security, and compliance, and audits. We need to understand where the chattiness is, where the new threat vectors are. You cannot measure what you cannot see, you cannot detect what you are not monitoring. Observability is key.
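A minimal sketch of the health check endpoints mentioned above, using only the Python standard library. The dependency probes are stubs and the `/healthz` path is a common convention, not a requirement; a real service would wire this into its framework and point a monitor or a Kubernetes liveness probe at it.

```python
# Sketch: a health-check endpoint that aggregates dependency probes into
# one report a monitor can poll. Probe functions here are stubs.
import json
from http.server import BaseHTTPRequestHandler

def health_payload(checks):
    """Run named dependency checks and aggregate them into one health report."""
    results = {name: check() for name, check in checks.items()}
    return {"status": "ok" if all(results.values()) else "degraded",
            "checks": results}

class HealthHandler(BaseHTTPRequestHandler):
    # Stubbed probes; real ones would ping the database, cache, etc.
    checks = {"database": lambda: True, "cache": lambda: True}

    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps(health_payload(self.checks)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

# One failing dependency is enough to mark the whole service degraded.
print(health_payload({"database": lambda: True, "cache": lambda: False}))
```

A monitor polling this endpoint sees not just up-or-down but which dependency is unhealthy, which is exactly the "who to bring into the room" information that replaces the old war-room guesswork.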

Joe Sepi: And all of this stuff is interrelated, right? I'm thinking about your examples there, and the war room, and bringing everybody together. Like you say in the manifesto, trusted resources and leaning into least privilege. If you're observing and you see where things are going wrong, you might be able to narrow the scope of who you need to talk to based on that sort of separation of concerns.

Andrea Crawford: Absolutely. Leaning into least privilege is really an extension of the Zero Trust security model. It's really thou shalt trust no one, and you shall check ID at every step of the way, that type of analogy. And that really is what it has to be, at least as a goal anyway. And principles of least privilege really do speak to... Again, going back to the example of the image engineer, right? If you have that role, maybe an enterprise does not want the image engineer role and a developer role to be one and the same person, because maybe that's too much control over the full stack. Being able to understand a client's sensitivity around security, and compliance, and separation of duties, and GDPR, and all of this stuff, right, will absolutely influence the program in terms of the way people are organized, their personas, their roles, their enablement, and what they can and cannot do.
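A deny-by-default permission check is one small way to express the least-privilege separation Andrea describes. The role names and action strings below are purely illustrative, not from any real system:

```python
# Deny-by-default: a persona holds only the verbs explicitly granted to it.
# Roles and actions are hypothetical examples for illustration.
ROLE_GRANTS = {
    "developer":      {"read:source", "write:source", "run:tests"},
    "image-engineer": {"read:source", "build:image", "push:registry"},
    "auditor":        {"read:source", "read:pipeline-logs"},
}

def is_allowed(role, action):
    """Unknown roles and ungranted actions both fall through to denial."""
    return action in ROLE_GRANTS.get(role, set())

print(is_allowed("developer", "push:registry"))       # False: separated duty
print(is_allowed("image-engineer", "push:registry"))  # True
```

Keeping the developer and image-engineer grants disjoint in a table like this mirrors her point that one person should not hold too much control over the full stack.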

Joe Sepi: That makes sense to me. I think it would make sense to a lot of engineers in the sense that... I work primarily in the Node.js space, and we are big fans of this, but I know a lot of people on different platforms are too. Encapsulating just what this piece of code needs to do, you want to keep it as encapsulated as possible, and that way you can hunt things down a little bit easier and have test coverage. And in a way, this really applies to these roles too, right? You want to encapsulate what this person is able to do, and what they are responsible for, and what damage they can cause. Contain that sort of stuff, not-

Andrea Crawford: You're isolating risk and limiting the blast radius. So these are absolutely the kinds of things that we see a lot in our site reliability engineering practice as well. And I often say that DevOps and SRE are two sides of the same coin, right? One is, rightly or wrongly, more focused on Day 1 development; SRE is perhaps more focused on Day 2 operations. But they've got to mesh, and they do reach back and reach forward.

Luke: You already touched a little bit on this idea of expanding the definition of everything, and that there are many different Ops, ModelOps, GitOps. There's a line in there that I wanted to bring out: the evolution of breaking silos. Because I feel like everyone loves to say breaking silos, but what does it really mean? I like how it's the evolution of breaking silos.

Andrea Crawford: When you think about the classical definition of DevOps, we were breaking down silos between development and operations, and that's where we coined this word DevOps. But at its core, it was really about bringing two distinct worlds together. I don't know what a good analogy is, the dark side and the rebels... No, never mind. A Star Wars analogy. And I won't tell you who the dark side is. I'm a developer, I won't say that. Okay. So it was really about breaking down organizational silos. But if you take a look at what's going on today, why do we even have terms like DevSecOps? It's because security groups and teams, ITSec, AppSec, NetSec, they're all viewed as the team of no. Why is that? It's because they're viewed as adversaries. Stop that, stop it, bring them in on day one. Security needs a seat at the table, along with compliance, along with audit. And they need to be a part of the story from the very beginning. This is the next evolution of breaking down silos. From a purely carbon point of view, we need to include other stakeholders that have a major stake in the IT delivery game. The other notion of the evolution of breaking down silos is taking the classical sense of DevOps applying to applications, and extending it to platform, infrastructure, data, and AI, and being able to bring those same benefits to those processes as well. And in fact, encourage those in the enterprise that are in those areas to have an engineering mindset. To yes, view everything as an opportunity to code. And yes, view the world in pipelines. Because with that automation you can get predictable, repeatable results, whether you're deploying an application, whether you're deploying a cluster configuration, whether you're deploying Terraform scripts to provision infrastructure on a cloud service provider. DevOps applies to all.
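The "view the world in pipelines" idea can be sketched as a tiny stage runner in Python. The stage names and steps are illustrative placeholders; real pipelines would of course live in a CI system rather than a script:

```python
# A toy "everything is a pipeline" runner: ordered stages, fail fast.
# Stage names and step callables are hypothetical examples.
def run_pipeline(stages):
    """Run (name, step) pairs in order; stop at the first failure."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:  # later stages never run on a broken build
            break
    return results

app_stages = [
    ("lint",   lambda: True),
    ("test",   lambda: True),
    ("build",  lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(app_stages))  # every stage ran and passed
```

The same fail-fast, ordered structure applies whether the stages build an application, a cluster configuration, or Terraform infrastructure, which is the predictable repeatability Andrea is describing.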

Joe Sepi: I find this interesting because it's going both ways, in a way. We're talking about defining these roles that are specific, right, and can be considered a silo in a way, but we're also talking about breaking down those silos. So yes, we're defining these roles and the principle of least privilege, but we also need to make sure that these different roles are communicating well and are all in the process. So it goes both ways, it seems to me.

Andrea Crawford: You go back to the Agile manifesto and really having a product-focused mindset. Get over yourself, operations; get over yourself, developers; get over yourself, security. We need to have a product-focused mindset. And to use a concept that is often talked about in our SRE practice: blameless. Have a blameless culture. It's not about who screwed up or who broke the build. In fact, I have a shirt that says, "Every time you break a build, a kitten dies." Don't break the build. But it's about being blameless and about doing what's right for the business, and not about having all these little organizational [inaudible] and silos. And I don't talk to them, and I don't even know what they're doing, and I don't even know who that is, and I'm not going to invite them, and they're going to be a blocker. That's got to stop. It's got to stop, because it's the business that loses in the end.

Joe Sepi: And I feel like the blame is really an opportunity to improve the process, and the system, and the coverage, and what have you.

Luke: Which-

Andrea Crawford: Spot on.

Luke: And this blameless reminds me too of... In the Deming Management Method they talk about that, where... I think they use the example of dipping a tray with holes into a bucket filled with white and red beads. Some of those holes are going to be filled with white beads, some of them are going to be filled with red beads, and it's chance. The solution was, if you need to, stop that assembly line and fix the problem, versus adding QA at the end or blaming this person and firing them. The reason that accident happened was the system allowed it to happen, and it's going to happen again. The other thing I wanted to mention along these lines is... I've actually started an MBA program, and I read this article this week, Marketing Myopia. It's from Harvard Business Review in 1960, and you'd think it was written today. And it's exactly like you're talking about: taking away your assumptions and breaking down these silos. This exact methodology, if marketing is done right, it has that same focus and that same feedback loop that we think about in continuous delivery, or in Agile, or in Lean methodology. It's funny, none of these ideas are new; they're just applied and they're evolving. And now I think we really are seeing this evolution with DevOps, where it's a very inclusive methodology. And like Joe was saying, maybe your job isn't my concern, but I need to know about your job if we're going to work together. These silos, maybe it's like they start to overlap, and we have to have this concern and this understanding, especially with everything being code now and everything being able to be programmatic.

Andrea Crawford: It just emphasizes and underscores the cultural transformation that has to go along with this. So having empathy for the security team, having empathy for an auditor. What? Yes, yes, yes. We're doing an academy study around audits and auditors and just gaining a greater empathy and appreciation for our auditors and what it is that they have to do. They're just looking to protect the enterprise. We need to work with our auditors, we need to work with security, we need to work with the other stakeholders to ensure that what it is that is being delivered is right for the business. Egos aside, arrogance not needed, let's just do the right thing.

Joe Sepi: All on the same team and striving for the same goal.

Andrea Crawford: Absolutely.

Luke: Last year in our Security Digital Developer Conference, I interviewed one of the VPs over in security, Bob Kalka, and he said exactly what you just said, or a very similar notion, about how your inclination is to say, oh, I don't want to talk to security, or I don't want to talk to that auditor, because they're going to say no, they're going to be the blocker. And Bob's background is interesting because he has a degree in small team dynamics, so he gets in there. And it is, you've got to be proactive. And I think we're used to these things with, say, our own family. Or if you've ever been on any team, right, you have to come together and sort of force that conflict in order to get over it. When you said that, it really resonated back to what Bob was saying: don't avoid that, lean into those relationships, and the product, and the company, and ultimately your role will be better for it.

Andrea Crawford: It's interesting when you do lean in and start to be inclusive on day one or day zero. Since I'm a programmer we always start counting at zero. You'll find that you actually start to get proponents, right? Security can be your friend. Imagine that. How great would that be? They could go to bat for you, man. It could be like that. It doesn't have to be the land of no, it doesn't have to be like that. We need to recast and rethink the way that we collaborate with others in our enterprise so that we can get the outcome that the business needs because that's really what it's all about and then it's high fives all around, right?

Joe Sepi: I'm thinking back to previous places where I was delivering production applications, and the more that the teams can be working together, that benefits everybody. I look at the Venn diagram where it's just a larger circle around a smaller circle, and it's not two separate circles. The more we can work together, the more we can be successful. We are getting near the end of the hour here. Are there any things that we want to make sure we touch on before we start to wrap things up? I can't think of anything.

Luke: I think we covered all the points in the manifesto, albeit maybe not in the same linear fashion. But if you want linear go read the blog.

Andrea Crawford: We're very non-sequitur here, but that's the spice of life, right?

Joe Sepi: For sure.

Luke: I did have one thought. When we were talking about the bees and the food analogy, I feel like honey is the natural analogy. And also, I thought the bee analogy was interesting, especially for enterprise systems, and with the whole Paul Rand design from the '60s and IBM. I feel like it's an amazing analogy. What are all these people in the enterprise doing? They're making honey, clearly.

Andrea Crawford: I believe DevOps is quite sweet, I'll tell you that. Oh, I like your little bees. Very good, very good. This was a lot of fun.

Joe Sepi: I really appreciate you taking the time and chatting with us and everything, it was an excellent conversation.

Andrea Crawford: Thanks for having me. I look forward to being in the open with you again.

Joe Sepi: Welcome back anytime.

DESCRIPTION

In this episode, we are pleased to bring you a conversation with the Distinguished Engineer, Andrea Crawford.  Andrea serves on the IBM Garage Cloud DE leadership team and is a DevOps expert.

Today's Guests


Andrea Crawford

Distinguished Engineer, IBM