Episode 6  |  43:08 min  |  07.09.2021

Building a Modern SOC: In-house vs. MDR/MSSP vs. a Hybrid Approach

This is a podcast episode titled, Building a Modern SOC: In-house vs. MDR/MSSP vs. a Hybrid Approach. Today's session about building a modern SOC is a discussion between Pete Silberman (CTO of Expel.io), Todd Weber (CTO of Optiv), Scott Lundgren (CTO of VMware Carbon Black), and Christian Beedgen (CTO of Sumo Logic). Topics include what each of them thinks a modern SOC really is, whether the human element is crucial, how they feel teams should work together in a modern enterprise, and whether that's realistic.
Takeaway 1 | 03:19 MIN
What is Modern SOC?
Takeaway 2 | 00:55 MIN
Attacker Efficiency From the Attacker Perspective
Takeaway 3 | 07:22 MIN
How, In A Modern Enterprise, Teams Should Work Together and Can That Be A Reality?
Takeaway 4 | 05:22 MIN
How Do You Think About Service Providers - From the Perspective of a Startup Through Fortune 5 Company


Todd Weber
Chief Technology Officer, Optiv
Todd has worked with some of the largest companies in the world developing and deploying information security strategies and architectures. The Office of the CTO spearheads Optiv's efforts on technology and integration, testing security technology solutions to help clients make better-informed decisions in selecting the correct technology suite. Todd works with technology manufacturers, clients, venture capital and private equity firms, and leading research institutions to help develop Optiv's overall strategy for incubating new and innovative cybersecurity solutions.
Scott Lundgren
Chief Technology Officer, VMware Carbon Black
Scott Lundgren is VMware Carbon Black’s CTO. As CTO, he provides technical vision and strategic direction. He has experience across the security space, including technical leadership positions in offensive security research, development and operations.
Pete Silberman
Chief Technology Officer, Expel.io
Peter Silberman is the chief technology officer (CTO) at Expel. Peter splits his time between ensuring Expel has a robust detection and response strategy for all its technical partners/integrations and making sure that Expel is pushing the envelope with new ways to improve analyst experience and efficiency. He works to achieve this while also driving new service capabilities built on top of those integrations. Expel’s exceptional team of engineers get to do all the code writing (the fun stuff!). He gets to write docs and draw pictures for others (the not-as-fun-as-coding-but-necessary stuff).
Christian Beedgen
Co-Founder and Chief Technology Officer, Sumo Logic
As a co-founder and CTO of Sumo Logic, Christian Beedgen brings 18 years of experience creating industry-leading enterprise software products. Since 2010 he has been focused on building Sumo Logic's multi-tenant, cloud-native machine data analytics platform, which is widely used today by more than 2,000 customers and 50,000 users. Prior to Sumo Logic, Christian was an early engineer, engineering director, and chief architect at ArcSight, contributing to ArcSight's SIEM and log management solutions.

Chris: Here we go, wrapping up our Modern SOC Summit. And I can't wait for next year; we're already planning the Postmodern SOC Summit for 2022. That'll be super interesting. In the meantime, though, I have my esteemed panelists here, and we're going to have a good chat about building a modern SOC to wrap this all up and hopefully provide an informative session for everybody that's dialed in. So to start it off, let's go around, and I would like everybody to briefly introduce yourselves, then maybe, in addition, give me two quick takes: what does modern mean for you, and what do you think today is the biggest threat floating around in your head? And we start with Todd Weber from Optiv.

Todd Weber: Hi. Great to be here. Todd Weber, CTO of Optiv. I've been with Optiv about 16 years, and I was also the head of our managed services group here for a long time. So all these challenges around the modern SOC are very near and dear to me. The first aspect of what modern means to me is: can it cover all the components that enterprises deal with? Meaning, you have so many different vectors, and those vectors are always changing and morphing. As we consume in different ways and as we move in different ways, the modern component, the tooling and the people, has to follow all of those vectors. And then, in dealing with modernization, I also want to deal with the automation component. We've traditionally always thrown people at things, and that can't necessarily continue, because we just don't have the people, and that's assuming you can afford them. So those are my two main takes. As for the number one threat right now, I just have to go with what my clients are telling me, and they're absolutely scared to death of ransomware.

Chris: Right on. Thank you. So Peter, maybe you can go next.

Peter: Sure. Hi, I'm Peter. I'm the CTO at Expel. Expel's five years old. I've been there five years, so I've been there for the journey. Prior to that, I held many positions at Mandiant, and then, by way of acquisition, FireEye: everything from developing endpoint technology to being on the malware analysis team and leading data science groups. As I think about modern, I'm going to keep it at modern security, and then maybe we can go to security operations and dive in a bit. When I think about modern, I'm going to go with the anti-pattern of what has historically been, and sometimes created a bad name for, security: security through obscurity, being the department of no, this notion of not really having a grasp of metrics or quality. When I think about modern, it's the opposite of that. It's a culture of humble inquiry, or, put another way, instead of being the department of no, how about being the department of "what about trying it this way?" It's about being overly transparent with your stakeholders, creating champions within other teams, moving away from that department-of-no posture, and then having quality controls and metrics: an understanding, in a program, that you're communicating to others and getting input from your stakeholders, because as a modern security team you recognize that you have stakeholders in other business units or other teams that you have to service. The culmination of those things brings to mind modern. As for the biggest threat, I think ransomware is a great one, and it's definitely the tactical problem we're seeing in the news. But I think the speed at which things are evolving is the larger risk, and ransomware is symptomatic of it. If you're not thinking about communication and stakeholders and process and operations, you get overwhelmed, and then you end up with ransomware. You end up with credentials being reused and VPN accounts being popped, things like that.

Chris: All right. Thank you. Scott.

Scott Lundgren: So I'm number three. I'm Scott Lundgren, and I'm the CTO of VMware Carbon Black. I've been with Carbon Black since the inception in 2012. So Peter, I beat you by a few years, but just like you, it's been a great journey. I'm going to stick with one word for both my definition of modern and my biggest threat, and that word is efficiency. I'm going to be a little bit different here on purpose, I think. Efficiency from a modern point of view covers the obvious things around automation and, just like Todd mentioned, the challenge with labor. But in my opinion it's actually more about the efficiency of the attacker markets at present, and I have a concern there. My threat is the speed with which attackers are able to get more efficient at hands-on-keyboard work. It's less about whether the technology changes from day to day and more about how fast the time from access to attacking a business is shrinking, enabled by market-like forces, including specialization of labor and buying access and that sort of thing. My concern, therefore, is the volume. My sense is that, again, market forces, ransoms and such, have created additional incentive for attackers to do attacks at scale, which means we as defenders should be thinking about a wave coming at us. And so efficiency is the word that ties it all back together, because that's how we're going to have to address it.

Chris: All right. Thanks, all of you. So, in our cybersecurity space, specifically coming at it from the perspective of providing tools and software and services, we have talked for the last few years, almost a decade now, about the advance of AI and machine learning as this silver bullet. To put my cards on the table: I personally feel that with the types of attacks today, the sophistication of the human attackers, with the funding that sits behind them, criminal organizations, et cetera, it still feels to me that ultimately the defense also needs to come from a human. I'm not bought in that the machine can necessarily defend against the sophisticated human attacker, though that's certainly a point we could debate. I'm not saying that it's not useful to have all sorts of machine intelligence bubble up and support the human analyst; I'm firmly in the augmentation camp when it comes to that debate. And to me, it always felt that the idea of a SOC is a fundamentally human concept. So that's the arc that I'm trying to create here, and maybe sneak in a little bit of controversy as well around how to look at these things: automated defense versus properly augmented, efficient humans. But let's jump to the actual question. We're probably just going to go around again, but you guys should obviously feel free to jump in and turn this into more of a cable news thing if you want. So, my question is, in light of all of that, what is the modern SOC? What does it actually look like? Is it a physical thing? Is it a virtual thing? Is it actually a human thing? Or is the SOC a box with a bunch of algorithms in it? How do you guys look at that?
Maybe we just throw it over to Peter first.

Peter: All right. I get to kick us off, and then you can all disagree with me; that's how we're going to do it. I like it. I like it.

Scott Lundgren: Disagree already, Peter.

Peter: Perfect. Perfect. I wanted to start strong. 42. So, a modern SOC: I think of it as typified by a few characteristics, if you will. The first one that comes to mind is adaptability. And Chris, I'm going to agree with basically everything you said; I'm just going to put some different words around how I think about it, because I don't think technology solves everything, as much as I love technology. So the first one is adaptability: being able to adapt and evolve as your customers evolve. The modern SOC really shouldn't force a technology stack on a customer. The customer, if they want advice, should seek it out, but they may be in the best position to understand their risk environment, their relationship with vendors, what they want to do. And a modern SOC needs to make that technology hum; it needs to be making them efficient. Additionally, when we think about adaptability, some customers want to know things that other customers don't want to know. So if you're forcing reporting because you have no way to adapt what you tell one customer versus another, that's a problem. That's what we think about as adaptability. When I think about the human aspect, I think there's a blending that occurs in a SOC, and I'm not going to offer an opinion on whether it's virtual or in the office. If these characteristics exist in your SOC, it's up to you as a business what you want to do. But in the past, you'd walk into a SOC and there'd be tier one, which is doing alert triage; usually a tier two, which is doing some investigation; and tier three, usually incident handlers. The work was bifurcated on decision boundaries, and that was because the humans were doing all the work. Technology wasn't there to help; there was no augmentation. In a modern SOC, I'd expect to see a blend, where you have shift analysts working end to end, from alert through incident.
You might have another group that's doing some hunting, but other than that, you have a blended group. The reason for that blending is the rise in technology: API-first design, and the ability to do integrations that then allow for automation. Additionally, you have improved pattern recognition; you might have heard of the gamification of security, where there's more information available, so patterns like "this plus this is bad" make you more effective as a human, and then you can arm the technology. So I think you no longer have that bifurcation on decision boundaries in a modern SOC. And the technology itself has come a long way. The ability to fire a technology gun at a problem shouldn't exist in a modern SOC; a modern SOC needs to be really surgical. If you're run by metrics, you apply technology to a problem, then you measure it, rinse, and repeat. So you're not just saying, "Technologize the problem," but rather, "I have a specific problem that I think I can take off of the human plate: apply technology, measure, iterate." And you think about that as dashboards. You want a dashboard that is the marriage of your technology and people and process, all in one view; that's a modern SOC view, not three separate panes. And then lastly, there's a notion of quality. You need to be able to articulate a quality program, the quality controls, the checks, these types of things. It's no longer acceptable to say we're running technology, people, and process and have no notion of quality. So be intentional about that. In short, the way we think about it is: we don't want humans making decisions on all alerts. We want humans making a good decision on the right alert, and it's a different mindset. What that does is maximize the human moment and allow you to keep quality and scale in tension.

Chris: Somebody want to jump in here?

Todd Weber: Sure. I'll jump in. Peter, I'm going to disappoint you by not disagreeing with you too much here. And this is also jumping on what Scott said: it's that efficiency structure that the modern SOC is based upon. This is just compiling what you guys said, and I agree with all of it: that adaptability to build in efficiency, and the human element, is critical. I don't think we'll ever get rid of the human element, but it's that continuous operation to get to those efficiency structures. Why do we have to log into nine consoles to get to one decision point? Why can't that all be enriched to give me a quicker decision? And then continually do that. As far as AI and machine learning, at its root level it's all trying to find patterns in massive amounts of data that no human can possibly find. So use it that way: it's making the human's job easier, not replacing the human, because I agree, I don't see that humans will necessarily be replaced. But continuing what Scott said, it's that continual journey to be more efficient, that continual journey to always be looking to improve, and what you were saying about the qualitative part really resonates with me as well. Because that's part of the problem right now, the efficiency function: how can you investigate 10 million alerts? The answer is you can't. So don't try. Figure out how to eliminate the many that are the wrong ones, get to qualitative decisions on them, move them along in an automation framework, and spend your human time on the ones that you know, or have a much better probability of, being something real that needs to be looked at by humans. And that's the use of technology you were talking about: use the technology for what it was intended, not for some buzzwords of having AI and ML structures in there.
Everybody knows you can't get a term sheet these days unless you have those letters in there somewhere. Whether you use it or not doesn't really matter. But those are my feelings on how we use technology and how we develop towards modernization. And this is just tagging on to, or plagiarizing, what you guys said in different words: that continual journey of efficiency is always there, and that continual journey towards quality. And Scott, I will tell you, the efficiency the attacker side is gaining is almost into the scary realm, from how fast vulnerabilities are disclosed, to how fast Shodan results are coming in, to how fast exploits are coming out. That has changed a lot in just the past year. And it's to the point where it's scary: security through obscurity is just not a valid concept anymore.

Scott Lundgren: I don't have any way to tie these two things together, so I'm just going to do two bullet points. Sorry, I was trying to think of a clever way of tying them together, but I can't. First, I'm going to jump on the AI/ML-bashing bandwagon, but with a little bit of nuance. The way I look at it, of course, is as a tool rather than an answer, and tools can be useful. But in my opinion, the thing that has been forgotten, including by people both writing and signing term sheets, is that we still have a garbage-in, garbage-out kind of problem. I think that sometimes in the security industry we become a little bit enamored with our own data volume. By that I mean we say, "Look, we've got all this stuff that we have to deal with," and that surely implies I'm sitting on a gold mine if I could just extract it. And the part that, at least speaking for myself, after doing this for 20 years I continue to have to remind myself of, is that most of that is data and not information. The vast majority of it is absolutely meaningless from a security perspective. That's what makes it hard for all of us, including me on the vendor side. But by being humble about that, we recognize that just because there's a lot of data, and we're proud of the systems we built to handle that data, doesn't mean there are nuggets there to be extracted if we could just have a little bit better tech to pull them out. It puts the onus on all of us to continue to reach further, I guess. We still need more telemetry. We need better telemetry. We need better filtering. We need to get less garbagey and more value. And then I think it can be less about bashing AI and ML and more about asking, "Hey, are we applying this to the right domain at the right time to get the right results?" So I'm neutral on the subject in that sense. I certainly don't think it's a silver bullet, for lots of reasons, one of which is the garbage argument.
Now, a separate point, much quicker, hopefully, on attacker efficiency. I'm going to tie in something that you said, Peter, about changing to a shift model, if I paraphrase your words. I hadn't heard that, so I appreciate you saying it; I'm going to think about it. But consider that now from an attacker perspective: it's almost like the specialization of labor in reverse, in the sense that you can see these pretty complicated attacks that are technically strung together from components, a supply chain, components from people that don't know each other, have not worked together, and have these very narrow, simple interfaces by which they're connecting. It starts to sound like modern software. And that's the scary part, because we've seen what can happen when modern software works at scale: it can eat the world, to use the phrase. There's no reason not to think that those same economics, and I use that term loosely, don't apply on the other side as well. And so I'm with you on that.

Peter: I think the efficiency angle on the attacker side is interesting, because where it naturally leads you to start thinking is: where are the asymmetries you can create? The asymmetry you'd see with an attacker, and I'm going to call out SolarWinds as the elephant in the room and put it to the side, is that a motivated and highly financed group of really smart people will be successful most of the time; that's just how it's going to be. So we put that one aside, deal with the 90-plus percent of everything else, and ask, "Well, how do we create the asymmetries?" One of the interesting things about garbage data in, garbage data out, especially when you think about ML algorithms, and I totally get that, is that when you watch what your shift does as a group and the types of questions they ask, or are asking themselves, that informs how to make data that would otherwise be just audit logs actually useful. Because you can see how the human element is thinking: "Well, if I could ask the question this way and aggregate in this way, you actually get a really compelling story." And the asymmetry there is that we have data going back months, quarters, whatever, for this user, for this activity, and it tells a story. I don't need a machine learning algorithm. Literally, one of the things we do is ask, "Okay, what are the percentages of this user's user agents?" It's just a way to fingerprint. The attacker doesn't necessarily know what these are, and a user shifting from Mac to Windows suddenly, while also authenticating from a country you haven't seen before: those two things alone, and we're alerting, backing things up, and here we go.
And so, one of the things we've observed is that if you think about the data and what answer you want out of it, you can take garbage data and turn it into a useful story, and then apply further technology over time. It doesn't get away from that human moment stepping in, but it does get you a potential asymmetry you can leverage, especially because the attacker doesn't know all of the data you've collected or how the environment operates.
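The fingerprinting Peter describes needs no machine learning, just counting. Here is a minimal sketch in Python; the field names, event shape, and rarity threshold are illustrative assumptions, not Expel's actual implementation:

```python
from collections import Counter

def user_agent_profile(events):
    """Build a per-user fingerprint from historical auth events:
    the share of each user-agent family and the set of countries seen."""
    agents = Counter(e["user_agent_family"] for e in events)
    total = sum(agents.values())
    return {
        "agent_share": {ua: n / total for ua, n in agents.items()},
        "countries": {e["country"] for e in events},
    }

def score_login(profile, login, rare_threshold=0.05):
    """Flag a login when two weak signals line up: a user-agent family
    that is rare (or unseen) for this user, and a never-seen country."""
    rare_agent = profile["agent_share"].get(
        login["user_agent_family"], 0.0) < rare_threshold
    new_country = login["country"] not in profile["countries"]
    return rare_agent and new_country
```

Either signal alone is noisy (people travel; people get new laptops), but requiring both to fire at once is exactly the asymmetry he points at: the attacker cannot easily know which baseline they are deviating from.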

Todd Weber: That's an important part, Peter, that aspect of context. You were talking about how what's important to one client can be different for another client. That's the part where, as people, we need to understand that context can also be applied here. Meaning: where are the critical assets? What does this mean to that particular organization? It can be different based on the organization. It can be different based on whatever legal constraints and regulatory complaints, or regulatory constraints, they're under, or what vertical they're in, you know what I mean? Or what maturity level they're at. So great points, Peter, and great ones, Scott, too. Sorry, I like listening to you guys. I might just be quiet and listen to you guys.

Scott Lundgren: I'd say, Todd, that you had it right the first time. I think we all have regulatory complaints.

Peter: The thousand-question questionnaires.

Scott Lundgren: Exactly.

Chris: This is great. I love how the conversation just flows like that. Just one more quick comment on the AI/ML stuff and the bashing. Yes, Scott, you're correct that it's becoming pretty hip to do that, and frankly, I'm to some degree also guilty of it. I have my own story there: I've been trying to build an unsupervised, global-scale anomaly detection thing and got my little butt handed to me because it just did not work at all. Sometimes there's more garbage involved than just the data.

Peter: I like that. I like that.

Chris: Sometimes, as enticing as it might sound, there is also some amount of intellectual garbage involved, I guess; sometimes you don't know until you try. I would just like to clarify that the term I've tried to adopt is algorithmic approaches, and I do believe in those. At some point, we used spreadsheets and computers for automating large-scale data processing, and there are certain things around patterns and whatnot that I know a human analyst will never be able to do on their own. So we're basically trying to have the algorithms work with them. It seems to me like we're all pretty much on the same page on that. That's good. And it's going to make for more interesting RSA conferences, I'm sure, just to sneak another one in there, if there ever will be one again.

Scott Lundgren: Sorry, Chris, I was just saying, I appreciate you sharing that story. I think you said you got your butt handed to you; I've certainly been there myself. And I think that's what I mean by the humbling. It's very easy to come to security and say, "If I just had this and this and I put them together, I'm going to find everything," and it's way more challenging than that. Coming at it with humility, I think, is a very helpful, maybe necessary, starting place.

Chris: You've got to try stuff. I think that's one of the privileges you have in an environment like the Silicon Valley, Bay Area environment, where you can actually get away with trying things one at a time. Somebody just needs to try these things if they think it's quality innovation. It doesn't always result in anything, but you learn an increment. So, Peter, there was a subtext in both of your previous comments that I want to come in on. I'm basically a developer of the sort of age that remembers the time before and after agile, and I certainly remember the before times. Now we maybe go a little bit more in the direction of where security fits in there. Is it DevSecOps or what? But there is an undercurrent, and not even an undercurrent, I think it's very clear in what you said, Peter, specifically about breaking down the silos: basically getting people to be able to make end-to-end decisions, versus having some sort of waterfall process where the shit flows downhill, and guess who's at the bottom? It's usually you. Do you want to talk about that a little bit? And Scott and Todd as well: the philosophical aspects of how, in a modern enterprise, teams really should work together, and how real that actually is?

Peter: Sure. There's the world I want to live in, the world I see, and then the world that a lot of people live in, I think. So, we're talking about technology companies that have to protect their application, their crown jewels, or a data warehouse somewhere. The ideal state, I think, is a partnership with security, where security engineers work side by side with engineering. There are a couple of companies doing it really well; Square comes to mind, at least from what I've read publicly and from some of the folks I know there, where they've got a partnership and engineers code side by side, that type of thing. But it requires a lot of resources, a lot of skills, and an organization that is bought in. There's an aspect where security can more easily fit in, though, which is the runtime and deploy side: build pipeline, deploy, run, where what you discover as a security team can create conversations with your counterparts on the engineering side. Some of that is just basic stuff, like unifying tooling. Developers use Sumo Logic, so security uses Sumo Logic instead of another tool for logs. It allows you to bring them into the conversation and create intrigue, and you get some organic growth. Having them be stakeholders who want to engage is important, and one of the ways you can accomplish that is unifying tooling, which may sound silly, almost childish, but it's important because it's meeting them where they are versus trying to pull them towards security. I think that then leads to really interesting stuff around bespoke application detections as a risk mitigation. We had one customer with alerts that, if they ever went off, basically said the lights are off for the business. It's like: this thing is very specific to our application; if it ever fires, we're out of business. I remember having that conversation. I'm like, "All right, well, you guys got that wired up. Cool."
There's not really a response to that other than: you guys are watching it. But that then enables you to have conversations with engineering, because to pull it off you have to understand your architecture, so you're having good conversations. And then, as you move left, there's a bunch of ideal state, where your build pipeline can be doing infrastructure as code, you're scanning that, it's producing alerts, and the security team is working with the engineering team to define policy that enforces it, back and forth. But you're having to weigh two things, and I think that's where security runs into it: when you're talking to engineers, you're talking about a revenue center. They're generating profits, generally. Security is a cost center you're taking from, and engineers will win, and we should accept that. They want to go fast, and they're going to make purchasing decisions; they're going to do things to increase velocity. When I look at what an engineer wants, like when I get to write code, I'm thinking of how quickly I can write code so I can do more creative stuff. So security showing up creates friction, and how you enable without the friction is the challenge. DevSecOps is nice, but it's probably pieces of it, adopted slowly over time, especially if you have an established engineering team, so that you're not showing up and breaking stuff, because you'll quickly get kicked out. It starts with the relationship, and the relationship is key over time to, I think, a really secure product and company. It's a lot of words, I know. It's a fun topic. I love it.
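The pipeline-side scanning Peter mentions, where infrastructure-as-code definitions are checked and produce alerts before deploy, can be illustrated with a toy policy check. The resource shape and the single rule below are hypothetical, not the format of any specific scanner:

```python
def check_security_groups(resources):
    """Toy IaC policy check over parsed resources (imagine output
    derived from a Terraform plan): flag security-group rules that
    open sensitive admin ports to the whole internet."""
    findings = []
    for res in resources:
        if res.get("type") != "security_group_rule":
            continue
        cfg = res.get("config", {})
        # 0.0.0.0/0 on SSH (22) or RDP (3389) is the classic misstep.
        if cfg.get("cidr") == "0.0.0.0/0" and cfg.get("port") in {22, 3389}:
            findings.append(
                f"{res['name']}: port {cfg['port']} open to the internet")
    return findings
```

A check like this would run as a build step, and the findings become the artifact that security and engineering negotiate policy over, rather than a ticket thrown over the wall after deploy.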

Scott Lundgren: It's the right topic, I think. Sorry, Todd. You said the word relationship probably, I don't know, 15 times, but I think that's spot on. So I'm going to do something: you made a bunch of grander statements, but I'm going to make a very narrow one, though I agree 100% with the focus on relationships. I'll give a little personal experience that may or may not be helpful, particularly if, in your organization, you can count on some degree of longevity of tenure, which is hard, I admit. But if you can, I've found that a way to accelerate those relationships is actually by promoting a lot of horizontal movement between engineering and security. Obviously they are different disciplines, but, to go back to the humility approach, I think security can be learned, just like software development can be learned, and if you already have some skills in each, great. The fastest way to get to relationships like that is to be able to say, "Ah, now we have someone who actually understands what it's like to walk in the other person's shoes, embedded right in." Not because they're in the other function with a dotted line over, but because they actually did that job for two years or something. I've found that to be pretty effective. And to your point, Peter, about getting to common tooling and all the benefits that brings: that can become a natural outcome of an aligned view. We all recommend tool X to achieve the effect, rather than the other way around, if that makes sense. So try to create the conditions to get to alignment on tooling recommendations.

Todd Weber: Yeah. That's a great one, Scott, and you can take it and apply it to different parts of not just software development, but take the same approach to things like, as Peter, you were talking about, the different silos that we have. Vulnerability management is a key one for me, because typically the remediation side is usually not the security team's. It's usually the IT teams, and they're following a different process. Change management processes, for us old people, that was so necessary and so needed. And that's why we only meet every Thursday, and we only do updates on every ninth, whatever. But those are kind of outdated processes. And to your point about the humility, Scott, I love the aspect of that cross-pollination component. And for those of us who are a little bit older, we didn't start off in security. We started off in something else. We started off in networks or we started off in applications. So you had some foundational understanding of that. And I guess in a modern path, people can join security as their first thing, so they don't have that underlying knowledge of the underlying plumbing of the application or the network or, in this case, cloud. So I love those abilities to kind of help build those relationships, that walk-a-mile-in-another-man's-shoes approach. But like I said, just continuing on vulnerability management, I just think that's so important. You have to have that common alignment, and not just the common tooling aspect of things, it's a common taxonomy of things as well. Do you call things the same way? Do you think of things the same way? You look at your ERP applications, and the developers who are maintaining that think of change in a very different way. Change is bad. So they don't like change. Versus us, it's like, well, you need to change.
And just having that come together and building those relationships, and then using that whole process as a CI/CD pipeline, you know what I mean, pipeline that out, and actually continue to use your security methods as a CI/CD pipeline on how you break apart those processes. Okay, we identified the assets, this is what the assets are vulnerable to, I validated that component. In context again, is this a critical asset, and how do I do remediation for it? And pipeline that out, as opposed to putting it into a change control method, which is complex and difficult. Anyway, great stuff and great conversation. Sorry. I think I used more words than Peter.
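[Editor's note: Todd's "pipeline it out" steps, identify the asset, validate the finding, weigh criticality in context, then route remediation, could be sketched as a simple triage function. The field names and scoring thresholds below are hypothetical illustrations, not from the episode.]

```python
# Hedged sketch of vulnerability triage as a pipeline step rather than a
# change-control meeting. Field names and thresholds are illustrative only.

def triage(finding):
    """Decide how to route a vulnerability finding."""
    if not finding["validated"]:
        return "discard"          # unconfirmed scanner noise
    # Combine severity with asset criticality for a simple priority score.
    score = finding["cvss"] * (2 if finding["asset_critical"] else 1)
    if score >= 14:
        return "auto-remediate"   # patch via the CI/CD pipeline now
    if score >= 7:
        return "ticket"           # schedule with the owning team
    return "backlog"

print(triage({"validated": True, "cvss": 9.8, "asset_critical": True}))   # auto-remediate
print(triage({"validated": True, "cvss": 5.0, "asset_critical": False}))  # backlog
```

The design point is the one Todd makes: critical, validated findings flow straight to remediation through the pipeline instead of waiting for the Thursday change meeting.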

Chris: We're counting, we're counting. Just two quick comments from my side, and then I think I have one more topic that I was going to talk about before we wrap it up here. So, first of all, Scott, you were talking about sort of the cross-pollination and making mobility happen between teams. Actually, we've seen some great success with that. In Sumo, I can think of a person who went from development to the security side and is quite happy over there. Was happy before, is still happy. So, that's awesome to see. And we also have somebody who, believe it or not, went from finance to security to development. I've said for a long time, there are two types of people in the world, and this might be arrogant, but I don't really mean it like that: those who can actually use their hands to program computers and those who can't. And those who can, to me, that's not just distributed systems PhDs from Stanford, but basically anybody who maybe hacked on a Commodore 64 when they were younger, or what have you, or who today can do basic scripting on a Mac or on an iPhone. To me, those are all programmers. And I think being able to move folks like that around the organization is very useful. And then secondly, to the change management thing, I'm not going to dwell on it because I think we've already kind of squashed that one. But just from personal experience as somebody who built a large SaaS over the last 10 years and decided early on that one thing we absolutely needed to do to get our customers comfortable with the fact that they sent their data to us was not just try to engineer the system in such a way that we had a good story on where security happens, and then make that visible and all of that, but also basically get it audited by going into the various certification processes, SOC 2, PCI, FedRAMP now. And yeah, change management every single time is a huge topic.
Because the auditors show up and say, "Okay, here's the sort of binder of documentation that you need to generate every time you release a new version of the system." And we're like, "Sir, we're releasing a new part of the system, we don't even know how often, with continuous deployment." And frankly, nothing in the system ever sits there for more than 10 minutes. There's no given state of the system that hangs around for longer than that. And we thankfully kind of talked them into understanding that. But I think this change management thing, it's just really... You just stop using it as a pinch point. You just keep clobbering everything that's there over and over again. Sorry, Todd.

Todd Weber: No, it's just a little bit of the mindset that it has such a negative consequence for everybody. That's why change management was put in place in the first place, because if I did an update and I brought something down, I'm fired. It's changing that mindset, that holistic mindset that change is bad, that change brings things down, and that change means people lose jobs. I say this knowing it's an aspirational goal. Those are the aspects we have to change. I know we still need change management. You still need process. You still need some form of structure. I get that. But it's the changing of the mindset of, I don't want to update my Exchange server because I don't want my users getting mad at me because they don't have email for 20 minutes. We have to get past that negative consequence, that all change is bad, because the attackers are preying upon that, or they use that as a foundation. Anyway. Sorry. I didn't mean to interrupt.

Chris: I think now we finally hit on a topic that's worthy of cable news. On the society level: change, good? Change, bad? So, okay. Let me actually do one more round here, if you guys can bear one more. And this one should be fun because we have a pretty wide gamut of folks here: vendor side, services side, somewhere all the way in between. So from the perspective of somebody who needs to think through the sort of SOC for their company, from a startup all the way to probably a Fortune 5 company, how do you think about service providers? It's always been sort of my observation that there's obviously a lot of tools, a lot of software. But there is also lots of consulting, lots of service providers, depending on how you look at it, professional services and so forth involved. So, what's your guys' take on that, just keywords around MSSP, MDR and so forth? Whoever wants to go first.

Todd Weber: I'll go first on this one. For me, it's an important decision to make, but it's an aspect, and I'll copy Scott again, it's that aspect of humility of really being able to look at yourselves and look at what capabilities you have, what budget you have, what geographic location, and how you can hire. All of those kinds of componentries. And can I build an effective, modern SOC that actually is adaptive and can be efficient? And then look at the MSSPs out there, and there are very different ones, and they do different things. You really have to do an iterative process. Am I looking for somebody to just do investigations, or do I want them to respond as well? Traditional MSSPs, and I say this just about ones from long ago, would investigate components, but then didn't have access to the underlying architectures to actually do the remediation component. So do you want that, do you not want that? And then layering all those so that at least you have an understanding of what your expectations are and how those will be solved, I think, is just one that I continually see clients get frustrated at. Like, "Well, I didn't know." They're like, "Thank you for telling me about these 400 alerts that I didn't know about and wouldn't have found if you didn't do it. But the problem is you gave me 400 alerts that now I have to go chase down and find out how to fix." So, it's just understanding that, and then approaching it also, to repeat Scott again, with that humility aspect of things of what you can... look at yourself with a very honest eye.

Scott Lundgren: I'll jump on that, the honest eye. I think I'm actually going to steal the AI/ML kind of idea, which is, I think ultimately, actually, Peter, you said this: the use of an organization like you described is up to you, ultimately, what you're trying to accomplish. It can be great, it can be terrible, but the biggest risk is actually not, "Hey, I picked the wrong organization." That's actually not, in my experience, the biggest issue. The biggest issue is lack of expectation management, in both directions, lack of understanding of what we're trying to accomplish. And then just being realistic. It's a business relationship on both sides. And that requires a certain level of discipline and effort. And from time to time, more often than not actually, harder conversations: "Hey, this isn't working out quite like I expected, or my expectations are not being met. Let's talk about why that is." Not waiting for things to get bad and then blowing up, et cetera. So in a lot of ways, it's the stuff that, as geeks, stereotypically, that's not what we want to do. But it's also the stuff, in my opinion, which is most important for success.

Peter: Yeah. I'll jump in as well. What you said there, Scott, definitely resonates; we're also on the vendor side, so it could be that as well. One thing, when I talk to folks about whether they should outsource or build, as an example: it roughly costs two and a half million dollars to staff 24/7 in the US, and that's with the hiring allowance for people who go on vacation. And if I asked you the following question: if I gave you five million dollars, so I'm going to give you double what you need, and I said, "The only caveat here is you have to provide ROI on that five million to the business," is building a 24/7 function going to provide the ROI, or are you going to take that five million and invest it in other ways, training, enablement, other things, so you can unlock new business initiatives? For most businesses, five million dollars, if they have to prove that ROI, is not going to be spent on building a 24/7 function. I can give real examples of businesses where they will, and they've gone ahead and built them. But the majority of the time, it's not that. And that's where you have to kind of guide some of the technical tunnel vision to say, "The business doesn't need me to build this function. I need to find the right vendor, work through expectation management, understand what they can do, and go with them in a partnership." When you think about what you're looking for, and you're looking for a modern SOC, like, how do I differentiate MSSP versus MDR? Which can be challenging, especially when you put the marketing paintbrush on top of everything; it just creates a lot of confusion. There are some questions you might ask, like, "Hey, can you talk to me about a time you're going to triage an alert and call me for help versus make a decision?" That implies that there's some investigative process where they understand, "Hey, for this class of alert, this class of activity, the customer may know best."
And then the MDR is going to take the work back once they get an answer. How do you train your analysts to investigate? There should be a process of mind there, and they should have that documentation. Otherwise, they're going to have a lot of talent challenges and capacity challenges when it comes to actually staffing. Are your runbooks a sequence of steps or questions? Steps are very dictated. They don't allow for creativity. They don't allow the human to use what they do best, which is inference, because you're following steps. Versus: when this shows up, here are the questions, the mindset you should have from an investigation or triage perspective. It's a nuance, but it's an important nuance. And then I think a great question for folks to ask is, "Hey, I understand you've got 12 SOC analysts. How many engineers do you have on staff?" Because if we believe that it's the marriage between engineers and SOC analysts, and the answer is, "I have no engineers," they're not building a lot of technology to make themselves modern, and that can kind of uncover the marketing veneer. So I'll just leave that with folks as some tactical takeaways.
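[Editor's note: Peter's two-and-a-half-million-dollar figure can be checked with back-of-the-envelope arithmetic: 24/7 coverage means filling 168 hours a week, with at least two analysts per shift and an allowance for vacation, training, and turnover. The salary and coverage numbers below are illustrative assumptions, not from the episode.]

```python
# Back-of-the-envelope cost of staffing a 24/7 SOC in-house, echoing the
# ~$2.5M figure mentioned above. All inputs are illustrative assumptions.
import math

HOURS_PER_WEEK = 24 * 7          # 168 hours of coverage to fill
ANALYST_HOURS = 40               # one analyst's working week
MIN_PER_SHIFT = 2                # never run a shift with a single analyst
COVERAGE_FACTOR = 1.3            # vacation, sick leave, training, turnover

raw_seats = HOURS_PER_WEEK / ANALYST_HOURS * MIN_PER_SHIFT   # 8.4 seats
headcount = math.ceil(raw_seats * COVERAGE_FACTOR)           # ~11 analysts

loaded_cost_per_analyst = 220_000   # hypothetical fully loaded US cost
total = headcount * loaded_cost_per_analyst
print(headcount, f"${total:,}")     # 11 $2,420,000
```

Under these assumptions the arithmetic lands right around the $2.5M Peter cites, which is why the "would you spend $5M on this if you had to show ROI?" framing is so pointed.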

Chris: All right. I think we're at time. So in summary, as much as we like technology, a lot of it is about relationships, and ultimately humans making good decisions, sticking with them, and organizing the humans. That's quite interesting. Also, excluding me, clearly, you demonstrated that great minds think alike. So Peter, thanks for sort of a consistent view on the world. I think that's actually pretty cool. Maybe we should have just made the topic XDR instead of SOC. But maybe we'll do that next year. Hey, thanks Scott. Thanks, Peter. Thanks, Todd. Valuable time here. I know everybody's busy. Really, really do appreciate you guys coming up here and helping us talk through some of these modern SOC sort of topics. Really do appreciate it. Thank you very much. Thanks very much to everybody on the sort of receiving end, and I hope you had a good time. See you soon. Bye.

Peter: Bye.

Todd Weber: Bye.

Scott Lundgren: Bye everybody.
