The Role of AI in SaaS and Cybersecurity

This is a podcast episode titled "The Role of AI in SaaS and Cybersecurity."

Jara Rowe: All right, well, let's go ahead and get started because I know we have a lot to cover. My name is Jara Rowe. I am the content marketing specialist for Trava, and I will be moderating the conversation on AI in SaaS and cybersecurity today with our fantastic panel of experts, because I am not an expert. So we have our friends from EIG, as well as the Trava team here. I'll let them all introduce themselves because they can do better justice than I can. Let's do it in alphabetical order: Jake, Jim, Ramon, and then Rob.

Jake Miller: I never get called out first in alphabetical order. This is great.

Jara Rowe: Alphabetical order by first name, to be specific.

Jake Miller: Okay, nice. Okay, well I'll take it. Nice to be here with everyone. My name's Jake Miller. I'm the CEO of the Engineered Innovation Group, based here in Indianapolis, Indiana, where we help startups start up by building the first versions of their software platforms, whether they're startups out of VC-backed studios or out of corporate innovation labs. So I'm super excited to have this conversation today.

Jim Goldman: Hey everybody. Jim Goldman, CEO and co-founder of Trava Security, here in Indianapolis. Our mission is to provide a risk and vulnerability management platform, along with complementary services, to provide comprehensive cyber risk management and compliance for our customers. And so today I'll be looking at AI from that risk management standpoint.

Ramon McGinnis: Howdy everybody. Ramon McGinnis, Engineering Manager at the Engineered Innovation Group. But succinctly, big nerd. I love building things and I love being able to bring things into existence.

Rob Beeler: Hi everybody. I'm Rob Beeler. I'm CTO and one of the co-founders at Trava with Jim. And I would like to say we've had many arguments about who's the biggest nerd on the call, but that's a good start. Yes, I'm responsible for building our assessment platform at Trava, which looks at a company's security and helps them assess where they stand. And so how AI is used in that, and how AI affects security, is a really important topic for us and, I think, for everybody going forward. So I'm really excited to be here.

Jara Rowe: Fantastic. So I can confirm that the Q&A function works, so if you all want to leave questions in there as we go through the webinar, that'd be fantastic. And we do have some time set aside at the end to answer them. So let's go ahead and jump in. And I will say that I did use AI to help with some of these questions; ChatGPT helped me with a few of them. So I would love to start with: what is the current state of AI in SaaS and cybersecurity? Jake, I'll actually throw that to you first.

Jake Miller: Oh, great. Okay. So what is the current state of AI in SaaS and cybersecurity? Well, I will say: super new. AI tools, machine learning specifically, have been around for at least a decade now, actually being used by big companies and corporations, even the Facebooks and Googles of the world. But I think once we've moved from machine learning, which a lot of people feel is more structured and tangible, into this world of AI, things start to get a little more fuzzy and even fluffy. And I think, and this is where I'm going to impose my own feelings onto other people here, it becomes less clear and can become more scary if we're not formalizing how we think about AI and how it fits into the software that we're building, and really our everyday lives. So that's AI/ML. And cybersecurity for AI? Super new. How do you apply cybersecurity principles to AI? Oh my gosh, we're still learning how to do that as we go. So I'm looking forward to learning from Jim and Rob, too, on where they think we're going.

Jara Rowe: Yes, Jim or Rob, do you want to comment on that? I guess more so from the cybersecurity side.

Jim Goldman: Well, yes, I think it's always good to look back before we fast-forward to the present and look ahead. All the hubbub in the headlines these days has been about ChatGPT. What's important to remember is that ChatGPT is, I'll call it a product or an output, of an organization called OpenAI. OpenAI has been around since 2015. It was founded by, among others, Elon Musk, although he's no longer involved with it, and Microsoft is highly involved now. So it's all the names you'd expect behind it. It started as sort of a think tank and research lab, and now they've moved to more of a for-profit model. But it's really this ChatGPT thing. So to your question, Jara, about cybersecurity and its application there: one of the biggest problems cybersecurity professionals have is that they're overloaded with information. In other words, everything's about logs; everything needs to be logged, and you need to examine your logs and look for anomalies, et cetera, et cetera. But what you don't realize is that the bigger we build our networks and our cloud environments, the number of logs, the amount of information in those logs, and the number of potential false positives in those logs become overwhelming. Culling through all that to find the needle in the haystack that indicates a potential compromise is the hard part. And that's really where something like artificial intelligence can shine: once it learns what normal looks like, it can flag what is not normal. That's a tremendous benefit if we can harness that power. I think that's the single biggest potential benefit to cybersecurity, when these log management tools put artificial intelligence into incident detection, that kind of thing.

Ramon McGinnis: Yes, just the filtering and consolidating of information that's coming en masse, because like you said, things like logs, there have been tools to go through and look at them and compile them for ages. But even then, as skill increases, it becomes a task that's just kind of beyond one person, that becomes beyond a group of people or a team. So being able to just have all that filtered down in a way that is reasonable, it's crazy, it's intense.
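
For readers who want to see the shape of what Jim and Ramon describe, here is a minimal sketch of learning "normal" from log features and flagging outliers. It assumes scikit-learn's IsolationForest, and the feature names and numbers are invented for illustration:

```python
# A minimal sketch of "learn what normal looks like, then flag what isn't,"
# using scikit-learn's IsolationForest. Feature names and numbers are
# invented; a real pipeline would extract many more signals per log window.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per log window: [events/min, failed logins, unique IPs]
normal_traffic = rng.normal(loc=[200.0, 2.0, 40.0],
                            scale=[20.0, 1.0, 5.0],
                            size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)  # learn "normal" from historical windows

# A burst of failed logins from many unique IPs should stand out
suspicious_window = np.array([[210.0, 90.0, 400.0]])
print(model.predict(suspicious_window))  # -1 = anomaly, 1 = normal
```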

Jake Miller: Can I add one thought to that too, Ramon? Because this is something I've tried to articulate in the past and never been able to, and I think I can now: the difference between machine learning and AI. Like I said earlier, machine learning has been around; we all understand what that is. Artificial intelligence, well, what's really the difference? Because under the hood it's all statistics, right? It's the ability of a machine to mimic a human. And I think that's what makes it even more interesting, because now not only are we trying to combat real humans, we're combating machines pretending to be humans, in many different ways. But thank you for triggering that thought. I think it's an interesting one.

Ramon McGinnis: Well, that's one of those other things that's really interesting, to go off on a slight tangent: even though it is mimicry, people attach to these things so easily. I mean, we anthropomorphize things very quickly as humans. It's the same way we see dogs as having human behaviors when they're not necessarily acting human. It's really easy to do that when you're talking to something like ChatGPT. It's interesting, and it's something to keep in mind when we're dealing with all of these things.

Jim Goldman: Well, so-

Rob Beeler: So, as several people have pointed out, AI's been around, and from a cybersecurity standpoint it's been used in tools both to help detect and to perpetrate breaches. It's been used by both the good guys and the bad guys. And I think this latest trend, the latest developments in generative AI with ChatGPT and others, is really a quantum leap forward in the kind of things that people can produce on both sides of that fence: the bad guys being able to produce more realistic phishing emails, as well as the security professionals being able to sort through information and look at more behavioral analytics to understand what's going on. That's where, I think, while AI has been around a long time, a lot of people are stepping back and saying, "How are we going to deal with this? What are the policies going to be? What are the privacy concerns of the data that we're putting into these tools? And how much are we exposing ourselves to a breach with the tools and with the data we put into them?"

Jara Rowe: Awesome. That leads right into my next question. With the increase of AI in these industries, what new cybersecurity risks should we be aware of, and how can we mitigate them, particularly when it comes to privacy and compliance? Rob, I'll throw that back to you, since you were just hitting on it.

Rob Beeler: Sure. Well, I mean, this is a topic that we hear about a lot. When we talk to professionals in the field, one of their top concerns is whether the people in their company are using these tools and feeding confidential or private data into them to solve a problem, but ending up feeding it into the learning algorithms and maybe exposing it outside of their company. So that seems to be one of the top concerns. And a lot of companies are starting to ask, well, what's going to be our policy? What are we going to do? Are we going to say it's flat-out banned, or, hey, here's a reminder of our privacy policy? Many companies have a policy that says you cannot share our company data externally, and for some it's just reminding people of that. I think there's going to be a lot of training involved, and I think you'll see a wide spectrum of how companies handle it. Now, what IT leaders have told us is that from a security perspective, it's a reactive industry much of the time. The security professionals are trying to stay ahead of the hackers, and they expect that it'll be a cat-and-mouse game: each side will have some advantage, and then it'll even out over time. But I think privacy and compliance are the biggest concerns today.

Ramon McGinnis: And we were talking about this just before we started, that this just happened with Samsung. As someone who is writing the code, it's really easy to say, "Hey, I'm just going to dump all of this in here. Will you check it out and see what's good? See what's wrong?" I think that's something that we're going to really have to think about going forward: you don't want to just send the code over as is. You're going to want to redact things. You want to be a little more piecemeal about it.
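
As a rough illustration of the redaction Ramon suggests, here is a toy sketch that strips a few obvious kinds of secrets before a snippet leaves the building. The patterns are illustrative only; real secret scanning needs far broader coverage:

```python
# A toy redactor: strip obvious secrets from a snippet before pasting it
# into an external tool. Patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "AWS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
    "API_KEY_ASSIGNMENT": re.compile(r"(?i)(api[_-]?key\s*=\s*)['\"][^'\"]+['\"]"),
}

def redact(code: str) -> str:
    for label, pattern in PATTERNS.items():
        if label == "API_KEY_ASSIGNMENT":
            code = pattern.sub(r"\1'<REDACTED>'", code)  # keep the variable name
        else:
            code = pattern.sub(f"<REDACTED_{label}>", code)
    return code

snippet = 'api_key = "sk-123456"  # owner: dev@example.com'
print(redact(snippet))
# api_key = '<REDACTED>'  # owner: <REDACTED_EMAIL>
```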

Jake Miller: That's a great point, Ramon. Because when you look at this through the lens of developing the software, so a SaaS lens: how are we, as leaders in an organization, thinking about whether the frameworks we're using, or the processes and policies we have in place, make sure that things like code review include the cybersecurity and risk management questions we should be asking about AI models? A lot of times those models can be black boxes, and maybe our teams aren't even trained on how they work, let alone the end user, let alone the end user even knowing their data is going to go into one of those models. So I think we're continuing to build this ecosystem of tools that developers can use, and you're adding on these black boxes, and it gets really hard to control. And the other thing that concerns me is that while security is top of mind for a lot of founders, now there's yet another thing they have to worry about. We're hearing across the board from our customers, 95% of whom are founders of startups: "Hey, I'm being asked how I'm going to use AI." Our answer is, "Well, don't just say you're using AI to use AI. But if you're going to, we really need to think about what the implications of that are for your company."

Jim Goldman: So an interesting perspective to take on this: the dilemma we're facing with AI today, right versus wrong, good versus evil, is not the first time we've had to face this. In fact, it's almost a continuing cycle: new technology is introduced and it catches people off guard. People come up with all sorts of ideas about how to use it. And then comes the debate about good versus evil, ethical versus unethical. Does it need to be regulated, yes or no? If so, by whom? A good example, to date myself, and some of you may not even know what this is: when fax machines first came out and were put into offices, everybody thought it was great. And if you walked by the fax machine and there was a fax on there, you'd just pick it up, read it, and then deliver it to whoever it was for. But chances are there could have been some very private or confidential information on there. The technology just jumped ahead of the thinking about how to use it in an ethical manner. And it's the same thing we're facing right now. With ChatGPT as a type of AI, there are definitely good things that can be done with it. But the other dilemma is that good and evil are in the eye of the beholder; it's relative. And whenever you get a relative topic like that, it's not easy to say "this is the way it's going to be" unless you are a governmental entity, like the European Union, et cetera, that has the power to just put out a regulation about how it's going to be.

Jara Rowe: So, leading into the next question, and some of you already hit on this a little bit: how can businesses best prepare themselves for the implementation of AI, when it comes to training and policies and things like that? Ramon, I'll throw that to you first.

Ramon McGinnis: Wonderful. I think the preparation for this involves a lot of human behavior. Obviously there are things we need to consider as far as pricing, because AI is not cheap to run. But when it comes down to safe usage, we just want to make sure that we're sending only what we need to send to get what we need out of it, and nothing more. We really want to filter down the information that we're delivering to these things.

Jara Rowe: Okay. So-

Jim Goldman: What I would add, Jara, from a pure risk management standpoint: if you think about what risk management means, it's that equation of risk versus reward. Think of it as a sliding scale: how much benefit do you think you can get out, versus how much risk does it take to get that benefit? It's going to be different for every entity. And so I really see it evolving into a policy. If you look at the 29 or so policies that we need for a SOC 2 certification or an ISO certification, it's not like those just popped up overnight. Over time, people said, "Well, we really need a policy in this area." And I see this being the same thing. You would brainstorm all the ways your company or organization could use AI or ChatGPT in a beneficial manner, and then decide where to draw the line, where the risk suddenly overcomes the reward and it's not worth it, at least for your organization, to take that chance. All of these other things are okay; maybe there's a gray area that goes to a committee or something, I don't know. And then, very clearly, all of these uses are too high a risk, no question about it; we're not going to do that.
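
A toy sketch of the triage Jim describes, with invented benefit/risk scales and thresholds; a real policy would define these deliberately and keep a human committee in the loop:

```python
# A toy sketch of risk-versus-reward triage for proposed AI use cases.
# The 1-10 scales and the thresholds below are invented for illustration.
def triage(use_case: str, benefit: int, risk: int) -> str:
    """Bucket a use case: benefit and risk each on a 1-10 scale."""
    if risk >= 8:
        return f"{use_case}: PROHIBITED (risk too high, regardless of benefit)"
    if benefit - risk >= 3:
        return f"{use_case}: APPROVED"
    return f"{use_case}: GRAY AREA -> route to review committee"

print(triage("Draft marketing copy", benefit=7, risk=2))
print(triage("Summarize internal incident reports", benefit=6, risk=5))
print(triage("Paste customer source code for debugging", benefit=6, risk=9))
```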

Jara Rowe: I know-

Jake Miller: I-

Jara Rowe: Go ahead, Jake.

Jake Miller: I would say I love that, and I would take it even a step further. So you have these policies, you have those frameworks in place, added onto the program you hopefully already have.

Jim Goldman: Exactly.

Jake Miller: And then one of the things that you should be doing, if you have SOC 2 compliance, is evaluating your vendors. So much of this, immediately, is going to be a company like mine, or a company like our customers', integrating with other companies. Understanding where their models are hosted, how transparent they are: do you actually know what's going to happen to that data when it goes into that model? Do they have their own compliance and regulatory frameworks that they comply with or are certified in? I think due diligence is probably the single most important thing you could do right now to prepare yourself and your organization. Aside from asking yourself, as Ramon and Jim were also saying: what's the minimum use case? What's the minimally viable thing I need to do with this information?

Rob Beeler: Yes, I'll add one more thing. As we've seen throughout security, it comes down to people. And I think everybody's been through enough security training to know: hey, I'm looking at an email, this looks fishy, I've got to check the link, I've got to do this. But I think a lot of people don't understand what's happening with the new AI technology, so they don't understand that, hey, I'm putting this data in and it's going out, and I could be exposing myself. So I think training is going to be really critical. As companies think about how to deal with this, they need to be looking at their training tools and making sure they're educating people, because people could be inadvertently taking steps that hurt the company without knowing it, without being aware of the impact.

Jim Goldman: Absolutely.

Jara Rowe: So, more specifically about the bias that AI can bring: one concern with using AI in cybersecurity is the potential for bias or discrimination. How can we ensure that AI systems are fair and unbiased?

Jake Miller: Yes, so I'm not going to claim to be an expert in modeling and training data; we have those people on staff here. But one thing that I have learned anecdotally, that raises big red flags for me, is the Amazon HR model. I think this was about four or five years ago, where there was a bias in an automated HR system toward men, because of a specific type of writing pattern that males typically exhibited. Amazon said they didn't use that as the whole of their heuristic, but it certainly was part of it. And you can extrapolate that problem even further to things like the justice system, those sorts of things. And that's a major concern. So: understanding the model, how does it work? Understanding what data you're putting in: is it clean? Is it prepared? And testing the outputs of those models. At a high level, those are the things we need to do.

Jim Goldman: That brings up a really interesting point, Jake. In the example you gave, the system was in place. We saw the results over some period of time, and then a human being said, " Wait a minute, something's fishy here. There's something going on." And then it had to be investigated, right? So there, again, the harm-

Jake Miller: Trailing indicator.

Jim Goldman: ...the harm was done. Yes, exactly. It's a closing-the-barn-door-after-the-horse-has-gotten-out kind of algorithm, right? And that's really the crux of this issue. Bias can be seen in the results, potentially. But if you were developing a ChatGPT or an AI-based system, how could you test for potential bias before the product hits the streets, so to speak?

Jake Miller: That's a great question. And I think those are things that we, as an industry, will have to learn. So in linguistics, we already know that different regions in the US, for example, have different speech patterns. It's becoming less and less so, but I guess that's a whole other webinar we could do. It seems like those are the types of patterns that we could use to test the input data before we run it through [inaudible].

Jim Goldman: Yes.

Jake Miller: And so, to your point, we're learning now how to do that. We'll never be perfect at it, but maybe that is a way to mitigate the concern. I'm brainstorming on the fly here.

Jim Goldman: No, I think you're absolutely right. Absolutely right.

Ramon McGinnis: I mean, to me, it comes down to two real simple things: diversity in the data that's being used by the models, and really strict oversight all the way through the process. I think it's one of those things where we just get out what we put in. Take the Amazon model: it predicted that men are going to be good at this job because, look at all these men that are in this job. Well, that doesn't mean that... That's obviously the wrong conclusion, but from a machine's perspective, it makes perfect sense. So it does require a lot of oversight from the jump.
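
One concrete form that oversight can take is comparing a model's selection rates across groups, for example with the four-fifths rule, a common fairness heuristic from US employment law. A minimal sketch with hypothetical outcomes:

```python
# A minimal bias check: compare selection rates across groups. The
# four-fifths rule flags a ratio below 0.8 as potential disparate impact.
# The outcomes below are hypothetical.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

# Hypothetical screening outcomes (1 = advanced, 0 = rejected)
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 = 37.5% selected
}

rates = {group: selection_rate(d) for group, d in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"ratio={ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact -- investigate training data and features.")
```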

Jim Goldman: And I think that's the underlying fear that maybe people are reluctant to articulate: who is doing that oversight? All of this information is gathered; who's keeping it fair and unbiased, and not being used for surreptitious, evil purposes, that kind of thing? And I don't think there's a clear answer.

Rob Beeler: Yes, I mean-

Jake Miller: And I wonder, that might have to be something that's just legislated. Because when you think about all these models being chained together: one person's responsible for their model, but what happens when you have 10 models, all created by different people, working together to create another result? There's no oversight in that.

Jim Goldman: Correct. That's why, I think, we talked about the European Union and GDPR; that same organization is potentially looking at OpenAI and so forth. I think that's really the model you're referring to, Jake, because they're super clear on responsibility. If you're doing this, and you're using or storing data that belongs to individuals in this way, then you will have a person that does this and this. I could see that being the kind of model you're talking about, that could work, might work.

Jake Miller: Yes. Sounds like another company. I'm writing it down.

Jara Rowe: All right. So one of my favorite things about the panel we have here is that it's a great crossover: engineers, SaaS companies, people helping SaaS companies, cybersecurity experts, all the things. So how do AI, SaaS, and cybersecurity work together? Rob, I would love to have you answer that first.

Rob Beeler: Yes, it's a great question. So I think, from a SaaS company looking at AI, there are many, many use cases. And we've said before, there's good and bad, there's positives and negatives. It can be used for good or evil. But I think we all have to accept that it's here. And I know there was a question posted about the push to maybe put a pause on AI. I personally think you're not getting that genie back in the bottle, it's going to be very difficult-

Jim Goldman: No. No chance.

Rob Beeler: ...to stop, outside of regulation. But even then, it will keep happening. Anyway, I think today it's being used in a lot of different ways by SaaS companies, and I actually think we're going to see a boom in opportunities for SaaS companies with generative AI. You're already seeing a lot of new products and new companies pop up using the technology. So in general, I think it's very important to embrace the idea and see how you can leverage it. But of course, and Jake touched on this in terms of coding and what data or content you're using, you have to be very careful. You have to understand the limitations; you have to have review processes and human oversight in place. So from the SaaS-and-AI side, it comes together as: hey, this is an opportunity that we have to manage carefully going forward. From the cybersecurity perspective, it's another area where there are opportunities to improve, but we need to understand that the bad guys will have those opportunities, too. It will be an arms race for some period of time, maybe forever. And how do we combat the new power that everybody has?

Jake Miller: That's a great point, that everybody has it. It's democratizing AI, which adds a whole other layer of privacy and compliance concerns.

Rob Beeler: I think democratizing it is a great way of putting it. Throughout my career, the development of tools for coders has just made the development process easier. I've always found it makes the easy things easier and lowers the bar for people to enter the field. But to make things of meaning, things that have impact, still requires a deeper level of thinking. So I think we all have to figure out how to leverage these tools and move people into roles where they're doing that deeper level of thinking. You have to be competitive. I think we're going to have to do that; otherwise, it's like you're coding with one hand behind your back, because everyone else will be using the tools.

Jim Goldman: Exactly. Exactly.

Ramon McGinnis: This is a paradigm shift. I mean, it's something we can expect of anybody coming into the industry going forward: they will need to be AI wranglers, so to speak.

Jake Miller: Prompt engineers. Yes, absolutely, Ramon. I was going to say, I write a Friday note to a lot of other CEOs, and one of them was about how I've identified AI tools as an existential threat to the Engineered Innovation Group. If we do not think about how we can leverage these tools to be better coders, better product people, and just better employees, that is a threat to us as a software company. Maybe that's a little melodramatic, kind of my style, but it's true, I think.

Jim Goldman: I think it's pragmatic. And another example, without revealing any trade secrets: at Trava we sit between a variety of different companies, in a variety of different industries, who are looking to get cyber insurance and therefore need to be able to measure their cyber risk. And on the other side are the agencies, wholesalers, and insurance carriers that want to write cyber insurance but don't want to lose any money. And so the question in the middle is: how do we take all this data about every risk out there and every breach that ever happened, what led to what and what caused what, et cetera, et cetera, and come up with a single number or something, an [inaudible] that says, "This company's a good risk, this company's a medium risk, this company's not so good a risk in terms of writing cyber insurance"? That tremendous amount of data is exactly what AI, and perhaps ChatGPT, but certainly AI, is meant to look at: to look through disparate data from a variety of sources and somehow cull it down to a meaningful, actionable piece of data that goes from one source to another.
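
As a rough illustration of rolling disparate signals into one number, here is a toy scoring function. The features, weights, and scale are all invented; an actual underwriting model would be trained on real breach and claims data:

```python
# A toy cyber-risk score: combine a few security signals into one 0-100
# number. Every feature and weight here is invented for illustration.
WEIGHTS = {
    "open_critical_vulns": -4.0,        # each open critical vuln drags the score down
    "mfa_enabled": 15.0,                # 1 if MFA is enforced org-wide, else 0
    "employee_training": 10.0,          # 1 if security awareness training runs, else 0
    "days_since_last_incident": 0.02,   # slow credit for a clean recent history
}

def cyber_risk_score(company: dict) -> float:
    base = 50.0
    score = base + sum(WEIGHTS[k] * company.get(k, 0) for k in WEIGHTS)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 scale

acme = {"open_critical_vulns": 3, "mfa_enabled": 1,
        "employee_training": 1, "days_since_last_incident": 400}
print(cyber_risk_score(acme))  # 71.0 -- higher = better insurability in this toy scale
```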

Jake Miller: Yes. Yes. Jara, I'll add one more thought, too, on your question about SaaS and cybersecurity and how those things go together. This is more of a recommendation; it's a framework I use every day. If you are running a software company, it's very helpful to think about product and operations as two different things. You have a product, and you're trying to build AI and machine learning into that product, maybe, maybe not. Your security and compliance programs should cover both product and operations, but you can separate those out. Operations are things like: how does my team make sure they erase their whiteboards before they get onto a webinar? You can't read that. How do they vet tools that they're going to use? How do we offboard someone and make sure they don't have access to our resources anymore? Those sorts of things. Thinking about it that way is very, very helpful, because it's not just the software, the SaaS; it's also your organization.

Jara Rowe: Awesome. So Jim, you were just touching on cyber insurance and things like that. Can you talk about what impact AI tools may have on the usage of cyber insurance coverage?

Jim Goldman: Yes, that's a great question. I mean, the short answer is: I don't know, we have to wait and see. But there are certainly implications. We've had a debate, and we're doing some research right now, reaching out across the industry to try to get a read on what cyber insurance companies' reactions to ChatGPT will be. In other words, if there's evidence it was used for nefarious purposes, does that somehow nullify their need to pay a claim for a cyber incident, that type of thing? Or they could simply exclude all claims from being paid if the evidence points to it being a ChatGPT-generated attack. It's a lot. It introduces a whole new wrinkle into an industry that, right now, has more questions than answers.

Rob Beeler: And we were talking to a large agency this morning on this exact topic, and they said that while the questions haven't come yet, they're starting to formulate. And I think that's what you'll see first. If you look at insurance applications, cyber insurance applications have gotten longer and longer, with more and more topics covered. I think you'll see questions next like: do you have a policy on AI usage? Do you have mechanisms in place to look for content that was AI-generated, or processes in place to validate that kind of content? I suspect those will be the first steps: looking to see who's thinking about it and who's putting some energy into defining policy. And then we'll see what the loss behavior is over time, and that's how insurance companies will react.

Jara Rowe: Cool. So Jake, I would love for you to answer this question first. What tools should businesses be leveraging when it comes to AI? And then are there some that people should stay away from for the time being? And you don't have to answer that last one if you don't feel comfortable.

Jake Miller: Well, honestly, the whole question's a hard one. Where my mind immediately goes is product development: what should we be using? And I could share a couple of things, and Ramon's probably an even better person to answer this question, because already [inaudible] second. But Copilot, for example: when we're coding, it's phenomenal in helping us code faster. Those sorts of tools. ChatGPT, of course; people use it in marketing and whatnot. Hopefully we're using it as inspiration, not as a plagiarism tool. For me, though, it really comes back to: what is the problem you're trying to solve and what is the use case? That will dictate the tool that you choose. And part of the reason I'm skirting around directly answering your question is there are so many out there I've not vetted. Like most other people, I'm trying to drink from the fire hose as well, with all the new things popping up. But Ramon, do you have other thoughts?

Ramon McGinnis: I mean, that's the thing. It is pretty overwhelming that there are so many tools out there that we can be using for different purposes. So it really comes down to: what are you doing over the course of your day, and what's going to make your life easier? Personally, I use Copilot and ChatGPT, and that's pretty much the deepest I go when it comes to engineering product. However, I know people over on the marketing side who will use ChatGPT plus Midjourney or Stable Diffusion, something to generate placeholder images, things of that nature. It always comes down to making sure that you're doing something that isn't plagiarized, something that isn't generic, something that's still you. So use some of those tools, but don't let them be the thing that just gives you the answer.

Jake Miller: Yes. Here's a specific example. We use what are called Terraform scripts when we're standing up environments. So, basically, we create a script that says, "I want a machine here, and I want a load balancer here," whatever it is, right? And one way we can move much faster is we say, "ChatGPT, create me a Terraform script for Google Cloud that does X, Y, and Z." Boom, it spits it out. Are we going to just copy and paste that in? Absolutely not. We're going to check to make sure it's going to do what we need it to do. It's going to go through testing, environments, all those fun things. So that's a very specific use of a tool. And someone in the Q&A actually asked, "Three years from now, how do you anticipate AI will change the security industry?" Let's start just from a generative AI perspective: how's that going to affect us? Democratizing AI means anyone can be an attacker, a very sophisticated attacker, because they have tools at their disposal to say things like, "Get me into X, Y, and Z system," or, "Write me a script that can do X, Y, and Z." So I think we're going to see a wild west in the next three years. That's my prediction [inaudible].
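
A minimal sketch of that "don't just paste it in" workflow: write the generated Terraform to a scratch directory and run Terraform's own checks before any human review or apply. It assumes the terraform CLI is installed, and the generated snippet below is a stand-in for LLM output:

```python
# Sketch: check LLM-generated Terraform with terraform's own tooling before
# any human review or apply. Assumes the terraform CLI is on PATH.
import pathlib
import subprocess
import tempfile

generated_hcl = """
resource "google_compute_instance" "example" {
  name         = "demo-vm"
  machine_type = "e2-micro"
  zone         = "us-central1-a"
}
"""  # stand-in for LLM output; it may be incomplete, which is why we check

with tempfile.TemporaryDirectory() as workdir:
    pathlib.Path(workdir, "main.tf").write_text(generated_hcl)
    for cmd in (["terraform", "init", "-backend=false"],
                ["terraform", "validate"]):
        result = subprocess.run(cmd, cwd=workdir, capture_output=True, text=True)
        status = "OK" if result.returncode == 0 else "FAILED:\n" + result.stderr
        print(" ".join(cmd), "->", status)

# Only after validate passes, and a human has reviewed `terraform plan`,
# would anything actually be applied.
```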

Jara Rowe: Awesome. I wanted to end with one of my final questions before we answer the questions from the audience. What do we have to look forward to? What predictions do you all have or what are you excited about or kind of hesitant about when it comes to AI tools? Rob, let's start with you first.

Rob Beeler: Yes, so I think there are going to be a lot of exciting advancements. I think you're going to see more predictive tools, and Jake gave a really good example of tools to help you automate things you're doing. I was reading about tools to secure Kubernetes, tools to secure your cloud environment, things to help you come up with your security policies. So I think you're going to see more predictive technology, and it will just be integrated into our tools. It will not be so much, "Hey, I'm going to ChatGPT to do this"; every tool you use will start to recommend things, see what you're doing, and say, "Hey, do you want to do this? Here's a suggestion." But maybe the most interesting thing to me, which I see coming, is a change in expectations. This type of interaction and customization is just going to be part of every product requirement in the future. It's kind of raised the bar on everybody: it's not just take some input and show data, it's really understand your user and your customer and tailor what you're showing them.

Jara Rowe: Awesome. Ramon, you want to answer next?

Ramon McGinnis: Sure. I envision a producer-consumer scenario going forward, wherein you have some people working on the models and some people using those models, like we are right now. But as far as the consumers go, you're going to see a lot more higher-level thought. You're not going to have to worry so much about the nitty-gritty, and that's going to change the industry as it is. And I have a fun aside for this. I was in a hiring process not too long ago, and the first thing that comes up is the regular conversation, and the second is the code test. And I'm like, "The code test means nothing now. That means nothing." You can just run it through any of a thousand different solutions and get the response back. That's not really a good determiner of who's going to be the right fit; anybody can do that. It's beyond rote memorization at this point. So it's going to change the nature of engineering in general. But there's also the other piece here, which is the environmental impact of using all of this AI, because it is an expensive thing to do. So we're going to have to work on making it more eco-friendly if we're going to continue using it at the scale it looks like it's going to be used.

Jara Rowe: Interesting. Jake?

Jake Miller: So putting on my creative hat here, every once in a while lately I think of Star Trek: The Next Generation, and Dr. Crusher talking to the computer and saying things like, " Hey, computer, can you take that virus and break it down into the X, Y, and Z?" And then do all sorts of stuff that I couldn't recite here. I think that's where we're really headed. I know that's not specifically related to the cybersecurity question here. But when we talk about AI, that's what makes me optimistic, is being able to interact way more fluidly with the technology that we have at hand.

Jara Rowe: And Jim?

Jim Goldman: Well, I would pick up on the points both Ramon and Jake were making. There will be increasing layers of abstraction layered on top of what we're talking about with ChatGPT. I picture almost an upside-down triangle, in that the number of people working at the ChatGPT level will pale in comparison to the number of people using abstracted tools that do that lower-level work for them. And the other thing I think is going to happen, like with any other new technology, is that checks and balances will come into place, from both the technology industry itself and from governmental and quasi-governmental agencies. Jake found a publication from the National Institute of Standards and Technology here in the United States, just published in January, called the Artificial Intelligence Risk Management Framework. So governmental and quasi-governmental agencies are getting involved. The EU commission that gave us GDPR isn't just looking at this; they're actively working on producing regulations that would complement GDPR.

Jara Rowe: I have to ask the question for those people who may not know, who joined simply from the SaaS side. Jim, can you explain what GDPR is?

Jim Goldman: Yes. It really came from a desire on the part of the European Union to protect the private information of private individuals, far beyond any protection that we have nationally in the United States. In the United States, right now at least, we're taking a state-by-state approach to protecting private information, with California and the CCPA being kind of a leading example, though many other states are starting to look at that. But that's really what it is: your personal information is your property, and you as an individual should have control, own it, and decide how it can be used, where it can be used, when it can be used, et cetera. The European Union said that is absolutely correct. And if you're a company anywhere in the world that wants to do business with European citizens, you will abide by these rules.

Jara Rowe: Perfect. All right. So now as we are wrapping up, I would love to get into some of the questions that the audience submitted. And I personally really like this first one. So with AI able to learn, how do the good guys stay ahead of the bad guys when it is constantly changing in real time?

Rob Beeler: Well, I would say-

Jara Rowe: Who wants to take-

Rob Beeler: I'll jump in. I would say the good guys have to be using AI as well. We have to be using similar tools and leveraging that kind of technology. And you can use these tools to predict risk: to not only say this thing happened, but to look at behavioral analytics and understand where you would expect something to happen, because the vast majority of breaches start with human behavior. So being able to monitor that at large scale, correlate events across different things, and say, "Hey, here's a potential risk, let's head that off." I think you've got to fight fire with fire.

Jim Goldman: And also, I think, don't bury your head in the sand. In regards to what the bad guys are doing with it: we have systems set up, law enforcement agencies, et cetera, that for years have kept on top of what bad guys are doing with technology all around the world. I was a task force officer on the FBI Cyber Crime Task Force, and I can assure you that they have highly qualified people keeping close tabs on the more nefarious uses of ChatGPT.

Jara Rowe: Awesome. Anyone else?

Jake Miller: Last thought.

Jara Rowe: Go ahead, Jake.

Jake Miller: Yes, I would say, last thought: it's sort of like how any arms race is fought, right? It's just happening a lot faster. There's the sword, then there's the musket, now there's the cannonball. I don't know exactly what order these weapons were developed in, but the point being: it's about staying ahead of the game.

Jara Rowe: Perfect. All right. This next one, I believe Rob touched on it a bit earlier, but what do you all think about the open letter calling for a pause in the development of AI? Just your general thoughts.

Jake Miller: Pointless. Rob said earlier-

Ramon McGinnis: I understand it. I just don't think it'll work.

Jake Miller: I think we'll all-

Ramon McGinnis: I totally understand it.

Jake Miller: Yes, I think all it's going to do is... Actually, just to play off the last question, all it would do is stifle the people who may be doing good, right? It's just contradictory, unfortunately. As much as I want to say, "It's a great idea and we should do that," I just don't think it's practical.

Ramon McGinnis: Yes, I think that the big concern there is that we're just outpacing ourselves as humans, that we are creating Frankenstein's monster. And that's pretty valid. I think there's a lot of validity to that. But, like everybody said, I don't think that we're going backwards from this. I don't think we're going to be able to hit the pause button, so we're just going to have to become better as humans, which is scary.

Jake Miller: I love the optimism.

Jara Rowe: Oh, man. All right. So, are any of you aware of a company that does pre-bias work when it comes to AI?

Jake Miller: Not right now. Also sounds like a great business opportunity.

Ramon McGinnis: It sure does. It sounds necessary and huge. If something like that exists, it's going to take over.

Jake Miller: Yes. If you need help building that, contact the Engineered Innovation Group.

Jim Goldman: Yes, it's almost like knowledge management meets ChatGPT or something like that.

Jara Rowe: So it looks like someone wants to identify a way to create career pathway training based on job descriptions. Are there any modeling risks? I think that's what the question is asking.

Ramon McGinnis: Honestly, that's one I would need to answer offline after a little bit of thought. That's a good question.

Jara Rowe: Yes, that's-

Jake Miller: That's a good question. That's a really good question.

Jara Rowe: We will be sending a follow-up email after this, so if you all want to take some time to look into it, I can include that in the email so we can get Barb's question answered. So I think we've covered that one. Next: how does AI potentially help MSPs when it comes to SaaS and cybersecurity?

Jim Goldman: Well, a managed service provider or a managed security service provider uses a combination of different tools from different companies to manage and protect their clients' infrastructure. And so I think the benefit is sort of a downstream benefit: as the variety of tools in their toolbox becomes more sophisticated and better at filtering out false positives, because of their use of AI and ChatGPT, the benefit will cascade down to them. They'll be able to better serve their clients, catch more actual problems, and eliminate more false positives.

Rob Beeler: I would add that MSPs are generally monitoring a large number of customers. So I think they can use some of these advancements in tools to better understand trends across their customers, as well as trends across the industry: making sure they understand what common vulnerabilities are coming up, whether they're seeing them across their customers, and what kind of behavior they're seeing over a larger group versus just one single company. To Jim's point, all the tools will advance, and as they can predict more and show trends across a broader set of data, that will help them.

Jara Rowe: Great. So: can AI make our gadgets completely transparent and vulnerable if it runs on quantum computers, and what might the solution be?

Jake Miller: Add that to the offline conversation.

Rob Beeler: Wow.

Jara Rowe: That's a big question.

Ramon McGinnis: Yes [inaudible]

Jim Goldman: It's a very valid point. All of these things that we're talking about come down to computing power. Not to change the subject, but you're probably familiar with blockchain.

Jake Miller: I was just thinking that, Jim.

Jim Goldman: Yes. Where does the power behind blockchain come from? Massive amounts of computing power. And why have encryption schemas like DES, which we thought were unbreakable for years and years, become breakable? Because of massive amounts of computing power. So it's a very valid point. It's a very valid concern. Right on: the more computing power you have, the more you can overcome. It's just that simple.
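
Some back-of-the-envelope arithmetic on why raw computing power broke 56-bit DES while AES-256 remains far out of brute-force reach; the guess rate below is a made-up but generous figure for a large modern cluster:

```python
# Brute-force keyspace arithmetic. GUESSES_PER_SECOND is hypothetical.
GUESSES_PER_SECOND = 1e15  # a generous, made-up figure: a million billion keys/sec

for name, bits in [("DES", 56), ("AES-128", 128), ("AES-256", 256)]:
    seconds = (2 ** bits) / GUESSES_PER_SECOND
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{name}: ~{years:.2e} years to exhaust the keyspace")

# DES:     ~2.29e-06 years (about 72 seconds) -- brute-forceable.
# AES-128: ~1.08e+16 years.
# AES-256: ~3.67e+54 years -- far beyond any foreseeable computing power.
```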

Jara Rowe: All right. And then, I believe this is the final one, and it's specifically about Trava. So they're asking how is Trava different from something like Burp Suite?

Rob Beeler: I'll jump in on that one. So, at Trava we're focused on comprehensive assessment of risk, looking at all areas of a company, versus just looking from the outside, from a website, or at the penetration level. We're able to look inside your network, inside your cloud environments, in your Microsoft 365 environments, on particular machines; we really have a deeper ability to scan and assess. That's one thing. Number two, we roll all of that up, normalize it, classify it, and help you understand what's really important to fix and how to fix it, so you don't have to have the expertise to run a particular tool. We make that easy for you. And finally, the last piece, a really critical piece, is that we've got a service organization behind that to help you get from where you are today to where you need to be. It's one thing to say, "I need to do these things"; it's often very difficult to navigate that and come up with a roadmap of, we're going to do X, Y, and Z, and here's what's most important. So we help customers do that. We think that's a really important feature for our customer base.

Jake Miller: And Trava's really good at it.

Jara Rowe: Thanks, Jake. So as we wrap up, we have a final few minutes. Do any of you have any final thoughts on AI, SaaS, and cybersecurity?

Jim Goldman: Yes, I have one. An anonymous attendee said that Jim is still in 1970. I just want to say, for me, 1970 was a great year, and I'd be happy to still be there.

Jake Miller: And let's not forget, many of these AI and machine learning models were actually invented in the '40s and '50s; we just didn't have the computing power to do anything with them.

Jim Goldman: Oh, no. I used to program in Lisp, L-I-S-P. Yes.

Jake Miller: Yes, popular in linguistics, actually. Yes. No, nothing else from me.

Jara Rowe: Ramon or Rob? We're good. All right. Well, thank you for attending the webinar. We will be sending a follow-up with a webinar replay and some other things. But don't go just yet, because our friends at EIG have another webinar next week on data science. If you scan the QR code, it'll take you to the registration link and you can attend that.

Jake Miller: Thanks for sharing that.

Jara Rowe: Yes, no problem. All righty. Well, everyone have a fantastic afternoon.

Jim Goldman: Thanks, Jara, great job.

Ramon McGinnis: Thanks, everyone.

Jake Miller: Thank you.

Rob Beeler: Take care.

Jake Miller: Bye.