The Core Pillars of AI Governance


Jim Goldman: AI governance is not just important for those companies that are selling AI-based products. It is just as important for companies adopting those AI-based products, because the risks are similar. The risks don't care whether you're producing the product or using the product; the risks are the same.

Jara Rowe: Gather around as we spill the tea on cybersecurity. We're talking about the topic in a way that everyone can understand. I'm your host, Jara Rowe, giving you just what you need. This is The Tea on Cybersecurity, a podcast from Trava. AI is a huge topic right now, and we've already talked about AI compliance on the podcast. But now, we're diving into AI governance: what that really means, and why it's becoming essential for any organization using AI. On this episode, we'll explore how governance differs from compliance, who should own it inside a company, and how businesses can build ethical frameworks for AI use. Plus, we'll talk about the real risks and challenges companies are facing right now. As we know, I am not the expert, but I have my favorite cybersecurity expert with me today, Jim Goldman. Hi, Jim.

Jim Goldman: Hey, Jara. Good to see you again.

Jara Rowe: Nice seeing you. I know you've been on the podcast a couple of times, but just in case someone is tuning in to their first episode with you, can you go ahead and introduce yourself?

Jim Goldman: Sure. I'm Jim Goldman, co-founder of Trava. I was formerly CEO, but I've stepped aside. Now I'm doing what I love best, which is working with current and prospective customers on their security and compliance needs.

Jara Rowe: Fantastic. Okay. Let's go ahead and dive right on in. Just to kick things off, can you explain what AI governance means in simple terms for cybersecurity newbies like myself?

Jim Goldman: You bet. It's almost like we have to take a historical perspective on this, Jara. We'll get to AI governance, but we need to talk about governance in general first. Here's the thing. This notion of security GRC, security governance, risk management, and compliance, has been around for quite a while. But what I would say is, subjectively speaking, the R and the C, the risk management and the compliance, have been more widely adopted and are more mature than the governance part, the G part. The reason I say that is, in my experience, the governance part of security fell mostly to the people in charge of information technology or security and compliance. Where it hadn't gotten to was the senior leadership of the company: the board of directors, the risk and audit committee of the board, et cetera. Cyber governance was this exception over here that the senior-most leadership, I don't want to say washed their hands of, but it wasn't something they were staying on top of on a regular basis. That's now all changed, I think as a reaction to the realization that when there is a large cyber incident, a large data breach, it's not just an IT problem. Now it's a reputational problem. Now it negatively affects the stock price if they're a publicly held company. Now it's got the attention of the board of directors, and of the regulatory agencies and the bodies that put together security frameworks, who are saying, "Okay, we've been too casual about this, we haven't been prescriptive enough. Now we're going to get a little bit more prescriptive." The best example of that is the NIST Cybersecurity Framework, the NIST CSF that we've talked about in the past. The original NIST CSF version 1.0 was created in 2014. NIST CSF 2.0 came out in 2024, and significantly, they added a whole function about governance. That was a big difference. Okay, we're not going to just leave this to chance, we're not going to assume everybody's doing it correctly. No, this has to be done this way. They're very specific about the requirements for senior-most management to be involved. It's not just a technical problem, it is a corporate governance problem. AI, to get to your question, piggybacks on top of that: AI is not a data science problem. It is not a problem for the engineering team to worry about solely. Certainly, they have responsibilities, as they should. But governance is a top-down function. AI governance has to start at the very highest layers of organizational leadership.

Jara Rowe: We actually just talked about that on a different episode, about how everything starts top-down and it's not just on one person to care about; it's a team effort.

Jim Goldman: The most successful, whatever you want to call it, cybersecurity, information security organizations are the ones where everybody buys in and they don't just say, " Oh, the nerds over in IT take care of that. I can just go about my business and not worry about it."

Jara Rowe: All right. As I mentioned at the beginning, we've already had an episode about AI compliance. But can you tell us how AI governance is different from AI compliance?

Jim Goldman: Absolutely. As I alluded to previously, a lot of it has to do with the organizational aspect, in that the senior-most layers of management are responsible, I would say even primarily responsible, for AI governance. The buck stops with them. Whereas the compliance end of it sits more with the operational layers, the doers if you will. In other words, we set the philosophy at the highest level, but then the implementation of that philosophy has to be done by the people actually doing the work: the AI experts, the machine learning experts, the engineers, the deployment people, et cetera. They're the ones that have to gather the evidence to say, "Here's our policy, here's our process, here's our procedure, here's our control. Here's the evidence that we're doing these things according to our policies, procedures, and controls."

Jara Rowe: Yeah. I will admit, especially since I'm still learning this industry, I always got confused by governance and compliance. But you putting it that way-

Jim Goldman: Yeah.

Jara Rowe: ...definitely makes more sense so I understand.

Jim Goldman: Right, right. The other way to think of it is in a public sector context: governance is where our representatives pass laws, and law enforcement is the compliance part.

Jara Rowe: That is super helpful. I just wrote that down.

Jim Goldman: Yeah. I just made that up.

Jara Rowe: Well, it's helpful. Okay, next. Why is AI governance necessary even for organizations that are just using AI?

Jim Goldman: That's a very good distinction. AI governance is not just important for those companies that are selling AI-based products. It is just as important for companies adopting those AI-based products, because the risks are similar. The risks don't care whether you're producing the product or using the product; the risks are the same. Then there's the obvious ones: the risks are the antithesis of the desirable characteristics that we want from AI. For example, the thing that most people are worried about comes from a lack of quality control on the data that AI is based on, and this is fundamental. AI is not magic. AI is super intense processing, but what's it processing? It's processing data. There used to be an old expression back when we called it computer programming: garbage in, garbage out. Your output is only as good as the quality of your input. In the rush to implement AI, and this is just my opinion, I think there's been a lack of clarity on the importance of data management, and more specifically data quality, which is the foundation, the fundamental layer, of any AI system that's going to sit on top of it. Getting back to my previous point, the worry with poor data quality is that you're making bad recommendations. Those bad recommendations could even lean into bias, et cetera. Because if you think about how AI works, it builds on itself. If you've got some kind of incomplete data in your data layer, the first iteration that goes through it produces a biased or slightly wrong recommendation. That output now becomes data for the next generation. The bias or the error perpetuates and could potentially even accelerate.
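Jim's feedback-loop point can be made concrete with a toy simulation: a "model" whose belief is simply the average of its data, retrained each generation on its own outputs. Everything here, the +0.1 bias, the noise level, the update rule, is an illustrative assumption, not a model of any real AI system:

```python
import random

random.seed(7)
TRUTH = 0.0

def train(data):
    # "Training" reduced to its essence: the model's belief is the mean of its data.
    return sum(data) / len(data)

# The seed data carries a small systematic bias of +0.1 relative to the truth.
data = [TRUTH + 0.1 + random.gauss(0, 0.05) for _ in range(200)]

for generation in range(6):
    belief = train(data)
    print(f"generation {generation}: belief = {belief:+.4f} (truth = {TRUTH:+.4f})")
    # Each new generation trains only on the previous model's own outputs,
    # so the original bias is never corrected and noise compounds on top of it.
    data = [belief + random.gauss(0, 0.05) for _ in range(200)]
```

Run it and the belief never returns toward the truth: the initial skew in the data layer persists through every generation, which is exactly the "garbage in, garbage out" dynamic described above.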

Jara Rowe: Yeah. We want to make sure that there's no biases.

Jim Goldman: Yeah. It's got to be unbiased, it's got to be ethical, it's got to be accurate. That's really what all of this comes down to. There are many different risks that could potentially undermine that, but that's really what we're trying to avoid. At the end of this, the recommendations that come out of any AI-based system should be responsible, accurate, unbiased, and ethical.

Jara Rowe: Okay. I know earlier, you started talking about NIST and things like that. But what are the key components of an AI governance framework?

Jim Goldman: Really, what it comes down to, and we'll use NIST as a good example, is the Govern function of the NIST AI Risk Management Framework. Basically, it wants you to outline the policies, the processes, the procedures, and the practices across an organization, top to bottom, from the leaders to the doers. Really, what we're trying to do is map, measure, and manage the risks. In other words, risks are inherent; we need to start there. You're never going to eliminate risk. There's never going to be a risk-proof or zero-risk system. There are always risks, and risks have to be properly managed. How we manage risk is not unique to AI: you can mitigate the risk, you can accept the risk, you can transfer the risk, that type of thing. That's effective risk management. The key here is that you're doing all that risk management not off the cuff, not in an ad hoc or inconsistent manner, but consistently, according to the policies, practices, and procedures that you've laid out. If we look at, for instance, Govern in the NIST AI Risk Management Framework, it starts with the legal and regulatory environment in which you operate; that's the Govern 1.1 area. What that's talking about is that there's a difference between standards and legal requirements. In the EU, there's the EU AI Act; that's a law. It's not voluntary, like saying, "I really think we ought to be ISO 42001 compliant." That's not law, that's a choice. Same thing for, "I really think it would be smart if we were compliant with the NIST AI Risk Management Framework." You have to understand: those are standards, but then there's the law. There's the EU law. In the United States, we already have AI laws, either already enacted or in the works, from California, Colorado, Illinois, Maryland, New York, Virginia, and probably others. What's interesting is there's also been a push at the federal level for a 10-year moratorium on states being able to pass their own AI laws. I'm not going to weigh in on which approach is better or preferable. But governance starts with understanding the legal environment in which you're operating.
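The mitigate/accept/transfer options Jim lists are typically tracked in a risk register. A minimal sketch of one, assuming nothing about any particular GRC platform; the field names, scoring scale, and example entries are hypothetical, with only the Govern 1.1 reference taken from Jim's description:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"  # reduce likelihood or impact with a control
    ACCEPT = "accept"      # document the residual risk and live with it
    TRANSFER = "transfer"  # shift it, e.g. via insurance or a vendor contract

@dataclass
class RiskEntry:
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    treatment: Treatment
    reference: str         # the policy or framework clause driving this entry

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("Training data has no documented quality review",
              likelihood=4, impact=4, treatment=Treatment.MITIGATE,
              reference="NIST AI RMF Govern 1.1"),
    RiskEntry("Third-party model endpoint becomes unavailable",
              likelihood=2, impact=3, treatment=Treatment.TRANSFER,
              reference="vendor contract / SLA"),
]

# Review the highest-scoring risks first, consistently, never ad hoc.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.treatment.value:<8} {risk.description}")
```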

Jara Rowe: Speaking of that, who is responsible for all of these things at an organization when it comes to AI governance?

Jim Goldman: It's a really good question. It goes back to something we said earlier about the secret formula for doing anything in GRC successfully, which is that it's a total team effort. Every layer of the organization knows what their role is, knows what their responsibilities are, and they're managing and monitoring that, and able to produce evidence that they're following procedures. The managing and monitoring comes in because, if they're not, those exceptions pop up. In other words, the bottom line of governance is that things don't go undetected. If anywhere along the line something is not compliant with a policy, a process, a procedure, a control, or whatever, it doesn't go unnoticed. An alarm goes off, a warning email comes out, something like that. That's the key to governance: nothing goes unnoticed, and everybody's responsible. That's really what it comes down to.
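Jim's "nothing goes undetected" rule amounts to continuous monitoring: every control has a check, and a failed check raises an alert rather than passing silently. A hypothetical sketch; the controls and checks below are stand-ins, not any real platform's API:

```python
from typing import Callable

# Each control pairs a policy statement with an automated check. These
# example checks are hypothetical stand-ins for real evidence collection.
CONTROLS: dict[str, Callable[[], bool]] = {
    "MFA enforced for all admin accounts": lambda: True,
    "Model training data passed quality review": lambda: False,
    "Quarterly access review completed": lambda: True,
}

def run_compliance_checks() -> list[str]:
    """Run every check; a failure must surface, never pass silently."""
    exceptions = [name for name, check in CONTROLS.items() if not check()]
    for name in exceptions:
        # In a real platform this would page someone or open a ticket.
        print(f"ALERT: control out of compliance -> {name}")
    return exceptions

failed = run_compliance_checks()
print(f"{len(CONTROLS) - len(failed)}/{len(CONTROLS)} controls passing")
```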

Jara Rowe: Absolutely.

Jim Goldman: Yeah. No one person is responsible top to bottom, but everybody has a role to play.

Jara Rowe: Right. Okay. I feel like you were going over this briefly before. How can companies develop ethical guidelines for AI use?

Jim Goldman: Fortunately, both ISO 42001 and the NIST AI Risk Management Framework have a very strong perspective on ethical development and ethical use. In some ways, it's like any other software development lifecycle. It starts at the very beginning with ethical AI development. Right from the beginning, it has to be designed with respect for human rights, privacy, and dignity. That's the ethical use piece: unbiased, truthful, et cetera. No ulterior motives, I guess, is one way to say it. No purposefully introduced biases, that type of thing. Then the other key part, and this sometimes gets missed, I think this is maybe the misplaced trust that some people are putting in AI, is that transparency and explainability are key desirable characteristics of AI systems that both ISO 42001 and the NIST AI Risk Management Framework highlight. If you stop and think about that: if you and I were having a conversation and you said something to me as if it were absolute fact, we would probably have a conversation that goes, "Well, why did you reach that conclusion? Where did you get the data to come to that recommendation or that opinion?" We need to be able to do the same thing with AI systems. You're telling me, "Here's your answer to this query." Well, show me how you reached that conclusion. I don't know how many AI systems are equipped to do that, but that transparency and explainability characteristic is absolutely key to us being able to use AI in the future in an effective, fair, and trustworthy manner.
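The "show me how you reached that conclusion" property can be made concrete, at least for simple rule-based systems: return the evidence behind an answer along with the answer itself. A toy illustration with made-up thresholds and a made-up approve_loan function; explainability for large learned models is a much harder, still-open problem:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    answer: str
    # Every factor that influenced the answer, so it can be audited later.
    reasons: list[str] = field(default_factory=list)

def approve_loan(income: float, debt: float, credit_score: int) -> Decision:
    reasons = []
    ratio = debt / income if income else float("inf")
    reasons.append(f"debt-to-income ratio = {ratio:.2f} (threshold 0.40)")
    reasons.append(f"credit score = {credit_score} (threshold 650)")
    approved = ratio <= 0.40 and credit_score >= 650
    return Decision("approved" if approved else "declined", reasons)

decision = approve_loan(income=85_000, debt=30_000, credit_score=700)
print(decision.answer)
for reason in decision.reasons:
    print("  because:", reason)
```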

Jara Rowe: For sure.

Jim Goldman: Beyond that, it's the accountability. We have to have clear lines of responsibility. There's no passing the buck. This goes back to what I said before about clear policies and practices. And from our other talks on compliance: any time you have a control, there's a name next to it. Here is the person responsible for this control; here's the person responsible for that control. It's never a department or a role, it's unambiguous. It's a person's name. That's the accountability part. Then we talk a lot about fairness and nondiscrimination. We have to guard against any inherent or purposefully introduced biases. Again, where does that start? It starts with the data. What if the data came from only one part of the world, and then someone asks a question, and the system extrapolates and says, "Across the world, here's what's true"? Well, that was based on data from only this small part of the world, so how can that be true? Then finally, it's risk management. You know this because we've talked about risk management so much. Risk is inherent; we can't eliminate risks. But there's no excuse for not knowing what your risks are; that's why risk assessment is so important. You have to assess the risks. Risk assessment on an AI system is a little bit different. There are some similarities, but it's a little different from an ISO 27001 risk assessment or a SOC 2 risk assessment. It's different, but it's the same in that effective risk management always starts with a thorough and objective risk assessment.
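Jim's rule that every control has a person's name, never a department, next to it is easy to enforce mechanically. A hypothetical sketch of that check; the control IDs, statements, and owner names are invented for illustration:

```python
from dataclasses import dataclass

# Role and department names that must never appear as a control owner.
FORBIDDEN_OWNERS = {"", "tbd", "it department", "security team", "engineering"}

@dataclass
class Control:
    control_id: str
    statement: str
    owner: str  # must be a specific, named person

def unowned_controls(controls: list[Control]) -> list[str]:
    """Flag any control whose owner is a role or department instead of a person."""
    return [c.control_id for c in controls
            if c.owner.strip().lower() in FORBIDDEN_OWNERS]

controls = [
    Control("AI-01", "Training data sources are documented and reviewed", "Dana Whitfield"),
    Control("AI-02", "Model outputs are spot-checked for bias monthly", "Security Team"),
]

for control_id in unowned_controls(controls):
    print(f"{control_id}: owner must be a named person, not a role or department")
```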

Jara Rowe: How can an organization ensure accountability in AI decision making?

Jim Goldman: We've talked a lot about GRC on these podcasts, and GRC platforms, et cetera. In some ways, what we want to assure people is that just because it says AI in front of it doesn't mean you have to forget everything you knew about security governance or security compliance. A lot of those same practices still apply; it's just that instead of looking over there, now we're looking over here. The same kinds of things: well-defined policies, processes, and procedures are put in place. Control owners are identified. Then you really need the platform that provides the managing, the monitoring, the evidence gathering, the internal audits, and so forth. For some companies, it's going to make sense, probably largely driven by their customers' demands, to get certified: to get an external audit against the ISO 42001 standard. The interesting thing, Jara, is that as new as that standard is, we're seeing quite a big uptake in companies, large and small, going through that audit process and getting ISO 42001 certified. I think it's going to be a big deal.

Jara Rowe: Yeah, for sure. Especially since AI is everywhere now. Even when I go to some fast food places, there's a little robot-

Jim Goldman: It's scary.

Jara Rowe: ...that's taking my order now and not a person. Totally makes sense.

Jim Goldman: Well, it's probably a pendulum swing. Hopefully it swings back a little bit. But, as you know, we have customers now coming to us because they're putting a big push on their publicity: "Hey, we're AI-based, we're introducing all this new AI stuff." And our customers are telling us, "Our customers and potential customers are coming to us asking about AI risk management." There's a growing awareness, like somebody calling a timeout. Again, that's a good thing. That's the governance thing, because somewhere high up in that potential customer's organization, somebody said, "Don't just be adopting AI systems willy-nilly. We need to know: is this system that you're going to start basing our business decisions on reliable, unbiased, et cetera?"

Jara Rowe: Let's go ahead and dive into implementation a little bit. What are some common challenges in implementing AI governance?

Jim Goldman: It's almost where we started the conversation. I think the biggest challenge that isn't talked about enough is the lack of focus on the importance of data management and data quality. The chief data officer is a fairly new role, and in my experience the first question is, who does the chief data officer report to? Is it a business function, a security function, a compliance function? If you're a multi-billion dollar enterprise and every little department and every acquisition has owned their own data for the past 15 to 20 years, how do you corral all that and tell people, "We've got to get rid of all this redundant data and we have to have a single data catalog"? It's a big deal and it costs a lot of money. People say, "Well, what's the benefit? There's nothing wrong with our data, everything seems to be working fine. If it ain't broke, don't fix it." Well, it actually is very broken, especially when you try to sit an AI system on top of a data layer that has never been cataloged or managed, where data redundancy and data discrepancies haven't been taken care of. There's a saying in the Bible about a house built on sand versus a house built on rock. If you don't take care of the data, you're building a house on sand.
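The cataloging problem Jim describes can be pictured as a single inventory keyed on a stable identifier, with conflicting copies surfaced across departmental silos. A tiny hypothetical sketch; the silos and records are invented, and a real data catalog also tracks schemas, lineage, and ownership:

```python
from collections import defaultdict

# Customer records from two departmental silos; customer_id is the stable key.
silo_sales = [{"customer_id": "C100", "email": "ana@example.com"}]
silo_support = [{"customer_id": "C100", "email": "ana@example.org"},  # conflicting copy
                {"customer_id": "C200", "email": "ben@example.com"}]

catalog: dict[str, list[dict]] = defaultdict(list)
for source, records in [("sales", silo_sales), ("support", silo_support)]:
    for record in records:
        catalog[record["customer_id"]].append({**record, "source": source})

for customer_id, copies in catalog.items():
    emails = {copy["email"] for copy in copies}
    if len(emails) > 1:
        # Exactly the kind of discrepancy that poisons a model built on top.
        sources = [copy["source"] for copy in copies]
        print(f"{customer_id}: {sources} disagree on email -> {sorted(emails)}")
```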

Jara Rowe: Yeah. Don't want that.

Jim Goldman: Especially with the rain we've had in Indiana lately.

Jara Rowe: For sure. Oh my goodness. Again, I know with AI, there are some people that love it, and there are some people that are side-eyeing it a little bit. How can AI governance help address societal concerns about AI?

Jim Goldman: Yeah, it's very interesting. Earlier this morning, I found a survey from the Pew Research Center. They're pretty reliable; they've been measuring public sentiment on a variety of topics for a long time. I found this pretty interesting. It surveyed US adults in general, and the question was, "Will AI have a positive effect on the United States over the next 20 years, or not?" Among the general population of US adults, 35% said negative and 17% said positive. Among AI experts, and it has a definition of who those are, only 15% said negative, and 56% said positive. It's almost like you have two views of the world that don't seem to align. Where does the truth lie?

Jara Rowe: Where does the truth lie?

Jim Goldman: Yeah, I don't know. That's the question. Where does the truth lie? If you think about it, with any new innovation or issue, how do people get comfortable? I think it starts with transparency. Interestingly enough, transparency and explainability, which we just talked about, is one of those key traits, key desirable outcomes, or objectives as they call them, that both ISO 42001 and the NIST AI Risk Management Framework put out there as a priority outcome of any AI system. I think if companies producing AI systems want to lower the anxiety, then they need to go full gas, sometimes we call it an over-correction, and over-compensate on transparency and explainability.

Jara Rowe: For sure. Yeah, it totally makes sense. Yeah, I'm skeptical of it sometimes as well, but there are also some tools that help me get through the day or make tasks a little easier for me.

Jim Goldman: Yeah. I usually use it as a starting point, Jara.

Jara Rowe: For sure.

Jim Goldman: I'll ask a question, I'll get an answer. I'll say, " Okay." But then I'll dig after that. Because the transparency and the explainability isn't in that answer, I'll go look for that transparency and explainability myself afterwards.

Jara Rowe: Yeah, for sure. All right, Jim, we covered a lot of information here. But is there anything that we missed or do you have any other tips for our listeners before we wrap the episode?

Jim Goldman: Well, not to end on a negative note, but like any new technology that gets introduced, the criminal element of our society, and by society I mean global society, not just US society, is always going to do its best to take advantage of that new technology. To take advantage of the fact that it's new and people don't understand it, and to use it for their own nefarious purposes. It's just human nature. We are not going to be able to stop that. But that's where the challenge is, in my opinion, having been in the FBI and so forth: for law enforcement to quickly get up to speed on, in this case, AI. Twenty years from now, it'll be something different; twenty years ago, it was something different. Law enforcement, and I think you could say corporate leadership too, has to get up to speed on both the positive and potential negative impacts of this newest technology, which in this case happens to be AI. We cannot be naïve about that fact. At the same time, we don't want to slam the door on it. For example, think about the amount of data that is available in the healthcare field: medical records, diagnoses, et cetera. Think about how there's a shortage of doctors in many locales, not just in the United States, but around the world. Then think about the impact a well-designed, well-built, fair, unbiased, ethical AI system could have if it was able to look at all the medical information that's available across every healthcare provider in the entire world, and provide that to somebody in a remote part of the United States or the world. It could be great. What I'm trying to say is we shouldn't clamp down on this and try to stop it. There's no stopping it, but you know what I mean. We shouldn't stifle the creativity and the ingenuity just because there are criminals out there that might use it for criminal purposes. It could be a fantastic thing that makes a huge positive difference in society.

Jara Rowe: Absolutely. I don't think that we're ending it on a negative note, I just think we're ending it on food for thought.

Jim Goldman: Yeah. Yeah, exactly. Well, we're all about trying to raise awareness and trying to raise consciousness, that's why we do these podcasts. What we're trying to say is don't freak out, but ask good questions.

Jara Rowe: Exactly. All right, Jim, I appreciate your time and expertise. Thanks for joining me on another episode of The Tea on Cybersecurity.

Jim Goldman: As always, Jara, it was my pleasure. Thank you very much.

Jara Rowe: Now that we've spilled the tea on AI governance, it's time to go over the receipts. Jim covered a lot of information here, but there are a handful of things that really stuck out to me. The first is what AI governance actually is. AI governance, like security governance, is a shared responsibility across all layers of an organization's leadership. I feel like my next receipt does a pretty good job of explaining that too, which is the difference between AI compliance and AI governance. I really think Jim did a great job of putting this in simple terms for me to understand: AI governance is like passing a law, and AI compliance is the law enforcement, the people in the day-to-day activities of it, similar to a police officer. The next receipt I have is why AI governance is necessary. As we all know, AI provides many benefits, but with those benefits come a lot of risks. Those risks are there for the people that create AI systems, but also for the people that use them. Some of those risks include poor data quality, biased information, incomplete data, and other things that could potentially make an AI system untrustworthy. The final receipt I have for this episode can probably sum up a lot of the episodes lately. Like cybersecurity, compliance, and everything else, it all starts top-down. Leadership really needs to be the ones driving this effort. However, it is a team effort to make sure our policies and things like that are in place. We all need to do our share in making sure not only our personal data, but our employer's data, is safe as well. Thanks for tuning in to another episode of The Tea on Cybersecurity. If you liked what you listened to, please leave a review. If you need anything else from me, head on over to Trava Security. Follow wherever you get your podcasts.

DESCRIPTION

The rapid adoption of AI brings opportunities, but also new risks. Strong governance enables organizations to stay innovative while maintaining trust and protecting data.


In this episode, host Jara Rowe welcomes Jim Goldman, Co-Founder of Trava Security, to discuss how clear oversight, board engagement, and high-quality data enable the creation of ethical AI that aligns with business goals.

They outline practical steps, common blind spots, and proven frameworks for trustworthy automation.


Key takeaways:

  • AI governance versus compliance in plain language
  • Why data quality shapes reliable machine output
  • How leaders and teams share accountability from policy to practice


Episode highlights:

(00:00) Today’s topic: AI Governance

(02:46) Governance reaches the boardroom

(04:09) Big shifts in NIST CSF 2.0

(06:01) Governance versus compliance explained

(07:45) Data quality risks in AI

(10:55) Core parts of a governance framework

(13:32) Roles and ownership across teams

(14:55) Designing ethical, transparent AI

(19:06) Proving accountability in decisions

(23:37) Easing public worries with openness

(26:41) Criminal abuse and law response


Connect with the host:

Jara Rowe’s LinkedIn - @jararowe


Connect with the guest:

Jim Goldman’s LinkedIn - @jigoldman


Connect with Trava:

Website - www.travasecurity.com

Blog - www.travasecurity.com/learn-with-trava/blog

LinkedIn - @travasecurity

YouTube - @travasecurity