Don’t Overtrust the Robots: The Real Tea on AI Compliance
Marwan Omar: Do not over-trust AI models. Think about all the different aspects: the transparency, the bias, the fairness, the security, the data privacy regulations. Because if business organizations fail to think about and address all of these aspects before plugging these AI models into their production environments, they could be getting themselves into a lot of legal and ethical issues. So, it's important to be aware of these things. And the good news is that help is available.
Jara Rowe: Gather around as we spill the tea on cybersecurity. We're talking about the topic in a way that everyone can understand. I'm your host, Jara Rowe, giving you just what you need. This is the Tea on Cybersecurity, a podcast from Trava. Hey everyone, and welcome back to the Tea on Cybersecurity. I'm sure AI comes up in all of our conversations nowadays, and that's exactly what we'll be talking about on this episode, specifically AI compliance. We'll be diving into what it is, why it matters, and how businesses can stay ahead. AI is truly changing everything, and with great power comes great responsibility, so we need to make sure we all stay on the right side of the rules. Let's go ahead and dive into it with our guest, Marwan. Hi, Marwan.
Marwan Omar: Hello.
Jara Rowe: So, can you go ahead and introduce yourself to our listeners?
Marwan Omar: Absolutely. Thank you, Jara, for the opportunity. I'm Marwan Omar, chief AI officer for Insight Assurance, where we help companies with AI strategy, governance, and risk, and we also do AI [inaudible], so anything related to the compliance and security aspects of AI, we can do. I have a PhD in AI, and I'm super excited to be here today.
Jara Rowe: All right, so let's go ahead and just dive straight into it. In simple terms, what is AI compliance?
Marwan Omar: Absolutely. In simple terms, AI compliance basically means following laws, regulations, and also ethical standards and guidelines to ensure that AI systems and models are used in a safe, fair, and transparent manner. This involves meeting legal, ethical, and security standards so that we can prevent harm, bias, or misuse of AI systems and technologies.
Jara Rowe: Yeah, we don't want any harm coming our way, right?
Marwan Omar: Definitely not. Yeah.
Jara Rowe: So, why is AI compliance important for businesses and organizations?
Marwan Omar: Absolutely. AI compliance is very critical for business organizations across all kinds of industries. First of all, they need to avoid legal risk: companies could be fined, or they could face lawsuits and penalties for non-compliance with AI regulations and rules, as you mentioned. We also have to ensure fairness and trust so that we can reduce AI bias and discrimination. Imagine an AI system that is in charge of approving mortgage applications. If this AI system is not fair, trustworthy, and reliable, and it makes the wrong decisions on behalf of mortgage officers, then companies could face legal issues. We also have to protect consumer privacy when we handle data. As you know, we interact with these AI systems on a daily basis, and they could be collecting and handling our personal private data, what's called PII, personally identifiable information, so this is also important. And finally, there's the business reputation, because AI decisions have to be ethical and transparent, simply because these AI systems, in many cases, could be making decisions on behalf of people, whether it's in the finance industry, the mortgage industry, or even the healthcare industry. For all of these reasons, it's important for companies and business organizations to stay in compliance with AI.
Jara Rowe: For sure. Being a novice when it comes to AI, before I knew much about it, I didn't realize that anytime I threw in information, it was really collecting it. That's one thing I want other people to be aware of as well: the more information you give it, the more it collects, learns, and grows. So don't throw out your PII, like you mentioned, and things like that.
Marwan Omar: Right. And to your point, Jara, whenever we're using an AI system, whether it's ChatGPT or any other AI system or chatbot, especially if you're signing up with them, a piece of advice: read their policies. I know the policies are usually in fine print and nobody wants to read all of those pages, but the policy will tell you how they store your data, how they handle your data, and whether or not they will use your data for training their AI system. So always be mindful of that, and do not over-trust these AI systems. Just because it's an AI system, we should not over-trust it and just give it our PII.
Jara Rowe: Absolutely. So, you were just mentioning things about different industries and laws, so what are the key laws and regulations currently governing AI use?
Marwan Omar: There are some major AI regulations and guidelines, even though this field is evolving and you might see more laws and acts coming up, especially from the US, since we are here in the US. One notable one, which interestingly comes from outside the US, is the EU AI Act, the European Union's law on AI. This is considered the very first comprehensive AI law in the world, believe it or not, and it classifies AI risks and sets strict compliance rules. As many people know, Europe is way stricter when it comes to privacy regulations and customer data, so it's no surprise that Europe was the first to come up with an AI act. There's also another regulation that has been around for a while but is relevant to this conversation: the GDPR, which stands for General Data Protection Regulation. It also came out of Europe, and it regulates AI data processing. We just talked about data processing with these AI systems like ChatGPT: how do they handle your data? Do they use it for retraining their models? Do they encrypt your data when they transmit it? There's also the US AI Executive Order, and NIST, the National Institute of Standards and Technology, has its AI Risk Management Framework, which is specifically for managing AI risks. And we also have China's AI regulations, which are less relevant here because that's a different part of the world.
Jara Rowe: Fantastic. You mentioned some that I actually was not even aware of, so definitely learning a lot so far. So, how does AI compliance differ from traditional technology compliance? Or is there even a difference?
Marwan Omar: Great question. Yes. Traditional IT compliance focuses on security, how to make our systems more secure; data protection, how to protect customer data, business data, and business plans; and software regulations, like the GDPR you mentioned. We also have HIPAA, which is mainly for the healthcare industry. It's a law from the US Congress that requires healthcare organizations to protect patient data. If you're a patient and you go to your healthcare provider, they have to abide by HIPAA; HIPAA is a big deal here in the US. So that's traditional IT compliance and auditing. AI compliance, on the other hand, goes a step further, because it has more things to address. We talked about bias: if an AI system makes a decision and denies my mortgage application, the AI system could be biased. Maybe it's not fair; maybe it was trained on a limited amount of data. In this case, AI bias is a big aspect of AI compliance, and it differs completely from traditional IT compliance. Then there's fairness: the AI system, or the model, has to be fair. Imagine an applicant tracking system. If I apply for a job, most companies these days use AI to pick the best candidate. But how do we know that this AI system is fair? Maybe it just picks men rather than women, because it was trained on more male applicants than female ones, or on people with certain skill sets. This actually happened with Amazon a few years ago. Amazon designed an AI model for applicant tracking, to recruit applicants, and it turned out to be very unfair, because it was trained on more male candidates than female ones, so it was actually picking more men for interviews than women, and they figured this out. So this is a huge aspect of AI compliance. Bias and fairness are very, very important aspects when it comes to AI compliance.
And of course, there's ethical risk. Automated decision-making is a big deal these days, as you mentioned. I could go to a doctor, and the doctor might just use an AI system to tell me whether or not I have pneumonia. So relying on an automated decision-making process is a serious matter, and it's very important to understand the ethical, fairness, and bias aspects of AI models, and to make sure that we are in compliance with them.
Jara Rowe: Yeah, definitely want everything to be fair, right? That totally makes sense. All right. So, what are the basic steps to start implementing AI compliance in an organization?
Marwan Omar: The first step is understanding the laws and regulations that are relevant and current to our industry. We already mentioned the EU AI Act, the GDPR, the US Executive Order, and NIST, so these are some of the regulations and laws to be aware of. Number two, I would say conduct an AI risk assessment: understand what risks an AI system introduces. Because, as I mentioned, many companies just over-trust these AI systems. Just because an AI model is out there for free on Hugging Face or some other resource doesn't mean we should trust it. We should do our own due diligence and stress test these AI systems to make sure they are transparent, robust, fair, and not biased. Remember the Amazon applicant tracking system? It turned out that it was not fair; it was biased toward men. You would think a company like Amazon would do its due diligence, but unfortunately, sometimes companies just go with the market pressure and don't take enough time to stress test their systems. Number three, create a framework for AI compliance. And then, maybe equally important, train employees on AI. We just learned a few things in this conversation; many employees may not be aware of fairness issues, bias, transparency, the ethical aspects of AI, or automated decision-making, so it's important to train our employees on AI ethics and the requirements for AI compliance.
Jara Rowe: So, one thing I've learned through hosting this podcast is that cybersecurity is like a team effort, but when it comes to AI compliance specifically, what roles or departments should be involved in these AI compliance efforts?
Marwan Omar: AI compliance is a cross-functional effort, which means more than one department needs to be involved. We just mentioned the AI regulations, so the legal and compliance team needs to be involved to make sure we're adhering to the laws, because if we break laws, we could go out of business, and that would hurt our reputation. We need data scientists and AI engineers, because without these people, we won't be able to develop an AI model that is responsible, ethical, unbiased, fair, and transparent. And you just mentioned cybersecurity; these are people we definitely need, and I'm one of them, to ensure data protection and to ensure that our models are robust against attacks. You also need HR and the ethics officers; those people, along with the legal department, need to monitor the model to make sure it's not biased and that the decisions it's making are fair, transparent, and explainable. And of course, the executive leadership team. Upper management definitely needs to be involved in all of this, because without support and buy-in from upper management, none of this will be possible.
Jara Rowe: We've talked a couple times on the podcast as well, it's like leadership has to buy in first before anyone else under them. So that also makes sense. So, how can companies ensure their AI systems are transparent and explainable?
Marwan Omar: Excellent question. The good news is that there are ways for us to ensure the transparency, the ethical aspects, and the responsible aspects of AI systems. There's a field called explainable AI, XAI for short, and it involves a lot of techniques we can use. One of them is called SHAP, SHapley Additive exPlanations. I know this because I'm in academia; I use it myself, and I teach it to my students as well. It's a framework we can use to understand the explainability of AI systems. If an AI system tells a patient that they have pneumonia, how did the model make that decision? Based on what? Is there a justification for it, or is the model maybe just hallucinating, telling us this person has a disease, or this person should be denied a loan application, or the stock market is going to go up tomorrow? All of these decisions should have explainability, so there's an entire field in AI called explainable AI, or transparent AI, and that's one aspect. We can also look at audit logs: if an AI system denied a loan application for a person, we need to go back to the logs and see how the model made that decision. Maybe it was the wrong decision, so we need to go back and look at the log files. We also have to have user-friendly explanations for end users, so they can understand why these decisions are being made. Sometimes you also need to conduct third-party assessments, independent assessments or audits. This is where a company like Insight Assurance can come in, and their AI experts can independently audit the AI systems to make sure they meet all of these standards.
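[Editor's note: to make the explainability idea concrete, here is a minimal sketch of feature attribution for a hypothetical loan-approval model. SHAP itself requires the `shap` library; this sketch instead uses scikit-learn's permutation importance, a related model-agnostic attribution technique, and the dataset and feature names are invented for illustration.]

```python
# Minimal sketch: attributing a classifier's decisions to its input features.
# Marwan mentions SHAP; here we use scikit-learn's permutation importance,
# a related model-agnostic technique, to keep dependencies minimal.
# The data and feature names below are synthetic, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "debt_ratio", "credit_history", "loan_amount", "age"]

# Synthetic stand-in for real loan-application data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(feature_names, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

If a sensitive feature such as `age` were to dominate the attribution, that is exactly the kind of signal a compliance review or third-party audit would flag for potential bias.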
Jara Rowe: Fantastic. So, I do know implementing things like that typically have challenges, so what are common AI compliance challenges that organizations face?
Marwan Omar: First of all, the regulations. Remember we talked about regulations and laws, the EU AI Act and the US AI Executive Order? Unfortunately, some companies may not be aware of these laws and regulations, and this is where problems might arise. If a company doesn't have a clear understanding of the regulatory and legal landscape, doesn't know about the bias, fairness, and explainability issues of AI that we talked about, and doesn't understand data privacy risks, where's the data coming from? If I have an AI model like ChatGPT, for example, I need to train it on datasets with millions of data points. Where do I get that data? Do I get it from open sources out on the internet? If I do, how do I ensure that data is actually secure? Just because it's on Hugging Face or some other platform doesn't mean it's safe and secure, and this is where these issues come from. So companies need to be aware of the legal landscape, the bias in AI models, and the data privacy issues.
Jara Rowe: Great. So, you're just mentioning data privacy, so how does data privacy factor into AI compliance?
Marwan Omar: Right. AI must comply with the rules, regulations, and laws related to data privacy. We just mentioned the GDPR, the General Data Protection Regulation from Europe. Then there's a US version of that from California called the CCPA, the California Consumer Privacy Act, and of course we have HIPAA. AI systems must comply with these regulations to ensure that data is handled in a secure and responsible manner. Remember, we mentioned a company like OpenAI, the creator of ChatGPT: they need to have policies, and these policies need to be communicated to users and consumers, telling them how their data is handled. Again, if I type my personal data, or PII, into ChatGPT, maybe looking something up or processing something, do they encrypt my data? Do they send it to a third party? Do they send it to the cloud, or does it stay locally on my system? This is why it's important for AI to comply with those regulations, the GDPR, the CCPA, and HIPAA. And all AI models should be secure against data leaks and also adversarial attacks, where somebody can come in, do prompt engineering, and see what the model will reveal. AI systems should be secure against those attacks.
Jara Rowe: Definitely. So, how can organizations stay updated on these evolving AI regulations?
Marwan Omar: This is a very dynamic field, and it's evolving rapidly. Organizations can follow government agencies; we just mentioned the US Executive Order and the NIST framework. NIST is part of the US government, so companies can follow what NIST publishes as a standards body. We also have the EU Commission, another body in the European Union. Companies can also engage with AI ethics organizations. For example, there's the Partnership on AI, and there are also the IEEE AI standards; IEEE is an engineering organization that has its own AI standards. They can participate in AI compliance conferences and workshops, they can go to training, and they can also subscribe to legal or tech policy newsletters for AI regulation updates.
Jara Rowe: Fantastic, those are great resources for us all to dive into a little more. So, Marwan, we definitely covered a lot of information about AI compliance, but before we wrap the episode, do you have anything else you would like to mention or drive home to our listeners?
Marwan Omar: Thank you, Jara, I really enjoyed this podcast conversation. One thing I would say again: do not over-trust AI models. If you're going to use an AI model, either as a company or as an individual, think about all the different aspects we mentioned: the transparency, the bias, the fairness, the security, the data privacy regulations. All of these are important, especially for business organizations, because if they fail to think about and address all of these aspects before plugging these AI models into their production environments, they could be getting themselves into a lot of legal and ethical issues, not to mention reputational damage. So it's important to be aware of these things, and the good news is that help is available, and Insight Assurance is one company that can help with AI security and implementation. And thank you, Jara.
Jara Rowe: Yeah, thank you. I appreciate your time and expertise.
Marwan Omar: All right. It was great having this conversation with you. Have a wonderful day.
Jara Rowe: Now that we've spilled the tea on AI compliance, it's time to go over the receipts. Obviously the Tea on Cybersecurity is full of experts, but Marwan was so incredibly knowledgeable on AI; it was a great conversation for me. So let's dive into receipt number one. Marwan mentioned several times during the episode that we should not over-trust any AI models. It's important that we know and understand that AI should be transparent, but there are times when it may exhibit bias, so we should not over-trust the information it gives us. Receipt number two: I asked about implementing AI compliance, and when it comes to implementation, it's important to first understand your industry's regulations, and second, to perform an AI risk assessment. As part of that, Marwan stressed how important it is to train our employees on using or even creating AI models to ensure they're transparent and unbiased. And the final receipt I have for this episode is about who should be involved in AI compliance. Marwan emphasized that it's cross-functional, which we've talked about several times: cybersecurity in general is a team effort. So when it comes to implementing AI compliance, it's important to have the legal and compliance team involved, leadership, of course, the data scientists and AI engineers, as well as the cybersecurity team. Again, Marwan covered so much information during this episode, so you might want to go back and listen to it again. But that wraps another episode of the Tea on Cybersecurity. And that's the tea on cybersecurity. If you like what you listened to, please leave a review. If you need anything else from me, head on over to Trava Security, and follow wherever you get your podcasts.
DESCRIPTION
Businesses rely on AI for everything from streamlining communication to managing hiring and forecasting trends. It’s fast, efficient, and deeply embedded in daily operations. But as AI becomes more common, one critical piece is often overlooked: compliance.
In this episode, Jara Rowe sits down with Dr. Marwan Omar, Chief AI Officer at Insight Assurance, to talk about the growing need for AI compliance. They explore what it really means, why it’s not just a concern for tech giants, and how overlooking it could expose your business to legal, ethical, and reputational risks.
Key takeaways:
- What makes AI compliance different from traditional IT compliance
- Where to start with AI risk assessments
- How real companies have gotten AI compliance wrong
Episode highlights:
(00:00) Today’s topic: AI compliance and why it matters
(05:23) Key laws shaping AI compliance today
(07:25) The nuances of AI compliance
(10:14) First steps to build AI compliance internally
(13:26) How explainability strengthens trust in AI models
(15:32) Challenges with regulations and data privacy
(18:24) Staying informed as AI laws evolve
Connect with the host:
Jara Rowe’s LinkedIn - @jararowe
Connect with the guest:
Marwan Omar’s LinkedIn - @dr-marwan-omar
Connect with Trava:
Website - www.travasecurity.com
Blog - www.travasecurity.com/learn-with-trava/blog
LinkedIn - @travasecurity
YouTube - @travasecurity
Today's Host

Jara Rowe
Today's Guests

Marwan Omar