Building Responsible AI at Scale
Simone Consigliere: Hello everybody and welcome to the Making AI Real podcast series, and today we'll be focusing on ethics and governance. My name is Simone and I'm the Communications Leader for IBM Consulting Asia Pacific. We're extremely privileged to have Heather Gentile, who is the Director of watsonx.governance to talk with us today. Welcome, Heather.
Heather Gentile: Thank you, Simone. It's so nice to be here.
Simone Consigliere: Pleasure having you. Let's go through; I think I have about five questions for you. So let's start with the first, one of the most commonly asked questions: how can businesses leverage AI in the most ethical and responsible manner, and what are some of the checks and balances they need to implement to ensure transparency in AI?
Heather Gentile: I'm glad you asked, because globally this has been an area of a lot of activity over the past year at least, as AI governance has become more strategic to the organization. One of the driving reasons is the innovation opportunity, but also the risks and costs associated with generative AI as a newer technology, since many organizations are aligning their business strategies with AI in order to adopt and execute it. With that, you really need an enterprise-level AI governance program that considers key factors: risk management for the organization; compliance with new and emerging regulations, industry standards, and your own internal policies and procedures; and ethical alignment with your organization's values, meaning what you want to use AI for and what, for your organization, is too risky or doesn't align with your culture and shouldn't be pursued.
Simone Consigliere: Right. Thanks. And so can you tell us a little bit more about watsonx.governance then and how it helps IBM clients to streamline their governance frameworks?
Heather Gentile: Yeah, so watsonx.governance is an open toolkit as part of our watsonx platform. watsonx is an integrated platform with several components: watsonx.ai, a workbench that facilitates building, testing, training, tuning, and deploying AI models; watsonx.data, IBM's integrated data lakehouse that provides secure access to data and scales across hybrid cloud; and watsonx.governance, the piece that wraps around all of that. As I mentioned, we took a very open approach, because many of the clients we work with have been successfully adopting predictive ML models for years. They're already invested, perhaps with some of our partners like AWS, Microsoft, or Google, and it wouldn't make sense for them to rip and replace technology that's working. So in addition to governing across the watsonx platform, we can leverage APIs and plug into any third-party application, and we can govern third-party models and open source models in support of our clients' existing frameworks.

I think the approach we've taken with watsonx.governance is rather unique because we focus on three pillars. The first is lifecycle governance, which starts when the business requests a use case and allows us to begin building an audit trail should that use case be approved. If it is, we conduct a risk assessment to determine whether it's of high, medium, or low risk to the business, and based on that we make recommendations for performance monitoring. The workflow carries through to engineering, where we have integrated governance behind the scenes in support of the work developers do. At no point in the governance process does a user have to stop and think, "Now I need to think about governance." We capture that automatically, which helps reduce risk, ensures compliance throughout the model adoption process, and gives our clients confidence in their ability to adopt AI responsibly. Then, as we get through model selection and the model goes to production, we have the pieces in place to do proactive monitoring and send real-time alerts for things like bias, drift, and performance changes, as well as some of the newer risks with generative AI models, such as PII exposure or hallucinations.

The second pillar is risk management, and that's really the monitoring component. Whether it's a predictive ML model or generative AI, watsonx.governance can monitor and alert on any changes, because we capture the performance metrics in fact sheets when the model is selected and promoted.

The third pillar, which is increasingly important, is compliance: compliance with the organization's internal policies and procedures, alignment with industry standards like the NIST AI Risk Management Framework, and then, globally, the many regulations being proposed and going into effect, along with guidance from regulators, which really does need to be treated as a regulatory requirement. You need a breakdown of those requirements and the organization's associated controls in order to have confidence that AI adoption is proceeding according to your policies and procedures and in alignment with what's expected of the organization from a compliance perspective.
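Heather's first pillar ends with proactive monitoring that compares live performance against the metrics captured in a model's fact sheet and raises real-time alerts. As a rough illustration of that pattern only, here is a minimal Python sketch; the fact-sheet structure, function names, and threshold are hypothetical assumptions, not the watsonx.governance API.

```python
# Minimal sketch: alert when a deployed model's live metric drifts from
# the baseline recorded at promotion time. All names and values here are
# illustrative assumptions, not the watsonx.governance API.

BASELINE_FACT_SHEET = {"model_id": "churn-predictor-v2", "accuracy": 0.91}
DRIFT_THRESHOLD = 0.05  # alert if accuracy drops more than 5 points


def check_for_drift(live_accuracy: float, fact_sheet: dict) -> None:
    """Compare live performance against the recorded baseline."""
    drop = fact_sheet["accuracy"] - live_accuracy
    if drop > DRIFT_THRESHOLD:
        # A real system would route this alert to the responsible
        # subject matter expert rather than print it.
        print(f"ALERT: {fact_sheet['model_id']} accuracy fell "
              f"{drop:.2%} below its baseline; review required.")


check_for_drift(live_accuracy=0.83, fact_sheet=BASELINE_FACT_SHEET)
```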
Simone Consigliere: Thanks, Heather. Now, in your first pillar, you talked a little bit about bias, so this question touches on that. It's often said that AI is biased. Is there any technical evaluation or algorithmic assessment that watsonx.governance is equipped with that helps identify and address potential risks, biases, and unintended consequences of AI systems across businesses?
Heather Gentile: So this capability is critically important, because generative AI models especially are always learning and can be subject to change. To mitigate that risk, it's very important to have a proactive monitoring tool that looks for deviation between the original performance metrics and the results being returned. The sooner an anomaly can be detected and escalated to the right subject matter expert, the better, and we provide details on exactly where in the data the bias occurred. It could be in any area; it could be related to age, for example, or to gender. So it's important to have algorithms, as we do, that detect that level of detail, and then also to capture what happens next with that model, because model lifecycle management is an iterative process. The model may now need to be reviewed by engineering again; there may be tuning or additional training required to make the correction. An organization absolutely wants that as part of the audit trail and the model's history in order to maintain confidence in its output.
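To make the bias-detection idea concrete, here is a minimal sketch of one widely used fairness check, the disparate impact ratio (the "four-fifths rule"), computed over a protected attribute such as gender. The data and threshold are illustrative assumptions and do not represent watsonx.governance's internal algorithms.

```python
# Minimal sketch: disparate impact ratio between an unprivileged and a
# privileged group. Data and group labels are illustrative only.

def disparate_impact(outcomes: list[tuple[str, int]],
                     unprivileged: str, privileged: str) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    def rate(group: str) -> float:
        decisions = [y for g, y in outcomes if g == group]
        return sum(decisions) / len(decisions)
    return rate(unprivileged) / rate(privileged)


# (group, favorable_outcome) pairs, e.g. loan approvals by gender
records = [("F", 1), ("F", 0), ("F", 0), ("F", 1),
           ("M", 1), ("M", 1), ("M", 0), ("M", 1)]

ratio = disparate_impact(records, unprivileged="F", privileged="M")
if ratio < 0.8:  # the common four-fifths threshold
    print(f"Possible bias: disparate impact ratio = {ratio:.2f}")
```

In this toy data the unprivileged group receives favorable outcomes at two thirds the rate of the privileged group, below the 0.8 threshold, which is the kind of anomaly that would be escalated to a subject matter expert.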
Simone Consigliere: That's really interesting. Now I have a little bit of a cheeky question. How can businesses encourage and ensure that their employees use AI for, let's say, productivity purposes and not as a shortcut to getting work done?
Heather Gentile: Yeah, that is something we've talked about a lot in the past year. If you think about where we were a year ago, generative AI was not yet a buzzword. Then ChatGPT introduced its very easy-to-use interface, and everything changed. Organizations that were starting to experiment with machine learning, maybe implementing predictive ML models in support of certain business units, quickly found themselves having to elevate their use of AI to a more strategic level and think about employee considerations: policies and procedures around what employees should and should not be doing, and how to put effective controls in place to show that employees are really doing what you think they are. That came with a lot of training as well, creating training and awareness so your organization wouldn't be the one in the news with an unintended negative impact because an employee didn't realize the risk. So I think that training and guidance element is critically important, and it really does elevate AI governance to something like what we saw with data privacy, where it has to be part of the culture. Employees need to understand the opportunity and be positively encouraged in areas where the organization does want them to embrace AI, because productivity in day-to-day work is a huge opportunity within any organization, but you also need to balance that with awareness and risk management.
Simone Consigliere: Exactly. Thanks, Heather. And now finally, how can businesses future-proof their investments in AI?
Heather Gentile: So I think AI will continue to be an evolving area of technology and opportunity, and this is where we're seeing more stakeholders from across the organization come together to set the AI strategy, which then translates into that AI governance foundation. At the top, the CEO has ultimate organizational responsibility, but that is informed by other perspectives. Finance, for example: these AI models can be expensive to run, so making sure the organization selects the right model to support the use case, with a focus on ROI, is really important. We see marketing involved from the reputation standpoint, HR from the employee impact and productivity standpoint, and risk and compliance helping to guide risk management and prepare for regulatory engagement as regulators become more involved. And then of course there's the traditional involvement of the chief privacy officer, the chief data officer, and so on. As we look at the future, it's these leaders within the organization who are helping to set the roadmap. When we talk about AI for business, we have a lot of conversations with our clients about where they are today and where they're planning to go, because a lot of the experiments being done now are on internal use cases, which is absolutely the right approach. You have a chance to learn a lot about your data, to get more confident in the data governance informing your use of AI, and then slowly evolve to more outward, customer-facing use cases, by which time you've matured your processes and have a lot of confidence.
Simone Consigliere: So what I'm taking away from all this is that AI for business means AI for all of business, not just one specific department. It doesn't just sit with the CEO or the CIO, for example.
Heather Gentile: Yeah, it's really evolved quite a bit in just a year's time. And I think when we look back a year from now, we'll see even more changes, even more opportunities, and that's what makes it so exciting.
Simone Consigliere: Yeah. It definitely is. Thank you so much, Heather, for being with us today.
Heather Gentile: Thank you for having me.
DESCRIPTION
Making AI Real | Episode 2: Building Responsible AI at Scale
Check out our second episode from the #MakingAIReal podcast series featuring our guest speaker, Heather Gentile, Director of watsonx.governance Product Management, and host, Simone Consigliere, Communications Leader, IBM Consulting APAC. Join us for an insightful conversation on how businesses can ethically and responsibly leverage AI.
Learn more: https://www.ibm.com/products/watsonx-governance