Black History Month – The impact of AI on communities of color

This is a podcast episode titled "Black History Month – The impact of AI on communities of color." The summary for this episode is: Have you considered how Artificial Intelligence can be influenced by the biases and assumptions made by humans? As we celebrate Black History Month in North America and a few other geos, we invite you to embark on a compelling exploration of AI's impact on communities of color with IBMers Traci Bermiss, Dr. Stacy Hobson, and James Stewart. Join our insightful conversation as we navigate the complex AI landscape of potential opportunities and challenges. Against the backdrop of IBM's continuous efforts in implementing AI ethics, policies, and practices for its own business and clients, you will be guided through the intricate intersections of AI and community impact. This podcast empowers listeners to not only grasp the complexities, but also contribute to a future where technology serves everyone equitably.

Be Equal – Learn more about the Black Community at IBM: https://www.ibm.com/impact/be-equal/communities/black/

IBM Blog:
The importance of diversity in AI isn't opinion, it's math: https://www.ibm.com/blog/why-we-need-diverse-multidisciplinary-coes-for-model-risk/
AI skills for all: How IBM is helping to close the digital divide: https://www.ibm.com/blog/ai-skills-for-all-how-ibm-is-helping-to-close-the-digital-divide/
Intro
00:38 MIN
Welcome
01:48 MIN
Traci's introduction
00:30 MIN
Stacy's introduction
00:26 MIN
James' introduction
00:55 MIN
Black History Month at IBM
03:55 MIN
AI impact: opportunities and challenges
06:07 MIN
AI bias in healthcare
04:42 MIN
Mitigating the potentially harmful effects of AI
02:14 MIN
Augmenting human intelligence
02:31 MIN
AI and its potential for diverse communities
04:24 MIN
The importance of AI education
06:03 MIN
Closing
00:19 MIN

Jill Stewart: Hello, and welcome to our IBM Be Equal podcast. I'm Jill Stewart, the Director of Diversity & Inclusion at IBM. Thanks for joining our conversation around D&I to learn how we at IBM are continuously looking for ways to expand equality and allyship across the enterprise. We have eight D&I communities focused on making a difference for underrepresented groups, and here is an opportunity for you to hear directly from our IBMers. Every month we have a new episode, so enjoy.

Joy Dettorre: Hello, and welcome to our Be Equal podcast, where IBMers will have the opportunity to be vocal, be powerful, be proud, and share their stories and experiences about their work, the impact their work has on IBMers and the world, and how IBM focuses on being a good corporate citizen by making the world a better place, and of course, focusing on good tech. I am your host, Joy Dettorre, and I'm a global diversity, equity and inclusion leader at IBM. My pronouns are she and her, and I'm speaking to you today live from South Florida, which is also known as the ancestral lands of the Miccosukee and Seminole nations. And I'm especially excited about today's podcast because during today's conversation, we will be talking about three things. Number one, closing the technology divide. Number two, understanding more about IBM's commitment to ethical AI. And three, in general, AI's impact on communities of color. Our three esteemed panelists today are Traci Bermiss, James Stewart, and Stacy Hobson. All right, let's get started with introductions so our listeners can begin to associate your name and your experiences with your voice. But for introductions today, let's throw in a little bit of a twist. Why don't you tell us three things? Number one, of course, tell us your name. But number two, when you hear the words artificial intelligence, tell me three words that immediately come to mind. And lastly, how about telling everyone about your role at IBM? So let's go ahead and get started, Traci, with you.

Traci Bermiss: Certainly. Thanks, Joy. My name is Traci Bermiss, and the three words, of course I can't limit it to three that come to mind, are, the future, we can make that a combination word, inclusivity, and challenge. And I am the global diversity and inclusion manager for the Black community here at IBM.

Joy Dettorre: Awesome. Thank you, Traci. Great words, by the way. I like the compound, the future. Stacy, why don't we move over to you?

Stacy Hobson: Good morning, Joy. I am Stacy Hobson. I am a research director and I lead a research group focused on responsible tech research. I'm very passionate about conducting research that considers the benefits and impacts of technology on society. My three words would be automation, social impact, and technology.

Joy Dettorre: Wow, okay. James, why don't we close out the introductions with you?

James Stewart: Hi, Joy. It's great to be here. I'm James Stewart, and my three words would be ethics, potential, and optimization. Those are the three things that come to mind when I think about AI. My role in IBM is a chief technology officer for one of our global telecommunications accounts, and I also work as the AI ethics focal point for the UK.

Joy Dettorre: Wow, I love the perfect bridge there, James, about your ethics in AI, and I think Stacy, we're going to see a little bit of a linkage as well about your word, social impact, and all the good that you and your team are doing around research and AI. And Traci, we couldn't close this conversation or the introductions without your word, inclusivity, so why don't we go ahead and start with you. We'll begin the conversation with Traci because you can help us set the stage. During the introductions, you mentioned your role as a D&I leader supporting the Black community, and we happen to be recording today's podcast as part of celebrating Black History Month. As a leader for the Black community, can you tell our listeners more about Black community engagement at IBM and what makes it so special?

Traci Bermiss: I would say it's a privilege, it's an honor to lead such a community. Our community is one of the most engaged communities at IBM. It is a giving community, and giving not just in donations, but really in time and talents. I think the core of Black culture is the giveback. What are your contributions? How are you leaving a legacy? And what are you doing to empower the next generation? Especially here in the United States and in many of our other countries, our community holistically has been required just by the essence of how we've evolved to contribute, to bring others up, and to, as I've stated before, really leave the community a better place. So one of the things that we're doing is, with [inaudible] Martin Luther King Jr. Day of Service, which is what we host here at IBM, we thought a lot about civil rights, and we think about the challenges that come about when it comes to civil rights and how that has evolved over time. Right now, not even just in the future but right now, there's a significant challenge and inequity when we think about access to technology. When we think about the number of Black people employed by technology companies or in technology roles, there's a huge gap in STEM there. And so for Black History Month, you mentioned that in particular, our challenge and our theme is Level Up, Empowered by Technology. Some of the elements that we've utilized in a recent summit that we hosted from a D&I perspective: we talked about being transparent, we talked about being creative, and we talked about being empowered. And so with the network of over 25 global Black BRGs, ranging everywhere from the US all the way to Germany, South Africa, Canada, Costa Rica and Brazil, our employees have connected with their external communities. They've worked on training, they've worked on volunteer opportunities and engagements that teach what we do here at IBM, or teach career goals. 
They work on resumes, they work on AI training and kits and teaching that to middle schoolers and high schoolers and college students. So I would say it's a privilege to be a part of one of the most engaged, if not the most engaged D&I community that we have here at IBM.

Joy Dettorre: Traci, thank you. You said so many things there, not only being a privilege and honor to have the role that you have, but there were a couple of things I really wanted to anchor on. This idea of giving back, the importance of educating that next generation, making sure that we close that technology divide, leaving a legacy, making sure that individuals in the United States and around the globe, especially people of color, that they have access to technology and roles with STEM degrees. I love the idea of the theme being level up. And with that as a little bit of a backdrop when it comes to AI, we need members of the Black community and our allies to understand where and how AI can have an impact on communities of color. So Traci, thank you so much. I asked you to set the stage and you sure did. So let's help us understand especially what our allies need to know, because in order to help out an entire community, allyship is critical. So James, let's start with you because I think you want to address the importance of bias, and not only bias, but bias awareness. So, I'm going to flip that first question over to you.

James Stewart: Yeah, sure, Joy. I think it's a great question. The first thing I'd like to say is that AI has a lot of potential. It can be a really useful tool in all sorts of different aspects of lives. And of course we're all exposed to it in some way or another, whether it's at work or at home, or even just out and about doing our shopping, et cetera-

Joy Dettorre: No escaping it, right James?

James Stewart: No escape. It's pretty much everywhere, and I think it's only going to continue to expand, so we're going to have to get used to it. I think the important thing is to recognize that it can have some positive impact. There's been some work that specifically looks at using AI to help vulnerable communities, using AI to help with diversity and inclusion awareness and education, so I think those are all fantastic use cases that can really help communities of color. But we also have to recognize that there are some risks that come in with AI, and I think it's all about understanding what those risks are, how they might manifest themselves, education around that, and then being able to put in the relevant guardrails, processes and governance to address them. So usually when I approach this, and I do from time to time run workshops where we talk about some of these risks, we actually have, from an IBM perspective, something called the AI risk atlas, which has come out of lots of the research that we've been doing. We published that. People can access it and it talks about all the different types of AI risks, but importantly, how to address them. I'm not going to go through all of them with you, but there's a couple that I thought were worth calling out. We have things like drift where the intent behind that actual AI model might change in terms of the data that's coming in, the questions that are being asked, and of course the output. So, we have to deal with all these different types of drift. We also have, especially when we look at generative AI, one of the most common areas that people talk about, things like hallucination. That would be the AI model returning some results that are actually fabricated. It might even return some links that look real, but they're not actually based on real links or real information, and so that can be misleading. 
But the one that I think probably has a significant impact or potential to have an impact on communities of color is bias, and in order to really drill into that, we need to think about how the bias actually originates. It can come in from the thought process behind how the AI application is designed and built. And essentially what I'm talking about there is if we have a lack of diversity in those teams, then lots of points of view aren't being represented, things like the potential unintended consequences of that particular AI application may not be considered if we don't have that diversity of thought, so that's really important. But that theme of diversity actually then spreads across the data that we use, so making sure again that all the groups who might be subject to that particular algorithm are actually represented, or their data is represented within the dataset. And that's not just the initial dataset, but it's all of the training datasets and making sure that they're diverse and balanced as we go through various training cycles. And of course, then we have the ongoing monitoring of the AI as well, so making sure that we continue to monitor for some of the areas I spoke about, things like drift, but also importantly, bias or fairness as some people often refer to it. I would also say, Joy, that it's worth probably considering one or two examples just to bring this to life a little bit.

Joy Dettorre: Sure.

James Stewart: I ran a workshop last October with a group of mainly Black representatives from different organizations in the UK, and some of them were students as well, and we ran a bit of a survey asking which type of AI application they were most concerned about when it came to things like AI bias. And the things that came out top were recruitment and finance, so things like applying for loans, policing and education. And I think in the UK actually, a few years back we had a very prevalent example around education, which was the 2020 A-Level, which is one of the exams that you take in the UK when you're around 18, as a sort of higher education qualification before going into university. Those 2020 A-Level exams, because of course the students couldn't actually go into the schools or the colleges, couldn't actually be sat in person. And so grading was based on a number of inputs: estimates from the teachers, previous performance of the school, but also taking into account the location of the school, and there was an algorithm that was created in order to estimate those results. And what was found was that actually, the results of students who came from, let's say, lower-income schools, so not the private schools where more wealthy families might send their children but the lower-income schools, and particularly schools in places where we have higher numbers of communities of color, those results were disproportionately lowered, with around 40% of grades downgraded overall. So, it's quite a significant impact on the students who come from those backgrounds.

Joy Dettorre: Wow, that's amazing. And James, there were a couple of things you had mentioned. We know that there are challenges, but there's also this positive impact of AI, but you just hit on a couple of the risks, right? The importance of not only training, but monitoring those models. Okay, so Stacy, I'm going to come over to you. James mentioned examples of things he called algorithms, and the impact of AI in areas that he specifically mentioned, recruiting, finance and education, but the topic of healthcare and AI for communities of color are also illustrations where you've seen opportunities for improvement. Why don't you tell us more?

Stacy Hobson: Yes. Thank you so much, Joy. As you mentioned, James talked a little bit about how AI is being used in various domains and some of the concerns that people of color or communities of color have about the disparate impacts of AI in these particular areas. In terms of the healthcare industry, we're seeing a lot more usage of AI to help doctors, insurance companies, and healthcare industry representatives make faster, easier and better decisions, but concerns creep up as well. For example, there was an article a few years ago about the use of AI in prioritizing kidney transplant recipients. In this particular case, AI was used to create a scoring model to determine which patients on the kidney transplant list should be prioritized to receive transplants next. They realized that white patients were scored higher on the list, meaning that they were prioritized to get kidney transplants faster than Black patients. That is highly concerning. Can you imagine being a person on this list, or the family member of a person on this list, and you're trusting the healthcare system to identify when your loved one should get a transplant, or when is the best time, or when they should be prioritized on this list, but you're having a failure in the system because of this problem with the AI scoring model? That's really concerning. Can you imagine the people who have died because of this? And that's just one example in healthcare; there are a number of other examples as well. But then also, James mentioned financial services. Everyone's probably very much aware of one of the examples a few years ago around financial services, with a husband and wife both applying for a credit card, but the husband being given a higher credit limit, although all of the information they provided was the same. And that speaks to gender bias, but we can also imagine examples of bias by ethnicity or race as well. 
One more example I want to mention is the use of AI in reviewing the applications of people who are applying to rent a home or rent a condo or rent an apartment. This has been something that's been more prevalent in the past couple of months or years, and we're also seeing some of the same biases creep into the solutions based on AI in that proxy data is being used. So it's not actually based off of real data about the person, but you look at things like the area that they lived in previously to help determine their credit worthiness. And given the history of redlining and lots of other issues here in the US, this actually can contribute to biases and negative outcomes for the applicants if they are from the Black or Latinx communities.
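The bias checking that James and Stacy describe is often expressed as a simple fairness metric that compares a model's favorable-outcome rates across demographic groups. The sketch below is an editorial illustration, not code from the episode: the approval data is invented, and the 0.8 threshold is the "four-fifths rule" used in US employment-selection guidance as a common flag for adverse impact.

```python
# Minimal sketch of group-fairness monitoring: compare the rate of
# favorable outcomes (e.g., loan or application approvals) between groups.
# All data here is hypothetical, for illustration only.

def selection_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Values below ~0.8 (the 'four-fifths rule') are a common flag
    that outcomes may be biased against one group.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = approved, 0 = denied) for two groups
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Flag for review: outcomes differ substantially across groups")
```

In practice, checks like this are run continuously, alongside drift monitoring, as part of the ongoing model governance James mentions; a single ratio never proves or disproves bias on its own, but it signals where a human review is needed.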

Joy Dettorre: Wow. Talk about an amazing linkage and bridge, Stacy, between some of your comments and James', especially this concept around bias and the negative consequences of that. But Stacy, I cannot get my head wrapped around the life and death consequences of trusting a system that maybe has a bias. And when you personalize it, oh, it's not just some nameless person that maybe falls somewhere on the list. As soon as you personalize that and said, can you imagine if that happened to be your family member, based on the system, based on the trust, based on the prioritization, go back to James in this concept of algorithms falling lower on that prioritization list. And I just think about the consequences of that and then them potentially being fatal. So again, James and Stacy, thank you for keeping this very real. And Stacy, let's keep this conversation with you for another moment. You lead a team that is conducting research on responsible tech. Tell us about your research and other efforts going on within IBM that are aimed at mitigating the potentially harmful effects of AI.

Stacy Hobson: I have a team focused on responsible tech, and this is really about anticipating and mitigating potential negative outcomes of artificial intelligence. In particular, we have been researching and developing practical methods and tools for our colleagues within IBM, for other technologists and researchers externally who do similar work, and also for the broader public, to understand more about technology. We talked a little bit at the beginning of the session about the importance of awareness. If people are not aware of both the benefits and potential harms of technology, then we may miss some of these outcomes that could happen later on. And if we develop and deploy technologies that we think will have a universal benefit but don't anticipate some of these harms, we may also miss some of the early opportunities to mitigate them. So very specifically, we have released tools via open source, and we have made some of our methodologies and frameworks available publicly as well, because we really want to help people understand the broader impacts of technologies and think to themselves: okay, what do I do? What can I practically do to help ensure that the technologies I'm developing will not lead to some of the harmful outcomes we've seen in the past? So it's really about this combination of awareness, anticipation, or being proactive, and very, very importantly, mitigation. What are the practical actions that we can take to mitigate these harms?

Joy Dettorre: Stacy, thank you so much for that. It was amazing. Okay, James, I'm coming back to you for the next question. Stacy mentioned, I got my notes, practical tools and methods that they have developed in IBM research as well as creating greater awareness for the broader society, especially she mentioned these non-technologists, increasing their awareness and understanding about and interacting with AI. Now, what are your thoughts on what else can be done to address some of the challenges and concerns related to AI?

James Stewart: I have to say I'm really enjoying this conversation so far.

Joy Dettorre: Me too.

James Stewart: I touched on diversity, and maybe just to continue that thread. As I mentioned earlier, the diversity of the teams, the diversity of thought, I think is probably the strongest single tool that we have. However, if we take it one step further, it's also about some of the processes that we put in place and also the principles. Now, IBM has some principles when it comes to AI. The first one would be that the AI is there to augment human intelligence. I think that's really important. We have this concept that's often referred to as human in the loop. And this is something that I advocate for whenever we embark upon an AI solution journey with one of my clients, it's all about using the AI to provide insights and information, but also being transparent in terms of where that information comes from and providing the human who's performing a certain role or using the application to make a decision based on that. And it's all about how can we help speed up that decision-making, but ultimately, for many AI applications, having the AI make decisions on our behalf isn't always the right answer.

Joy Dettorre: And James, there were a couple of things. First of all, the diversity of thought. We don't always think of that. When we think of diversity, a lot of times people think race, gender, ethnicity, and all of that is right. But talking about research, there's actually research out there that says that more than 70% of our diverse characteristics, they cannot be seen and they cannot be measured, and I don't think they can be underestimated. So you mentioned this whole concept of diversity of thought, not something we tangibly see, but diversity of thought that impacts who we are, how we show up at work, and the decisions that we make every day. So I just wanted to anchor on that diversity of thought. Stacy, I'm going to come back to you for this next question. We have spent a considerable amount of time on today's podcast addressing some of the concerns around AI and actually provided examples of negative impacts around AI for minority communities. And yet, as James mentioned at the very beginning of the podcast, we can't escape its path, nor would we want to. All of us realize that AI is transforming the way we work, the way we shop, the way we live our lives, and so much more. But let's pivot to the possibilities. What opportunities does AI offer diverse communities?

Stacy Hobson: Joy, you mentioned a couple examples from transformation through how we shop, how we just live our daily lives. It's actually funny, I was shopping for a birthday present for my younger daughter over the weekend, and I started to think about getting her a kids-based personal assistant, like one of the little voice recognition systems where you can listen to books on tape and so on. And I just thought about how convenient that would be, because she loves listening to read-aloud stories and so on, and listening to music, and it'll just make things a lot easier to have her have that ability to listen to music or other items at her convenience. Those are examples that we think about, ways that we interact with technology, and sometimes we don't even realize that these systems are often AI-based, especially the systems that are giving us recommendations when we stream TV or we stream music and so on. A lot of these systems are actually fueled by AI technologies or AI solutions. So I think of convenience, I think of efficiency. And James mentioned the use of AI to augment the tasks that humans perform in our day jobs. When we think about being technologists at IBM, there's so many ways that AI can help us do our jobs. There are these benefits, and there's another aspect as well. Another area of research that my team has been looking into is, how do we use AI and other emerging technologies to bring value to community-based organizations or nonprofit organizations that are trying to drive programs and services for underrepresented minority communities? A lot of these organizations don't have strong AI expertise or skills, so there are opportunities within IBM or through universities or through volunteer work to leverage our expertise as technologists to build systems that can support the nonprofits or these community-based organizations in doing the work that they're trying to do. 
Again, driving more efficiency, making it easier for them to serve a larger population of people and so on. So there's so many opportunities when we think about AI helping our communities, AI helping us as individuals, and AI helping us in our day jobs. But I want to bring back this other point. As we use AI and we think about all of the benefits, we have to think about the other side of the impacts as well, so this need for balance, this need for understanding, yes, I can build a system or a technology that benefits in these ways, but let me also think about, are there unintended consequences? Are there ways that this technology may bring harms that I had not considered? And if that's possible, what do I need to do differently or what do we need to do differently in how we envision or develop these systems to maximize the potential benefits for the broadest set of people possible?

Joy Dettorre: There are so many dots to connect in there, Stacy. But then the last thing that I wanted to talk about was this concept of the leveling up that Traci mentioned, how important it is that we help people understand not only the positive effects but the potential harmful effects, education, being critical. I think this is where I'm going to go ahead and anchor our last question, and I'm going to direct the same question to all three of you. How important is education when it comes to AI?

Traci Bermiss: There's a phrase that many may be familiar with, and for those who are not, hopefully you too will recognize the value in it: "You don't have to get ready if you stay ready." I think it leans in on the anticipation factor that Stacy spoke about. The readiness of our community in general isn't only about bringing up the next generation or preparing the next generation. When we look at IBM SkillsBuild, our goal and our objective is to educate over 30 million people by 2030. That is a huge objective. It's one that we internally have readied ourselves to meet. And it's not just thinking about who's going to enter technology next when we look at high schoolers or collegiate students, but it's also adult learners. When we think of the workplace, the culture, and the ways in which people work now, remaining in the same position, or at the same company, for ten years or decades, that's changed. The way people work, the way that people show up, and what they value in organizations causes them to shift and change their careers on a more frequent basis than past generations. So even those adult learners who are looking to learn new skills, that's what we provide. We provide curriculum that allows someone to get badged or credentialed, or get certification in different technology principles such as AI, such as quantum, such as hybrid cloud. And so again, it's really important, and I impress upon the listeners and community members just to remember, you don't have to get ready if you stay ready. So when that next opportunity presents itself, you're already ahead of the curve and you are aware and understand how our technologies impact our world.

Joy Dettorre: Wonderful, Traci. James, over to you.

James Stewart: Fantastic answer from Traci. And I think it's important to recognize that the importance of this education spans all age ranges, from the very youngest to more mature people who were probably coming towards the end of their career or may not have had access to technology when they were growing up. So I think recognizing terms like the digitally left behind, are we advancing too fast and leaving some people behind? Do some people or communities not have access to technology or platforms upon which to learn about the technology? So that could be farmers out in really remote areas who could probably benefit from the technology, but don't actually have access or don't have the connectivity. How can we help them? How can we educate them? So maybe if I just end with a quick example, one of the things that I'm working on at the moment is I'm teaming up with a charity here in the UK who look at education options for children from low-income families, or maybe they're in social care. One of the initiatives that they have is to help the children learn how to grow their own food in a community allotment. And what we've been able to do is start to look at how we then bring very accessible, very low-cost technologies into those allotments and actually get the children to start building their own dashboards so they can monitor water levels and think about how and when they water the plants, bringing weather data in, that sort of thing. So for me, it's really enjoyable, but it's also really important that those who already have those technical skills are able to share them with others, and hopefully they'll find their way into a technical career as well at some point.

Joy Dettorre: Wow. The two things you both said and you hinted at it, it was the importance of being a good corporate citizen. Stacy, I'm going to let you end today's podcast.

Stacy Hobson: Great. Thanks, Joy. And I want to leave us with this quote by Nelson Mandela, "Education is the most powerful weapon which you can use to change the world," and I want everyone to think about it. And Traci and James both highlighted two aspects of this need for education. One is education for non-technologists or general members of society so that they know how AI impacts them, so that they know how they interact with AI, and also to build the skills and expertise to use AI to help themselves, their communities and others well. The second aspect Traci talked about was the importance of education for our technology community, and in particular, the members of the Black community within IBM. And we know that technology is one pathway for economic mobility for many people and our community members in particular, but it's also this pathway to really help us contribute our skills and expertise to make the world better. If we think about IBM's mission to be the catalyst that makes the world better, that ties back everything with our opportunities as members of the Black community to use our knowledge, our education, our expertise to make the world better. Lean into the power of knowledge and expertise.

Joy Dettorre: Wow. Because the most powerful weapon you can use to change the world is education, I love that. Thank you for the quote from Nelson Mandela. And I wanted to end our Be Equal podcast today by thanking Traci, James and Stacy. Thank you for sharing your insight, your perspective, and the actions that you have taken to make IBM an inclusive place for others to thrive and grow.


Today's Host

Jill Stewart
IBM Diversity & Inclusion, Director

Joy Dettorre
IBM Diversity & Inclusion, Global Leader

Luiz Lopes
IBM Diversity & Inclusion, Engagement Leader (Be Equal Podcast Producer)

Today's Guests

Dr. Stacy Hobson
IBM Director, Responsible and Inclusive Technologies Research

Traci Bermiss
IBM Diversity & Inclusion Leader - Black Community

James Stewart
IBM Principal Account Technical Leader & AI Ethics UK