Georgian's Parinaz Sobhani on Fairness in AI

This is a podcast episode titled, Georgian's Parinaz Sobhani on Fairness in AI.
Pari describes her role leading the applied research team at Georgian 🤖
01:41 MIN
What is the biggest challenge we face with AI? 🤔
02:23 MIN
How can AI systems be sexist or racist?
01:43 MIN
How can we build fair AI systems? How has Georgian approached this problem in the past? ⚖️
03:23 MIN
Warning: This transcript was created using AI and will contain several inaccuracies.

Hey everyone. My name is David Poole and welcome to another episode of the Georgian Growth Show. Today I'm joined by Parinaz Sobhani, who is Head of Applied Research at Georgian, and we're going to be talking about some of the downsides of using AI, specifically around fairness and bias, which is a subject at the forefront of every conversation at the moment. This is something that we at Georgian have been focused on for a while, and it's a topic that all of our listeners, as company and product leaders, want to act on right now. Pari, welcome to the show. Thanks so much for joining us. Why don't you kick things off by telling us a little bit about yourself, your background and your role at Georgian?

Thanks, David, for having me. I'm really excited to share my perspective on fairness and bias in AI. My background is in computer science and machine learning, and I'm Head of the Applied Research team at Georgian. It might be useful if I provide a little bit more context about the applied research program at Georgian. Our main vision is to select research areas, or areas of focus, that can solve critical business problems for our companies and help them create sustainable differentiation, while de-risking innovation for them.

Currently, the main areas of focus for us are transfer learning, representation learning and, last but not least, trustworthy AI. The three key pillars we have focused on so far are privacy and compliance, fairness, and explainability.

That's awesome. So, as you mentioned, trustworthy AI is one of your team's focus areas. What do you consider to be the biggest threat of AI?

Yeah, in my humble opinion, I believe currently the biggest threat of AI is bias in these systems. While these technologies are super powerful and can potentially improve many human lives, they can also reinforce existing societal biases and even unintentionally create new ones. That's my biggest concern. You have to be aware that, most likely, the data that you use as the main ingredient to train your machine learning models is biased, and as a result, the products or services that you provide for your customers will perpetuate unfairness and discrimination. What is really important to consider here is the potential impact of such discriminatory systems, because we already know human-based processes are also biased.

But imagine that you have a biased, racist or sexist loan officer in a bank. That person can potentially impact hundreds of people at most per day, but a biased AI system can impact thousands or even millions of individuals per day. So the impact is higher, and that's why we have to really pay more attention to such potential problems in these systems.

Yeah, I think you hit the nail on the head. Often when we think about the potential problems of AI, we naturally go towards computers taking over the world and these images from science fiction. But actually, the issue of bias in AI is much more insidious, because it quietly reinforces some of the underlying issues of our society today. And if we allow it to happen, don't focus on it and don't actively create systems that mitigate the impact of this bias, then nothing is going to change. In fact, everything will get worse, because there is less visibility into the decision-making of these systems. So I think that focus is absolutely right. Tell us more about how an AI system can be biased, can be racist or sexist.

Great question. AI systems can be biased for various reasons, and in recent years, like many other researchers, we have tried to figure out what the root causes of bias are. Some of the more obvious ones are related to data, mainly because we use historical human behavioral data, or we explicitly ask humans to label data for us, and then use that as our training data sets. If that data is biased, the bias is going to show up in our models as well. A less obvious one is that our training data might not be representative of the target population. Let me give you an example to make it more clear. Say you have a cancer detection model that has been trained only on data from white people, and it is also used to detect cancer for people of color. Because the patterns differ from one segment of the population to another, it is most likely going to make more errors for certain people. So that's another example. At the end of the day, what we have to remember is that the main objective is to offer similar performance, or similar utility out of these systems, for all segments of the population, no matter what their ethnicity, gender or background.
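One concrete way to act on that objective is to slice evaluation metrics by segment rather than looking only at a single aggregate number. The sketch below is a minimal, hypothetical illustration of that check; the column names and numbers are made up and are not taken from the episode:

```python
import pandas as pd
from sklearn.metrics import recall_score

# Hypothetical evaluation data: true labels, model predictions,
# and the demographic segment each example belongs to.
df = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 1, 0, 0],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Recall per segment, e.g. the share of real cancer cases the model catches.
per_group_recall = df.groupby("group")[["y_true", "y_pred"]].apply(
    lambda g: recall_score(g["y_true"], g["y_pred"])
)
print(per_group_recall)

# A large gap between segments means the system is not delivering similar
# utility for everyone, even if the overall accuracy looks fine.
print("recall gap:", per_group_recall.max() - per_group_recall.min())
```

The same slicing works for whatever metric matters for the product; the point is to compare segments explicitly rather than trusting one global score.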

Absolutely. So it all starts with the data, and having that objective in mind, designing for fairness right from the start and making sure that all segments are treated equally. So let's go a little bit deeper and think about how we can build fair AI systems, and maybe you can include an example of how specifically Georgian has approached this problem in the past.

Sure. I believe one of the very first steps is to test models properly and detect potential biases and their root causes. Previously, in our projects with our companies, we have used tools like FairTest and Google's What-If Tool, and they turned out to be super helpful for identifying unwanted associations between a model's outcome and any sensitive attributes. It's really important to think about what a sensitive attribute is in the context of your product or service. The literature may talk about ethnicity, gender, religion or socioeconomic status, but, for example, in the context of a project with one of our companies, we realized that being a non-native speaker can also be considered a sensitive attribute, and that there might be some correlation between the model's output and such an attribute.
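The episode doesn't walk through those tools in detail, but the underlying check they support can be approximated in a few lines: look for a statistical association between the model's decisions and a candidate sensitive attribute. The sketch below is a hypothetical illustration, with made-up column names (`approved`, `native_speaker`) echoing the non-native speaker example; it is not code from FairTest or the What-If Tool:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical model decisions alongside a candidate sensitive attribute.
df = pd.DataFrame({
    "approved":       [1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0],
    "native_speaker": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

# Positive-outcome rate per group: a first, very coarse signal.
print(df.groupby("native_speaker")["approved"].mean())

# Chi-squared test of independence between the outcome and the attribute.
# A small p-value flags an association that deserves a closer look.
chi2, p_value, dof, expected = chi2_contingency(
    pd.crosstab(df["approved"], df["native_speaker"])
)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```

With real data this check would use held-out predictions and every attribute considered sensitive for that product, including indirect ones like language proficiency.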

Then the next step is naturally to mitigate the bias, by having the right processes in place and transparent communication. It's a really complicated problem, so try not to oversimplify it by just removing the sensitive attributes from your training data, because there can be correlations between the remaining attributes and the sensitive ones, and we really don't want to sacrifice the model's performance by removing many attributes or features. Even the fact that we have this as an applied research area for our team demonstrates that there is not a single solution for every problem. There is a significant amount of research going on in this area, and we can potentially use technology and different optimization techniques to address some of these problems.
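The episode doesn't name a specific mitigation method, so the following is just one illustration of the "optimization techniques" idea: the open-source fairlearn library's reductions approach, which retrains an ordinary classifier under a fairness constraint instead of silently dropping the sensitive column. Everything in the sketch, including the synthetic data, the constraint and the parameters, is an assumption made for illustration rather than a description of Georgian's approach:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)

# Synthetic data: one genuinely predictive feature plus a "proxy" feature
# correlated with the sensitive attribute, which is why simply deleting the
# sensitive column would not remove the bias.
n = 500
sensitive = rng.integers(0, 2, size=n)
proxy = sensitive + rng.normal(0, 0.5, size=n)
signal = rng.normal(0, 1, size=n)
X = np.column_stack([signal, proxy])
y = (signal + 0.8 * sensitive + rng.normal(0, 0.5, size=n) > 0.5).astype(int)

# Unconstrained baseline for comparison.
baseline = LogisticRegression().fit(X, y)

# Reductions approach: wrap the same classifier in a constrained optimization
# that trades a little accuracy for a demographic-parity constraint.
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(),
    constraints=DemographicParity(),
)
mitigator.fit(X, y, sensitive_features=sensitive)

# Compare positive-prediction rates per group before and after mitigation.
for name, pred in [("baseline", baseline.predict(X)),
                   ("mitigated", mitigator.predict(X))]:
    rates = [pred[sensitive == g].mean() for g in (0, 1)]
    print(name, "selection rate by group:", [round(r, 2) for r in rates])
```

Whether demographic parity, or any other constraint, is the right objective is itself a product decision, which is why the process and culture points that follow matter as much as the technique.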

But definitely we have to move beyond technology, and also consider how we can operationalize the responsible use of these powerful technologies by changing the culture of the company as well, for example, how we can improve the diversity of our R&D teams in order to build fairer systems.

That's fantastic. Thanks, Pari. I think having a systematic approach to this issue, not relying on any single tool, but understanding that it is a broader issue that requires a holistic approach, is one of the most important things. Thank you so much for joining us on the show. I think that's a lot of food for thought for anyone who's facing these issues. If you want to discuss these issues in more detail, please reach out, and I'm sure Pari would be happy to discuss it.

DESCRIPTION

In this episode, Georgian's Head of Applied Research shares thoughts on how to address fairness and bias in AI.

She answers:

  1. What is the biggest issue facing AI today?
  2. How can AI systems be racist or sexist?
  3. How can leaders design products that are fair?