Episode 99: Should You Be Worried About Your AI Liability?


Jon Prial: Today, the law. Let's start simple. Terms and conditions are legal agreements to some degree. And although they might have a lot of words, they're no big deal. It's just two steps. Step one, click. Step two, put your head in the sand. Well then again, perhaps that's not the right strategy. There was a company in Manchester, England that offered free public Wi-Fi, and they inserted a clause in their terms and conditions, quote, "To illustrate the lack of consumer awareness of what they're signing up to," end quote. What happened? Well, with those clicks, 22,000 people signed up for 1,000 hours of community service, including cleaning toilets and fixing sewer blockages. But seriously, terms and conditions are really the tip of the legal iceberg here. There are other laws and regulations that matter. And have you thought about a company's responsibility for algorithms that are making decisions, perhaps decisions that affect people's lives? Today, we're focused on a serious topic that goes to the heart of your company, your customers and the type of relationship you choose to have with them. Today we are talking with a lawyer, so please fasten your seatbelts. I'm Jon Prial, welcome to the Georgian Impact Podcast. Today, we're very fortunate to be able to spend some time with Carole Piovesan. Carole's a partner and a co-founder of INQ Data Law. That's spelled I-N-Q. And she's also a policy advisor to many companies. At INQ, she focuses on data governance, cross-border data transfers, privacy, cyber and AI. Carole, we're glad to have you on the show. Now, you recently left McCarthy Tétrault, and while there you were the co-lead of the firm's National Cybersecurity, Privacy and Data Management Group. Tell us a bit about this and what were you looking to accomplish?

Carole Piovesan: Yeah, so data is a really complicated thing. It is complicated at law. It's complicated in policy. There are some real serious ethical implications with the use of data. And as a group, what we were trying to do is drill down into some of the issues that our clients were most worried about and really thinking about, which had to do with the use of data, how it's structured, how it's governed, where it's stored and then ultimately something that became super interesting to me, which was also how is it used? You're gathering all of this data, what are you going to do with it? And this is where I started to focus more specifically on artificial intelligence, looking at the data implications of AI and then also at sort of the liability implications associated with AI.

Jon Prial: We're going to get deep and we'll talk about AI, but I love to start simple. Why do we have terms and conditions? Do they matter at all to the consumers in any way, shape or form? Or are they just, I don't know, for some type of business accountability or their insurance policies? Why do we have these terms and conditions?

Carole Piovesan: Let's talk about it a little differently because what you were talking about touches not only on terms and conditions, but sort of on privacy policies and notices generally. And the point of these notices is to allow the consumer to understand the relationship they have with the vendor from whom they're purchasing. And that's really important because these contracts are not negotiated. These policies are not negotiated, but they're putting you on notice so that you then can decide if you want to participate. The reality, however, is that they are long, they are legalese, they're cumbersome and nobody reads them. Your point of click and put your head in the sand is exactly right. In the context of privacy, what is interesting is that as of January 1st, 2019, the Federal Privacy Commissioner here in Canada issued a set of guidelines that gives guidance to companies as to how to properly obtain meaningful consent. And what they do is they try to help companies break down the policies into bite-size pieces so that they are accessible, that people will read them, that it's just-in-time information, and so you have a greater likelihood of some kind of meaningful consent than what you have today with your 25-page policy or terms and conditions.

Jon Prial: And hopefully they'll change the system so that you could have different degrees of opting in or opting out because today, if you say no, then you just don't get to use the system. There's no granularity. It is still quite binary.

Carole Piovesan: Yeah. In some cases, what companies will say is, "You can say no, and in saying no, this means you can't access or you won't optimize these aspects of the system, but you may still have access to other aspects of the system." And so it's not necessarily an all or nothing proposition.

Jon Prial: You mentioned as part of that violating someone's privacy. I'm pretty sure it's illegal to cut a hole in the wall in your hotel room and videotape someone in the room next door. However, if your neighbor makes so much noise that they disturb you, you do have some opportunity to protect your private space from the public space. Now, this is in a non-data IT world. This is just kind of brick and mortar life. What privacy do you think we're entitled to now in this new data age? And if we are making information public, do we still have some rights to privacy here? I'll give you an example. We've seen that insurance companies are looking at the Facebook pages of people who are on disability claims and seeing evidence that those people shouldn't be making a disability claim. You could say, "Gee, as a business leader, that's a great thing." The flip side, though, is what if you're asking for an insurance policy and they see that you were hang gliding or that you're eating cake every day, maybe you're not so healthy. Everything's on a spectrum. Where do we draw this line of what rights people have and what rights businesses might have to that data?

Carole Piovesan: I would actually characterize it differently, not as a right. I would characterize it as a responsibility in terms of a digital literacy responsibility. The example you gave is very helpful because in that context, let's say I'm the one that's hang gliding and eating lots of cake, which would certainly be more on the latter, not so much on the former, but the fact that I am posting those pictures is not an invasion of my privacy, because I have done that. The fact that I have provided this information to the world and I have done so willingly and voluntarily is for me to understand how I'm contributing my data. What is important for me to know is the fact that my data could be public, meaning it could be accessed by those outside of my friend group. It's important for me to know that insurance companies could conceivably access my pictures and then do what they please with it. I can tell you in the litigation context, of course, you're going to look at Facebook to see if somebody who's claiming disability is posting pictures of running around. That's valuable information. The fact that they put it out there is no different than me getting a private investigator to go and conduct surveillance on them.

Jon Prial: And both of those are legal, you're right.

Carole Piovesan: Both of those are legal.

Jon Prial: It reminds me of this story, and I don't want to condemn an entire generation, but it seems to be a millennial story, that somebody takes a job interview and then gets on Twitter and says, "Well, that person was an idiot." Not a good idea. Not a good idea at all.

Carole Piovesan: Never a good idea. No, but your story is a very good example for digital literacy. It's very important for people to understand, when they're deciding to put their information online or they're deciding to contribute their data, the implications of that contribution so they can make informed decisions about whether they want to contribute their data to a particular site or a particular app. And fundamentally, that was one of the biggest concerns when it came to Cambridge Analytica, which was that people were contributing their data to Facebook, unaware that that data could then be used for purposes that they may not support.

Jon Prial: Right. There's another tiering of how far we go with this, and everybody should at least understand that the first tier is acceptable. And whether it's in the terms and conditions or some other communication between an individual and a company, that second tier should clearly be spelled out. That's the responsibility of a company, I would think.

Carole Piovesan: Well, that's it. And so where there has been pushback from privacy commissioners and the privacy community is that the second tier in these policies is often defined very broadly, so it would allow you to do almost anything. And what the privacy community has said is, "That is not okay. We need to have some clarity. There needs to be a legitimate purpose for this data use. We need to have clarity of the data use. You need to give people information to be able to make an informed decision about what they're contributing their data to."

Jon Prial: This ties to, I guess it was last year, when the G7 got together and published a position statement on AI. You were the legal advisor to the Canadian government on that. Where do you see all the governments going? GDPR happened in Europe, but now every North American company has to comply with it. Yet Germany, as best I can tell, is already restricting Facebook's data gathering, and the US government reported on the future of AI but said, "No regulations yet." Where do you think we're going to end up in the next, say, year or so, so our CEOs can think about what actions they might need to take?

Carole Piovesan: I'll break that down into two separate parts, because there is one part that is specific to the data, and that is related to AI in that AI consumes massive amounts of data. Companies are looking to gather as much data as possible with a view to how they may use it through predictive analytics or more advanced technologies. That's sort of bucket number one. Bucket number two has to do with broader concerns around the liability associated with AI systems, which is really what the G7 and governments are grappling with at a very high level right now. And we see this mostly, I would say, in the EU, but we see governments around the world creating AI strategies and AI action plans to help position their countries to properly invest in and ultimately adopt AI technologies. As to the first bucket, where do I see us? I agree with you. The GDPR came into effect last year. It has set a gold standard for privacy regulation around the world. There are other countries that have followed suit, Brazil being one, and Canada is not far off the standard of the GDPR, and I think what we will see is the GDPR remaining that type of standard for the foreseeable future. That it will remain a gold standard. This means that companies need to be very thoughtful about their use of data and their collection of data. Thinking forward to how you advise the CEOs, it is really critical, I'd say, to create a thoughtful strategy around what I call a data governance strategy, which focuses on the following questions: What data are we collecting? Why are we collecting it? Where are we storing it? How are we using it? And how should we use it? That is bucket number one. Bucket number two has to do with concerns around AI liability. Why do governments care about AI liability or liability associated with AI systems? Well, I'll give you a short explanation of, first of all, why this is even arising, and then I can explain why they care. First of all, the law is structured to govern human behavior. It is a system of rules that governs what you do versus what I do. And if you cause me harm, it's a system that allows some predictability as to who will pay for that harm under what circumstances. That's at a very high level. AI systems are unique in the world of technology in that the more advanced, sophisticated forms are self-training and self-executing, which means the human touch is very light at a certain point. You have the creator of the AI system, you've got training of the AI system and then you have operationalization of that system, and a truly artificially intelligent system is able to ultimately continue to train itself and execute. All right, so what that means is that there is a remoteness between the individual creator and the harm that could be caused.
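To make the five governance questions Carole lists a little more concrete, here is a minimal sketch of one way a team might capture them as a reviewable data inventory record. The field names, the example entry and the lawful_basis default are illustrative assumptions for this sketch, not a prescribed format or legal advice.

```python
# Illustrative sketch only: turning the data governance questions into a
# concrete inventory record. All field names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class DataInventoryEntry:
    what: str                      # what data are we collecting?
    why: str                       # why are we collecting it?
    where_stored: str              # where are we storing it?
    how_used: str                  # how are we using it today?
    intended_use: str              # how *should* we use it?
    lawful_basis: str = "consent"  # hypothetical default; e.g. meaningful consent

inventory = [
    DataInventoryEntry(
        what="customer email and purchase history",
        why="order fulfilment and support",
        where_stored="EU-hosted CRM",
        how_used="transactional messages",
        intended_use="opt-in product recommendations",
    ),
]

for entry in inventory:
    print(entry)
```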

Jon Prial: Right. Sort of like person A punched person B in the head, or hit them with a brick, to stay with my brick and mortar metaphor.

Carole Piovesan: That's right. The more remoteness there is from a human action, the more complicated it is for the law to decipher who is liable in a liability scheme. What is interesting, and what governments and legal scholars are starting to turn their minds to, is this: when we get to the world where we are using sophisticated AI systems, are our current laws sufficient to govern those systems?

Jon Prial: Let me give a very simple example. Today, if my credit is rejected by a human being sitting at a desk all by him or herself, or with a simple little computer system, there is an appeals process. Now we might have an AI rejecting things, and we don't really know where the decision was made. Maybe the word I'm thinking of is transparency: does this begin to mandate transparency? As the laws are thought about, do we get to that sooner rather than later?

Carole Piovesan: There's a lot of discussion around algorithmic transparency and interpretability, and the reason for that, in part, is this notion that we are entitled to an explanation. If you make a decision that has an impact, a direct impact, on my life, I am entitled to ask why, and you have to tell me why. And with deep learning systems in particular, that is really hard to do because they are black boxes. You can't decipher them. Just like I don't understand the mechanics of your brain as you start to give me an answer to my question, I just have to take what you say as having been properly processed. I don't know what went into it. I don't know how you processed that information, but you gave me an answer and I have to understand that answer. Where we are today is very much a transition period. The systems we use may automate tasks, but the thinking behind them is still very human. We look to that human decision maker to give some explanation for why a particular decision has been made. And that's important because then liability rests with the human decision maker, who uses the automated system as a tool, an expert tool in their toolbox, but not as the ultimate arbiter of a decision.
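To show the kind of explanation Carole is describing, here is a deliberately simple, fully hypothetical sketch of a linear credit-scoring "reason code". The feature names, weights and threshold are invented for illustration; the point is that a simple model can show which factors drove a decision, whereas a deep learning model offers no comparably direct breakdown.

```python
# Illustrative sketch only: a linear "reason code" explanation for a single
# credit decision. Feature names, weights and the threshold are hypothetical,
# not drawn from any real scoring system discussed in the episode.

FEATURE_WEIGHTS = {          # hypothetical model coefficients
    "payment_history": 0.45,
    "credit_utilization": -0.35,
    "income_to_debt": 0.30,
    "recent_inquiries": -0.20,
}
THRESHOLD = 0.25             # hypothetical approval cut-off

def explain_decision(applicant: dict) -> None:
    """Print each feature's contribution so the decision can be questioned."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    print(f"Decision: {decision} (score={score:.2f}, threshold={THRESHOLD})")
    # Sort by absolute impact so the biggest drivers of the decision come first.
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:>20}: {value:+.2f}")

explain_decision({
    "payment_history": 0.9,      # normalized 0-1 inputs, purely illustrative
    "credit_utilization": 0.8,
    "income_to_debt": 0.4,
    "recent_inquiries": 0.5,
})
```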

Jon Prial: Right. That one is clearer, but we are going to have cases where AIs are just going to do something. You have a self-driving car; it's going to make a decision to hit the brakes or step on the gas based on going around something or hitting something or whatever. You're quite removed from the person who might've designed that system.

Carole Piovesan: Absolutely. And in a case like that, the self-driving car is the easiest example, and I think it's one of the best examples for thinking through a use case of a truly autonomous system. Of course, that's not today, because today the expectation is eyes on the road, hands on the wheel, but fast forward to a future where that's no longer the requirement and all of a sudden you have to determine why something went wrong on the road. Here's my projection: that will be regulated. Self-driving cars will go through a number of different tests, following which, or during which, regulation will be set that helps govern the relationships between manufacturers, insurers, "drivers" in quotes, and all the other players associated with the industry.

Jon Prial: Let me take a little spin on that. I can understand where a self-driving car might go, and there are lots of different ways you might inspect that system: how many miles or kilometers did this car get driven, and where did all the data get collected? It all comes back to the data. And I want to think a little bit more about liability, so I'll use extreme numbers here to make the point. This concept has been discussed in some previous podcasts of ours around healthcare and facial recognition, but to make the point with extreme data: say I have 10 million white male records in the dataset and 500 black female records in the dataset. Who raises the red flag that the odds are the healthcare recommendations or facial recognition results from that AI system are likely to fail, just because you didn't get the data right going into the system? Might that be part of the inspection that would happen as these things begin to be regulated or come under some oversight process?

Carole Piovesan: I think that's a great example, and I do think we'll see an audit process put into place where you're using AI systems in sensitive contexts. For sure, I think we're going to see that. And I think what we'll find is that, whether it's through regulation or the courts, multiple different players have responsibility at different levels to ensure a certain standard. In the example that you've just given, the initial creator who put that dataset together will no doubt have a responsibility to be mindful of ethical and bias issues in the dataset. And then they will have a responsibility to communicate to the next level of user what the deficiencies are, so that at every stage, however the system is being used, it's being used in a thoughtful manner.
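Jon's extreme numbers lend themselves to a quick sketch of the kind of pre-training representation audit Carole anticipates. The group labels, the 1% threshold and the idea of flagging rare groups are illustrative assumptions for this sketch, not a statement of what any regulation would require.

```python
# Illustrative sketch only: a pre-training representation audit that raises
# the "red flag" Jon describes before a model is ever trained.

from collections import Counter

MIN_SHARE = 0.01  # hypothetical floor: flag any group below 1% of the data

def audit_representation(records):
    """Count demographic groups and flag those too rare to learn from reliably."""
    counts = Counter((r["sex"], r["ethnicity"]) for r in records)
    total = sum(counts.values())
    flagged = []
    for group, n in counts.items():
        share = n / total
        status = "FLAG" if share < MIN_SHARE else "ok"
        print(f"{group}: {n:,} records ({share:.3%}) {status}")
        if share < MIN_SHARE:
            flagged.append(group)
    return flagged

# Mirroring Jon's example: 10 million records for one group, 500 for another.
records = (
    [{"sex": "male", "ethnicity": "white"}] * 10_000_000
    + [{"sex": "female", "ethnicity": "black"}] * 500
)
underrepresented = audit_representation(records)
print("Underrepresented groups to document and communicate downstream:", underrepresented)
```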

Jon Prial: I think the evolution of our conversation has been great, because we actually started with an individual's use of data, sharing data, what might happen with it and where that could play out. Once this data gets munged into a giant dataset somewhere, it's fair to say it's no longer yours, and now we're really talking about the systems and tools that are built around that. So we talked about transparency, and you just set us up a little bit with the legalities around fairness and bias. Do you see transparency as one lens that governments and laws might apply to these systems, and fairness and bias as another? Or do you think they're going to come together? I guess my question is, do you think we'll end up with legislation or something that will mandate companies to somehow prove that they're on an even playing field?

Carole Piovesan: It's really hard to legislate fairness outside of principles, which we already have in our constitution. I don't think you're going to see a bill on data fairness per se. What I think you will see though, is a requirement that you have a certain degree of transparency and I can see that being made into law, but I think trying to legislate ethics is really tough.

Jon Prial: And it's funny, going back to our person A harming person B, I won't mention the brick again, but it's really entity A and entity B, and you're right, the laws are kind of the same. We often say that AI and ML teams should not be constructed of just technical people, that they really need the business leaders and the people who are most aware of who the customers are. Sociologists should be part of these teams, particularly when we talk about conversational AI and how computers interact with people. I don't know that we've recommended in any of our writings that we need to have a lawyer on that team, but it might be something that needs to be thought about as this moves forward.

Carole Piovesan: Jon, I always recommend that a lawyer is on the team and it's always baffled me that we're not invited to the table.

Jon Prial: I think the highest-level issue we've talked about a little bit is algorithmic accountability, and that goes along with liability, bias, what data is being used and knowing what's in there. You mentioned a bit about standards coming to bear. I'm kind of excited about this RAIL effort, where researchers from Google, Microsoft and IBM are creating something called Responsible AI Licenses, that's RAIL. And now there are some end-user licensing and source code agreements that developers could use that will hopefully demonstrate that they're preventing some harmful uses of the technology. This has to be something that's good news to you.

Carole Piovesan: It is good news. With any form of accountability, you have to read the actual license to see if it's enforceable or in what way it's enforceable, but any way that we can promote the responsible use of artificial intelligence and the responsible creation of AI systems is really important, for multiple reasons. First of all, it's important because it puts the responsibility on the originator to be mindful of what he or she is creating. And that is always an important thing. You can come up with the best idea in the world, but you should also have a really good use case for how your idea will be put to use commercially. And you need to think through the positive use cases and the negative use cases, because you don't want to put something out in the world that you know very well can have a very negative use case. And if you do, you want to put up all sorts of parameters around it to ensure, or to promote the fact, that it will only be used in a positive way.

Jon Prial: Guardrails come up often, and this is kind of a great way to close. For me, if there's a takeaway from the work we've been doing on trust here at Georgian and the conversation you and I just had, it's that I implore all CEOs to be very explicit, both publicly and internally within the company, about what the company's values are. Those are the guardrails, and you want to operate within them so that you will grow and you won't have any negative ramifications. What's your sense? Do you think that's a good next step for CEOs?

Carole Piovesan: I totally agree. I think CEOs need to be thinking about their company values, their company priorities, their existing data and how change will occur in their company. I have often talked about the creation of a strategy, whether you call it an AI strategy or a data strategy, but it's a process by which companies are very mindful of the multiple changes that need to occur in order to use the data you have in a meaningful and beneficial way. And a lot of that just starts with the individual employee. It's about changing corporate culture. It's about being very transparent with your employees about what you're planning to do. And then it's being really focused and targeted and strategic: what data do we have, and what do we want to do with it? I agree with you 100%. Trust is critical. It's not only critical internally, it's also critical externally as you start to roll out more advanced technologies. You want your customers to be comfortable with the process that you've put in place. And I think as part of trust, transparency is critical. Again, internally and externally, you want to be transparent.

Jon Prial: And have the lawyer sitting at the table.

Carole Piovesan: You want the lawyer at the table, Jon.

Jon Prial: Carole, what a great discussion. Thank you.

Carole Piovesan: It's been a pleasure. Thanks, Jon.

Jon Prial: That was Carole Piovesan of INQ Data Law. It's an important space. Glad I didn't say nerve-racking. Maybe I should have, but I'm sure you'll be very busy indeed. Thanks so much for being with us today. It's been a pleasure.

DESCRIPTION

The use of AI raises a lot of new questions about the use of personal data. For companies, this means being more thoughtful about how you collect, store and use data. But where do we stand when it comes to the law? In this episode of the Georgian Impact Podcast, Jon Prial welcomes Carole Piovesan, Partner and Co-founder at INQ Data Law. They discuss some of the ethical implications of gathering and using data in artificial intelligence and how to square autonomy with liability.

You’ll hear about:

  • How meaningful consent and privacy policies define our relationships online
  • Where we draw the line between the private and public domains in our digital lives
  • Whether current laws are sufficient for the evolving privacy challenges of artificial intelligence
  • Where liability lies in autonomous systems and where new regulations might emerge
  • Algorithmic accountability, fairness and trust

Who is Carole Piovesan?

Carole Piovesan is a Partner and Co-Founder of INQ Data Law, a law firm focused on data governance, privacy law, cybersecurity and AI. She has advised the Canadian government on legal and policy issues related to AI and provides advice to companies on their data practices. In August 2018, she served on behalf of the federal Minister of Innovation as one of six appointees to lead Canada’s digital and data transformation consultations (#CDNdigitalTalks).

Before launching INQ Data Law, Carole was a lawyer at McCarthy Tétrault LLP where she served as co-Lead of the National Cybersecurity, Privacy and Data Management Group.