Ethics in the Age of AI: Generating the Future Responsibly

This is a podcast episode titled, Ethics in the Age of AI: Generating the Future Responsibly. The summary for this episode is:
Meet Ethical Compass Advisors
02:05 MIN
What is generative AI?
01:08 MIN
The biggest hurdle: humans struggle to understand that AI is not actually a person
02:15 MIN
The New York Times & ChatGPT-3
01:03 MIN
AI for ecommerce
07:13 MIN
How might case law develop around holding companies responsible for the actions of AI?
01:38 MIN
AI for content publishing and social media
05:20 MIN
AI for B2B SaaS
05:28 MIN
AI for connected devices
02:35 MIN
Could AI eventually exercise judgment beyond algorithms?
01:56 MIN
How do you balance IP with generative AI?
02:34 MIN
What about regulating AI?
04:21 MIN
How should you handle risk mitigation and prevention?
06:42 MIN
Ethical Compass Advisors upcoming book
01:41 MIN

Matt Blumberg: Okay, I think we should get going. And let me just welcome everyone. I'm Matt Blumberg, I'm one of the co-founders and CEO of Bolster. Very happy to have all of you here today, and before I introduce our guests and we get going with our topic, I just want to invite everyone: as we go, feel free to post any questions in the chat. The three of us will try to pick them up and address them in the flow of conversation, assuming that works. If it doesn't, we'll try to leave some time at the end to get through all of those. But let me just start by introducing our two guests today. Noah Feldman and Seth Berman are the co-founders of a firm called Ethical Compass Advisors, which they'll tell you more about in a minute. Noah, in addition to that, is a professor of constitutional law at Harvard Law School and has done a variety of very interesting things over the years, including being a Supreme Court clerk for Justice Souter, writing about 10 books, being a columnist for Bloomberg, and working with the Iraqi Provisional Government on writing a constitution in 2004. And Seth, before starting Ethical Compass Advisors, spent much of his career as a federal and state prosecutor specializing in cybercrime, and was a business leader at a digital forensics firm focused on data and data privacy in the legal arena. The two of them together have worked with a lot of companies, big tech and small, including conceiving and architecting the Facebook oversight board, which I assume everyone knows about, but also working with lots of other companies of different sizes around ethical governance issues related to technology in general, not just AI. Seth and Noah, welcome, really happy to have you both here. And let me just start by asking you to give everyone a super quick description of Ethical Compass and what you all do, and then we'll get into the topic for today. Seth, you want to start with that?

Seth Berman: Sure, I'll start with that. Thank you so much, Matt. Thanks for having us, and it's nice to meet everyone, in chat at least and a little bit on your screens here. Ethical Compass, we started a couple years ago at this point. Noah and I, in different ways, have spent our careers struggling with how to help companies deal with complex governance issues and complex ethics issues. We came together to do this because one thing we have seen is that the business environment around ethics, and what sorts of ethical questions businesses are being asked, has changed quite a bit over the course of the past decade or so. In our view, gone are the days when it was enough merely to not break the law, when ethics consisted of making sure you didn't break the law and your employees didn't break the law. A lot of business leaders are being asked to answer questions like, "Is there stuff we shouldn't do even though it's legal? If so, how do we decide what those things are? Are there things we need to take a stand on, because either the leader thinks so, or my employees think so, or my customers think so?" And if so, how do we come up with ways of thinking through those decisions and making them that aren't just "it's my gut feeling"? Because it turns out that when you go with your gut feeling all the time, sometimes that doesn't work out so well. That was our concept. We have been working with a number of different kinds of companies on creating governance structures, either to solve these sorts of issues, the who-do-you-work-with and who-don't-you-work-with issues, or sometimes to create governance structures from scratch to address an inherent risk of the business and make sure the company does not become evil, I guess, in the eyes of its founders and others. Noah, anything you want to add?

Noah Feldman: No, that's great. I'd just add that we also have close relationships with lots of businesses who essentially use us as a go-to group of people to talk through what strike them as the most challenging ethical issues arising at a given moment.

Matt Blumberg: All right, so let's dive in on our topic for today, which is AI. And I guess the first question, just to make sure everyone's on the same page, when we're talking about AI, what are we talking about? I think it's one of those terms that now sort of means lots of different things to lots of different people. Noah, how do you want to define the boundaries that we're talking through today?

Noah Feldman: One thing that we've learned, especially in working directly with some of the companies that are creating the present AI, is that even they don't settle on a common definition. It's a mistake to try to say, "We know what the definition is." Instead, I think we should just say what we're going to talk about. What we're going to talk about today is basically large language models, artificial intelligence of a kind that has been heavily in the news for the last few months, since OpenAI put ChatGPT out there for people to play with. These are human-simulating, as it were, in their interactions and in their conversations, and they're cousins of the same technology that can do the same with images. We're going to focus on that. We will talk a little bit, perhaps, about some of the emerging AI tools that help people with marketing that are not based on a large language model and are relevant to our conversation as well, but that's what we're going to focus on today.

Matt Blumberg: Great. Then within that, is there a big-picture framing or insight that you would say drives this conversation about, "Okay, we have this big, brand-new, strange thing that has been unleashed on the world"? If you had a single headline around it for companies to think about, what would that be? Then we can peel it back from there.

Noah Feldman: Yeah, thanks for asking that, Matt. The core structuring insight for our conversation is that we as humans do not yet have the software in our own brains to interact with a lifelike correspondent, like a large language model, and realize that it's not actually a person. We ascribe to it the intentions of a person, the morality of a person, the feelings of a person. And that's going to last, probably not forever, but probably for our generation; it may be our whole lives that we continue to have that associative set of reactions. And just to give you a concrete example of what I mean: as Matt mentioned, I teach constitutional law, so I gave my final exam from the fall semester to ChatGPT when it came out, and it got some parts right, but mostly it did pretty badly. What was interesting was that when it "didn't know" the answer, it lied to me about what the answers were. Now, why do I say lied to me? That's a strange thing to say, because student exams are not all correct. They involve mistakes, and I don't think my students are lying to me when they make a mistake. The reason I don't think they're lying is that you can sense, even in a written record, their insecurity about their answers when they're circling around something that's not quite right. The machine doesn't do that. It presents everything it says with comparable indicia of confidence. Reading the answer, I felt that I was being lied to when, for example, it invented cases that didn't exist, made them up, and said what they meant when they had never existed in the first place. The key point here is not that it did that, although that's important for our conversation, but that I ascribed to it, even though I knew it was a machine, the intentionality of wanting to lie to me, and I was offended.

Matt Blumberg: It's sort of the "often wrong but never in doubt" [inaudible].

Noah Feldman: Exactly.

Matt Blumberg: I don't know how many people read the New York Times article last month about the use of Bing and the chatbot there, but it may be worth recounting that for people too.

Noah Feldman: Seth, do you want to tell the story of Kevin Roose's conversation with Bing's chatbot?

Seth Berman: Sure. In the very short version, he played with Bing's chatbot and at first was completely enamored of its ability to do anything, and wrote an article to that effect. Then he kept playing with it and wrote another article essentially saying, "This is the scariest thing I've ever done, and I've completely changed my mind," because by the end of it, the chatbot was trying to talk him into leaving his wife and convincing him that he was really in love with Sydney, as the chatbot called itself. That he was in love with Sydney, and he should therefore leave his wife and upend his life, and that he didn't really understand his marriage was miserable, and how could he not understand that? This seemed pretty far afield from what these things are supposed to do.

Matt Blumberg: What I'd love to do is move into some practical questions. And I see, John, that's a really interesting question; we are going to come back to regulation toward the end of the session, so hold that thought. If we think about the different sectors likely represented by the Bolster clients on here, sort of tech-adjacent, tech-enabled, it'd be really interesting to walk through a few of them and ask the two of you for some thoughts about use cases, maybe one use case for each of a few different sectors, and then give the audience some thoughts about how AI could have adverse downstream consequences. And I would say both if it works the way it's supposed to work, and also if it goes off the rails and starts telling people to leave their wives. Why don't we start with e-commerce? So e-commerce, or any kind of DTC business where there are use cases of talking directly to a consumer purchaser: what is something that you've seen or heard clients talking about as an AI use case, and then what are some of the issues that can come from that?

Seth Berman: I think the use case is really to help someone find a product that they're trying to buy, and find the right product. This is the eternal problem of e-commerce for any of us: there are so many options. It can sometimes be hard to figure out what kind of water shoe you want to buy when there are 700 different kinds of water shoes, or whatever it is. I think that's a place where it could be extremely helpful in many ways, probably more helpful than a person could be, because it's going to have a far more encyclopedic knowledge of what's being sold. If it works, and it works very well, a problem it might nevertheless have is that it might work too well, which is to say it could get extremely good at convincing people to buy things, so good that if it were a person doing it, we might call it fraud. It will essentially learn to get people to make decisions toward the goal that's been set, which is to get people to buy. And you don't know how it's going to do that. We as people, whether we think of it or not, are limited by our own internal sense of morality. You may not be thinking that you're making moral decisions, but at some point there's stuff you're not willing to do, not just because you'd go to jail, presumably, but also just in general. The chatbots are not going to have that at all. They're going to do whatever they're programmed for, and if something is not in their limiting behavior, they're just going to do it, and the world is going to blame you. If you have an AI that's convincing people to do something that, for a person, would feel like fraud, your customers are going to feel like you committed fraud on them, and probably regulators will too.

Matt Blumberg: That's probably a good point about any vertical we talk about: the use of AI is going to be pinned on you even if it goes off the rails, right? It's probably the reason that, after that Times article was written, Bing started limiting the number of interactions you could have with its chatbot, so you couldn't talk it into talking crazy talk. Let's dig a little more into e-commerce and DTC, because I think that point about salesmanship and morality is an interesting one. But what are some of the other things that could pop up in the e-commerce realm where AI is just making kind of weird decisions?

Noah Feldman: Well, one context is pricing. And here we have an example even predating large language models, which I encounter in my job: sometimes I look for strange and obscure books on Amazon, and often they'll be priced at a thousand dollars for a paperback book, a rare paperback book. That's not because any actual person wants to pay a thousand dollars for it. It's that there's an automatic bidding mechanism that's been set up, which is effectively an artificial intelligence mechanism, between two different vendors, each of whom has maybe one or two copies of a book, and they begin to bargain each other up, and there's no human being involved. In this interaction between two different AI models, and we're going to see this recurring in a range of different areas, you get bizarre outcomes, and there's actually no way for the customer to get around that. There's no way for the customer at present to say, "Well, wait a minute, I would pay you X amount of dollars for this book," because the price has now been set by this repeated interaction. I think what that stands for is a circumstance where too much automation on both sides of an interaction blows things up. And that's both a practical problem and potentially an ethical problem: if you're trying to represent a seller, the seller doesn't actually want the price set at that point. If you're trying to get a product to someone who needs it, you've made it look unaffordable to them when in fact it might not be.
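The runaway-pricing loop Noah describes can be illustrated with a toy simulation. The repricing rules and numbers below are hypothetical, loosely inspired by the well-known 2011 incident in which two Amazon repricing bots drove a biology textbook to over $23 million; they are not anyone's actual algorithm:

```python
def reprice_a(price_b: float) -> float:
    """Bot A slightly undercuts bot B (the 0.998 factor is hypothetical)."""
    return round(price_b * 0.998, 2)

def reprice_b(price_a: float) -> float:
    """Bot B prices above bot A, betting on a better seller reputation
    (the 1.27 markup is hypothetical)."""
    return round(price_a * 1.27, 2)

def simulate(start: float = 20.00, rounds: int = 10):
    """Run the two bots against each other and record prices each round."""
    a = b = start
    history = []
    for _ in range(rounds):
        a = reprice_a(b)  # A reacts to B's last price
        b = reprice_b(a)  # B reacts to A's new price
        history.append((a, b))
    return history

hist = simulate()
# Because 0.998 * 1.27 > 1, both prices climb every round,
# even though neither rule looks unreasonable on its own.
```

The point of the sketch is Noah's observation: each vendor's rule is locally sensible, but with no human in the loop, the composed feedback loop diverges, and the customer has no way to break the cycle.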

Matt Blumberg: We had one question pop in on the chat that's relevant to this category, from Yvonne, which is: how do you think about e-commerce and recommendation engines potentially getting into issues around product liability? "Your product made the recommendation, and..."

Seth Berman: Yeah, let me take that one. Look, there's an analogy here to cybercrime law. I started doing cybercrime law in the early-to-mid '90s, which was before most of the major cybercrime laws were actually created. What happened is there wasn't an obvious way to charge someone with hacking at that time. What you would do instead is take some law that already existed on the books, like trespass, and try to see if you could squeeze hacking into it, because there wasn't exactly a law on point. The same thing is going to happen here. Can I tell you that product liability as it exists now covers a product recommending itself? I don't think there's ever been a case like that. However, a smart litigant will try very hard to make it so, if that's what happens. What exactly courts will do, I'm not sure. Although if I were betting on it, I would strongly suspect that courts are going to say, "If your AI did it, the fact that it was your AI is not a defense; the company put this out there, and the fact that you had no idea what was happening does not make it better." That's my suspicion. I would go with yes, I think that's a potential problem.

Noah Feldman: And I would just emphasize the deep point that Seth is making, namely that companies will be held responsible for what the AIs they are deploying do. The government will treat that AI as you, in the same way that a customer or a third-party interlocutor will treat it as you. It's the same phenomenon we were talking about: our systems still think there's a person behind there, and if there's no actual person, they'll find a person, and that person may be you. And I think that's going to happen over time with these large language models. There's also a question of how advertising is going to work in this space. If advertising becomes seamlessly integrated into what the model proposes, then the product liability issues probably rise, because there's false advertising, and false advertising can give rise to product liability claims. If you are paying the model to promote your product, and the model is promoting it in the way that it would naturally do, that starts to look like familiar advertising. If it's false, you may be held liable for it.

Matt Blumberg: That's probably a good segue to a second vertical, although let's hit Scott Petrie's question first. His question is whether that comment you just made, Noah, that companies will be held responsible, is based on any precedent, and how you would see case law developing there.

Noah Feldman: Yeah, it's based in general on the precedent of what happens when new technologies come into existence and courts have to figure out how to allocate responsibility. The really classic, famous examples of that all have to do with the railroads in the late 19th century, which transformed the country, made industrialization possible, and led to all kinds of accidents and problems that didn't occur previously. What happened was that the railroad lines started to be held liable for a whole range of things that were happening around them, much more so than would have been true, say, for someone who controlled a toll road. The same tendency was there already 150 years ago, and I think we're likely to see it in this space, but as Seth was pointing out, we still don't have cases out there yet on recommendation engines. There is a case in front of the Supreme Court now, in fact, about whether Google can be held liable for recommendations of videos. In particular, it's about YouTube videos and whether you can be held liable for recommending radicalizing videos that people then watch, but from oral argument, it sounded like the justices were going to duck the issue. In theory, they should be about to decide a case about that, but they may not actually.

Matt Blumberg: Yeah, I was going to say you probably have some sense of where they're going with it.

Noah Feldman: It sounded in the oral argument like they were going to duck it because, and this is actually interesting and relevant to us, they realize they don't know enough to know, and they realize this is an area with major consequences.

Matt Blumberg: Okay. Let's move on from e-commerce to content publishing and social media. That's an area where I know you both have done a tremendous amount of work. How do you think about AI there? And again, same questions: what are the downstream consequences of what you're seeing companies use or start to play around with?

Noah Feldman: Yeah, as Matt mentioned, we advise a bunch of social media companies, mostly of the big type, but I think their worries are fairly consistent across the industry. A first worry is that all of the content moderation problems that have plagued the social media industry for the last decade will recur and repeat themselves in the context of large language models. If you're worried about what gets said, in what terms it's being said, and whether it's having bad real-world influence, whether it's misinformation or radicalization or bullying, you have to worry about all of those things in the context of large language models. History here shows very specifically that the public, and eventually the legislature, starts to hold the hosting entity responsible for what's said on the site, even if it's generated by users rather than by the company itself. That's, I would say, the leading consideration, and it's a really important one. A second one has to do with how the market for user-generated content will operate. Picture now: you or your kids go on TikTok and you want to watch a video about something. You watch the video, and then it recommends another similar video. Well, what if, using the equivalent of a large language model, the server could actually produce a video for you in real time that matches what you want, and it gets the capacity to tell you about that? Now the user-generated content model has a further component, or even a competing model, which is spontaneous generation of new content. Now imagine a third stage. In this third stage, the individual users who are generating content are themselves generating it with AI help. You could very easily get into a situation where, a bit like the Amazon pricing example I mentioned, AIs are talking to each other. One AI is producing content to post on TikTok; another AI on TikTok is responding to it. Yet a third AI, belonging to another person, is generating content in response. There's the possibility of these regularized feedback loops in which humans are not the primary participants. And this creates tremendous uncertainty about how the algorithms will operate and how people will interact with the service. I think it's, in a sense, the biggest worry that they have, because what we understand in social media is that there are people behind each post. Of course, bots have been a problem because there aren't always people behind posts, but this is bots at a much higher and more effective level. And I think there's a real concern about the effect this is going to have on social media. It's obviously a major concern not only to a company that is a platform, but also to anybody who's generating content on social media, because of who's going to be reading your content and how they're going to be interacting with it.

Matt Blumberg: How does data privacy factor into this, in social media in particular? Although, quite frankly, that's probably an issue for any of these verticals, with the model chatting with someone.

Noah Feldman: We're familiar with the data privacy problem in terms of the data that you generate in the course of interacting with a platform, but mostly that's your search requests, what you look at, and how long you look at it. If you happened to notice Congress beating up on the CEO of TikTok over the last couple of days, one of the lines there about data privacy is about the kind of data that's generated just by your ordinary interaction with TikTok and perhaps by following you across apps. Now, imagine that your interaction is a lengthy and complex conversation with a chatbot. The amount of information about yourself that you convey in the course of that conversation is vastly greater than what you convey by choosing which videos to look at for what period of time and occasionally searching for videos according to a theme. Properly analyzed, that data can tell you about your writing style, your interests, your intellectual capacities, your mode of engagement and response to things. It's an almost infinitely richer dataset. The data privacy problems that we're already familiar with are going to be, I would say, multiplied many times over in this context. My guess is it's actually going to lead to a whole rethinking of what data privacy means, because the nature of the data generated is so much richer.

Seth Berman: One thing that might help, though, just to throw in a plug for perhaps a good use of AI: I'm sure in very short order, someone is going to come up with essentially an AI you can turn on to pretend to be you, to go around asking a whole bunch of questions and doing things on all these platforms to completely mess up their image of who you are. It would essentially be a chaos generator, so that it becomes impossible to figure out what's really you and what's the AI. People have tried doing this with social media companies anyway, but I don't think it would be particularly hard to set up.

Noah Feldman: Well, but that's also bad if you're an advertiser. If you're an advertiser, a marketer and you want to gain that information in order to give people perfectly legitimate ads, that's going to make it harder to perform your job.

Matt Blumberg: Like you think blocking cookies is a problem?

Noah Feldman: Yeah.

Matt Blumberg: Yeah. Okay. Let's move on to a third vertical now, which I know is interesting to a lot of our clients, and several that I see on here, which is B2B SaaS. The world of B2B SaaS could include very broad-based applications like Salesforce or HubSpot, where everyone in the enterprise has a seat and they're using it for different things. It could be narrow, more specialized applications like Expensify, where everyone in the company is just emailing their expenses into it and it's hooked into the payroll system to generate a reimbursement. It could be an email marketing platform or a programmatic media platform doing any kind of targeted campaigns on behalf of clients. B2B SaaS is a really broad category, but maybe talk us through a couple of uses that you see in that world, and again, what could go right and what could go wrong?

Noah Feldman: Seth, do you want to jump in first or do you want me to jump in?

Seth Berman: Go ahead, Noah.

Noah Feldman: One of the striking things about the large language models is that they do great in regular conversation, but nothing compared to how well they can generate basic software. Indeed, some of the early commentators have been pointing out that fast as the transformation may be in a range of different businesses, the one area where there might be an immediate and vast transformation in the nature of the business is the writing of basic software. This is going to have major consequences throughout the entire range of applications that Matt was just talking about. I mean, if your basic business model is that you have an excellent software product, you help your clients use it, and when they need it modified, they need you, you can easily imagine clients saying, "Well, I actually want to build my own software loops and tweaks on top of your platform, and I can now do so, even though I don't have the programmers to do it, using a chatbot that produces credible code." At first, presumably, every company would say, "No, you can't do that." Over time, it may become increasingly difficult in practice to stop that as companies interact. I would say that's a deep question for the entire business line at the production level. Then at the concrete, everyday level, in all the applications, Matt, that you were just talking about, there are these iterative interactions between what are presently usually human beings and other human beings who are going to use the product for a range of different purposes. As those interactions start to have an AI on one side, they raise the question of whether the conversation is proceeding the way it should: are the actual aligned goals of the person deploying the AI working? As there starts to be AI on both sides, you get the cycling problems that we're talking about.
And I would say, at the most basic level, I want to come back to the theme that Seth mentioned earlier, namely that the user will think there is a person on the other end, even if it is not a person and everyone knows that it is not a person. That will mean that people will attribute intentionality to you, to your customer support, to your interaction at the end-user level, to the salesperson; they will attribute something like humanity, and they will hold you responsible if the interaction doesn't go the way that you want it to go. Quality control is going to be absolutely of the essence. The last point I would add, and again, Seth was alluding to this: our company is called Ethical Compass Advisors because we believe that people have an ethical compass, not because we think they don't have one, or that we're their ethical compass. Everyone has an ethical compass. If you're engaged in a regular interaction, whether it's with a client, or in a sales context, or in a support role, you have an inbuilt ability to make reasonable judgments about what will make that person happy, what will make them very unhappy, what they would view as unethical. At this stage, the large language models do not have that common sense. They fundamentally just don't have it. That does not mean they couldn't eventually be taught it, but they don't have it now. And that means there will be a lengthy period of time where you as a company, no matter what part of that value and supply chain you're in, are going to have no choice but to think seriously about whether the interactions taking place through your AI, or the AI you're using, match the ordinary human ethical judgments that you would prefer to engage in.

Seth Berman: I just want to add one thing to that, Matt, which may actually nicely transition to our next subject. One thing that's worth remembering, as someone who's dealt with a lot of companies after bad things have happened: if you sit down with people even after a disaster, even after a crime has occurred, and you say to them, "What were you thinking?", nine times out of 10, maybe even more, there is an answer, and the answer is logical. It may not be accurate, but it's logical, right? "This is why I thought it was okay. And yes, maybe insider trading wasn't the best idea, but I didn't think it was that bad, for whatever reason." You're not going to get anything like that out of AI, particularly in situations not as absurd as that one, where someone actually may have had a good reason at the time they did something, and it turned out badly, and in retrospect it looks awful. One thing we know about these AI models is that they have no ability to tell you why they did what they did. They're not going to be able to say, "Oh, well, I thought that under these rules, this was okay." It's just going to be, "Well, that's what we did." It won't even necessarily be repeatable; if they do it again, they might do something different. That is a totally different world, and one that is going to require some rethinking, because having humans in the loop does add something. And we believe one of the things it adds is an ethical compass.

Matt Blumberg: Right. I want to hit a fourth vertical example before we go into some more general questions, and then end with a little bit of practical advice about how people can think about protecting their companies. The last vertical I want to hit is connected devices. And I have to imagine that the world of connected devices meeting AI is a different example from the three prior ones, and in some ways even a little scarier. I don't want my Peloton suddenly controlling me, telling me what exercise to do, but I'm sure there are some more harmful examples than that. I would love to hear if you've run into that at all.

Noah Feldman: Yeah, so first, there's the two-AIs-talking-to-each-other problem, and that's very easy to imagine. For example, take a thermostat and a heater or an oven or something in your house that are interacting with each other, and they're each operating on an AI basis. It's very easy to imagine them cycling with each other and leaving you out of it. The consequences could be serious and bad quite quickly. More broadly than that, I think, Matt, you're right to think about the Peloton example. A future Peloton, and I don't mean one made by the Peloton company itself, would have a more highly personalized account of what your exercise profile should be and should look like, based not only on statistical analysis of your demographics but on what your behavior and practice have actually been, so that it could be evolving for you in real time. That will have potentially tremendous advantages of individualized training. It could be as good as or better than a personal trainer who's actually present for you, but at the same time, it also could push you in a wide range of ways that could be harmful. Imagine that you have your... We won't call it the Peloton, but your Schmeloton, and it's working at this incredibly personalized level, and then your 14-year-old starts using it, and your 14-year-old starts developing an eating disorder because the Schmeloton is encouraging a rate of exercise and giving very skillful psychological recommendations to get that kid to eat more or less than the kid ordinarily eats. You might really want that for yourself. You may not want that for your 14-year-old. Again, these are the things where a human involved would be able to make a situational judgment, and we're not yet at that stage for these models. There, you could imagine a serious set of questions for liability and regulation. And I know lots of the questions that we have now are about regulation and liability, and I'm eager to...
I'm actually working on a series for Bloomberg about that right now, which I just started writing, so I'm really eager to hear people's thoughts and to talk about it.
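The two-devices feedback problem Noah describes, a thermostat and a heater each reacting to the other's behavior, can be sketched as a toy simulation. Everything here is invented for illustration: the function names, the gain of 2.0, and the heater's overshoot factor are hypothetical numbers chosen to show how two independently tuned control loops can cycle rather than settle; this is not any real product's logic.

```python
# Toy sketch of the "two AIs talking to each other" problem: a thermostat
# and a heater, each tuned independently, react to one another in a closed
# loop and end up oscillating instead of settling at the target temperature.

def thermostat_request(room_temp, target=20.0):
    """Request heating power proportional to how far the room is below target."""
    return max(0.0, (target - room_temp) * 2.0)  # aggressive, invented gain

def heater_response(room_temp, requested_power):
    """Apply heat with overshoot (1.5x the request), minus constant ambient loss."""
    return room_temp + requested_power * 1.5 - 1.0

def simulate(steps=10, start_temp=18.0):
    """Run the two controllers against each other and record the temperature."""
    temps = [start_temp]
    for _ in range(steps):
        power = thermostat_request(temps[-1])
        temps.append(heater_response(temps[-1], power))
    return temps

temps = simulate()
# The trace overshoots past the 20-degree target, drifts back below it, and
# repeats -- a cycle neither device would produce on its own.
```

With opaque learned models in place of these two-line functions, the same dynamic is harder to spot and harder to explain after the fact, which is the sense in which the consequences can become "serious and bad quite quickly."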

Matt Blumberg: Yeah. All right. Let me ask some questions that I think cut across all four of these sector examples. The first one, just to build on what you said a minute ago: AI is currently not capable of exercising judgment, and it's not currently capable of offering an explanation for why it did what it did, other than maybe algorithmically. Is there a world where those things become possible?

Noah Feldman: Yes, and we're talking about probabilities here, but to give one example, there's one of the very prominent, highly publicized AI companies, made up of people who spun off from one of the other big AI companies, that's aiming to develop what they call constitutional AI. What they mean by that is AI that begins with general principles of not harming people, treating people with dignity, not engaging in forms of discrimination, and so on. Their claim, and again, this is still in a testing phase, is that they can actually constrain and limit AI by training it on judgment. The intuitive way to think about that is that AIs can be trained on any kind of data, and if you train the AI on the data of a lot of humans making reasoned ethical judgments, there is in principle no reason you couldn't get it to replicate that structure of judgment. As for interpretability, people have been talking about this in the chat, and there's a range of rich and, I think, all correct views here, even though there's some tension between them. There are some aspects of interpretable AI that people in the industry are working on very, very hard. They have a cost associated with them. There are other areas where, at least under the current models of technology, interpretability would be in principle so difficult relative to the scope and scale that nobody in the field thinks it's going to be practically achievable in the foreseeable future. The last thing I would say is that you can potentially train the AI to give answers that sound like human answers to why it did what it did. But that doesn't mean they will be the right answers, because again, we're talking about models that are basically grounded in prediction, and so they can learn how to talk the way we talk, but their forms of logic and reasoning are not the same as ours.

Matt Blumberg: All right. Let's move on to IP for a minute. IP, always an issue that software and tech companies are thinking about. What are some of the headline IP issues to worry about as it relates to AI? Who inherently owns generative AI output when there are extensive user inputs? How do you think about it as it relates to the software development you talked about a few minutes ago?

Noah Feldman: Seth, do you want to go?

Seth Berman: Sure. I mean, look, I think there are a few issues that people have brought up about AI and IP at the moment, some of which you touched on, but let me touch on some others. One question people are floating is: AI is sucking in all this data, lots of it copyrighted and owned by other people, and then spitting out information based in part on that data. Is that alone a copyright violation? The answer is probably not, although I'm not totally sure we've gotten to the end of that discussion, and I think it'll be interesting to see how it develops over time. A second issue is who owns the copyright of something AI produces? That is going to be an extremely difficult question to answer, I think much more difficult than... I mean, obviously, if you use a word processing program and write a book, no one thinks the word processing program owns it, right? Even though you used it. Now it's getting a little closer. But by the same token, perhaps no one else could have created the thing you created with AI. Therefore I think it's a little less clear who owns it, and I expect that's going to be the subject of some disagreement, shall we say, as time goes on. I don't have a good theory as to how courts are going to unravel that anytime soon. I definitely think that kind of thing will happen. Even now, to take this into the real world, Taylor Swift got sued for using three words from another song that were repeated, or five words, or maybe something like that, right? That's going to happen with AI for sure. It's going to come up with some phrase that someone else has used, it's going to use it, someone's going to publish it, and someone's going to claim it's plagiarism, and it's not going to be so easy to untangle, because you're going to have no way of knowing where it found those five or six words in a row. I definitely think that's a significant potential issue.
I think a whole new area of law is ultimately going to have to develop around this, and it's too early to know where it's going to go. Beyond that, our overall theme here is that you need to think a little bit before you put this into place, and we're all going to need to put this into place. We're not saying this is too dangerous, don't touch it; that's just not realistic. But definitely give it a bit of thought before you decide to just roll it out and see what happens.

Matt Blumberg: A related question, although I'm just noting for a future conversation, Seth, that you did just bring up Taylor Swift, so we'll hold that thought for later. Let's talk about regulation for a second. That's come up in the chat several times, and I'm assuming the answer has some flavor of what Noah talked about: there's precedent, it goes back to railroads, right? The laws and regulations of today aren't equipped for this. The conversations I've been in with elected officials would suggest that neither they nor their staff truly understand what AI is, how it operates, and what all the pitfalls are. What's the regulatory environment going to look like, and is it going to end up being consistent with what we saw with data privacy, where there was a patchwork of things, federal legislation and regulation tried to come in and homogenize it, and ultimately it all got trumped by Europe anyway?

Noah Feldman: Let me try to sum that up relatively quickly, but start by saying, and one of the questioners in the chat just raised this issue, and you can read it in the New York Times: there are people who are worried, including lots of people who work in AI, that AI might have extremely bad consequences for humanity as a whole. If that turns out to be the case, I think we can say this very clearly, governments will effectively nationalize AI the same way they do nuclear power and nuclear weapons. I can't just raise capital to start a startup to make nuclear weapons. The field is so intensely regulated that that's not possible, and the reason is that there's an existential risk there. Right now there are a lot of people talking about existential risk for AI; that's very different from a credible view that there are visible existential risks, other than that we've all seen Terminator and it's scary. If it does turn out that these risks at that level of weaponization are real, the story of the regulation will be short and sweet: the government will come in, it'll take over the companies, and it'll ban a lot of forms of AI. It has the legal authority to do that. It doesn't want to do that if the risk is not there, because of the tremendous improvements to efficiency at tremendous scale that would otherwise be lost, and no one wants that to happen. The next level is governmental regulation, probably not just through statutes but also through the creation of some kind of new regulatory agency. As you mentioned, Matt, that goes back to the 19th century, when, with these new complicated industries, you needed people who actually understood them in order to regulate them. The thing about that kind of regulation is that it takes a long time. You have to create a new institution, you have to recruit people, you have to issue regulations according to notice and comment.
That something like that is coming in AI seems probable, but the timeframe on it is such that it's not going to suffice in this wild west period. What we actually have in practice are two things that remain. One, from the government standpoint, is our wonderful American system of after-the-fact regulation, where we don't say we're regulating you until afterwards, and then we sue you in court for negligently causing some form of harm. As everyone on this call knows, that's the most unpleasant aspect of our legal system if you run a business, because you know you are held responsible for behaving "reasonably," and then who's going to decide what's reasonable? Some court's going to decide what's reasonable, perhaps according to a formula if you're lucky. What that means is that if you're deploying AI, you have to price in the downside risks of creating content or deploying AI in a way that a court will later hold to have been harmful or damaging, when the court says you didn't behave reasonably. That's out there whether we like it or not, and it is, in the United States, our first way of regulating something without admitting that we're regulating it. The second aspect, and I'll shut up after this, and I apologize for a longish answer, but this is the last part of it, is self-regulation by the industry. Here, industry leaders are already gently beginning to talk to each other about whether they might be able to create consistent safety regulations that they commit themselves to. And individual companies are also interested in engaging in self-regulation of various kinds. That's something that, at a practical level, is relevant to everyone who's going to use AI, even if it's an API from some third-party vendor. You have to think of yourself as your own regulator in the first instance. You're going to have to figure out what the benefits are to you, but also what the potential downsides are.
There are tricks to targeting what those points are. Then you are essentially doing your own self-regulation as the first cut.

Matt Blumberg: That's a good segue into sort of the last couple of questions, and we're sort of out of time, but we're going to keep going. I think a lot of people probably have the whole hour on hold, and if you have to leave, we will certainly post the recording for anyone who was here today. But when you work with a client, first of all, how do your clients handle risk mitigation? Someone posted this early on in the chat: what department does it come out of? And obviously I'm not talking about Meta or Google, which have armies of lawyers and risk mitigation departments and huge GC offices, but more, not even a raw startup, but sort of a mid-size company. How do they get their arms around risk mitigation? Then we can move on and talk a little bit about what that looks like.

Seth Berman: Yeah, let me take that. I think the answer, we believe, is that it really depends on who you are as a company and what exactly it is you're trying to resolve. The most important thing from our perspective is that the system needs to be regularized, by which I mean not only the structure, who in the company is responsible for this, but also how they are going to make the decision. What are the questions they're asking? What is the intellectual framework on which they make a decision, whether it's about AI or any other ethical question? That matters because ultimately the question can't just be, do we think this is ethical? That's your first question, but it's surely not your last question. You need to have some system for making that decision and then being able to explain it to other people, because ultimately, if you can't explain to someone why you did something, they're never going to believe you did it with a good intention, even if you did. That's one part of the answer. We've had clients who are really trying to hack the corporate form. They're trying to figure out, how can we use the corporate form to do this? Do we create a voter who's responsible for voting our shares, but voting them not for profit but rather for the good of humanity, or something like that? Some companies have done that. Some companies, far less radically, have set up internal committees whose job it is to evaluate these things, either to make the decision or to advise the CEO. Some companies let the GC do it, although I can say this as a lawyer: I feel like lawyers are not usually well positioned to do this, because lawyers have been trained to ask the question "Is this legal?" not "Is this right?" And I think some lawyers have trouble transitioning between those questions. Not all, but some. Noah, do you have anything you wanted to add to that?

Noah Feldman: Yeah, I would just supplement that by saying that our general approach to how a company should address a hard question like this is to do a rigorous, clear self-assessment. To be really, really clear, the analogy would be going to the doctor for a stress test when you're about to go on a trip to climb the Alps. Your doctor wants you to do a stress test every so often anyway, and maybe you do, maybe you don't, but you definitely want to do it if you're going to be climbing the Alps. And engaging with AI is sufficiently different, in terms of the muscles it's going to use and the amount of energy it's going to take, that for most companies it's worth some kind of serious self-assessment. In that self-assessment you ask: what are the touchpoints, internally and with customers, that involve us in making judgments? That means the good judgments, the things you're really good at doing, because you want to make sure you preserve those, and also the areas where you have vulnerabilities. Then, if you add to that noticing what those touchpoints are going to look like when you incorporate some AI component, that stress-test self-assessment will put you in a position to make an informed judgment about how you want to roll this out, how you want to communicate with your counterparties or your customers when you do roll it out, and what kinds of explanations you want to give before the fact, before you have a problem. It's a technique for teaching yourself to look around corners.

Matt Blumberg: You answered half of the last question I was going to ask, which was presumably you work with clients on prevention. You just talked about some things that you do before deployment. My guess is over your careers, you have also spent a lot of time working with clients on cleanup after something goes wrong. What are your sort of top three things that you're going to tell a client to do after something goes wrong? And is that different than any other something goes wrong at a company?

Noah Feldman: Yeah, I think it is different, because realistically it's not about covering yourself from a PR perspective. There are people who are expert in that. It's about how you convince people you're going to do it better the next time. The three steps we work companies through are, first, transparency about what went wrong. Companies still, to this day, remarkably, sometimes think that if they just hold it back, it'll all be okay. But if there's any degree of employee objection, employees these days, especially Gen Z employees, think nothing of going public. No company is too small to have a story go public. Transparency upfront is a tremendous advantage. Second, we think it's really important to engage in a process of explaining why you're going to do what you're going to do in the future, that is to say, giving reasons for your conduct. You explain why you did it wrong, and now you explain why you're going to do it right in the future. And last but not least, that lands you on a set of principles that you are open to being measured by. You say, "Look, this is what we believe in, and we're going to try to take action in a way that reflects that value. We're going to explain to you why we're doing it." And as a consequence, you're going to see us giving life to these principles. No one believes you at first, because we live in a cynical world. But if you keep with it over time, you can rebuild trust and actually achieve a substantial amount of legitimacy, including in an area where you might have been seen as hitting rock bottom, by virtue of the fact that you're doing this. The theory behind this is that we just don't live in a world anymore where people are naive and willing to believe assertions by corporate actors of any scale, but they will believe what they see demonstrated over time. There's no shortcut.
You have to actually demonstrate over time that you're engaged in thoughtful reasoned decision making.

Seth Berman: I would add one thing to that, which is if this does happen, if you end up bringing an AI on board your company in some way and it goes rogue and something bad happens as a result, standing up and saying it's the AI's fault is definitely not going to help. I would definitely not start from there. You may need to say, " We messed up, we employed AI when we shouldn't have, and here's what happened and that was caused by the AI." That's a way better formulation. If you just turn around and blame the AI, my guess is it's only going to... It's just going to make it look like you're denying you did anything wrong, which is going to prolong the problem quite significantly.

Matt Blumberg: Okay, let's wrap up. Let me just start by saying lots of questions we didn't get to. I think what we'll probably do here is spend a little time over the next week or so trying to answer those in a Google Doc and publish them out when we publish the recording link. Thank you all for being such an engaged audience. First, Noah and Seth, I think you are working on a book. Can you give everyone a little preview of what it's called, what it's about, when it's going to come out?

Noah Feldman: Well, we're working on a book that is basically about the relationship between power and ethics in the age of technology. If you have a snappy title you want to propose to us, put it in the chat by all means. The basic proposition of the book is that the old-fashioned approach of Machiavelli's The Prince, do whatever it takes to succeed and then apologize for it later if you have to, is not an effective way to run a business in this current era, where everything is going to come out, and it's going to come out faster than you expect, and in which companies are competing, among other things, on being trustable in the ways they engage with the world, including the way they engage with technology. We therefore think it's possible to lay out a kind of rubric for how companies can behave in this environment that involves some of the same themes I just mentioned at the individual level. It involves identifying what your core values are; it involves assessing what you do to see if you're living up to them. And it can also involve governance techniques that protect you against your own worst instincts, sometimes borrowed from the idea of separation of powers, namely the idea that if you're the best person to achieve X, but your incentives are to do it in a certain kind of way, maybe there should be someone else in the organization who's got the responsibility for taking a counteraction and who doesn't have the same incentive structure that you have. That's the big picture, and we're very eager to get people's feedback on that too.

Matt Blumberg: Super interesting topic. Thank you both for being here. I think you guys know, I just launched a podcast called The Daily Bolster, which everyone here should subscribe to. Will you join me on that and talk about this a little bit more sometime over the next month or two as the story unfolds?

Noah Feldman: You bet. Love to.

Matt Blumberg: Great. Well, thank you all for joining. As you know, Bolster is here to make startups more successful by connecting startups into leading minds and great executives like Seth and Noah. You can find Seth and Noah both on Bolster if you want to contact them or engage them. They're both Bolster members, and just want to thank the two of you for spending some time with us and our clients today.

Noah Feldman: Thank you.

Matt Blumberg: Thanks, everyone.


Our expert panelists Seth Berman and Noah Feldman, co-founders of Ethical Compass Advisors, share fascinating insights with Bolster CEO Matt Blumberg around the ways generative AI can be leveraged across key industries, what it can do well, and where the potential pitfalls might be. They also discuss the regulatory environment and what businesses can do proactively to prevent issues and mitigate risks.