The Future of Trusted AI with Marc Benioff and Sam Altman

This is a podcast episode titled, The Future of Trusted AI with Marc Benioff and Sam Altman. The summary for this episode is: Straight from Dreamforce, the world's largest AI & CRM conference, Clara brings us a conversation between two of the biggest names in tech: Marc Benioff and Sam Altman. AI's potential to impact humanity is limitless, and it's essential to navigate the risks with care. Hear from Marc and Sam about how the future of AI hinges on trust.

Sam Altman: So one person with a really good idea and a deep understanding of what a customer needs is going to be able to execute on that with what would have taken complex, many, many person teams before. And so, giving someone the ability to do more, I think, is going to lead to a big shift.

Clara Shih: Welcome to Ask More of AI, a podcast at the intersection of AI and business. I'm Clara Shih, CEO of AI at Salesforce. This week, we're coming to you from Dreamforce, the world's largest AI event, where we just heard an amazing conversation between Salesforce CEO and founder Marc Benioff and OpenAI CEO and cofounder Sam Altman. Sam covers everything from what he's learned and how he's been inspired by his favorite AI movie, Her, to what's been surprising to him about the development of AI. Take a listen.

Marc Benioff: Sam, welcome to Dreamforce.

Sam Altman: Thanks for having me.

Marc Benioff: Oh, it's my pleasure. And Sam, you've been traveling all over the world. I haven't really seen you since you got back. I saw you just before you left. I know how excited you were to go to all these different countries and to give them the prophecy of artificial intelligence, the good, the bad, the ugly. What was your biggest surprise making all these trips?

Sam Altman: The level of enthusiasm, hopefulness, excitement around the world, of course balanced with wanting to make sure we successfully address the potential downsides. It was really extraordinary to see. And it's a little bit different in different places, but I sort of thought, "Well, maybe this is still just mostly a tech, Silicon Valley phenomenon." And to see what people everywhere around the world were doing with the technology and how they had incorporated it into their lives and how interested and hopeful they were was very cool.

Marc Benioff: Which country really stood out to you as potentially a great leader of artificial intelligence?

Sam Altman: That was another positive surprise. The quality of work happening everywhere was really something. I think the United States will be the greatest leader in artificial intelligence. We are very blessed to have so many things in our favor, but this will be a truly global effort, and people all around the world will contribute.

Marc Benioff: Did you meet somebody who really offered you an incredible insight or inspired you or somebody that you said, "This really changes my mind on this aspect of AI"? Was there someone out there that you were impacted by?

Sam Altman: I mean, a lot of really super impressive world leaders had a lot of good advice for me and for the company, and that was helpful. But the biggest takeaway was just talking to our users around the world. And out of all of that and out of what we heard that they wanted, that they had problems with the technology, the features we were missing, that has really fed into what we'll be launching in a couple of months, and that was definitely the most meaningful feedback we got.

Marc Benioff: Indonesia gave you the golden visa. Was that inspired by your love of Eat, Pray, Love?

Sam Altman: A lot of people that I met with on the trip were like, "Oh, you need to come here more. We'll give you a golden visa." So I got this little collection in my passport now, which is awesome and unexpected.

Marc Benioff: So it's especially sparkly?

Sam Altman: I do love Eat, Pray, Love. I thought that was a great book.

Marc Benioff: You've got a sparkly visa?

Sam Altman: Yeah.

Marc Benioff: Passport is just shining bright?

Sam Altman: Passport's very beat up after the trip. It fell in the water a few times, but that's okay.

Marc Benioff: Were you swimming with your passport?

Sam Altman: Not intentionally, but it did happen once.

Marc Benioff: Let me ask you, you've thought about AI probably more than most people on earth and you've been really having the opportunity to interact with so many great AI leaders, not just all over the world, but here in the headquarters of artificial intelligence, San Francisco. What's been your biggest surprise in the last seven or eight years of OpenAI? Here we are. It's 2023, here in September, your GPT-4 is coming up on one year. What has been the big surprise over the last eight years?

Sam Altman: GPT-4 has only been out for six months, which is just a good reminder of how fast things have been happening. The biggest surprise is just that it's all working. When you start off on a scientific endeavor, you hope it'll work. You let yourself dream that it'll work. You kind of have to have the conviction and the optimism. But when we got together at the very beginning of 2016 and said, "All right. We're going to build artificial general intelligence," that's great, but then you meet cold, hard reality. And we had a lot of stuff to figure out, and figuring out any new science is always hard. And in this case, for a bunch of reasons, it was particularly hard. We had conviction in a path laid out by my cofounder and chief scientist, Ilya, but there's a difference between having that conviction and actually getting it to work. The consensus wisdom in the world, I think, was very much that this was not going to work, and it's just through the effort of a lot of enormously talented people that it did. But that's probably the biggest surprise.

Marc Benioff: When did you know that this was going to be a success and that things were going in an exceedingly right direction?

Sam Altman: Sometime a little bit after GPT-2, which would've been 2019 at some point.

Marc Benioff: Now, as you kind of see GPT-4 maybe as kind of a North Star going forward, what's the next big step for OpenAI to kind of get to where you want to get it to?

Sam Altman: Two things. One, on this current hill that we're climbing with the technology of the GPT series, we're going to keep making it better. We'll make it more reliable, more robust, more multimodal, better at reasoning, all of these things. And we also want to make sure that it is really useful to people. And all the ways that it's transforming things keep happening. So we're now deep into the phase of the enterprise really adopting this technology and getting the systems to be very secure, very highly trusted, handle data appropriately, not hallucinate, or at least not when you don't want them to hallucinate. That's leading to enormous transformation for a lot of companies, and we want to keep doing that. And then the other thing we're going to do is figure out the remaining research from this paradigm we're in right now to something that we could all truthfully call AGI.

Marc Benioff: What's been the most complex part of dealing with the hallucination problem?

Sam Altman: Well, there's a lot of technical challenges, and then we could talk about those, but one of the sort of nonobvious things is that a lot of the value from these systems is heavily related to the fact that they do hallucinate. If you just want to look something up in a database, we already have good stuff for that. But the fact that these AI systems can come up with new ideas, can be creative, that's a lot of the power. Now, you want them to be creative when you want and factual when you want, and that's what we're working on. But if you just sort of do the naive thing and say, "Never say anything that you're not 100% sure about," you can get a model to do that, but it won't have the magic that people like so much if you do it the naive way.

Marc Benioff: When you think about your own company and how that kind of reflects back to you, I know how focused you are on trust and how focused you are on kind of the deep ethical backbone of OpenAI, what are the core values of OpenAI going forward? What things in priority do you think are most critical for you to focus on?

Sam Altman: We want to have the smartest, most capable, most customizable models out there, because we so believe in terms of what this will enable for humanity and that the good will be orders of magnitude more than the bad, but we also want to make sure that our models are aligned and that enterprises can trust us with data and that we're very clear about our policies around that, and that we're keeping not only the data secure, but also all of the learnings that a model has after ingesting a company's or a person's data, and so keeping the model very secure itself. And that balance of continuing to push on the capabilities but also making sure that from a safety and privacy perspective, we are keeping pace with the capability, we want to continue to lead on both of those.

Marc Benioff: How are you managing between alignment and intelligence?

Sam Altman: So this is a very common question. Right? Are we doing enough on alignment relative to capabilities? Should we be slowing down work on capabilities to work more on alignment? And I think the frame of it is nonsensical. It really is the same thing. What's happening here is we have this amazing new piece of the human tech tree called deep learning, and that approach is solving lots of problems. And that same thing, which is helping us make these very capable systems, is how we're going to align them to human values and human intent. And when we make something that is considered an alignment breakthrough, RLHF, which is how we take the base model and get it to follow the instructions that a human user has, that's an alignment technique, but it leads to the model going from not very usable at all to extremely usable. And so, in some very important sense, that's a capabilities thing, a capabilities gain as much as an alignment gain. Same thing with interpretability. We can look into what these networks are doing. That does, for sure, help with safety in all kinds of ways, but it also helps with capability. And so, there's more one-dimensionality to the progress than people think, and we think about it like a whole system. We have to make a system that is capable and aligned. It's not that we have to make a capable system and then separately go figure out how to align it.

Marc Benioff: That's definitely been a surprise to me to see how you've led through RLHF and all these different kinds of next-generation alignment models, got them to work, invented, I think, many of these techniques as well that were not really part of the constant and continuous AI narrative, even as, I would say, recently as the late 1990s and then into the, I would say, first part of the 21st century. I think when I read some of the textbooks even from 2015 or 2019, the models that you've invented aren't even mentioned. Are you surprised at how much core research you're doing?

Sam Altman: No, we thought we were going to have to do a lot. And we call ourselves a research lab, and we very much are. The sort of standard way that tech companies work is you start a product company. At some point, when you're generating enough profit, you're like, "Oh, I'm going to start a little research lab on the side." And the challenge there is they don't usually go very well. We started as a research lab and then we started a little product effort on the side, and the challenge is we have to make that go very well.

Marc Benioff: What's been the scariest thing that you've seen in the lab?

Sam Altman: Can I finish one more thing on that?

Marc Benioff: Oh, jeez.

Sam Altman: Oh, no. I just thought about-

Marc Benioff: That was a really good question. Come on.

Sam Altman: I'll get there. In terms of how we make progress and are we surprised, not really. We are empiricists. We know we're going to be surprised. We know that reality has a different way of playing out than you think it's going to when you sit there and make your beautiful theories in the ivory tower. So we're never surprised, and we just try to meet reality where it is and follow the technology where it can go. And I think that more than any other... Well, there's one more thing, which was we're extremely rigorous and careful combined with a lot of team spirit. And that combination, I think, is hard to get a whole company to be rigorous about every part of a system that has to come together. But those are two of the things that I think are special about OpenAI. Scariest thing we've seen in the lab. Honestly, nothing super scary yet. We know it'll come. We won't be surprised when it does. But at the current model capability levels, nothing that scary.

Marc Benioff: When you were in India, I was listening to one of your interviews, and I was really taken with something that you said about intelligence, that you said that as you look at these large language models and specifically how they're coming up with their answers, that you think it speaks actually or is a reflection back on human intelligence and maybe what human intelligence means. What were you trying to say?

Sam Altman: I don't remember exactly, but probably that intelligence is somehow an emergent property of matter to a degree that we probably don't contemplate enough and that it can happen with electrical signals flowing through neurons that are reconnecting in certain ways. It can happen with electricity flowing through silicon, but it's something about the ability to recognize patterns in data. It's something about the ability to hallucinate, to create, to come up with novel ideas and have a feedback loop to test those. And these systems are easier to study than the human brain, for sure, without doing a lot of collateral damage. There's no way we're going to go figure out what every neuron in your brain is doing, but we can look at every neuron in GPT-4 and look at every connection.

Marc Benioff: Let's go a little bit step deeper, because I think when you talked about that in India, I was really surprised. You're talking about intelligence is this emergent property of matter. Do you feel that that's what's happening then in the lab, that you're seeing this kind of wake up, that you're starting to see the kind of intelligence emerge from this software that we have inside the people in this auditorium?

Sam Altman: Well, I don't think we're seeing anything wake up, but I do think we are seeing intelligence emerge from a very complicated computer program.

Marc Benioff: And how does that go forward?

Sam Altman: I think the current GPT paradigm, we know how to keep improving and we can make some predictions about. We can predict with confidence it's going to get more capable, but exactly how is a little bit hard, like why a new capability emerges at this scale and not that one. We don't yet understand that as scientifically as we do about saying it's going to perform like this on this benchmark. But I suspect, I'm pretty sure that there are major new ideas to still discover, and that if we assume that this sort of GPT paradigm of the world is the only thing that's going to happen, we're going to be unprepared for very major new things that do happen.

Marc Benioff: Like what kind of things?

Sam Altman: The one that I would be tempted to say as the most important is the ability to reason. GPT-4 can reason a little bit in some cases, but not in the way that we mostly would use that term. When we have models that can discover new scientific knowledge at a phenomenal rate, if we let ourselves imagine a year where we make as much scientific progress as a civilization as we did in the previous decade or even the previous century and think about what that would do to quality of life, what's possible, that's pretty transformative.

Marc Benioff: I thought it's really interesting thinking about intelligence as an emergent property of matter and that feel like there is this unremarkable but also highly correlated connection between working with these chatbots but also thinking there is some human aspect of it. Is it coming mostly through the training, or do you think it's actually inherently there?

Sam Altman: The human aspect of talking to a chatbot? I think it's coming mostly through the training. We're really training this in some way to like be all of humanity. That's what it trains on, is the output of a huge fraction of humanity.

Marc Benioff: And what would be the next step?

Sam Altman: Well, we can talk about the obvious ones and the ones that are going to be more speculative. So the obvious ones are just the models are going to get dramatically more capable. They'll be dramatically more customizable and dramatically more reliable. And really, I think that the model itself, that is the fundamental enabler of everything else. We will continue to build the features around the model, like we were talking about earlier for enterprise-class usage, and we will continue to build consumer products, like ChatGPT, to make it easier for people to just start playing around. But in the same way that the internet and then mobile just kind of seeped everywhere, that's going to happen with intelligence. And right now, people talk about being an AI company, and there was a time after the iPhone App Store launch where people talked about being a mobile company. But no software company says they're a mobile company now because it would be unthinkable to not have a mobile app, and it'll be unthinkable not to have intelligence integrated into every product and service. It'll just be an expected, obvious thing. And companies will have their AI agents that can go off and do things and customers can interact with, and all sorts of other things that we're seeing happen right now. But this will be a big shift in terms of how we interact with the world and with technology.

Marc Benioff: When you think of dramatically more capable models, what is a dramatically more capable model? Can you try to describe it to us?

Sam Altman: I think it'll happen in all sorts of different ways. But one example, a lot of people use ChatGPT to help them program, and maybe they say, "Hey, it writes 25% of my code," and then it'll get up to 50 and then 75 and then 87 and a half and then 90 and then 95. And at some point, it's not just doing more code, but it's letting you do things you just couldn't do before. I'm a big believer that quantitative shifts lead to qualitative shifts at some point. And so, if you have better tools, if you can operate at a higher level of abstraction, if you can keep more of the big-picture problem in your mind at one point, you can just do dramatically more. And that's difficult to say exactly what that's going to be like, but we could reach back into history for an example. And if you think about the kinds of problems that a programmer let him or herself dream about when they were working with punch cards versus what you can think about with a high-level language of today versus what you can think about where you can just say in natural language, "This is what I want a program to do." "No, how about this? Actually, that's a really interesting idea that I just saw emerge here. How about this?" and the cycle time in the sort of iterative feedback loop is so different than it is today, that will change what a single programmer is capable of. That will change what a single person running a one-person company is capable of, and I'm tremendously excited to see.

Marc Benioff: I remember at one specific dinner that we were having, I was describing all the different things that I wanted to do at Salesforce, some of them that we just showed at the Dreamforce keynote, and then other things, and other things, and then I just kind of shook my head and I just said, "I just don't know how I'm going to hire all the people who do this work." And then you turned to me and went, "Marc, you idiot. You're not going to have to do any of that. The computer's going to do it all for you." Is that what you're saying?

Sam Altman: Not quite. I'm saying-

Marc Benioff: Which part, that I'm an idiot?

Sam Altman: What I'm saying is the amplification of one individual's capabilities. So one person with a really good idea and a deep understanding of what a customer needs is going to be able to execute on that with what would have taken complex, many, many person teams before. And that ability to give people these tools and let things happen with less resistance, with less friction, faster, easier, it's very easy to kill a good idea. It just takes one person to be a little bit less than supportive. They don't even have to say outright no. Creative ideas are very fragile things. And so, giving someone the ability to do more, I think, is going to lead to a big shift.

Marc Benioff: On a bigger message, I think a lot of families try to encourage their kids to go and become computer programmers and take coding classes and coding camps and all of these kind of things. Do you think those kids are going to come back and really be quite upset at their parents?

Sam Altman: I remember my grandma trying to tell me in second grade or something that it was really important that I paid attention and learned cursive in class. I don't even think they still teach it. Probably they don't. But I was like, "I have very bad handwriting," and that was a real struggle for me anyway. And I cheerfully went through it. I knew at the time that this was like, "No way this was going to be important," because I was already typing on a computer, but I'm not upset about it.

Marc Benioff: When you watch all these kind of unusual movies about artificial intelligence, Minority Report or HAL or Her or WarGames or all these various things, there's been so many of them, which one is your favorite?

Sam Altman: I like Her.

Marc Benioff: Which part of Her do you look at and go, " This is never going to happen"?

Sam Altman: Well, I mean, a ton of it. I think that's not the fair question because... And looking back at sci-fi, it's always easy to-

Marc Benioff: Can you just fill in that part that... Which part is going to happen? Which part do you think will never happen?

Sam Altman: Yeah, yeah. But I think it's unfair to dunk on old sci-fi for all the parts they got wrong, and we can cover that, but that's always true. What's amazing is that people get it right at all. And the number of things that I think Her got right that were not obvious at the time, like the whole interaction model with how humans are going to use an AI, this idea that it is going to be this conversational language interface, that was incredibly prophetic and certainly, more than a little bit, inspired us. So it's not just like a prophecy, it's like an influenced shot or whatever. But I think this idea that we all have a personalized agent trying to help us, and we talk to it like we talk to ChatGPT, that was actually not what most movies... I mean, most movies thought if we interacted with an AI at all, it was going to be like robots shooting us or something.

Marc Benioff: You think that that will be happening soon?

Sam Altman: I do not. I do not. I realize it is a compelling movie, but no, I don't. I think it'd be great for Hollywood to have some new tropes. But yeah, I think Her got something deeply right on the interface, and that is no small feat.

Marc Benioff: One of the hobbies that you have, and we've talked about it, is that you think about if you actually had to prepare for a moment in life when you had to kind of escape and kind of protect yourself. Where did you find that interest in that kind of idea?

Sam Altman: I mean, I think like a lot of little boys. I thought Boy Scouts was really cool and I like outdoor stuff and survivalism, and I have no delusions that any of that is going to protect us or me or you or anyone if AI goes wrong. And I sort of think it's silly that other people assume that's what it's for. It was like a boyhood hobby that stuck.

Marc Benioff: So you're not escaping to one of these Indonesian islands now that you have a golden visa and that's your new...

Sam Altman: The speed of light is really fast. I think the AI can get there much faster than I can get there.

Marc Benioff: You have such an interesting background in life. And the first time we ever had a meeting, it was because you came to me and you had some political interest, and we were at my home and we were talking about your political interest, and I was really... It was many years ago and you were very, very young. Where did that political interest come and where did it go, and is it still in there?

Sam Altman: I love California. I would like to spend my whole life living in California. I was concerned then, I'm concerned now about what's happening with the state. I think everyone has some civic duty to think about what they can do. So I was lightly looking into it, but I don't think I'd be a good politician at all, and I think I have something else to work on that's really important to me.

Marc Benioff: I think you'd be very good.

Sam Altman: I don't think so. That's really nice of you to say. I wouldn't say I'm like a super self-aware-

Marc Benioff: You had some interest around that. There was like a desire. Okay.

Sam Altman: I had some interest in seeing the state and the country at this point too get better, but I think we've all got to figure out the way we can contribute. I think the way I can contribute is scientific and technological progress. But I do think we need to get more people to jump into politics.

Marc Benioff: Well, you're a very young man, and when you were an even much younger man, you did have a very big political interest. And that was before you started OpenAI, and now you have. You've obviously now come out on a global stage, and it's clear to people that you are the visionary, with the capabilities that you have and the leadership capabilities that you have. Tomorrow, you're going to go back to Washington, D.C. You've already been out there quite a few times. Tomorrow, you're going to be interacting, I think, mostly with the Senate. So as you meet with, maybe just talk about the United States for a second and dealing with the White House or the Congress or the Senate and trying to explain to them what's happening here in San Francisco and the technology revolution that you're leading. What is your message to them, and what has been your biggest surprise dealing with them?

Sam Altman: Pleasant surprises for the most part. I think our leaders are taking this issue extremely seriously. They understand the need to balance the potential upsides and the potential downsides, but nuance is required here. And I did not go in with particularly high hopes about that nuance being held appropriately, and it really is. I don't know exactly how this is all going to play out, but I know that people seem very genuine in caring and wanting to do something, wanting to get it right. And the message... I think our role is just to try to explain as best as we can, realizing that we don't have all the answers and we might be wrong. Technology takes weird turns sometimes, what we think is going to happen, where the control points can be, where the dangers might lie, what it takes to make sure the benefits are broadly distributed, and then let the leaders decide what to do.

Marc Benioff: When you have these discussions with these political leaders, which part of it has been the most shocking or surprising to them?

Sam Altman: The first time that they see GPT-4, it usually exceeds their expectations. I would say it's the technology demo and realizing that this thing that people have been talking about for a long time is here.

Marc Benioff: Has that also been true on the international leaders as well?

Sam Altman: Yes, although by the time of that trip, most people had, by then, already played around with it to some degree. When we did that trip, it was sort of like, ChatGPT had already had its news cycle.

Marc Benioff: If you would have one goal with the US government between now and, let's say, next year when the elections start and potentially there'll be changes in administrations and so forth, what would be the goal, do you think, for the next 12 months from a political perspective?

Sam Altman: I think getting a framework in place that can help us deal both with short- term challenges and the longer- term ones. Even if it's imperfect, starting with something now would be great. It's going to take people a while to learn. I think solving this legislatively is actually quite difficult given the rate of change. I think an agency, a new agency would probably make the most sense. But getting something going so that even if it's just focused on insight and not oversight so that the government can build up the muscle here, I think would be great.

Marc Benioff: So are you suggesting the CIA should change their letters into the CAI?

Sam Altman: I mean, I can imagine worse agencies of less competent people to hand it over to, but I really do think a new one would be appropriate.

Marc Benioff: What part of all of this are they just getting really wrong?

Sam Altman: This is a very human thing. I mean, I don't just think the government's getting this wrong. I think most of us get it wrong to different degrees too. You show people an exponential curve of technology and they believe you that it's been exponential, but they don't believe you it's going to keep going exponentially. They believe it's about to level off. And that happens in all sorts of subtle and not subtle ways, and it's a very difficult bias to overcome, particularly when if you accept it, it means that you have to confront such radical change in all parts of life.

Marc Benioff: And so, what are you doing to help them understand that?

Sam Altman: I mean, a bunch of things. One is, I believe in the power of repetition here. Another is to just keep showing up every year and say, "Hey, that thing you thought was impossible or that thing you said, oh, well, I don't really have to worry about this, that your last model couldn't do it. Try our new one. See what happens."

Marc Benioff: The last few times I've arrived in the United States, I noticed that Customs now also has a camera, and sometimes they don't even ask for my passport. They just put up the camera in front of me and the AI is able to know it's me. We know in other countries, especially certain Asian countries, that there's a lot of that type of technology already in place. Do you think we're moving into a surveillance economy? Do you think that this technology will accelerate the move into greater levels of surveillance?

Sam Altman: I do. One of the things that I struggle with is, if AI is as powerful as we think and people can do significant harm with it, I don't see a world where we don't have less surveillance, and I don't think that's a good thing. But I have talked to a lot of people about this and have not yet heard a great solution.

Marc Benioff: I know that you like to just kind of get away and maybe get on a small boat somewhere, like a sailboat, and be with a friend or two and just sail by yourself and completely disconnect. When do you think it'll be that you're going to take that trip and that sailboat will not be quite so disconnected and we'll know where you are, no matter where you are, and maybe there'll be some autonomous characteristics to the boat as well?

Sam Altman: Well, we have Starlink now. In some sense, that's a transformative thing, I think, even more than people realize. I'm okay with that. I'll take that trade for sure. But I think this sort of idea that I'm going to be off on a sailboat unreachable, unless you really want to be, that's over.

Marc Benioff: When you now look out at your product pipeline for the next couple of years with OpenAI, you mentioned a few things. You especially mentioned these remarkable new models. You use the example of the computer science model and the ability to write any code and maybe become a whole engineering team. Can you take another industry or another application area and give us an example of something else that you think will be a remarkable achievement in terms of a model moving forward?

Sam Altman: I'll mention a few of my favorites. What's happening with education is incredibly gratifying to us to watch, and the ingenuity of teachers, of entrepreneurs building edtech companies, and also just of students themselves finding new ways to use ChatGPT to learn is quite remarkable. We know what a difference it makes to a student to have a great one-on-one tutor. It's like two standard deviations better outcome or something. But society before AI, just realistically, was not going to be able to afford that. And so, we didn't even let ourselves think of how we could deliver it. And now, I think we see a path where with a combination of humans and AI together, we can offer this to everybody in the world. And if we can deliver on that, and again, it looks to me like we're going to be able to with the technology, that will be a transformation for the world that I think would be a huge triumph. In healthcare, I think we can imagine a world where not only do these systems go help us cure a lot of diseases, but also the actual healthcare product we deliver to a person. Again, this hybrid of a doctor and AI together means we can just offer something far beyond what we think of as even possible today. In creative work, it's actually quite remarkable to see what's happening, what a visual artist, for example, can now do with the latest image generation tools. We're all going to get better art than we've ever had before.

Marc Benioff: Are you especially surprised to see that level of creativity in art and how it's connected back to the systems?

Sam Altman: In some sense, yes. If you had asked me to predict 10 years ago the order that AI was going to disrupt industries, I would've said physical labor first, cognitive labor second, and maybe never, but certainly last, creativity. And I hadn't actually thought that hard about it, but it's what all the experts were saying. It was what my college professors said, whatever. But it was what everybody thought. And for creativity to go first, which clearly it has, is an update. But also, as we started working on it and we kind of saw where the system strengths were and where the weaknesses were, it's a kind of area that can tolerate the flaws in our current systems quite well.

Marc Benioff: Yesterday, we gave a $20 million grant to our local public schools to help them augment their understanding of AI and we did some seminars, and the one in high school that had the most attention was on music. And kids were just fascinated with how the AI was able to work with them with music, but also wanted to understand all the legal aspects, the trademark issues and brand issues and everything else associated with music. When you look at the music industry overall, are you surprised to see kind of this dramatic change in music and the level of interest in AI music from kids?

Sam Altman: I mean, not at all surprised on the interest. People love music, and it's a really important part of life, and what people can do with these tools is great. Unfortunately, I think the music industry has a reputation such that companies just don't want to deal with it and are focused on other areas. And I think that's a loss, because like we're seeing with visual art, I think this will be a tool that amplifies humans, not replaces them.

Marc Benioff: All right. Well, thank you, Sam.

Sam Altman: Thank you very much.

Marc Benioff: This has been a great interview. Please thank Sam Altman, would you?

Clara Shih: What a great conversation. Three takeaways for me: First, that creativity and hallucinations are two sides of the same coin, and just to appreciate that. Number two, even Sam didn't expect AI to be so capable in creative tasks. And so, a lot of these technologies, they're going to continue to surprise us. Last but not least, I was struck by how Sam describes himself as an empiricist. OpenAI was not the first to develop large language models, but because of their pragmatic approach, they were able to go to market first with ChatGPT. That's it for this episode of Ask More of AI, the podcast at the intersection of AI and business. Follow us wherever you get your podcasts, and follow me on LinkedIn and Twitter or X.
