The Next Big Questions in AI Research with Andrew Ng

This is a podcast episode titled "The Next Big Questions in AI Research with Andrew Ng." The summary for this episode is: Listen in as Clara sits down with world-renowned AI researcher Andrew Ng, founder of DeepLearning.ai (http://deeplearning.ai/), Coursera, and Google Brain, as they discuss the next frontiers of AI research, the way kids engage in unsupervised learning, and how his lifelong interest in AI started with flying helicopters.

Speaker 1: What's really overblown is the extinction risk. I don't see any plausible path of AI leading to human extinction. Maybe there are some ways that humans will make humans extinct, but this idea of sentient AI, it seems very implausible to me.

Speaker 2: Welcome to Ask More of AI, the podcast looking at the intersection of AI and business. I'm Clara Shih, CEO of Salesforce AI, and today I'm excited to be here with Dr. Andrew Ng. Andrew is a world-renowned AI researcher and founder of DeepLearning.ai, as well as Coursera. Thanks so much for being here, Andrew.

Speaker 1: It's good to see you online, Clara, and I think I just saw you a few weeks ago at the Salesforce event. It was fun to catch you in person.

Speaker 2: Well, and congratulations to you on your Time 100 honor. So well deserved.

Speaker 1: Oh, you too. I think it was good to see you and everyone there.

Speaker 2: You joined the Stanford faculty in 2002. Now, if we could go back those 20 years with you, what were some of the research questions that you were investigating then in the lab, and were there sparks back then of what was to come?

Speaker 1: So back then in 2002, I was very focused on reinforcement learning research. My PhD thesis from UC Berkeley was using reinforcement learning to fly helicopters. And so I was continuing to work on a lot of robotics research, and that's how I wound up years later working on deep learning and neural networks. I could get the robots to be controlled, to fly upside down, have a robot dog climb rocks or whatever. What I could not do back then was get my robots to see, and so I shifted a lot of my attention to building neural networks, because I could get them to move quite agilely, but perceiving the world and reacting to the world, that was really hard. And so that's how I wound up shifting from reinforcement learning research to deep learning and perception research. And I remember I organized a workshop at the NeurIPS conference, at that time called NIPS, now NeurIPS, titled "Towards Human-Level AI," and it was a very controversial topic. People thought, "What? Human-level AI? Why are you even talking about that?" But that was a fun early discussion. I think Yann LeCun spoke at that workshop that I co-organized as well. A bunch of people were there talking about what the long-term future of AI could be. So fun times.

Speaker 2: So incredible. So even back then, was that an aspiration for you? Did you ever think that in your lifetime and career we could achieve AI that could pass the Turing test and feel like it was human?

Speaker 1: I was definitely very motivated by that question for a lot of my career. So I think when deep learning was just starting to take off again, around that time was when I organized that workshop. And even when I started the Google Brain Team, a lot of my motivation back then, in hindsight, maybe it turned out to be harder than I thought, than many of us thought. But at that time, my pitch to Google was, "Let's take neural networks and scale them up really, really big, and then we'll make a lot of progress in AI that way." And while part of that recipe turned out to be correct, scaling up models really, really big, that worked. I think the path to actually achieving human-level AI, or really artificial general intelligence or however you want to call it, that turned out to be a really difficult path. I think we knew it'd be difficult, but maybe when I was younger, I was more naive and thought it'd be easier than it turned out to be.

Speaker 2: And you turned out to be right, even though it hasn't been an easy road. I just imagine back then, a lot of people refer to it now as the AI winter. It wasn't the hot, cool thing that everybody wanted to jump on. Was it difficult to raise money? Was it difficult to attract graduate students? How did you persist and continue to stay so focused and committed to AI during those years?

Speaker 1: Yeah, it's funny you put it that way. It feels like I missed the AI winter. And I know that there was an AI winter, but as an AI insider working on this stuff for the last 20 years, what I saw was rapid year-on-year progress over what was possible the year before. And I know that for the broader public, there were hype cycles that died down, neural networks were hot, and then they were not, and then they were hot again. So I know that factually there was an AI winter, but as an insider, if you look at the actual progress, there wasn't actually much of a winter in terms of what AI could do. And I saw my software, my collaborators' software, it got better pretty consistently year after year. And so maybe this makes me sound really dumb, I don't know. But frankly, it feels like I just worked through the AI winter without really being bothered by it. So I was a happy camper all this time just working away at AI.

Speaker 2: It's incredible that because of the work that you and your lab and so many others have done, we are out of the winter. But it sounds like from within the depths of the research lab, there was no winter and you were continuing to make new findings, which is fantastic.

Speaker 1: Oh, yeah. And I feel like if you look at the research progress, it's just been getting better every year. I think the societal perception has massive swings, but the actual, how well does this stuff work? It just made progress every year for the last 20 years.

Speaker 2: Isn't that so interesting though, how something seems from the inside can be so different from how it's perceived on the outside? And just for us to always keep that in mind when we're working on big, audacious goals.

Speaker 1: Yeah. There's actually one lesson I learned at Coursera as well. I think there was a year, was it 2011 or 2012, that the New York Times or someone declared the year of the massive open online course. And I feel like when you're developing technology, what it often feels like is, I'm going to make up a number, every year it's like 50% better than it was the last year. And so when you work on the technology, you see this smooth exponential curve. It is exponential, because it's getting 50% better every year or something. But it turns out that to people who aren't actively working on it, that 50% year-on-year growth looks like an exponential curve that suddenly takes off, and people go, "Wow, this came out of nowhere." But what it actually feels like working on it is, "Yeah, I've been working on this for five years and it's been getting a little bit better every year. So what's the big deal?" But there are these moments when, due to social dynamics... I was working on online education for years before Coursera, but I think finally we hit that product-market fit. I thought, "Oh yeah, the features are now a bit better than the year before." But there was a phase transition of hitting a much stronger product-market fit and then capturing the attention of a lot of people, which is why sometimes we go, "Oh, it came out of nowhere." I find that when I'm inside working in the field, I don't think I'm actually that good at predicting when it will capture societal awareness, because I see, yeah, it's just getting better every year. So that's an interesting lesson for me, to try to take the outsider view and not only the insider view, because that affects how we should think about growing our businesses as well, and the market timing of what we do.

Speaker 2: It is. But I think ultimately it's really the power of compounding, and I talk about this all the time with my kids: you keep at it, you get 1% better at piano or math or soccer every day, and on the outside it doesn't seem like it's getting much better. But then all of a sudden you perform your piano piece several years later and everyone's like, "Wow, how did that person get so good?" And I think that's true for any type of endeavor that requires persistence and patience and multiple years of work.
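
A quick back-of-the-envelope check of the compounding both speakers describe, using the illustrative numbers from the conversation (50% per year and 1% per day):

```python
# Compounding small, steady improvements (illustrative numbers from the conversation).
yearly_rate = 0.50   # "50% better every year" (Andrew's made-up number)
daily_rate = 0.01    # "1% better every day" (Clara's example)

five_years = (1 + yearly_rate) ** 5
one_year_of_days = (1 + daily_rate) ** 365

print(f"50% per year for 5 years -> {five_years:.1f}x better")       # ~7.6x
print(f"1% per day for 365 days  -> {one_year_of_days:.1f}x better") # ~37.8x
```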

Speaker 1: Yeah. In fact, I often wonder what actually constitutes creativity or acts of genius. I suspect a lot of really creative acts are tiny little increments, stacked up over a long time, so that outsiders don't even understand how it was done and call it very creative. I remember Garry Kasparov, the former world chess champion, after he was defeated by Deep Blue at chess, I think he said that he found the AI system's moves very creative, and I thought that was incredible. But to me, calling something a supremely creative act is sometimes a reflection that we just don't know how it came about, when really it came about through the sweat and hard work of building one little brick at a time until suddenly there's this magical cathedral. That's what creativity sometimes feels like.

Speaker 2: It's like a catchall phrase. That's very interesting. So I want to also go back to helicopters. Reinforcement learning is not mutually exclusive with neural networks and deep learning. And so what was the specific aha moment for you and what were the specific areas that you were looking into at first, and then how did you transition to deep learning and neural networks?

Speaker 1: Yeah, so for my PhD thesis at UC Berkeley, I actually had created my own reinforcement learning algorithm to train a very, very small neural network to fly a helicopter. And I think that at that time, reinforcement learning was a very academic subject with relatively few practical applications. So when I released these videos of a helicopter flying really well, sitting in the air rock solid, more stable than a human pilot could probably fly it, that was viewed as a big success by the reinforcement learning community. And not many people know it was actually with a very, very tiny neural network, embarrassingly small. We could count the number of neurons, probably fewer neurons than I have fingers. I don't remember anymore. As the research transitioned to deep learning, one of the books that I was very motivated by was a book titled On Intelligence by Jeff Hawkins. Jeff Hawkins had written this groundbreaking book that raised a theory, controversial to this day, that maybe a lot of intelligence may be due to one learning algorithm. And I thought, if human intelligence is due to a thousand different learning algorithms, if maybe evolution evolved a thousand different things for different pieces of the brain to do, then how on earth could we build that? It's going to take forever to figure out a thousand algorithms. But there was this fascinating theory that if there's one or a very small number of organizing principles for intelligence, where it's a simple algorithm plus a lot of data, then maybe within our lifetimes we could figure out a lot of what that is. And so I was very motivated by that. And in fact, in the early days when I was leading the Google Brain Team, the primary mission I set for the Google Brain Team was, I said, "Let's make really, really big neural networks." Fortunately, that worked out well: very large neural networks with lots of data. Fortunately, that part of the recipe worked out well. There was one other thing that I was excited about back then, which turned out not to be the optimal short-term direction, which was I was very focused on unsupervised learning, learning from unlabeled data. Because a lot of human learning, a lot of human infant learning... you and I both have kids, and no matter how loving a parent you are, we're not going to point out a thousand cars to our kids. We just don't have that patience. And so kids actually do a lot of learning by wandering around the world and observing and figuring stuff out by themselves, not by parents labeling every single object over and over. So I was very motivated by this idea of learning from massive amounts of data. And we did that in the early days at Google on the Google Brain Team. And I think to this day, unsupervised learning is important. It's actually how large language models now are trained. But I think I got the timing of that wrong, in hindsight. What really worked a decade ago was supervised learning. It's only more gradually now that self-taught learning, or self-supervised learning, or unsupervised learning is driving more progress.

Speaker 2: You were just ahead of your time, and of course now reinforcement learning has made a comeback in a big way and reinforcement learning from human feedback, and maybe you weren't so wrong all along and all these different pieces coming together.

Speaker 1: Yeah, although I find that timing is really tricky. I don't give myself credit when I get the timing that far off. One thing I think about: when Apple released the iPhone, Steve Jobs got the market timing right, and the iPhone took off. But Apple also had released the Apple Newton much earlier, which was a stylus-based handheld tablet with a screen. But the ecosystem just wasn't ready to support that. So the Apple Newton failed as a product, because wifi and touchscreens, the ecosystem just wasn't there. But the iPhone got it right. So you actually get a lot of points in today's world if you can get the market timing right, which is really tricky, and it just makes a huge difference. So I am trying to put a lot more thought into that as well. Long-term research is great; we should do long-term research that won't pay off for 20 or 30 years. But in terms of taking things to product, that market timing, I try to think a lot about how to do better at it.

Speaker 2: That's a good point. And you just look at the success of ChatGPT and how so much of that was really dependent on the availability of powerful GPUs and the ability to do this large amount of compute for training and for inference, but then also very practical things that they did, like reinforcement learning at scale with all these human labelers. And so certainly a big part of it is timing, but then a big part is the smart decisions that are made.

Speaker 1: I think OpenAI has done great. I'll say one thing: I find that a lot of people are saying, "Boy, ChatGPT came out of nowhere." And I don't think that's true. I think for a lot of people it did come out of nowhere. I didn't realize I had done this until a couple of weeks ago when a friend pointed it out to me, but it turns out that in September 2020, so a little more than two years before ChatGPT was released, in my weekly newsletter, The Batch, I actually wrote that I felt GPT-3 was pointing to a significant change in how we do text processing. I wrote it then, I had forgotten I wrote this, but (inaudible). More than two years before ChatGPT, I actually wrote publicly, and you can see it on the website, that I felt GPT-3 was changing the way we process text, and that scaling up would improve it even further. And at AI Fund we were already seeing entrepreneurs use GPT-3 in new ways. And I know I wasn't the only one. Some of my insider friends in NLP, attending NLP conferences, it was clear something was in the air. I don't think any of us knew it'd be exactly ChatGPT, but by the time of GPT-3, it was already pretty clear something weird was going on. So I think sometimes when you're close to the tech, you can make predictions even a couple of years ahead of time. And these days I'm still trying to think about what predictions I have for a couple of years in the future, and who knows whether I'll be more or less right this time around. But it's a fascinating space to think about what will happen in the next couple of years.

Speaker 2: And it ties back to our discussion earlier about how what seems on the outside like something coming out of nowhere, a stroke of creative genius, was actually years in the making. And the closer you are to it, the more you realize that it can happen, and the better you are at being able to predict that big moment.

Speaker 1: And for a lot of the disruptive changes, I think people close to the tech often had a sense of them coming for quite some time.

Speaker 2: You've spent a lot of time in your career in academia, at Stanford and at Berkeley before then. You've also worked at large companies, at Baidu and founding the Google Brain Team. You're also an entrepreneur: you co-founded Coursera, and now you have deeplearning.ai. What do you think the role is of large companies versus startups versus academia in furthering AI research, and is this changing?

Speaker 1: I think it's all important. I think all of these organizations are important. From a business standpoint, I find that many large companies have a huge distribution advantage, which is why large technology companies can often be fantastic at getting a technology to their user base, even if they're not necessarily first to market with the product. So this is why, even though some of the large cloud companies didn't invent a lot of the LLM technology, I think many of them will do fine selling LLM, large language model or generative AI, APIs. I think different companies are different, and some companies are really good at being highly nimble and innovative, but many are not. And then I really enjoy the breathtakingly fast decision making of startups, the ability to, I don't know, on a Saturday morning make a call, send an email, and then ship a product on Tuesday. It's just very difficult, much more difficult, to do that in most big companies. But the breathtaking decision-making speed of startups drives this pace of iteration. It's been interesting. I've spoken with some people that have been at big companies for 20 years, and sometimes they'll say, "Oh Andrew, we move really fast." And they are fast, maybe relative to their frame of reference. But with fast-moving startup CEOs, even I've been surprised. With very good startup CEOs I know, they've gotten on a phone call and within 60 minutes made a major engineering architecture decision. And I go, "Wow, they just made that call in 60 minutes." And they did. And it was a good decision, even. And then academia: I know that there's a theme that maybe academia doesn't have the massive compute resources, and some types of research needing massive capital investments are very difficult to do in academia, but there's so much work to do in terms of basic research. I see groups at Stanford, Berkeley, and many other institutions doing really great work as well, even though academia tends not to have the marketing engines that the big companies have. So sometimes really good work in academia doesn't receive as much attention. But I think it's all important. There's so much going on in AI that all of it is very additive. In fact, I've been following some of your work at Salesforce as well, and I feel like, do you want to say anything about that?

Speaker 2: I would love any feedback from you. We're very committed and very excited, building on the work of one of your former students, Richard Socher, who was the founder of Salesforce Research 10 years ago, and we're really excited about our new Einstein Copilot that we're working on and just bringing AI into the flow of work across all of our existing products.

Speaker 1: Yeah. I've been chatting with Silvio and some Salesforce engineers over the last few months. I think I was in Palo Alto and randomly ran into a group of Salesforce engineers and wound up chatting with them. But the fact that Salesforce is training your own large language models and thinking through the entire stack of software, hardware, and use cases, given your very large user base, I'm imagining the high degree of excitement and all the things you could do.

Speaker 2: It is a thrilling time. And our customers have been great collaborators with us on this too, and I feel like we're literally building out the future of technology.

Speaker 1: Yeah.

Speaker 2: What do you think are the most interesting questions in AI research today?

Speaker 1: So AI has become so diverse and so rich that the field is not going in one direction. It's going simultaneously in a hundred different directions. And I think it's good that lots of people have lots of opinions about what research to do, but maybe just to share with you some things I'm excited about: I remain excited about unsupervised learning. If I had plenty of time and didn't have full-time work, I would find it fascinating to go and do research on unsupervised learning, which is well aligned with the current ways that we're pre-training large language models and large vision models and large multimodal models. But I feel like there's still something missing. I feel like, algorithmically, something is off, especially in the way that we're training large vision models. My former team, the Google Brain Team, had published the transformer paper, and vision transformers take images and turn them into tokens, turn them into numbers and shove them into the transformer. But in that whole chain, I think there's a lot of room for improvement, even though large vision models, large vision transformers, are working quite well. So I think there's a lot of exciting work there to be done. Another area I find exciting is agents. This is one of the many wild west areas of AI, where you can prompt an LLM to say, "Hey, I would like you to, whatever, help me find competitors of this company." How do you do that? Then the LLM decides, "Well, I'm going to search online for the competitors, visit the competitors' websites, (inaudible), whatever." But having an LLM decide for itself what actions to take in a sequence, and then take that sequence of actions, is an exciting idea. It's frankly not working that well. I would not use this stuff in production right now, but this is one of the several wild west areas that I find exciting. And then also data-centric AI. For the short-term, practical, let's-get-stuff-to-work side: how do we systematically engineer the data needed to deploy? That's exciting. And then I'll make a prediction about the future, always dangerous, but I see a lot of excitement about edge applications. I know a lot of the attention is on the cloud right now, and some may say, "Hey, why is Andrew pushing 1970s technology? Who writes desktop applications anymore?" But I feel like the smaller LLMs, the one to 10 billion parameter LLMs, run fine on, say, a modern laptop, maybe with a good GPU. And I think there are privacy and maybe latency advantages to large language models and large vision models running at the edge. So how to get that to work, what are the use cases? My team at AI Fund is spending some time thinking that through. But these are some of the directions I'm excited about. I think the AI community collectively has a hundred or more than a hundred directions, and it's fantastic to see this very diverse set of research topics. I'm curious, what are you excited about, Clara?
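
As a rough illustration of the agent pattern Andrew describes, where an LLM decides for itself which action to take next and the program executes it, here is a minimal sketch. The call_llm and web_search helpers are hypothetical placeholders rather than any specific product's API, and the prompt format is an assumption made purely for this example.

```python
# Minimal sketch of an LLM "agent" loop: the model picks the next action,
# the program executes it, and the result is fed back in. Hypothetical helpers.
def call_llm(prompt: str) -> str:
    """Placeholder for a call to some large language model."""
    raise NotImplementedError("stand-in for a real LLM call")

def web_search(query: str) -> str:
    """Placeholder for a search tool the agent is allowed to invoke."""
    raise NotImplementedError("stand-in for a real search tool")

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        decision = call_llm(
            history + "Decide the next step. Reply either "
            "'SEARCH: <query>' or 'FINISH: <answer>'."
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        if decision.startswith("SEARCH:"):
            result = web_search(decision.removeprefix("SEARCH:").strip())
            history += f"Searched and found: {result}\n"
    return "No answer within the step budget."

# Example use, echoing the prompt from the conversation:
# run_agent("Find competitors of this company")
```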

Speaker 2: I am excited about a lot of those things too, what we just announced at Dreamforce around the Einstein Copilot Studio, and I think that agents in general could be wild west, but when it's in the context of a secure enterprise environment where you have trusted data, trusted workflows, checks and balances and guardrails in place, I think there's a lot of promise to being able to use the agents for decisioning and for actions.

Speaker 1: Cool, cool. Yeah.

Speaker 2: I too am very interested in the edge, especially given what Meta announced with their new AR glasses. Imagine running a large language model on your phone connected to the glasses. That could be very interesting. The other thing that's interesting to me is, as we train on these large data sets and in some cases proprietary data sets, how do we combine novelty with the original training set? Are we going to just all converge to the same answers to similar questions? And what's the role of new ideas, new creativity, and continuously expanding the corpus of knowledge?

Speaker 1: Yeah, that would be interesting. It's one of those things: creativity is very hard to define. I don't know of any definition of creativity, I don't know of a mathematical test of creativity, which is why something one person will perceive as creative, someone else will not. But it would be very nice if there were such a thing. The question is, how do you prevent collapse? If all the text on the Internet, or most of the text, is generated by LLMs, and LLMs train on their own data, it seems like that could lead to a mode collapse situation, unless we can get them to be somehow creative and ingest new sources of knowledge that generate new information rather than just regurgitate existing information. Somehow humans can do that, but can LLMs do that? I don't know. So it would be interesting to try. It would be fascinating if in the future LLMs could ingest and generate genuinely new text, so that we avoid this worry of the text being completely polluted by auto-generated text and of mode collapse as models train on their own data. That is exciting to think about.

Speaker 2: Yeah, exactly. So a lot of people, rightfully so, including our own team here at Salesforce, we're focused on the risks that come with AI and making sure that we mitigate those risks. You've said before some of the risks are overblown. Can you talk more about that? What do you think is real? What should we do about it versus what do you think is overblown?

Speaker 1: So what I think is real: AI has well-documented instances of bias, unfairness, and inaccuracies causing harm. So I feel like those are real risks that many people are working on addressing. The good news is that AI is getting much safer. In fact, if you were to prompt a large language model today to try to get it to, say, give you detailed instructions to commit an illegal act or something, you can still do it, but it's much harder now than six months or 12 months ago. 12 months ago, for a lot of the LLMs, you could ask for detailed instructions to do something bad and it would just tell you. Today, most LLMs are much more likely to refuse. So I think we're improving a lot on safety. We probably need more progress on bias. It's difficult, but again, we're making progress. So I think those are some of the real risks. And also inaccuracy: an obvious use case would be if a driver assistance system makes the wrong decision, it could lead to a car crash. So fortunately, AI systems are getting much better. What's really overblown is the extinction risk. I was surprised when collaborators I deeply respect, including Geoff Hinton and (inaudible), signed a petition calling for paying attention to and guarding against AI leading to human extinction risk. I think that's really overblown. I don't see any plausible path of AI leading to human extinction. Maybe there are some ways that humans will make humans extinct. But this idea of sentient AI, it seems very implausible to me. And in fact, I hope that within our lifetimes we'll build AI smarter than any human. I hope we'll get there within our lifetimes. But humanity has lots of experience steering things more powerful than any single person, including corporations and nation states. Corporations and nation states are much more powerful than any single person, but for the most part we've managed to control them. And I don't really doubt that we'll manage to control AI as well. And so if you look at the real extinction risks that face humanity, it's things like the next pandemic, fingers crossed, or maybe climate change leading to massive depopulation in parts of the planet, or, with much lower probability, another asteroid wiping us out like it did the dinosaurs. But I think our response to any of these real risks to humanity will deeply involve AI. So if you want humanity to survive and thrive for the next thousand years, rather than slowing down AI as some have proposed, I would rather make AI go as fast as possible.

Speaker 2: What would it take to create an AI that is smarter than humans?

Speaker 1: So I think AI systems today are already smarter than any single person on specific dimensions. We've had AI much better than any of us at adding numbers for a long time. And now it's much better than any of us at remembering lots of facts and answering factual questions about esoteric corners of knowledge. The weird thing about benchmarking humans and AI, with this goal of AGI, artificial general intelligence, which I think is a great goal, is that the digital path to intelligence has turned out to be very different than the biological or human path to intelligence. They're just good at very different things. And so this definition of artificial general intelligence, the most widely accepted definition, is AI that could do any intellectual task that a human can. But we're asking an AI to do any intellectual task that a human can, forcing the digital intelligence, which is wonderful, to do everything that the biological one can. It's just a really tall bar for digital intelligence to reach. I hope we'll get there. I don't see any fundamental reason why we can't get there at some point, but I think that will take decades, and we'll still need new technologies that have yet to be invented. And I realized something recently. I know that there are some people that think, "Oh, we'll reach AGI in three to five years." What I realized is that some of those groups have a non-standard definition of AGI. And I think, "Well, sure, if you redefine AGI, we could totally get there in a few years." I was chatting with an economist friend, and he said, "Well, with that definition of AGI, I think we got there 30 years ago." But by the original definition of AGI, I think we're still pretty far away.

Speaker 2: Yeah, I've heard many different definitions. Very interesting. You recently had a clone made of your voice. Tell us about that process and what happened from that experiment.

Speaker 1: Yeah, so Speech Lab is an AI Fund portfolio company, and I was chatting with the CEO, Seamus, one day, and I commented that I had been using some of the commercial voice cloning systems and they just didn't do that well on my speech, because I have a nonstandard accent. I think for an average or typical American accent, to the extent there's such a thing as a typical American accent, maybe there isn't, they do better, by taking American speakers' data and adapting it to a person. But because my accent or whatever I do is relatively non-standard, the commercial systems I was using just didn't sound like me. And then Seamus said, "Oh, my team can build a voice clone of you, just give us some data." And I think in a few days, he trained the voice clone. And it was actually pretty interesting: I recorded a bunch of jokes, and then we had the voice clone record, say, a bunch of AI jokes. And then when I listened to the AI-generated messages, at first I was thinking, "Did I say this or did the voice clone generate this?" So even I had a hard time telling, was this me or was this my voice clone? And then I released on social media: here are four AI jokes, either me or my voice clone, can you guess who said what? One of my parents guessed right and one of my parents guessed wrong.

Speaker 2: Oh my gosh. Even your own parents and even yourself, you couldn't tell the difference. So that's amazing, but also scary. How are we supposed to trust any video or audio interaction going forward that it's actually that person?

Speaker 1: Yeah, I think there's one thing that would be very helpful, which is watermarking. And by watermarking, there are various technologies that can embed a hidden code either in generated text or video or audio to signify that something was generated by AI. A few months ago, the White House had a number of large companies make voluntary commitments on AI.

Speaker 2: Yes, we were part of that.

Speaker 1: It's interesting. Oh, you were? Awesome. All right, so I'm going to say something you may not like then, but candidly, I think all the commitments were fluff except for one, which is watermarking. And so to me, it just sets up an interesting test: let's look back in a few months to see if anyone actually watermarked the content. And honestly, candidly, right now I'm not feeling very positive that this approach to regulation is working, because even since that White House voluntary commitment, I'm seeing multiple companies, not yours, but other companies, make moves that feel like backtracking from that commitment. There are companies that have made public statements saying, "Yep, you can't tell what's real or fake," and, well, that's how they've left it. So I'm actually very concerned about the regulatory process in the United States. I think it is not going in the right direction.
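
For readers curious what the watermarking Andrew mentions can look like for text, here is a toy sketch of one published idea: generation is nudged toward a pseudorandom "green list" of tokens keyed on a secret, and a detector that knows the secret tests whether green tokens are over-represented. This is an illustrative sketch under those assumptions, not any particular company's implementation.

```python
# Toy sketch of statistical watermark detection for generated text.
# One published family of techniques biases generation toward a pseudorandom
# "green list" of tokens; the detector, knowing the secret, checks whether
# green tokens are over-represented. Illustration only, not a production scheme.
import hashlib
import math

def is_green(prev_token: str, token: str, secret: str = "demo-key") -> bool:
    """Deterministically assign roughly half of all tokens to the green list,
    keyed on the previous token and a shared secret."""
    digest = hashlib.sha256(f"{secret}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def detect(tokens: list[str]) -> float:
    """Return a z-score: large positive values suggest the text was generated
    with the green-list bias, i.e. it carries the watermark."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected, std = n * 0.5, math.sqrt(n * 0.25)
    return (hits - expected) / std

# Ordinary text should score near 0; heavily biased text scores well above ~4.
print(detect("the quick brown fox jumps over the lazy dog".split()))
```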

Speaker 2: So you feel it should be heavier handed?

Speaker 1: I think it should be smarter.

Speaker 2: And how?

Speaker 1: So what I'm seeing in terms of the US approach, and the US and Europe both, is I'm very concerned about the degree of regulatory capture. For example, all the time that governments are spending thinking about how to prevent extinction, which gets more attention in Europe than in the US, I think that's frankly not time well spent. I know in the US there were some regulations looking into preventing AI from having access to nuclear weapons. It's not that I think AI should have access to nuclear weapons. That'd be a really dumb idea. But any time that regulators spend stopping AI from accessing nuclear weapons, which is just not a thing, is time spent preventing a non-problem rather than crafting thoughtful regulation that will protect citizens and empower the technology to move forward. And then I've been quite concerned about the number of players that have been lobbying against open source as dangerous. I feel like there are definitely commercial interests that do not like open source, but frankly, in AI, in tech, we all stand on the shoulders of giants, and open source is one of the most beautiful, powerful forces moving AI forward. I feel like some of the pressures to regulate code, to put a lot of requirements in place before you can open source software, would be very damaging to global innovation. I hope that doesn't come to pass, but I find the lobbying against open source to be quite alarming. And frankly, I hope those lobbying efforts fail, because while open source is not perfect, and yes, someone could download open source software and do something bad with it, I fully acknowledge that risk, on average when open source is released, the number of beneficial use cases has almost always vastly surpassed the number of dangerous and harmful use cases. The United States often gets things wrong for a while, but then eventually figures it out. Maybe it was Winston Churchill who once said, "Democracy is the worst possible way of running a country except for all of the alternatives," and I think maybe that applies here. So I'm quite dismayed at what I've been seeing recently, but most years, not always, but most years, the US is a relatively well-run democracy. So I hope we'll muddle our way through and eventually get onto a better trajectory than we seem to be on right now.

Speaker 2: Okay, so cautiously optimistic. And I think you have a big role to play in helping educate lawmakers and the broader public, as you already are. Andrew, you grew up in a few different places. You spent time in Hong Kong, which is also where I'm from, in Singapore, and in London. How do you think your upbringing shaped some of the incredible work that you've done?

Speaker 1: Yeah, so my childhood, I was born in London, spent a lot of time in Hong Kong and Singapore growing up. I reflect on my good fortune of having had many good teachers. And I think, wait, one of your parents was a teacher? One or both of your parents are teachers, Clara.

Speaker 2: Both, yeah.

Speaker 1: Oh, cool. Yeah, I reflect on my good fortune of having had fantastic teachers, and that's what made me try, and fail, for a long time, but eventually I got better at becoming a good enough teacher to help others. I think for all my life, I've really valued good teaching and valued what others did for me, and had a desire to be able to help others in a similar way as well. But yeah, definitely Hong Kong as well as Singapore were places where I appreciated some great teachers. And you, Clara, how has growing up overseas shaped you?

Speaker 2: I think just being different and thinking differently and persisting through challenge and feeling comfortable being contrarian. I think it's very important for anyone who's an entrepreneur.

Speaker 1: Yeah, yeah, that's true. I remember when I was a kid in school, I think my teachers thought I was a good kid because I was respectful. And actually, when there was an unambiguous rule, I mostly followed it. But I was definitely the kid that would do the weird things. Like if there was a school competition, within the rules, I would be the kid that submitted some weird thing, and sometimes it did not win. Sometimes it was a disaster, but sometimes it was creative. So I think that unorthodox approach sometimes is an effective way to go.

Speaker 2: Yes, certainly in Silicon Valley, that is. We don't create new things by just following the rules and doing the same things that we've been doing before. That's a good segue to my next question. So we both have kids, and even in the last 12 months, what AI can do now is astonishing relative to what humans could do before. How should we educate our kids differently to be prepared in this AI driven world that we're entering?

Speaker 1: Yeah, I feel like two things. One is lifelong learning. The world changes so fast. I think decades ago or centuries ago, when there was a tech disruption, you could keep doing your own job and the next generation, your children, would then maybe have a different job. Like, you farm all your life, and for your children, maybe the farming jobs are going down, so they do something else. But now tech changes so fast that we have to change within our lifetimes rather than just have the next generation change. So, lifelong learning. And then second is, I would love to see a future where everyone learns to code. I did not say this a couple of years ago, but with generative AI and low-code, no-code tools and data-centric AI, what I'm seeing is that the barrier to building and using custom AI is so low, and the value is so high, because everyone now has custom data, whether you're a big business, small business, technical role, non-technical role, even a high school student running biology experiments. Everyone has data. So with everyone having custom data and tools that make building AI much easier than before, I think the value for individuals of learning just a little bit of coding to use AI is now high enough, or the ROI is high enough. I'd love to see a future where everyone learns to code. And I know that there's this idea that maybe you don't need to learn to code because computers can just understand English, or whatever native language you speak. I think computers are getting much better at just tell it what you want and it'll do what you want. There's a lot of truth to that. The problem with human languages like English and other languages is they're ambiguous, which is why even now when you prompt an LLM, and do learn to prompt an LLM, you don't always get a predictable result. Whereas if you code in Python, that's a very unambiguous language to tell a computer what you want. So I think for the foreseeable future, someone who knows how to use LLMs, how to prompt them, and who knows how to write a little bit of code will be able to do much more than someone who only knows how to prompt LLMs, even though I think that's exciting too. But I would love to see a future where we have an educational system that empowers everyone, just like today we teach everyone a first language. For many people it's English in the US, other languages in different countries. I think it'd be really cool if everyone learns Python as a second language, or some other language.
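
A minimal sketch of the "prompt plus a little code" pattern Andrew describes: the natural-language prompt handles the fuzzy part, and a few unambiguous lines of Python pin down exactly what happens with the result. The ask_llm helper is a hypothetical placeholder, not a real API.

```python
# Sketch of combining an ambiguous natural-language prompt with
# unambiguous Python glue code. `ask_llm` is a hypothetical stand-in
# for whatever LLM API you happen to use.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM call")

def summarize_feedback(comments: list[str]) -> dict[str, int]:
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for comment in comments:
        # The fuzzy part: let the LLM interpret free-form text.
        label = ask_llm(
            "Classify this customer comment as exactly one word: "
            f"positive, negative, or neutral.\n\nComment: {comment}"
        ).strip().lower()
        # The deterministic part: code decides how any unexpected reply is handled.
        counts[label if label in counts else "neutral"] += 1
    return counts
```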

Speaker 2: Okay. So coding and lifelong learning. And on that point of lifelong learning, with all of your various commitments and responsibilities, how do you keep learning? How do you stay current with everything that's going on?

Speaker 1: I've been fortunate to hang out with enough friends, like sometimes you, Clara, that help give me a sense of what's happening. And then I think Silicon Valley's a very special place, and I apologize to anyone listening to this outside Silicon Valley, but the network of connections here around generative AI is unlike anywhere else in the world right now. And I find that staying current on generative AI is easier in Silicon Valley at this moment. But to be more constructive: my team at (inaudible) AI, we publish a newsletter called The Batch, where we scour the world for what matters in AI right now and try to summarize that every Wednesday. And so I actually count a lot on the editorial team of The Batch to help keep me current on what matters in AI right now.

Speaker 2: And do you actually have an editorial team or are you using an LLM to summarize?

Speaker 1: It's a bunch of humans, yes. It's an editorial team, not LLMs. And by the way, we tried an LLM, couldn't get it to work nearly as well as humans. Maybe the technology will change.

Speaker 2: How ironic.

Speaker 1: Yeah, maybe someday, but not right now. And then I think our personal networks and our communities help, wherever you are in the world, local communities of people sharing the same idiosyncratic interests as you. And then social media, and I still try to read research papers regularly. The first few months of the year I was counting, and I think I was averaging two research papers a week. A little bit fewer now, but I find it comes down to just reading a lot. I don't know any other way to do it. And you, Clara, what's your advice on keeping current?

Speaker 2: The hardest part for me is carving out the time, but I try to stay pretty disciplined about having a morning every single week, sometimes it ends up eating into my weekends, where I really block out everything else and I'm just learning.

Speaker 1: I see. Wow. Oh, wow. That's great. You're a real lifelong learner. Yeah.

Speaker 2: Trying to be.

Speaker 1: One thing I've learned: on my tablet, on my iPad, I'm pretty good at dumping my backlog of research papers. So whenever there's a spare moment, it's always pulled up and I know exactly where to go to read the next research paper. That helps a lot.

Speaker 2: Well, so amazing to have you on the show. Thank you for your insights today, and thank you for educating so many people around the world on AI.

Speaker 1: Thanks, Clara. It's always great to chat with you.

Speaker 2: Some takeaways for me from today's episode. Number one, Andrew's interest in research in AI started with flying helicopters. Number two, that Andrew believes that there's tremendous potential still in unsupervised learning, as well as image processing and running models on the edge, which I agree with. Number three is that those who were on the very inside of AI research never perceived that there was an AI winter. They were continuing to find amazing results in their work. And so what has shocked the world over the last year is not a surprise to them. Well, that's all for this week on the Ask More of AI podcast. Follow us wherever you get your podcasts and follow me on LinkedIn and Twitter. To learn more about Salesforce AI, join our Ask More of AI newsletter on LinkedIn. See you next time.
