Reskilling the World for AI feat. Google’s James Manyika

This is a podcast episode titled, Reskilling the World for AI feat. Google’s James Manyika. The summary for this episode is: Clara sits down with James Manyika, Google’s SVP of Research, Technology & Society, to talk about AI's evolution, explore current challenges, and unravel what must be done collectively to ensure AI benefits all.

James Manyika: We have to keep in mind even as we try to be very ambitious in utilizing this technology to benefit everybody, the need to be both bold and responsible at the same time. I mean, this might sound like it's a contradiction. We actually don't think it is. I think we have to embrace that tension and always be mindful of it even as we pursue these opportunities.

Clara Shih: Welcome to Ask More of AI, the podcast looking at the intersection of AI and business. I'm Clara Shih, CEO of AI at Salesforce. I just had an incredible conversation with James Manyika. He's the SVP of Research, Technology & Society at Google. I hope you enjoy the conversation. Maybe you could look back over the last 10 years: how far have we come, and how did we get here?

James Manyika: Wow. I mean, it's extraordinary. The progress that has happened in AI and machine learning, for me, is breathtaking. I did my research as a graduate student 25 years ago, and the progress since then is amazing. And I think we've been living with these technologies for quite some time, actually, because in the mid-2000s into 2010 and so forth, several things came together: deep learning and those techniques, coupled with compute in the form of GPUs and TPUs that allowed us to do these very complex vector and matrix multiplications that normal CPUs didn't do as well, and then all the data. All of that coming together meant a lot of progress. And in fact, we've been living with this for quite a while. I'm always amazed when people forget that Google Translate, for example, is machine learning based and that 1.3 billion people already use it. But I think the pivotal moment that brought us to where we are now, and this excitement, probably has its origins in that now-classic paper from Google Research, the paper that introduced transformer architectures, the "Attention Is All You Need" paper. These transformer-based LLMs have been the backbone of a lot of, especially, the generative AI we're all excited about. And it's gone on to lead to the founding of companies, initiatives and so forth. So, this is what's brought us to this extraordinary moment, especially in generative AI.

Clara Shih: So much of the dialogue, and rightly so, is focused on the risks of AI and how to mitigate those risks. And I will get to that in just a moment, but maybe just for now, can you talk about where you see the positive impacts of AI taking place to address society's most pressing challenges?

James Manyika: Well, I think it's such an extraordinary time. I'll maybe mention a few examples in a few areas, but these are just a few examples. So, you start with, for example, the application of AI to science. Hopefully people in the room are familiar with the extraordinary progress in, for example, the life sciences. AlphaFold is extraordinary. AlphaFold is a DeepMind algorithm that basically predicted, essentially solved, a 50-year grand challenge, which is how do we understand computationally the structure of amino acid sequences and how they fold to become protein structures? This is the backbone of biology, whether it's drug discovery and so forth. We'd been making very slow progress in understanding the structure of proteins. In fact, we hadn't even fully understood the roughly 20,000 proteins in the human body, the human proteome. AlphaFold predicted the structure of all roughly 200 million proteins that are cataloged and known to science. I mean, that's extraordinary. And today, something like 1.3 million biologists are actually using this to do their research. I think that's extraordinary. So, you've got these examples in science. Life science is one key area. Another is quantum computing. The progress there, for example, from our team, which is in fact called the Quantum AI Team: if you've looked at the journal Nature, they've probably had a Nature paper, including the cover article, every month in the last six months, breakthroughs enabled by AI. So, you've got a lot of things going on in science. But besides science, you've also got the extraordinary impacts on pressing societal issues today. I'll give a couple of examples. What began as a fun experiment with one of our AI teams was predicting floods. It turns out that every year, something like roughly 200 million people are impacted by severe floods in the world. So, this team began doing the work to understand how we can predict when floods are coming. It turns out that if you could actually give people something like a week's advance notice, the lives saved go up dramatically compared to giving them two days, for example. This team began doing the work in Bangladesh and parts of India, and it worked. And now they've expanded; I think as of two months ago, we are now covering 80 countries and roughly 460 million people who get flood alerts using AI. So, you've got all these examples, flood alerts, wildfire predictions, and so on. And you can go on and on and on. So, the impacts on pressing societal issues are extraordinary. I'll mention one other area of examples, which is much more personal, where we worry a lot about these questions of access and inclusion. I mentioned Google Translate earlier. As wonderful as it is, Google Translate has covered roughly 130 languages, but it's now possible for these systems to go much further beyond that. So, we actually have a moonshot to get to 1,000 languages.

Clara Shih: Incredible.

James Manyika: So, think about these issues of access. So, you've got a whole bunch of things. But let me also just end with one thing, which I know we're excited about. I mean, the possibilities to improve productivity, creativity and the things that people do are also very exciting, things that power the economy. So, we shouldn't forget that. That's also quite extraordinary.

Clara Shih: It's just incredible to think about. I mean, on the large language front, these large stochastic models are able to predict the next series of ones and zeros, and those ones and zeros could be a sales email; they could also be a protein structure. And in the flood example, the data has been there this whole time. But now we have the GPUs, and the hardware, and the models to really activate the data into insight and action, and to save lives.

James Manyika: No, it's extraordinary. Think about how this could power the economy, power productivity and creativity. Think about what this does for small businesses. One of the things I'm actually pretty thrilled about is, I think, what we're announcing this week: an extraordinary partnership between Google and Salesforce, which we're excited about. It puts together some of the work we've been doing in Google Workspace with Duet AI with Salesforce, and takes advantage of the incredible security and privacy building blocks that Salesforce has built and that we've built, in a way that enables businesses to make use of this. So, the potential for productivity for both companies and ultimately the economy, to power the economy, I think is quite extraordinary.

Clara Shih: It is so exciting, and we've gotten so much excitement from our own salespeople here at Salesforce. They use Google Workspace; they're making their customer decks in Google Slides and their sales proposals in Google Docs. And now, very soon, they'll be able to use all of that Customer 360 data within Salesforce to generate those highly custom, tailored pitches for that specific customer.

James Manyika: No, it is pretty exciting. But I think one of the things that is going to be important is, how do we make sure that those capabilities are available everywhere and to everyone, all kinds of companies, not just the large companies, but companies everywhere? As well as other organizations, because they don't all have to be companies. It could be nonprofits, it could be other kinds of organizations. I think the possibilities here are immense.

Clara Shih: And I think that's a shared value that our companies have, is democratizing access to these technologies in a secure and ethical way.

James Manyika: Oh, absolutely. I mean, I've been infected by Google's mission, which is to organize the world's information and make it universally accessible and useful. I mean, I think that's exciting.

Clara Shih: So, let's switch gears. Let's address head-on some of the complexities and risks that this new era of AI is bringing on. What's top of mind for you and Google? And what should we be doing to address them?

James Manyika: Well, I think this is fundamentally important. Anytime you've got a powerful, transformational technology with all these incredible possibilities, if it's powerful and interesting enough, there will be complexities and risks. I think it's important to be clear about the different kinds of risks and complexities we're talking about, because they're different. So, just a few categories. First of all, you've got what I think of as performance issues, when these systems generate outputs that none of us would like, either because they're not factual, or they're hallucinations, or they're biased, or they're toxic. So, you can imagine those kinds of performance limitations as things we have to solve for, because they could worsen harms that already exist in society. So, we have to think about that category of issues. I think there's another category, which is the possible misapplications and misuse. Even when this works well, something that was built to do one thing could be misapplied to something else unintentionally. Then you've got actual misuse by different kinds of actors. There could be individuals, there could be, I don't know, terrorists, there could be governments, there could be political actors, any number of actors, even companies, who might misuse this technology for things that we might not want. So, how do we think about that? I mean, misinformation is clearly one of the things that's top of mind for many of us at the moment: how do we make sure these technologies are not misused in that way? So, the misuse issues are a whole category that we have to think about. Then I think there's a third category, which is that these technologies, as useful and as powerful as they are, are going to have these incredible impacts throughout society. An important one is the impact on labor markets and jobs, on various parts of the economy. We're going to have to think about everything from intellectual property to copyright. So, you've got these cascading impacts. Think about what this means for education. We've got these second-order effects as this rolls through society, and we have to think about those. So, it's all of these things that we have to keep in mind even as we try to be very ambitious in utilizing this technology to benefit everybody. It's the reason why we've begun to talk about, in our case, the need to be both bold and responsible at the same time. I mean, this might sound like it's a contradiction. We actually don't think it is. I think we have to embrace that tension and always be mindful of it even as we pursue these opportunities.

Clara Shih: With great power comes great responsibility. So, I really like that framing of those three areas. And let's talk about each one. The first was around performance. What do you do when you are training this multi-billion-parameter model on what's out there on the internet? Because the reality is, there is toxic content, there is biased content in the training data set. How is Google approaching that?

James Manyika: Well, several things. I think one of the things that's interesting is that the ways we've all approached things like bias have evolved over time. There was a time when I think most of us thought the only way was to curate the data and then clean it up. But we've discovered, for example, that in some cases you actually want to train it on everything, because you are better able to detect the biases when you actually have examples in the data. So, even when you still care about bias, the techniques for understanding it and how you solve for it are evolving as we learn more and get more capable.

Clara Shih: It reminds me of how we talk to our kids sometimes. There's a school of thought where you shield your kids from all these bad things. And then there's a school of thought where you talk to them and you're very realistic with them about the good and the bad that's out there, and you teach them to recognize which is which.

James Manyika: Exactly. But then you've also got things like, we are now doing generative adversarial testing at scale to actually understand the outputs of these systems. In addition to that, we're also learning from, for example, things like what others have talked about as constitutional AI, though there are different names for this, which is when you try to create guidance and principles that guide the outputs that you generate. Then there's still, of course, real research to be done on things like factuality, for example. We know how these models, these transformer-based architectures, work, which is that they're predicting the next token. And because these are statistical, predictive mechanisms, simply training them on accurate information doesn't solve that problem. You're still going to have generative, hallucinatory effects. So, the question of factuality is still a fundamentally important research question that I think we're making some progress on. Do you ground the systems in other data sets? Do you make calls to search and other verifiable sources? So, there are all these different approaches to try to make progress on the outputs and the performance of these systems.

Clara Shih: And of course, we're never done, because there's always new learnings, and feedback, and iteration.

James Manyika: Oh, absolutely. And there are also just innovations you have to come up with to solve these things. One of my favorite ones, for example... For a long time we've known that, for example, image classifiers and data sets and so forth don't handle all kinds of faces very well. Faces like mine, for example; we've always seen the examples. But even that's an area where... So, for example, at Google, we've had an effort for some time. It turns out that when it comes to recognizing colors, for example, facial colors or skin tone colors, there was something called the Fitzpatrick Scale, which was established in the '70s. It had a very narrow range of skin tones, in a way that didn't reflect all of humanity. So, we've actually been working with some researchers at Harvard to create what we call the Monk scale, which actually is based on all of humanity's skin tones, so we can do a better job of recognizing that. In fact, we've now open sourced that scale so that other techniques and technologies can actually get access to it. So, we've got to keep innovating as we discover these issues. By the way, we're not perfect at Google. We're learning, making mistakes as we go along. But I think we have to be innovating on these issues to make progress on them.

Clara Shih: I couldn't agree more. Having that growth mindset. So, let's move on to the second risk category, which is malicious intent, these bad actors. How are you thinking about how to red-team or protect against that?

James Manyika: Yeah, I think one of the things that's interesting, and I'm sure you're experiencing this too, and others in the audience, is that people are constantly trying to adversarially prompt these large language models and these interfaces like Bard and so forth, to get them to do bad things, to get them to say bad things. So, we're constantly doing incredible work to think about how we red-team these systems. The red-teaming approaches are an important part of the toolkit. But the other thing is trying to work on some fundamental innovations. One of the things we worry about with misinformation, as an example, is how do you understand synthetically generated content and so forth? So, we've been working a lot on watermarking. Earlier this year, we announced that we were going to apply watermarking to all our generative image and video content. In fact, a couple of weeks ago we rolled out SynthID, which applies watermarking to all the generated images and outputs. Now, of course, this is very difficult with text. It's a lot easier with images and video and so forth. But this is a fundamentally important research problem. We're also working on provenance techniques. Some of you may have gone to one of the events we did a couple of weeks ago, Cloud Next, where we talked about how we are approaching building in metadata so people can actually understand where those images came from and when they were generated. So, this is all work we must do. I'll mention one other interesting innovation; there's more to be done. We've actually developed, for example, something called AudioLM, which is very good at detecting synthetically generated audio, with something like 99% accuracy. So, we're going to have to keep innovating and researching ways to address some of these misinformation challenges. But of course, at the end of the day, society... I mean, we as a society have to think collectively about this. It's not enough for only one company or one research team to do these things. We have to think about how the whole ecosystem, both the people developing these technologies and those using them, gets a common understanding and set of frameworks that actually protect us from misuse, particularly with regard to bad actors.

Clara Shih: That's right. I mean, it's similar to how we've approached cybersecurity, just teaching people the importance of having a complicated password and using two-factor authentication. We'll need to come up with what that is for generative AI.

James Manyika: Oh, absolutely. Absolutely.

Clara Shih: And the watermark disclosure, I think that's fantastic. It reminds me of the acceptable use policy that the Salesforce Office of Ethical and Humane Use published recently, requiring all of our customers that use any of the Einstein products to always disclose to an end consumer, so our customer's customer, when they're dealing with an AI versus a person.

James Manyika: Right. Exactly. And then that also gets you into some very deep, important, ethical, almost philosophical questions, which is, how do we think about how we want people to interact with these systems? Do you want to allow people to anthropomorphize these systems? Do you want them to interact in ways where these systems sound like they're humans, or have personalities? These are quite deep, almost philosophical questions that we're all going to have to grapple with. In many regards, Clara, one of the things that's interesting to me is how, in some ways, these AI systems and these developments are almost holding a mirror up to us as a society. It's quite easy for us to say, and all of us will probably agree with the following statements: we don't want biased systems, right? Yes, we want fair AI systems. We want systems that reflect our values. But what do those statements actually mean?

Clara Shih: That's right.

James Manyika: How do we think about that? These are questions for us as society.

Clara Shih: That's right. It's such a good point. Well, let's shift gears to the third risk category you talked about, the dialogue we started 10 years ago, and these longer-term, longitudinal macro shifts such as job displacement. How are you thinking about that?

James Manyika: Well, I think the question of work is always interesting. If you take historical analogies, all the historical analogies say it'll be okay, because look at what happened with the industrial revolution. We've always managed to adapt and work our way through it. And if you look at the deepest research that's been done, and I did some of this when I was at the McKinsey Global Institute, but other academic institutions have done this work too, most of that work seems to say the following: yes, of course there will be some occupation categories that will decline over time. Some occupations have a lot of constituent tasks that you can imagine AI and other systems automating. Most counts seem to suggest that roughly 10% or so of all occupation categories will probably look like that over time. The numbers vary, of course, depending on which research reports you look at. So, there's this category of job declines that way.

Clara Shih: And for that group, how should we prepare as a society?

James Manyika: Well, I think it also depends on some of the other groups. So let me come to the other groups, and then we'll look at the whole, because I think they're all related. You've got other occupations that look like they'll actually grow, and grow because demand for them will rise, or because new occupations will come into being-

Clara Shih: Like prompt engineering?

James Manyika: Like prompt engineering. Actually, it is quite funny, because the Bureau of Labor Statistics, if you look at the BLS data set, tracks something like 800 different occupation categories, and they update those roughly every 10 years or so. If you had looked in 1995, web designer didn't exist; it was in "other." Today, if you look at the occupation categories, there's nothing called prompt engineer. I'm sure a few years from now, when they update it, that will exist. So, you always have these new categories as well as growth. You'll have some jobs that will grow, either because they're new or because demand for them has gone up. But I think the biggest effect that's come through in a lot of this research is the jobs that will change. They won't decline and won't grow, but they'll just be different, because some portion of their constituent tasks is being augmented and complemented by technology. Most research seems to suggest that at least roughly two-thirds of occupations fit in that category, at least for the foreseeable future. And I think that's where these questions of skills adaptation, of how people work alongside powerful technologies, become really important. So, back to your original question, which is how do we deal with all of this, particularly the job declines? I think we're going to have to get better as a society at a few things. How do we help people transition and adjust? How do we help people re-skill? How do we as a society do a much better job than what we did, for example, during the period of hyperglobalization, when some similar things were happening, but we didn't do as good a job as a society, or as an economy, or as policy makers, or as companies to help those transitions work well? We're going to have to do a much better job of that. So, there's real work to do there. But in all of these categories, how we adapt as a society is going to be important. The difference perhaps with previous periods is that it may happen faster.

Clara Shih: Which makes it harder.

James Manyika: Right. Exactly, it may happen faster. And so, our ability to adapt and innovate is probably what's going to be fundamentally important to work our way through this.

Clara Shih: And we have business leaders from every sector, every country from around the world here in person at Dreamforce and online. What's something tangible that we should ask everyone to do to help with this transition?

James Manyika: Well, I think as business leaders, really focusing on... I know it's something of a trite thing to say, because we've said it so many times, this re-skilling question and so forth. The reason why I emphasize it is because most of the examples you see when people say, "Hey, here's an example of re-skilling," quite often the numbers are small. It's not at scale. How do we do this in a much bigger way, especially in a way that reaches the most affected workers, but at scale? Re-skilling is not a new idea, but how do we do it at scale is, I think, the real challenge. Then I think we're going to have to think through some more complicated things, which have to do with, even when the work is there, how do we think about the potential wage effects of these transitions? Because it won't play out equally across all occupation categories. Some people working alongside AI are going to benefit extraordinarily. They're going to be more productive, more innovative. Their salaries and wages will go up. For others, it won't always be that way. So, how do we think about these questions about the wage effects for everybody, and how we include everybody? I think that's an important question. So, as business leaders, we have to think about that across the different sectors and the categories of work. I think the other thing we have to think about as business leaders is, how do we make sure everybody's benefiting, everywhere? I think there's a real risk here in a geographic sense, both within countries and between countries, that some pockets where things are happening benefit from this, and some places don't. So, these differences in place I think are also very important, both within countries, as in the United States, but also between countries. We have a whole-

Clara Shih: It's a modern day digital divide.

James Manyika: It's a modern day digital divide. It's a different version of it, which will affect occupations differently, places differently, locations differently.

Clara Shih: And so, what should we do?

James Manyika: I think it's a collective endeavor. It's not just one entity. So, it's companies, policy makers, governments, civil society. We have to get our minds around this and make the necessary investments and do the work that we all have to do. It isn't just a single company. So, we have work to do. As business leaders, we should start with our own areas in which we work, the ecosystems we work with, the partners that we work with, the small companies that live in our ecosystem, the companies we collaborate with. I think there's real work to be done there. And I think the larger questions are going to take everybody: policy makers, governments, and others.

Clara Shih: Yeah, I agree with you, and it helps that both of our organizations work closely with schools, re-imagining K-12 education and skilling outside of school. And I know both companies also offer free online training and learning courses on AI.

James Manyika: Oh, absolutely. But even in an area like education, I think even there, the questions are changing. There was a time we would've all said, yeah, let's make sure we are focusing on STEM education for K-12 and so on. That's still fundamentally important, but we now have tools to help with that. I've been struck, having spent some time with kids in some poor school districts, by how they've gone from "we've been waiting for somebody to come bring the STEM and coding education that we were promised, and no one has shown up yet" to "now generative AI has shown up, and I can just talk to the system; I have an idea." So, I think we should also think about the other side of this, which is, how are these tools helping us to solve some of these re-skilling and training challenges? Seeing kids who've never had a coding instructor come to their school play with a generative AI model and work through coding examples, I think that's pretty exciting, actually. So, I think we can also look to these technologies to help us with some of those challenges too.

Clara Shih: Well, that's incredible. James. Thank you for all the work you're doing. Technologies, including AI, are neither inherently good nor bad. And we make them good and protect from the bad based on the decisions that we make and the values that we bring. So, thank you for your leadership in the industry.

James Manyika: Well, thank you very much. Thanks for having me.

Clara Shih: I really enjoyed the conversation with James. Three big takeaways for me. Number one is that there are different kinds of risk we should think about: first, the performance of the models and the AI; second, how to address bad actors; and third, longer-term, systemic risks to society, like job displacement. Number two is the importance of having a growth mindset. We're still in very early days and it's constantly changing, so we have to keep learning and iterating, and getting better over time. Last but not least, the most important thing that leaders can do across the public and private sectors is to start re-skilling our employees and our communities now. Well, that's it for this week's episode of Ask More of AI, the podcast at the intersection of business and AI. Follow us wherever you get your podcasts, and follow me on LinkedIn and Twitter.

DESCRIPTION

Clara sits down with James Manyika, Google’s SVP of Research, Technology & Society, to talk about AI's evolution, explore current challenges, and unravel what must be done collectively to ensure AI benefits all.