This transcript is generated with the help of AI and is lightly edited for clarity.

JAMES MANYIKA:

Members of my team in Google Research began this as a small pilot about a year and a half ago, using AI systems to predict floods. We started it in Bangladesh. It worked, and worked in the sense that you can give people up to five days' advance notice, which makes a huge difference. And so we went from that to where, today, we are now doing this in over 80 countries.

REID:

Hi. I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity's way: what we can possibly get right if we leverage technology like AI, and our collective effort, effectively.

ARIA:

We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.

REID:

These conversations showcase another kind of guest: whether it's Inflection's Pi, OpenAI's GPT-4, or other AI tools, in each episode we use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

REID:

AI is throwing a mirror back at us. That’s how our guest today has described the way AI technologies and the resulting power and capabilities get integrated into society.

ARIA:

When it comes to game-changing new technology, the first wave of people to gain access and benefit is more often than not extremely limited. We've seen this rollout pattern take place throughout history with the advent of electricity, cars, and even books. And as we've discussed on the show before, with everyone from Van Jones to Microsoft's Kevin Scott to Khan Academy founder Sal Khan, this does not have to be the trajectory for AI.

REID:

Today, we're asking how big tech and business can actively support making AI accessible on a mass scale, as opposed to only in a select few countries and companies.

ARIA:

And here with us to explore this is James Manyika, the Senior Vice President of Technology and Society at Google. His experience spans big tech and business, along with advisory roles at the UN and the White House. We talked to James about designing and leveraging AI for citizens worldwide, the economic and scientific opportunities associated with this, and yes, some of Google’s latest AI news and advancements.

REID:

Here’s our conversation with James Manyika. I’ve been waiting for this for a number of months. Welcome to the Possible Podcast.

JAMES MANYIKA:

Well, thank you, Reid. I'm so excited to be here with you and Aria.

REID:

I’ve had the pleasure of getting to know you through our work on the Stanford Human-Centered AI Institute, and I think it’s fair to say that your journey into AI and your work in the field is, you know, somewhat unusual. So can you share how you came to AI along with what you’re most focused on today?

JAMES MANYIKA:

The first thing I ever published in my whole life was actually a paper on AI in 1992, as an undergraduate. It was on modeling and training neural networks. This was in Zimbabwe, which also surprises people sometimes, given the timeframe. I went on to do a PhD in AI and robotics at Oxford. From there I also worked at the Jet Propulsion Laboratory, in a man-machine systems group, while we were working on some of the machine learning systems for the Mars Pathfinder program at JPL. I had a long detour at McKinsey, but I stayed connected to AI. In fact, I was involved with my friend Demis Hassabis at DeepMind quite early on. And now I'm at Google.

ARIA:

We talk a lot about AI in the context of global development, and you're involved with the UN's High-Level Advisory Body on AI. Some people say, this is what we need: a high-level global body to be overseeing AI and regulating it. Other people have questions about the UN as the most effective player. Can you tell us more about what that looks like, your involvement in it, and what insight it has given you into global ideas about AI?

JAMES MANYIKA:

Well, Aria, it's been a very fascinating time serving on that High-Level Body, which I have the honor of co-chairing. We have 39 members from 33 countries. So you can imagine: very diverse perspectives, and members come from academia, civil society, the private sector, and so forth. But here's the thing. One of the things it's made me realize is just how different perspectives and views about AI are around the world. And if I were to make a big general summarization, I would say people in the so-called Global South are generally more positive on AI as a way to transform societies, improve the world, and address big societal challenges. However, they do have some questions and concerns, and generally those tend to be of two kinds. One is that they don't feel like they've been as involved in the development of AI, or in any of these discussions about governance and all the rest of it.

JAMES MANYIKA:

And then second, the challenge of capacity. Meaning even such basic things as broadband access, connectivity, and so forth, which would allow them to participate. So it's a very complex picture. On the question about the UN itself, one thing I've come to appreciate is that the UN has zero power other than what the members give it, and the members are the 193 countries. So to the extent that anybody wants the UN to be effective, it's the member states who have to enable it to be effective. I'm glad to say that so far, at least, the work we've been doing has been generally well received by the member states. In fact, earlier this year, the US put forward a resolution, which was signed onto by most other countries and unanimously endorsed by the UN member states, and which set out some basic frameworks, principles, and norms, if you like, for how to think about AI. But it's a very complex topic, and obviously more ought to be done.

ARIA:

In terms of the global attitudes, is that because you think that folks in the Global South potentially have more to gain? It could propel them into the middle class, it could give them English-speaking abilities, it could sort of do all those things? Whereas folks in the US might be thinking, ‘I have a pretty good life right now, and I’m nervous about job loss, and I actually don’t want to be disrupted because things are going okay.’ Is that the reason for the difference in attitude, or would you say it’s something else?

JAMES MANYIKA:

I think that's part of it. I think other parts of it have to do with the fact that in many places, say in Africa or Latin America (I was just in Brazil recently), often there's a lack of resources: a lack of access to doctors, maternal health, books in schools, any number of challenges. And by the way, the views I hear from those member states, or countries in the Global South, in some ways are not so different from those of what you might call "left-behind communities" here in the US, actually. Because when you talk to any of the poorer, less affluent communities here in the States, you hear some of the same things. So I think the question is: how do we make sure societies and communities anywhere, here or anywhere else in the world, do have access to resources and help when they need them? Whether it's in healthcare, in education, or access to information. And also access in their own languages and dialects. That's really, really important.

REID:

And what do you think some of the key learnings have been? Part of the reason I'm asking is that you and I have talked about this a bunch, because I rely upon you, as the expert, to participate in the UN conversations and that kind of thing. What are some of the principles that we, as technologists, inventors, entrepreneurs, et cetera, should think about as part of including the interests of the Global South and the world?

JAMES MANYIKA:

Well, I think a few things. Part of it is giving communities everywhere the ability to participate in these opportunities. I think those come in two parts. One is access to, if you like, the ingredients for AI: access to compute, access to models, and being able to get access to the data they need to tailor and customize things in their context. And on the access to models, I think that's also the reason there's an interesting kind of tension around this question of open source. But I think open source is one of the ways to give many more people access to develop, participate, and be part of it. So that's the part that's directly about AI. Then you've also got a second part, which is some basic infrastructure needs. I think you and I, Reid and Aria, can sometimes forget the fact that not everybody has access to broadband in some parts of the world.

JAMES MANYIKA:

And in fact, when I listen to some of the members on the High-Level Body, they'll even add basic electricity to that. There are some places where, without basic broadband, connectivity, and infrastructure, all of this is mostly academic. So I think all of this is very important for inclusion and participation. And then you've got other things, like recognizing access to languages and the rich cultural diversity of data and context that come from any of these extraordinary places around the world. I think all of that is very, very important.

REID:

Yeah, no, it's important for people to know. And you and I talk about it all the time in a number of contexts. So now for a quick AI experiment: James, as co-chair of the UN's AI Advisory Body, you were involved with the publication of a 2023 report entitled "Governing AI for Humanity." We ran this paper through one of Google's very cool new AI tools, the virtual research assistant NotebookLM, and asked for a summary specifically tailored for an entrepreneur. NotebookLM turned around bullets of relevant opportunities and challenges for entrepreneurs. And here is one of each:

NotebookLM:

First, collaboration and open source. The report underscores the potential of open-source AI models and international collaboration to level the playing field. Entrepreneurs can leverage these resources to access valuable tools, data sets, and expertise, fostering innovation and accelerating their ventures. And second, competition and access to resources. Despite opportunities for collaboration, the report acknowledges the competitive nature of the AI landscape. With large technology companies holding significant resources, entrepreneurs may face challenges accessing crucial data, talent, and computing infrastructure.

JAMES MANYIKA:

Well, I think that sounded pretty good. It sounded pretty good. Perhaps the thing I would have added, from an entrepreneur standpoint, having spent some time talking with entrepreneurs in Latin America and in Africa especially, is the need for more robust AI ecosystems, including funding, access to resources, and investment. I mean, Reid, these are things you and I have discussed in the past and continue to talk about all the time. So the more we can empower entrepreneurs in those places to build companies, participate, and, quite frankly, access much larger markets and opportunities, the better. That's something I would add.

REID:

So before we move on to other areas of AI, say a little bit about your specific uses of NotebookLM. Because this is actually one of the things we try to do with AI on the show. People go, "well, cool, it can create a sonnet," and they don't realize that today, tangibly, in the stuff you're doing, you can make this really interesting. And here you are, running important AI work at Google, and you're using it. So say a little bit about how.

JAMES MANYIKA:

Oh, I use it a lot. First of all, let me describe what it does. It's based on the Gemini Pro model, so there's a very capable model underneath it. It also happens to have multimodal capability, and it has this incredibly long context. Having a long context is what gives it the ability to do the virtual research "assistant-y" thing. So I can create a notebook that has all my notes around a particular topic. I can add up to 50 sources, and each one can have up to 500,000 words. If you think about all of that together, that's like 25 million words, and that's all my stuff. So now it's grounded in my stuff, and I can have a dialogue with it. I'll tell you at least two or three examples of what I'm using it for right now.

JAMES MANYIKA:

I have a notebook on AI and science, where I've put in all the papers and materials I'm trying to learn from. I'm not as good a biologist (you're a better biologist than me, Reid), so I'm trying to learn as much as I can about biology, because of the incredible things AI is doing in biology. When I put all of that into my AI and science notebook, I can have a conversation and query it. I can ask questions. I can ask for connections. I can draft things. And by the way, it has a particularly cool feature: it cites its sources. That's the nice thing about doing things in the long context. Whatever it gives me comes with citations; you probably saw this in what you did with the UN report.

JAMES MANYIKA:

So whatever it summarizes, if you click on one of those citations, it'll actually take you to the source and the words in the grounding materials you gave it. It actually cites them, so you don't get hallucinations or any of that. It's an incredibly useful tool. In fact, there are actually quite a few people using it to do extraordinary things. Walter Isaacson, for example (I think he tweeted about this a while back), is using it for the work and research he's doing analyzing Marie Curie's journals. Steven Johnson got a big data dump of all the Apollo mission transcripts from NASA, threw them all in, and is now having a dialogue with all of that for the research and book he's working on. It's very useful. It's very cool.

ARIA:

I love that so much. And the lack of hallucinations, and the fact that it can cite sources, is a total game changer for folks who are using it in their work context. So, we've talked about ways that you are using AI personally. We've talked about some of the limitations, especially in the Global South or under-resourced communities. Are there other positive impacts that you're seeing right now from a global perspective? The show is Possible, and we're going to get into the future, but what are you seeing right now in terms of ways that AI is actually helping globally?

JAMES MANYIKA:

Oh, I can think of several things, Aria. So let me list a few. First, languages. One of the things that's extraordinary: first of all, people have been using Google Translate for a very long time. In fact, up until maybe two years ago, Google Translate was doing, what, 38 languages? And now it's about 276. And if you think about the world as a whole, to get to something like 96% of all the languages spoken in the world, you're going to have to get to 7,000 languages. So we actually have a moonshot to get to at least a thousand languages pretty soon. I mention that because, all of a sudden, people in very different places around the world are able to converse. I'll give you another example. As the world starts to face the effects of climate change, we have many, many more extreme weather events.

JAMES MANYIKA:

One example is what members of my team in Google Research have been doing. It began as a small pilot about a year and a half ago, using AI systems to predict floods, what are called riverine floods. We started this as a pilot in Bangladesh. It worked. It worked in the sense that you can give people up to five days' advance notice, which makes a huge difference. And so we went from that pilot in Bangladesh to where, today, we are now doing this in over 80 countries, giving alerts in areas that cover something like 560 million people. This is extraordinary. It actually is a way to help. And by the way, I think we cover something like 800 locations here in the US, so it's not just in the Global South. There are incredible ways this is already making a difference; it doesn't have to wait for the future. It's happening right now.

ARIA:

I have to admit I love that, and I just want more people to know about it. It sounds trivial, but honestly I'm thinking about the global blockbuster Twisters from this past summer, where scientists were trying to predict when twisters were coming so they could give people a heads-up. And to me, this flood work is the same thing, except it's not science fiction, it's science fact. We're actually doing the things the global blockbusters are about. People just don't know about it. We need people to know.

JAMES MANYIKA:

No, we need to talk about it more. In fact, I'll mention wildfire boundaries. We know we have many more wildfires. Up until a month ago, we were covering about 15 countries; it's now 22 countries, in the US and Europe but also in the Global South, where we can give people information about wildfire boundaries. People may not fully realize, when they get the benefits of that, that, oh, AI did that. They just say, "oh, I've got a fire alert," or, "hey, I've got better treatment." But I think it's important for people to know that a lot of this is being enabled by AI.

REID:

You know, one of the things I think you've commented on before is capitalism's challenges in two particular areas. One is economic wellbeing for all. Another is climate change. So how do you see the tech community engaging AI and science to help mitigate these failures? What's possible? How do we get the tech community more on target with that? How do we get capitalism, and the relationship between AI and capitalism, better aligned, and how should we be steering that?

JAMES MANYIKA:

Yeah, I think that's a very important question, Reid. I'd start with the question of AI in the economy. First of all, AI has an extraordinary potential to drive productivity growth, which drives economic growth, which is how societies and communities prosper, basically. In fact, by most estimates, the potential economic growth from productivity is enormous. But it's also important to say those gains won't happen automatically. And this is where entrepreneurs, companies, and in fact even investors have to think about the ways to get those gains to happen. They include encouraging wide adoption, especially in some of the largest sectors of the economy. It's not enough for the tech sector by itself to be using these tools. It matters that we get to sectors like healthcare, retail, and others.

JAMES MANYIKA:

So wide adoption's important. It's also important that we're building and focusing on applications that are actually productivity-enhancing. And I think this is important for entrepreneurs: to the extent that we're enabling workers and people to work on very high-productivity-potential use cases, that becomes very important. Then making sure we're building worker resilience and preparedness is also important. Keep in mind that it's going to take a whole lot of other complementary investments for all of this to happen. So I think all of those are things to think about. I would say, though, Reid, and this is something I know you and I have talked about all the time: even if these economic gains happen, there's no law of economics or guarantee that says, "therefore everybody will benefit." It's going to take work by all of us to make sure that even when the gains happen, everyone benefits.

JAMES MANYIKA:

So part of that is going to include things like making sure everybody can participate, making sure that all communities are part of this, and making sure that we're not leaving communities and places behind. And also making sure that workers themselves are ready, with the skills and the capability to take advantage of this. We've talked about Stanford; our friends Erik Brynjolfsson and others like David Autor have been doing some very useful and important research on this. And what I like about some of that research, as you know, is that it may be that, for the first time, we have a technology that does something different here from what previous digital technologies did. The economists listening to this will know skill-biased technical change, which is what older technologies did: historically, it tended to be the more skilled, expert workers who were able to take advantage of technology.

JAMES MANYIKA:

However, this time around it may actually be flipped. David Autor has done some extraordinary work, along with Erik, which shows that it now looks like the less skilled workers are the ones who actually benefit the most from these technologies. David actually wrote a wonderful paper where he suggested that this could lead to a rebuilding of the American middle class, which is really exciting. So I think we're on the precipice here of some exciting possibilities. But, as you know, Reid and Aria, it is going to take some smart policymaking. It's going to take some smart actions by entrepreneurs and companies. All of that is going to have to happen together for this to turn out well.

REID:

Yeah, one of the ways to make that last point, the David Autor point, tangible is this: if you look at what the generative AI systems are doing today, they create a great assistant at, call it, the 90th-plus percentile of human capabilities. You could have a person plus machine still be much better. But when you get to that skill gap between lower-skilled and higher-skilled workers, you ask, "who is benefited by an amazing assistant that does all of the basics very well?" The answer is those with entry-level skills. And that creates a great kind of leveling in how this operates. That's one of the points of optimism, I think.

JAMES MANYIKA:

Absolutely. Absolutely. And by the way, a lot of what you're describing, Reid, is not just theoretical. The early empirical evidence (there have been a few empirical studies) actually shows exactly what you're describing. In fact, we see instances at Google where a lot of programmers now work alongside these AI systems, and it actually makes them better. They can do more. So I think all of that is quite exciting, but it's also going to be important to realize that we need smart policymaking here, and investments, to guide this to the right place. "How do we get this to go right?" I think is very, very important.

REID:

Let me go to one other place of frequent misunderstanding in the press, which is, you know, people go, "okay, power, data centers: is this going to be one more nail in the coffin on climate change? I don't understand how AI systems can actually work very positively on this." So one, how much substance is there to AI as a negative for climate change? And two, what can AI be doing to help with climate change, to help with greenhouse emissions, et cetera?

JAMES MANYIKA:

Yeah, I think this one's very important. First of all, data centers make up something like 1% of all the electricity used in the world. I think it's important to keep that in context: that's 1% from data centers' usage of electricity. And of that data center usage, something like 10% is actually AI, which puts AI at roughly 0.1% of the world's electricity. So I think it's just important to understand that, first of all. But at the same time, it is worth acknowledging that a lot of the AI systems we're developing are very compute-intensive and therefore energy-intensive. And in fact, the rate of growth of that compute-intensive energy use is actually quite high. So it is important to acknowledge that, and also to highlight the need to come up with better, more efficient systems: more efficient training algorithms, more efficient architectures, and so on.

JAMES MANYIKA:

And I think we're all working very hard on that. In fact, you're going to see those curves start to bend pretty soon. Having said that, it's also important to think of the other side. By many estimates, including a recent study by BCG, the Boston Consulting Group, AI itself can help mitigate and address carbon emissions, making at least a 10% difference, through any number of ways. And I can list a bunch of ways we can do that. For example, look at climate modeling. There's been some extraordinary breakthrough research by us and many others, which has appeared in scientific journals, about how AI systems are improving our ability to model climate change. Then you look at mitigation itself. There's an enormous number of things we're doing to make data centers more efficient.

JAMES MANYIKA:

There's stuff we're doing, for example, in traffic patterns and driving. You wouldn't imagine it, but the stop-start traffic that happens in cities contributes enormously to greenhouse gas emissions. So we have this project called Project Green Light, which is now live in, I think, 18 cities, where traffic lights and patterns are supported by AI systems to reduce stop-start traffic, and which again reduces emissions by more than 10%. I could go on, and I'm sure you've got examples too, Reid. But I think it's important for people to realize that yes, of course we should reduce how compute- and energy-intensive AI itself is, but AI is also having an outsize effect on addressing and mitigating the effects of climate change. If you keep those two things in balance, you suddenly realize that the net benefit far, far exceeds the energy use from AI. That's not to say we shouldn't address energy use from AI. We should.

ARIA:

I also love that. You know, if you're deciding what to do with your life and you care about climate, like so many young people do: in the past, maybe you studied political science and became an activist, or you went to law school so you could fight oil and gas companies or file EPA claims. And I think it's really important to tell people, "hey, if you want to fight climate change, you could become an AI researcher. You could go into technology." These are ways we can hugely affect the outcomes. And we don't have to pit them against each other. It's not AI versus climate; this is actually how we're going to solve some of these challenges.

JAMES MANYIKA:

I love that, Aria. One of the things that's exciting to me, and I think to many of us, is that the intersection of AI with so many things is everywhere. AI and climate, whether it's climate change, climate science, or climate modeling. AI and biology, AI and physics, AI and quantum. In fact, many, many disciplines that people are very excited about now have this extraordinary intersection with AI. If you go to any biology research institute (I happen to be involved with one at MIT and Harvard), much of the work there now is computational biology. In fact, you find that close to a third or more of the researchers there are actually computational biologists making use of these techniques. You find the same thing in climate science now. So I think this is true in many, many disciplines.

ARIA:

Mhm. And so, to zoom out a little bit: we've been talking about how technological advancements can and should be good for the progress of society. We have to get policy involved. We have to make good decisions. We have to have government work with businesses. But especially with AI, because as we said, it's an intelligence, it also stimulates these larger political and philosophical questions. How has working in and around AI for so long changed how you think about humanity? Or what have you learned about humanity in the context of all these AI developments?

JAMES MANYIKA:

That's an important question. I think there are questions about risks and complexities which we should discuss, but there are also some quite deep philosophical questions. Those, to me, include questions about the ethics of all of this and the societal implications of all of this. But even beyond that, I think AI in some ways is putting a mirror in front of us, allowing us to look at ourselves as humanity. Questions about: What does it mean to be intelligent? What does it mean to be creative? What does it mean to be empathetic and thoughtful? Because in many ways, AI is kind of having us reframe many of those questions. One of the things that's been fascinating to me, Aria, is if you look at the history of AI, which has been well documented by many. There's a book, by the way, Pamela McCorduck's book, that is a classic I'd encourage people to read.

JAMES MANYIKA:

It's called Machines Who Think. She chronicles the history of the field. And one of the things you get from it is how, for the last 50 years, as AI has made progress, greater or lesser, it has caused us to reframe what we mean by creativity. There was a time when we used to think being creative meant being able to do these kinds of things, or being intelligent meant doing these kinds of things. Then you suddenly find that AI is now able to do some of those things, and you shift and reframe the question. All of this is giving us ways to rethink and look at ourselves, and to think through these questions. Even take these issues of bias and fairness. It's easy for us to say, "we want systems that are fair and not biased." Okay, what does that mean? Humanity has been trying to figure that out for thousands of years.

JAMES MANYIKA:

So I think there are some deeper questions that we're having to grapple with here, and I think it's a good thing. I call it a point for reflection. To go back to the UN thing, it's actually been quite useful for me, by the way, hearing from so many people from around the world who don't sit where I sit, who don't do the things I do, who don't necessarily see the world the way I do; to hear both their excitement and their concerns about this technology, and to realize that there are ways for us to work together to get this right.

ARIA:

So are there specific things you think we should be doing more of across the industry to mitigate risks? We all agree there's enormous potential, and talking to people from 33 countries on your advisory body is critical. What should we specifically be doing to guard against bad actors, to ensure we get everyone's point of view, and to create this best possible future?

JAMES MANYIKA:

I can think of a number of things, Aria. Part of it is agreeing as a community and an ecosystem on standards, safety mechanisms, and principles for evaluating the systems we're developing. Protocols for when things go wrong. Finding ways that are appropriate to share information and experiences. I think it's also important to agree on things like, "when is it appropriate to do what?" Can we all agree, for example, that as the technology develops, if we can do things like watermarking and provenance, and share information about all of that, we should do it? I also would say, in a different direction, I think it's going to be quite important to make sure these systems and capabilities benefit people who don't normally participate and communities who don't normally benefit from this.

JAMES MANYIKA:

So I often want to tell my colleagues not to just assume that commercial interest alone will get to everybody. I don't think that's the case. I think there are some applications, some uses, that should be considered in the public interest, that simply commercial approaches won't get to. Right? I'll use my languages example: if we only cared about translating languages that benefit the places where you can "monetize," you'd probably stop at about language number 40. You wouldn't bother to go to a few hundred, right? Or aim to go to a thousand. So pursuing many more of those public-interest uses and use cases is another thing this community can do. I'll mention one other quick thing, Aria. I think, more than ever, this is going to become a very interdisciplinary field. We have to involve philosophers and ethicists, not just technologists. These questions have become so multifaceted that I don't think any discipline, including computer science, has a monopoly on how to think about these things. So the more we can involve a diverse set of people from different areas of expertise, the better. I think that's really, really important.

REID:

Yeah, I think one of the things is that people naturally think this kind of AI revolution is a further quadrupling down on how important CS and machine learning and everything else are, and of course those are very important. But what is almost uncommented on, which you're gesturing at, is actually the importance of a variety of the humanist disciplines. We need to figure out how to enable the human search for truth: allowing a discourse between positions, and allowing people to have that discourse, is what's most important. And that's what we're going to need to build into these AI tutors. All of these humanist disciplines are now even more important in the age of AI.

JAMES MANYIKA:

Oh, absolutely. Absolutely. And the other thing that's going to be collectively important is making sure that the questions we are exploring and trying to understand keep up with where the capabilities of the technology are. Sometimes there's a sense that the questions we're grappling with, whether ethically or philosophically, have to do with the outputs of these systems. Sure, that's true, in terms of the content and all the rest of it. But that question is already just one of several. As these systems become more agentic, for example, there are going to be questions about what actions are appropriate. So the questions keep evolving and shifting as the technology advances. I hope we can find ways to be in dialogue with philosophers and policymakers and others as the technology advances, so that we're not solving for technology crises from 10 years ago, but actually solving for the ones today.

ARIA:

And so, internally at Google, how do you grapple with these questions? And then also, what’s to come? How is, you know, NotebookLM going to be rolled out globally? What is the future of that going to be for Google in particular?

JAMES MANYIKA:

Oh, in the case of NotebookLM, it's pretty cool. It's now available in 200 countries already, which is very cool. And it's going to have [and now has] this audio capability, so you can actually get audio overviews that create a conversation discussing whatever's in your sources, with AI-generated voices, which is also very cool. But to your broader question about how we're thinking about this at Google: as I mentioned earlier, we're trying to do two things at the same time. On one hand, we are trying to be as ambitious and as bold as we can about advancing all the ways we can benefit people and society. So in that regard, we think about benefiting people. I mentioned trying to get to a thousand languages.

JAMES MANYIKA:

We're trying to help people with disabilities use this technology, and not be left out. So there are all these things for people. Then there are all these things to power economies, from small businesses to the large companies that hold up economies. So the economy is important. And then there are all the science things we're trying to do, from AlphaFold to materials to climate, et cetera, and all these societal things. So we're trying to be bold and ambitious in that way. At the same time, we're trying to be responsible. And this is hard work. We're trying to improve the performance of these systems, to get it right, and we don't always get it right. Make sure they're accurate, they're not biased, they don't give you weird results, which they have on occasion; we're trying to work on and fix that. But we're also trying to make sure we're responsible in how we deploy these systems, to avoid misuse.

JAMES MANYIKA:

So we've been working on things like SynthID, for example, to watermark outputs like images and so forth. And then, on some of these societal complexities, we're trying to help with skilling and workforce capability, and to make sure workforces are resilient. So we've got this kind of two-part set of things we're trying to do. The thing about those two, Aria, the bold things and the responsible things, is we don't see them as in tension. We actually like the constructive tension that creates, and we want to embrace both things at the same time. It doesn't mean we get it right all the time. But that's kind of our North Star.

REID:

How do we think epistemology, learning, and truth are going to evolve through the use of our modern AI tools? What's the way people should understand that? What's the way we should shape it? It's obviously pretty core to the Google mission, but it's also, of course, super important to humanity. So a few remarks there, and then we'll get to rapid fire.

JAMES MANYIKA:

There has to be a way to truthfully and factually understand the real world and the world as it is. That's got to be how, I mean, that's how society and humanity have progressed. There's got to be a shared sense of truth and objectivity that reflects the real world. That's why I think being focused on factual, truthful, science-based perspectives has got to be a North Star for how we think about this, even in the age of AI. You know, I'm always amused, Reid. I remember getting a critique from somebody who said one of our AI systems was biased because it said climate change is real. Right? And you think, okay [laughs]. But I think those are much more normative societal questions. I just hope we keep to the idea of testable science. Those kinds of foundational principles are quite important.

REID:

Alright, so, rapid-fire: Is there a movie, song or book that fills you with optimism for the future?

JAMES MANYIKA:

Well, movies: Star Trek. I mean, Star Trek is always basically very optimistic, and it's got technology in it. It's optimistic. It's thoughtful. It's complex. I always like that.

ARIA:

And so, what is a question that you wish people would ask you more often?

JAMES MANYIKA:

What could go right? I wish people would ask that more: what could go right? A second question I think it's important for people to ask me, and us, is: "What are some of the exciting things in AI outside of chatbots?" Somehow we've found our way into this corner where we think of all of AI as chatbots. It's a much wider field, and there's so much more going on. I wish people would ask more about that.

REID:

Yes. Well, we didn’t ask you one question about chatbots in the entire podcast.

JAMES MANYIKA:

Thank you. [Laughs]

REID:

You know, I know you can answer this question in many different ways, so I’m curious which way you’ll do it, but where do you see progress or momentum outside of your industry that inspires you?

JAMES MANYIKA:

In science. I think this is an exciting time for science, actually. AI is enabling part of that, but there are so many other exciting scientific possibilities ahead of us.

ARIA:

I mean, this whole podcast, I'm like, I gotta go back and get a degree. The science is so fascinating. And so, for our final question: can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity's way in the next 15 years, and what's the first step in that direction?

JAMES MANYIKA:

I think we'll find ways to include everybody and enable them to participate in this exciting thing. And I think access to language and access to opportunities will revitalize the growth of the middle class; I think David Autor is perfectly correct that we have a chance here to do that. I also think the impact of science on society is going to be pretty profound. All of these things could really break well for humanity. I'm very excited about all of that.

ARIA:

I love it. James, thank you so much. We really appreciate it.

JAMES MANYIKA:

Well, thank you both for having me.

REID:

I look forward to seeing you again soon.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, and Parth Patil. And a big thanks to Karrie Huang, Rachelle Lacroix, Kerry McHugh, Sarah Portik, and Little Monster Media Company.