This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens, if in the future, everything breaks humanity’s way.
ARIA:
Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
Reid, you opened our conversation with Daphne Koller last week talking about the early days of AI. We all learned from your speech at Perugia that you were first inspired by AI by reading Gödel, Escher, Bach in high school, and again while at Stanford, talking with Professor John McCarthy, who actually coined the term artificial intelligence. And of course, since then you’ve been investing and building in the AI space. So tell us more about your early history with AI and what you remember from those days.
REID:
Obviously, in each technological age, each technological area of invention, there’s a new concept of human intelligence. When we were doing fluid mechanics, it was, “The brain works through fluids.” With electricity, “It’s all electrical.” And of course, with computing, “It’s all computation.” And that’s what kicked off the modern field of artificial intelligence. The reason I got into this major called Symbolic Systems, and into AI, when I was an undergraduate at Stanford, is because it’s thinking about: what is it that makes something intelligent and sentient? How do we understand ourselves, the world, our relationship with each other? I started in the very early days getting into AI with that as an interest. And one of the things that’s particularly historically poignant is that my key advisor at Stanford was a professor named David Rumelhart, who was one of the earliest co-authors and proponents of PDP — parallel distributed processing — along with Geoff Hinton and some other folks.
REID:
And so this modern approach to AI is something that I was experiencing as a sophomore at Stanford. Now, part of what I concluded at Stanford was that we were actually, in fact, still very far away from understanding intelligence, from being able to build AI devices. What got me back into it was the early work from the DeepMind folks — Demis Hassabis, Mustafa Suleyman, Shane Legg — where they started understanding how you could apply scale compute. Not just because there is evolution and change in the algorithms — the baseline algorithms had already been created — but by creating scale compute. Now, the first thing that these DeepMind folks did was to say, “Hey, we can do self-play of computer games. And with that self-play of computer games, we can create AlphaZero and its equivalents.”
REID:
And that allows you to suddenly apply scale compute in ways it hadn’t been done before. And then part of how that continued to evolve, with what OpenAI was doing with the transformer and large language models, was to say, “Look, let’s just apply scale learning algorithms — through the transformer, through convolutional neural networks, through human feedback learning — on a scale amount of data. And we can make something that’s super important.” And that’s the kind of thing where I say, “Ah, actually now we’re here — we’re just at the very beginning of this.” Because obviously some people have had some ability to play with these large language models and see what kind of magic they can do in this kind of interaction. But our application of it to very serious things — medicine, drug discovery, as we were talking about with Daphne — a whole stack of things are just at that precipice, and obviously that’s part of what’s kicked us off here. The early days were heady with similar discussion, but they hadn’t yet realized scale — compute, data, et cetera.
ARIA:
And we also spoke with Daphne about navigating the space between academia and industry — between the conceptual and the concrete: how do you get to the concrete technology? And I love the story she shared about her postdoc advisor at Berkeley, Stuart Russell, who challenged her to think about applications of her very conceptual thesis by asking, “What would you do with this if you had a brilliant team at your disposal?” And so I will turn to you, Reid, with the same question. Who is someone who has helped you pause and think about the human outcomes of technology that you’re excited about?
REID:
To name a few in this arena: Fei-Fei Li and Stanford’s HAI, the Human-Centered AI Institute. I think Fei-Fei was one of the earliest people to start really clearly articulating, “Look, it’s not just about the creation of a robot, the creation of AI — it’s human-centered AI.” So Fei-Fei is definitely one in this arena. Mustafa Suleyman, my co-founder at Inflection, is another, because when he was at DeepMind, part of how we ended up going deep — pun mildly intended — on a set of things was: not only do we need to get the function of governance correct, e.g., how is it good for a broad range of human beings, so that it isn’t an instrument of a totalitarian regime or a monopolistic company.
REID:
But how do you do it? The process by which you get there is also important to your moral legitimacy in various ways. So Mustafa and I have had a huge number of conversations on that. And then maybe Kevin Scott, who was the CTO and VP of engineering who helped us take LinkedIn public, and is now the CTO of Microsoft. Kevin’s specific move, in his excellent book, was to say: here is how AI can help people in rural America reinvent their economies, reinvent their economic prospects, and can be really good for them. From his own background, he speaks very authoritatively about that, and it got me to the generalization that in all technology, including AI, whenever you see a problem, see if you can use technology to be part of the solution — for example, if you see a problem with AI, can you use AI to be part of the solution? So those are three, but I could probably go on for an hour listing additional people. [Laugh.] Right?
ARIA:
So Reid, you know, you have done so much deep thinking about the ways that AI can benefit humanity, but still a lot of people are nervous about AI in general. And yet I think one of the places where we can see so much hope is actually the drug discovery field. So how do you think we can most effectively communicate these advancements in AI and biomedicine to fill people with hope about this area of AI?
REID:
The short answer — and of course part of the reason you and I are doing the Possible podcast — is to say: let’s figure out what the really good things are, the things we should be rowing toward, the things we go, “That’s really exciting.” Because if you take, for example, our conversation with Daphne — “Hey, there are new drugs that can be created, possible new therapeutic hypotheses for oncology and other kinds of things, and that might lead to something really different” — people go, “Well, that’s good.” And her specific wave-a-wand thing, which is something I’ve also agreed with, is: can we get people saying, “Yes, I would love to contribute my data to help us solve collective problems”? Because one of the things most people encounter when they’re suffering with disease X or genetic condition Y or something else is: if you can learn something from my condition and my suffering to help other people, it adds meaning to what I’m going through.
REID:
And so most people actually, in fact, really want to do that — so enable it, make it happen. But people say, “Oh, collective data could be really bad. Something bad could happen.” And it’s like, yeah, but by the way, something good — like curing cancer — could happen [laugh], right? And that’s really important. The fact that there might be one isolated foot fault or fumble, or even a fender bender or car crash, doesn’t mean we should close the highway [laugh], right? We should continue to do that. And so I think you communicate by saying: here is what could be great, here is what could be magical. It doesn’t mean critics should stop speaking, but it means they should shape their criticism with a thought about, “Look, it’s really important to get down the highway. It’s really important to get to the drug discovery. It’s really important to get to the cancer fixing.” Getting there is what’s most important. How do I contribute to our navigating risks and negativity while we’re doing it, versus banging pots and pans together, going, “Danger, Will Robinson!” And I think that’s at least a beginning to sharing the hope that we see, and making it more public and more common.
ARIA:
Yeah. And you would hope — especially on the data and drug discovery front — that despite the polarization, et cetera, the COVID vaccine would be something everyone can point to and say, “Oh wow, that was an incredible drug discovery that we got because of years of research. That was an incredible cooperation between governments, private industry, and academia.” These are things that we need for the common good, and that benefited millions, even billions, of people around the world. So Reid, thank you so much. Appreciate it.
REID:
Awesome.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Karrie Huang, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Parth Patil, and Little Monster Media Company.