This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens, if in the future, everything breaks humanity’s way.
ARIA:
Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
So Reid, obviously everyone is talking about AI and specifically a lot of people are wondering how we should regulate this new technology, if at all. And so how would you characterize how the U.S. and other governments are regulating AI, and what would you like to see more of?
REID:
Well, the good news, unlike typical technology regulation issues, is that a lot of very good progress has been made and is heading in good ways. And, you know, obviously while I’m a massive fan of the Biden administration on many different fronts for what they’re doing for everyday Americans, I think the leading dog in this race so far is the U.K. I think the U.K. setting up the AI Safety Institute and getting it off the ground has been spectacular. Now, Secretary Raimondo came over and helped with that, because one of the things that is obvious and just awesome about her capabilities is, she goes, “Look, we just have to get the job done. We have to make sure that we’re benefiting Americans. It’s not about a foot race for who does what first, but about making sure that Americans get there in a really good way.”
REID:
And obviously, you know, with our ally countries and our partners, part of the role of America is always to play a great catalyst and partner and enabler, and Secretary Raimondo does that. But the U.K. AI Safety Institute, you know, got up and got funded. We’re still working on the funding for ours. This is one of the challenges that comes with the separation of, you know, the three branches of government and making that happen. So it’s not Secretary Raimondo’s fault; the U.S. is working very closely and playing a global role, but is kind of next in line. Now that being said, both the U.S. and the U.K. benefited enormously from the Biden administration’s process leading to the Executive Order, which was: first, call the companies in, spend a lot of time in intense dialogue, get a set of voluntary commitments, and then derive from those voluntary commitments the right things to do, things that protect against risks while encouraging the innovation that will make our capabilities greater in the future.
REID:
Then looking at those and going, “Okay, well what is the thing that we can move on quickly?” Hence the Executive Order. It’s a subset of the things you could potentially do, because it’s the things the Biden administration is very careful about. Like, you know, where do the Constitution and rule of law give us our powers? Which things can we do that are legal and operational? So we’ll do these. They’re based off of the voluntary commitments, but now applied across all U.S. companies and how they’re playing a global role. And then that of course creates the stage for how Congress can talk about this: what kinds of things can happen, what are the considerations? And then you get to other governments in the world that are doing good work. You know, Macron and the French are going to host a kind of safety summit coming up.
REID:
They’ve been doing a very good job within, you know, the European context of saying, “Hey, look, let’s innovate into the future and innovate into the right integration with society, versus try to stop the future.” Too often the regulation is, “Thou shalt not, thou shalt not, thou shalt not. Let’s try to make as little of the future happen as possible, and as slowly as possible,” which is the general European regulatory response to this stuff. And instead it’s, “Look, there are so many different benefits we can get, you know, medical assistants, tutors, industry, a cognitive industrial revolution; let’s be playing forward.” And fortunately that’s also being borne out by a set of startups in Paris and the French economy. Outside of the U.S., China, and London, Paris is the next place that’s emerging.
REID:
And you know, obviously their work is to be in equal contention with London as the next place where it’s like, “Well, we’re building stuff, and executives in all industries want to talk about AI,” and they want to say, “Look, you know, how do we get the AI companies providing for industry and providing for society, and what kinds of things can we be doing?” And this is exactly the kind of thing that government encouragement has been enabling and helping shape. And so this is the reason why it’s good across not just the U.S. and Secretary Raimondo’s excellent leadership, but also the U.K. and France and, you know, the work that’s going on in the G7. One of the surprising things is that this was the first time a Pope has addressed the G7, because, you know, Pope Francis, back in 2015, 2016, was already like, “Okay, what is this AI thing?
REID:
What does it mean? How do I help on a positive basis?” And so he had already been working on it, and so was in a place where, when the G7 said, “This is what we’re going to meet on,” they could invite Pope Francis. And Pope Francis could come and say, “Here are the things that we’ve already been doing within the Vatican context, and here are the things that we think are important questions, important considerations.” And again, you know, navigating the risks is super important, but part of navigating the risks is getting to these really amazing future possibilities. And obviously part of how Pope Francis has been playing into this is to say, “Look, the most natural thing within the developed world and technology companies and everything else is to play for the wealthy markets. How do you build these services for the wealthy countries?”
REID:
And he’s like, “Look, let’s make sure that we’re providing, with equal time and equal capability, to the Global South, to many other markets that don’t have the same rich economies but could be benefited in their industries and their services: medical services, tutoring services, education. Make sure that questions around climate change and market participation and other things are front and center. And by the way, not only the countries, but also all of the people in the countries, even the low-income people in those countries.” And Pope Francis and the Vatican have been consistent on that in conversations I’ve had with them since 2016, saying, “Let’s make sure that these are front-and-center considerations.” So overall, this is one of the areas where work has already started. It’s making good progress. It’s in motion. There’s obviously still more work to do, but some of the governments that we care about are in motion in smart ways.
ARIA:
It seems that, as with so much in our dialogue these days, it’s either black or white. It’s either people saying “zero government regulation ever,” which seems pretty silly on the AI front; or way too much, and we’re going to stifle innovation, just like you said: “We’re going to slow down, we’re not going to get to the medical assistants that we need.” And I feel like a lot of people are making comparisons to social media, and some people think we got social media wrong. You know, the government should have stepped in more and regulated more. Is that leading people to sort of overreact and call for too much government regulation because they’re comparing it to the social media issue? Or how do you compare those two?
REID:
Well, it’s definitely leading to a vigorous call to arms to strongly regulate, because people went, “Look, we now know that there were clear problems with social media, and we could have done something in advance.” Now the problem is, most of that stuff we wouldn’t have known in advance, so even the social media call is actually, in fact, the wrong call. Now, if you said, “Well, what would you do if you called your younger self and said, ‘Do something on social media’?” What you’d probably say is: “Work to be more vigorous about protecting children from potential harms.” Now, maybe we didn’t fully notice it until the pandemic happened and intensified all of it. And that’s part of the reason why even that phone call to our younger selves might not have been, you know, fully accurate. But that would be the phone call: if you could make a quick phone call and say, “Hey, start early,” you’d probably start on that kind of thing.
REID:
Because, generally speaking, when you’re looking at technology in the future, you could list out 200 things. And the question is, could you pick the one to three that actually really mattered? People say, “Well, there’s no way to pick the one that really matters, so do all 200.” It’s like, “Well, then, let’s just not have any innovation. Let’s not go to the future.” And they say, “Well, but I know the right one.” It’s like, yeah, you and the other thousand people each have your right one or two, and you have this mayhem of discussion around it. So that’s the balance of these things. And that’s part of the reason why, when you get to the AI side, it’s like, “Look, let’s try to be focused and short on what the really key things are, the only things that are absolutely critical to get right.”
REID:
Because most often, most of the things you’re listing naturally work themselves out without the regulations; they naturally kind of play out. And that’s part of the reason why, you know, what the Biden administration has done with the voluntary commitments and the Executive Order is to be very focused. And then on the “thou shalt”s and “shalt not”s, it’s like, “Well, let’s make it so that you’re working on red-teaming and alignment, and let’s make sure that you’re reporting on it. And let’s make sure that you’re telling us when you’re doing things at a certain scale that might have some implication.” And let’s just start there, right, in terms of the kinds of things we’re doing. And it’s exactly right, because people say, “Well, we should then go and tell them exactly what they should do and what they shouldn’t do.”
REID:
It’s like, well, not only does anybody in government or regulation not know that. Even the industry doesn’t really know that. The experts don’t really know that. So you have to kind of proceed and dialogue your way towards it. And that’s the reason why, you know, asking questions, getting reports, “tell us what you’re doing, tell us what you’re learning,” is actually, in fact, the right approach. Which is of course the reason why the U.S. and U.K. AI Safety Institutes are the right kinds of things to stand up: to have organizations that are in dialogue and are, you know, persistent, outside of the usual politicking that happens (which is, you know, shocker, politicians want to grandstand on whatever the current media issue is). And that doesn’t really help us as a society, let alone as industries and companies.
REID:
But standing up these outside institutions that will be persistent in gathering knowledge, talking to academia, talking to the industry, talking to leadership, talking to special interest groups, talking to others, to say, “Okay, what are the right issues?” Let’s start from a learning basis and a watching basis and kind of go, you know, “Oh, this may end up turning into the one really serious issue. Maybe we should really intensify our dialogue about that first.” And then see how it plays out within, you know, the large-scale companies, the startups, global competition, and all the rest. And that’s the reason why I think the reaction has intensified. Because, look, I’ve literally heard such absurd things, like, “The same people who brought us social networks are bringing us AI, and that’s why we should have some mistrust.”
REID:
And you’re like, “Well, actually, in fact, a lot of the people who are building AI weren’t building social networks.” There’s so much seeing red that they go, “Well, Meta’s doing AI and is a social network.” And you’re like, okay, there’s one. And now of course Elon is doing his thing too with Twitter. And Elon’s Twitter, I think, has a lot of problems around, you know, bots and virulent attack speech against women and a bunch of other things, real problems that should be addressed. And, you know, I think it’s a travesty in the media circle that that’s not what’s being talked about. But that’s not the old school; he wasn’t the person who brought Twitter to you in the first place. Right? But he’s now building AI too. Anyway, it’s just like, they see red and they don’t understand that these are actually just different problems. And the principles that you can learn from the social network side really have only incidental bearing on what we’re doing on the AI side.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Karrie Huang, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Parth Patil, and Little Monster Media Company.