This transcript is generated with the help of AI and is lightly edited for clarity.

REID:

I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way.

ARIA:

Typically, we ask our guests for their outlook on the best possible future. But now, every other week, I get to ask Reid for his take.

REID:

This is Possible.

ARIA:

So Reid, some of the criticism recently of AI is that it is in the hands of the big players, you know, whether it’s Microsoft and Google developing their own AI, or buying smaller companies, or siphoning off developers. And some could say that’s a good thing. We have these enormous companies who are spending enormous budgets on research and development that, candidly, other folks don’t have. So can you talk about like—what does productive collaboration between big tech developers and everyone else actually look like?

REID:

You know, it’s one of the areas where there’s obviously a lot of turmoil and kind of different perspectives on this. Here’s some basic attributes. So one, it’s very good for human society and for entrepreneurs and for consumers that there are a number of different companies that are competing with each other. You know, that competitive landscape, I think, creates a bunch of—not just competition, but potential for collaboration. Because, for example, large tech companies go, “Well, we want to have developers and startups working in our ecosystem and our cloud. You know, how do we provide tools?” So like, the earliest stuff from OpenAI and Microsoft was a set of APIs in order to enable, you know, small companies and entrepreneurs to do this. You know, there’s a question around, you know, kind of an interest in global scale and providing a set of things.

REID:

You know, I think that one of the things that is kind of most important that is happening, but gets under-described in terms of its importance, is this kind of phrase around iterative development—which is, you know, like get stuff out there and get people to engage with it. Because if you get stuff out there and get people to engage with it, then that helps solidify the landscape of: these are really positive possibilities, and these are really things to avoid. And that iterative development is, I think, one of the things that’s so important in what’s going on. And frequently it can get lost in calls to slow down and, you know, have blue-ribbon commissions before you do things. But you actually don’t—like, the tools for making this thing more of an amplification of humanity, of steering towards very positive outcomes and away from negative outcomes—that toolset is richer in the future.

REID:

And so all of that impulse tends not to incorporate those understandings. And that’s why the iterative development is very important. And, you know, obviously, as James went through in some good depth, it’s important that it isn’t just, you know, like for example, only people in Silicon Valley [laugh], right? And not just only people in the U.S. It’s, you know, how do we get this to a broader scope? And it’s one of the things that I think, you know, OpenAI has done extremely well. It’s one of the things I think is happening through the entrepreneurial ecosystem.

ARIA:

I think what a lot of people who are outside the technology field misunderstand is specifically the iterative deployment. Because in some fields you would never put something out there until it was perfect. And with this sort of idea that you’re getting user feedback and iterating—is there a tension between that iterative deployment and red teaming, which is so critical in terms of safety vulnerabilities and making sure that we’re putting out things that are safe for consumers to use?

REID:

Actually, quite the opposite of tension. I think there’s, there’s reinforcement. It’s a positive reinforcement loop because yes, you should clearly, as per, you know, the excellent Biden-Harris Executive Order, have some red teaming in there. You should be considering what could go wrong, right? And you should be red teaming. And then, by the way, after you red team, you do iterative deployment. You see what was right and wrong about your red teaming. You go, “Ooh, we missed this, and this was an important thing. And we wouldn’t have known it without, really, iterative deployment.” Like, one of the metaphors I’ve used is, you could say, “Well, we’re not going to launch the automobile until we understand everything.” Well, maybe we might have gotten the seatbelts before we launched the automobile. But we probably would’ve had a hundred other things that were completely irrelevant. And then we never would’ve launched the automobile. So part of the iterative deployment is, yes, you launch it, you get the automobile. And then as you get the automobile, you go, “Ooh, seatbelts are a good idea. Let’s have seatbelts.” Right? That’s kind of the tangible metaphor for this kind of thing. And that wraps back into red teaming. Like, “What could go wrong here?” That gets better shaped by what you’re learning from iterative deployment.

ARIA:

So thinking of iterative deployment, you know, Google was obviously in the headlines last year when Gemini was generating everything from the funny—like 90% of ice hockey players being people of color and women—to, you know, the more disturbing, you know, lots of Nazis being depicted as people of color, et cetera. And so, some people might say, “This was egregious.” Other people might say, “Yeah, it was bad headlines for Google, but it didn’t really hurt anyone. They fixed it quickly. Like, this is what we do. We put something out and we fix it.” Like, what’s your take there?

REID:

So look, it’s, it’s definitely in the category of, you know, fender scrapes. Like, it’s kind of like, yes, is it embarrassing? Yes. Is it the kind of thing that you should have red teaming, you know, never allow to happen? It’s like, well, no, not actually—you know, when I tend to think of, like, what are the things to prevent from ever happening, it’s things like break the entire system, you know, cause, you know, mortality or deep injury, you know, that kind of stuff. And so that’s, that’s what you’re primarily trying to do. And so, an unforced error from Google in, in trying to do the right thing. It’s always a storm in a teacup with these kinds of things.

ARIA:

But thinking about the right thing, you know, last week when we talked with James, we talked about, you know, what is fair? Different societies have different values on that. And you could imagine that, you know, when you’re, when you’re asking, you know, AI about climate change, like, you don’t want it to talk about both sides. Like you want it to be like, “Nope, it’s a fact that climate change exists. It’s a fact that vaccines work.” Like you don’t want to get into a bad place where we’re both-sidesing everything. But, on the other hand, immigration—is immigration good for the United States? Well, you know, there are many smart arguments on both sides about how our system should work, et cetera. Like how should that be taken into account by big tech? They’re creating these very powerful machines.

REID:

Well, it’s funny because I do agree that there is an importance about being rigorous about truth. The easiest one is the scientific method—hence vaccines, climate change, et cetera. And in that rigor about truth, you know, kind of having a cohesion and drive, even if you have political antagonists, whether those political antagonists come from economic interests, or from religious interests, or from a cultural “no, don’t want to change anything”—I mean, there’s, there’s a wide variety of religious sects that say, you know, “we, you know, we don’t believe in the germ theory of disease.” You know, it’s like, okay, [laugh], right? Like that’s important. Now you said immigration. I actually think immigration’s one of the ones where there are some just clear truths as well. And so for example, you know, what has contributed to centuries of American prosperity is a set of different waves of immigration.

REID:

So I think immigration has, you know, proof points throughout history. Even if you look at the micro of it and you say, let’s, let’s just focus on, like, high-end immigration. Like, say, for big tech: look, big tech’s going to hire software engineers. And the question is, do you want them to hire the software engineers in the U.S., or do you want them to hire them in Canada? Or in India? Or in Europe? And the answer is, you’d rather have them hire them all in the U.S., in the interest of the rest of U.S. society, because if they’re paying, you know, an engineer call it $400K a year, you know, then that engineer rents, goes to restaurants, dry cleaners, lawyers, accountants, [laugh], right? Everything else. It essentially onshores money that goes into the entire ecosystem for everyone.

REID:

So any argument that is not trying to get as many of these people as possible hired within U.S. shores is actually just foolishness. Now from a big tech perspective, to wrap back to the question, you know, big tech—and, you know, technology generally—they’re going to hire as many technologists as they can, because all industries are in the process of becoming technology companies. There are the purest ones that are the furthest out in technology, but everything—hotel companies, not just Airbnb, but like hotel companies and so forth. Everybody is in the process of becoming more and more of a technology company. And so it’s super important to have that talent. And you know, there are, there are wide swaths of areas in the world that have a whole bunch of technical talent. And the only question is: Are they going to be employed there, or are we going to get some percentage of them employed here and then benefit the entire rest of American society by having them employed?

ARIA:

So Reid, I tend to agree with you on what you just said, but I still think you sort of sidestepped the question. There must be an area where you think it’s tricky—like, affirmative action or, I don’t know, the value of independence versus communal living. Like, there’s just different—we have different values, as Americans, than maybe they hold in China or Japan or other places. Like, do you think that affects this discussion? Or, you know, you don’t? You think that the facts are the facts, and we actually don’t need to worry about those sorts of cultural differences?

REID:

It’s both. This is the complication. Questions around, for example, you know, various conceptions of human rights in different kinds of cultural contexts, and what those human rights mean in terms of, you know, agency and control over your data, or individual versus group, and so forth. And, you know, part of the shape that we are navigating here, and the shape that’s, you know, building technology and all the rest, you know, comes to that. Now I think there are really important issues there. And I think there are important issues to learn from. So for example, as an American, there’s a whole bunch of things that I think are super important within, you know, kind of the way that American values have some areas where they are a beacon of leadership: you know, individual freedoms, do your best work, democracy, you know, with the exception of Trump and January 6th, peaceful transition of power and voting—you know, like these kinds of things I think are valuable.

REID:

But I learn intensively from China, from India, from Europe—things that are, you know, part of the basis of, “Oh, here’s the way that even we as America, we as humanity, as society, should improve.” And you should ultimately take that kind of learning mindset. To me, it’s kind of like, well, look, these differences are an opportunity to learn and an opportunity to have effective cognitive tension. I still have this kind of belief that we will sort out to a zone of truth. Like, I tend to think that individual human rights are really important, even if you have a society that says, “no, they’re not.” Like, no, actually, in fact, I think they are, right? And I think they’re manifested, even if you don’t see them, in the behavior patterns that you see. But, by the way, counter—like, you know, one of the reasons I frequently call myself pro-individual, anti-libertarian is, actually, in fact, how we come together as a society is really important too. And that’s part of the reason, like in my first book, The Startup of You, I said it’s like, “I to the we.” I and we both matter. And, you know, it’s not the I or the we. No, no, no: I and we is kind of key, so.

ARIA:

I love it.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Parth Patil, and Little Monster Media Company.