This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
Typically we ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
Reid, obviously the U.S. has been a dominant force in software for the last few decades—maybe that’s an understatement. You’ve recently been in the U.K. launching Superagency to the U.K. audience. We know that France, of course, has a lot of amazing AI companies. We see incredible engineers coming out of Poland. But still, the U.S. is the home for software development and innovation. And so for a long time, you know, software has been soft power. People have talked about that with American culture: we’re exporting it to the rest of the world. With all the volatility, though, happening right now in American foreign relations—and perhaps a particular tech leader’s manipulation of platforms—how does America maintain its soft power through software going forward?
REID:
I mean, one of the problems with the current administration’s general approach of “how to lose friends and alienate people”—the opposite of Dale Carnegie—is that it will create substantial problems for all American industry, including the tech industry. I’ve been having all these discussions with technologists here who say, “Well, I’d rather buy a BYD car than a Tesla,” because one is kind of saying, “Hey, I’m just trying to be a stable partner,” and the other one’s saying, “You’re my enemy.” And so there’s a whole stack of things, and this goes all the way through. Part of what I’m trying to do here is persuade folks that the U.S. actually, in fact, can be a continuing stable partner, despite the randomness of tariffs and the randomness of other kinds of things. That the business of America being business is still something that a bunch of us Americans hold dear, and we try to operate that way and try to say, “Hey, there are still bridges we can build here.”
REID:
And so I think that’s critical for America to do. I think the companies will naturally try to do that, because the companies will naturally want to be selling their products globally, setting standards, having American technology be the foundation of a lot of what’s happening in the rest of the world. There are reasons we want AI to be “American Intelligence.” That’s a way to keep a leadership position. I do think it’s actually better for us as Americans, and as Silicon Valley folks, that we have more centers of innovation in Western democracies. And it’s not just really good for them, whether it’s the British or the French or the Polish or the Singaporeans or the Japanese—but it’s also, in each of these cases, I think, good for us as Americans. It’s part of what creates the multilateral system that makes for a much healthier, more stable global society. Now, that being said, I’m not sure how deep that knowledge and perspective goes within the current administration. I mean, obviously most of the actions and most of the gestures tend to be the opposite—when you come over to Europe and take not just an “America first” but an “America only” kind of position, that obviously breaks alliances and gets people to think, “Well, who else should I potentially ally with?” And I think that’s not very good for us as Americans, but also not very good for global society.
ARIA:
I mean, I think it’s so interesting, because it’s just a fundamental misunderstanding of capitalism as a zero-sum game. And it’s like, no, actually throughout history we’ve shown that markets can grow and everyone can benefit. And as someone who wants America to win—sure, I live here, my family’s here—it’s just so clear that going it alone is not how we do it.
REID:
It’s one of the things that I’ve done for a number of years: when a minister from, you know, an allied country—a country that shares our values about individual rights and human rights, and the way that we come together in a democracy, and a bunch of other things—asks, it’s like, okay, I will try to help them. If we can build more Silicon Valleys, more technological innovation centers, I think that’s actually a really good thing.
ARIA:
You’ve said many times that technology happens slow and then fast. And I think for a lot of people, when they think about AI—ChatGPT launched two years ago—maybe for most Americans, their lives haven’t changed that much. But for software developers, I would argue that’s a different story. And just last week, Dario from Anthropic said that in three to six months, 90% of code would be written by AI, and in a year or two, a hundred percent of code would be written by AI. So do you agree with that? And how do companies prepare for this future where potentially a hundred percent of code is AI-generated?
REID:
Well, it’ll be interesting. I mean, Dario is super smart. He’s one of the co-architects of the OpenAI scaling hypothesis, which says all you really need to do is keep scaling these kinds of deep learning machines, and that actually, in fact, gets you a massive way along the path to artificial general intelligence—he obviously believes it gets you the whole way; there’s some bid-ask spread there. And Anthropic, along with OpenAI, and Microsoft, and Google, and a number of others, believes very intensely that the right way to accelerate along this path is to build code generators, whether those code generators are, on the Microsoft side, Copilot, or whether those code generators are actual agents of code generation themselves. There’s a developmental spectrum there—which things are this quarter, which things are three years from now, and how do they operate? And even when you have a code generator, how does it work with human beings, if at all?
ARIA:
We are still going to need humans in the loop. And even if it’s 90 or 95%, to your point, so many more people are going to be creating code, and humans are still going to handle that last few percentage points, whether it’s through Copilot or all these new professions where people have coding assistants.
REID:
I do think that we will see a greatly increasing amount of code being generated by AI. But I also think a lot of that code generated by AI will be in co-piloting with humans. Now, it may very well be that within a couple of years—or even one year—when you deploy a software engineer, they’re deployed with multiple coding assistants. So it’s almost like a person deploying with drones. And by the way, the agents will be talking to each other, and you might say, “Go research this and generate some specific code and do this, this, and this,” and then come back to it. As opposed to the co-pilot being the line-by-line agile programmer who is working with you. So there’s this whole range and scope of how these things work. But one of the things I want to conclude this particular theme with is: There are a number of reasons why all of the AI companies are going after code intensely.
REID:
One is obviously the acceleration of productivity, the acceleration of software engineering, the acceleration of coding. Two is that code has a fitness function that makes it easier to determine: Is this code good or not? While there’s work aimed specifically at legal and medical, for example, there’s a lot more fuzziness there, and code has a much tighter fitness function. But as you get the coding right, that will also help with understanding the fitness function for all these other professional areas—medical, law, education, et cetera. And this is one of the things I think perhaps way too few people understand: Part of the amplification—whether it’s for being a podcast co-host, or a CEO of, kind of, the largest teen philanthropy network in the world, or any of these other things—is that having a coding co-pilot, a coding assistant, will make any of these functions much more productive. Now, obviously you have to get to the point where you’re beginning to think, “Well, what is the way that I would want the coding on this?” But, by the way, it’s parallel to a management problem, which is: When you say, “Hey, I’d like to direct Sarah or Bob to do this,” well, what direction would you be giving them? And as part of that direction, that’s essentially how you’d be directing your coding assistant too.
ARIA:
Well, sort of a related question that I think a lot of people are thinking about. Recently Andrej Karpathy tweeted—and I’ll read it—he said: “It’s 2025 and most content is still written for humans instead of LLMs. 99% of attention [is about to] be LLM attention, not human attention.” So again, sort of a related issue: Before, we were talking about coding being 90 or 100% AI-generated, and now he’s talking about the fact that all of the data that these LLMs are going to be hoovering up on the internet will be generated by AI. My question is: We know that they can train on synthetic data—you’ve talked about that, you don’t necessarily think that’s an issue, or please disagree—but what is the incentive to create new data anymore? If everything is going through an LLM, what is the incentive for The New York Times, or any other website, to be creating this data when the only business model is people using their AI to access it? Is it going to have to be BD partnerships between The New York Times and the LLMs? Between your website? Between WebMD? How will that work from a business perspective?
REID:
Well, part of it is: even if it is a very low percentage of human attention, the question is where the economics get generated from. So if you say, “Well, the economics get generated by advertising impressions and humans seeing it,” then it isn’t so much a percentage question. Because one way you get to what Andrej is saying is that there’s this massive growth in LLM data and LLMs reading data, but you still have a whole universe of humans that may still be your primary target for the advertising model. Now, it’s also possible that the LLMs will have economic models themselves associated with data. It could be the equivalent of SEO—search engine optimization—which is one of the things that content sites do a lot of today. And so it’d be like, “Okay, well, I’m SEOing the agents.” Or, “I am providing things such that, when the agents are doing stuff, there’s an economic loop relative to my business model.”
REID:
So I think that there’s all of that. Now, I think another part of the whole thing is actually, in fact, shaping how these agents are parsing the world, what kinds of things they think are important to do. Just like SEO, there’s undoubtedly going to be some really interesting economics around that. And that may be one of the reasons why people would be generating content with an LLM focus, with an LLM intensity, even if it’s not the only focus. And I think all of those kind of play out. And the more subtle thing that’s deep about Andrej’s comment is that I do think that, more and more, content that’s being generated will both, A) be generated by AI, and, B) be generated for AI to find.
REID:
And actually, in fact, this is one of the things that you’re trying to do, because you say, “Well, if I generate the content for the AI thing, this is where my benefit is—my benefit could be that I share in the ad revenue.” The benefit could be: I intellectually shape the space of the AI, such that it goes, “Oh, you’re looking for travel to Turkey. Here are some things,” because it found my content—that’s part of the data it was trained on. So it could be a whole range of different things. But it getting to, call it, a hundred percent for agents seems very unlikely, especially when you consider that the agents themselves are broadly most valuable when they’re talking to humans.
ARIA:
So even if it doesn’t go to a hundred percent—again, it might be 95%, 96%—do you think this will homogenize everything that’s on the internet? Everyone’s worried that it’ll actually converge, and so we won’t have the outliers, because it’s made for AI. Do you see merit in that?
REID:
Well, the question comes down to what the incentives are for the content creation. If the incentive for the content creation is to fight for the generic stuff in the middle, then yes, it’ll be huge. But, by the way, just like in markets, you go, “Okay, well, those big companies are doing all this stuff for the generic, homogeneous thing. So maybe there’s an opportunity for me in breaking left.” It’s a little bit of the, “Hey, I’ll create an anti-woke AI, and maybe then I’ll get some attention, given all the attention being given to other people,” kind of thing. So I tend not to think that the homogeneity thesis is a danger and a worry, because of these kinds of incentives.
ARIA:
And so amidst this AI boom, I feel like, especially in the last two weeks, everyone is talking about “vibe coding.” I will go on the record that I would like the backlash to the term vibe coding to begin now, so tipping my hat on that one. But just recently people have also been talking about “vibe revenue”—that there’s this excitement, people are trying something out for a day and realizing it doesn’t stand up to what they want; they’re just testing amid an AI craze, and then in 18 months we’ll have all these zombie companies that thought they saw a path to amazing ARR and growth, but it was just noise, not signal. So I guess I’d ask: Do you think this is any different from previous tech shifts, whether it’s mobile, the internet, et cetera? Like, we’re always going to have companies that are exciting and then don’t work out, but we still build generational companies amid the ones that don’t—or is there something different here? That because it’s so easy to start, there might be more failures amidst the “vibe coding” and “vibe revenue” shift?
REID:
Well, I think with each new technological generation—of which this is the largest, in part because it builds upon the internet, cloud, the GPT revolution, mobile, et cetera—that may mean that there is a ton more in the unusual, eccentric, not-very-workable camp. It’s like: the AI juice machine, this kind of thing. And there’s going to be a stack of these things. But I do think that what tends to happen is people say, “Oh look, there’s this foolish thing. It’s a bubble. It’s a foolish thing. This isn’t going to work.” And, well, no, actually, in fact, there were tons of foolish things in the internet era, mobile, and everything else. And by the way, some of them were foolish because they were just idiotic, and some of them were foolish because they were too early.
REID:
You know, Webvan was trying to do a capital play when it was too early, and now we have Instacart and DoorDash and other things like it. And so I think that we will see a lot of vibe coding, vibe memes, vibe revenue, and that will play out. By the way, a few of those may even go the distance. Now, one of the things I usually say to investors—as you know, because we’ve talked about this—is, “If you can’t spot the sucker, you are the sucker.” It’s like poker. And so you should actually have some knowledge of what you’re doing. Obviously there will be a whole bunch of investors who lose money by investing in nutty things that even look like they have great revenue growth for the first month. Because that first month of great revenue growth is like a small pickup, and then it flattens out because it’s no longer of interest to people. Or the vibe changes, or that was the level of interest, and the interest goes away. So there’ll be a bunch of different money-loss circumstances too. But overall, I don’t see any reason that there’s a category difference. It may just be larger because it’s larger, but it’s still playing by the same general principles as the earlier waves of investment and technology.
ARIA:
So that’s from the investor standpoint, but if you’re a founder, do you have advice for how they sort of see the signal through the noise? When they’re trying to figure out if the revenue is real, or if, “Oh, they really do have product-market fit,” because they saw the revenue come in?
REID:
Typically, when you look at it, you go, “Well, really, how scalable is that revenue, and why?” It can’t just be, for example, “I’m being paid by individuals, and there’s 8 billion individuals in the world, so it’s scalable!” It’s like, no, no, no, no. It’s like, who are the people buying? Because, for example, usually when X people buy, Y people were exposed and didn’t—so what is X as a percentage of Y? And you kind of go, okay, why is that? And how scalable is that? And does the value proposition scale? And can you reach them with a go-to-market? And a bunch of other things. And so you tend to look at revenue as: What is a scalable business model? What is the engine of scale? You know, there’s some science to this, but there’s also some art to it, because you’re predicting the future in terms of how this works. You can derive a good theory of the game, a good prediction. But you’re going to be bringing a bunch of knowledge and assumptions and hypotheses to your theory.
ARIA:
So Reid, thank you so much. Always a great chat.
REID:
Always a pleasure.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.