This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
Typically we ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
So we recently talked to Anja Manuel, and elsewhere she has essentially called for an FDA for AI, one that would require testing and approval for new AI tech. What do you think of this idea? What are the pros? What are the cons?
REID:
Well, I’m going to cheat a little bit, because Anja and I talk a lot. She’s, you know, so spectacular in this area that I try to help with, and she is deeply expert. So what I know is that I can interpret what she’s saying with “an FDA for AI” differently than people normally would by reflex. Because the normal reflex is to say, “Oh, across the board, everything that happens vis-a-vis how AI touches consumers, hence FDA.” Whereas Anja is actually starting from the viewpoint of, “Actually, in fact, when it touches possible global infrastructure, weapons, and other such areas, we should have what’s being done today with the U.K. and other AI safety institutes. How do you make sure that the essentials are done there for preserving, kind of, global social integrity?” And by that, I don’t necessarily mean misinformation and media sources, but actually the infrastructure of society.
REID:
And that, I think, is a very good idea. It’s one of the reasons why I helped, on both the U.K. and the U.S. sides, stand up their AI safety institutes. It’s part of the reason I’m going out to the French AI Action Summit in Paris. I think that kind of global cooperation, in particular with our allies and, as we were talking about in our Possible episode with Anja, the global swing states, is important. But it’s also, of course, important to do this with our competitors and our antagonists, such as Russia and others, because I think it’ll be very important. Now, the pros and cons. To get to the pros: doing this is really important for global stability. The cons are that, one, it’s obviously going to take a bunch of time and be difficult to do, and there’s the question of how implementable it is, which we talked about some in the episode.
REID:
There’s also this question around, you know, regulation tending to accrete. And one of the key things, per our episode on Superagency, is innovation into the future, and what we’re getting in terms of safety and other benefits is really important. And so you don’t want the regulation to accrete too naturally and end up preventing those really good futures. All of which Anja would actually agree with. It’s partially because she’s so focused on, “This is what it means for geopolitics; this is what it means for a globally stable order for humanity.” And on the decades of institutions we’ve built since World War II that have made the post-World War II era one of the safest, least warfare-prone global environments humanity has had in that timeframe, why preserving that is really important, and why the institutions and treaties matter, including where AI applies to terrorism, rogue states, cybercrime, et cetera. It’s important to do this.
ARIA:
Well, so when we think about the sort of global race for AI: AI is moving so quickly, and one of the things we want to make sure of is that the U.S. continues to be one of the leaders. Looking back over the last year, or even honestly the last few months, has your opinion changed on where the U.S. is in the balance of power, whether we’re on top or others are gaining on us? How do you see that, and how has it changed?
REID:
Well, as Anja mentioned, part of what we’ve seen in the last couple of weeks, right before we recorded this, is things like DeepSeek out of China. For all of us technology watchers, who previously were accused of being a little too histrionic about how close and how intense the race is with China, it’s a demonstration that it’s actually game on and the race is there. You know, DeepSeek has a bunch of different new features in the efficiency of how it was trained and operated that are useful, and that’s one of the things that puts them in the race. You know, one of the things that I’ve been talking about some, as you know, Aria, is how important it is that AI is American intelligence, because that embeds our values and preserves the world order that the U.S. has been so instrumental in helping create and lead since World War II.
REID:
And not only is AI amplification intelligence, but it also should be American intelligence. And so I think the short answer is that the gap has closed between Chinese capabilities and American ones. And that’s part of the reason why it’s important that we redouble all our efforts across the board. I think the efforts we’ve been making to get TSMC semiconductor production and Intel fabs going are important. I think it’s going to be really critical to do the refactoring of various energy permitting and development regulations. You know, not just nuclear, which will be very important for clean energy, but also just getting a lot more electricity provisioned and built effectively, along with data centers. How do we enable both our startup ecosystem and our hyperscalers to continue to develop at speed? And the good news is both of them are very strongly in the game. But it’s not a question of, “How do we contain Hyperscaler X?” It’s a question of how we get those developments and deployments at full speed, to benefit the rest of American society and the American world order.
ARIA:
I think, you know, Anja made a good point when we spoke to her: the U.S. is still the hotbed for innovation around the globe. China is catching up and closing gaps, but we still have our companies here moving as fast as possible and doing some really exciting things. And in the last few months we saw the arrival of o1. Just give us a little bit about what you thought of that, and then any thoughts, excitement, or predictions for what’s to come for these American AI companies.
REID:
A lot of o1 was kind of under-commented on. The usual discussion around the AI race is about the next level of scale in models. And that’s partially because we’ve been conditioned on that, from GPT-3, to 3.5, to 4, and kind of 4o, you know, with 4 and 4o being very similar in scale and size. And o1 is actually a retraining of, kind of, GPT-4 or 4o, put together in a way that gets a set of different 4o models trained and tuned in specific ways, thinking together, and thinking in multiple parallel chains, to generate the chain that is deepest. And that was one of the things that Sam Altman and Satya Nadella were gesturing at earlier last year.
REID:
Because they’re saying, “Well, what happens if you give these AI models more time to think?” As opposed to: type in a prompt, get an answer. You give it a more detailed kind of task, and then it works on it for a while, in a very, very quick proxy to how human beings work. Because one of the superpowers, of course, of 4o is that it’s really fast, whereas o1 generates a little more slowly but can do things like detailed planning and chains of thinking. And the places where o1 can be magical have been deeply under-commented on. It’s actually, in fact, a contributory vector to the whole scaling model, because you think, “Oh, well, we got o1 with GPT-4o, but, okay, we could have o1 with GPT-5.” And so these become two different parameters for increasing capability.
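The “multiple parallel chains” idea Reid describes is in the same family as what researchers call self-consistency: sample several independent chains of thought at nonzero temperature, then keep the answer most chains converge on. A minimal toy sketch follows; `sample_chain`, the 80% per-chain accuracy, and the answer values are all hypothetical stand-ins, since a real system would call a model and parse its output.

```python
import random
from collections import Counter

def sample_chain(question: str, rng: random.Random) -> int:
    """Stand-in for one sampled chain of thought.

    A real system would call a model with temperature > 0 and parse
    the final answer out of the generated reasoning. Here we simulate
    a chain that reaches the right answer 80% of the time (hypothetical).
    """
    if rng.random() < 0.8:
        return 42                    # this chain reasoned correctly
    return rng.choice([41, 43])      # this chain went off the rails

def self_consistency(question: str, n_chains: int = 15, seed: int = 0) -> int:
    """Sample n parallel chains, then majority-vote on the final answer."""
    rng = random.Random(seed)
    answers = [sample_chain(question, rng) for _ in range(n_chains)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer

print(self_consistency("What is 6 * 7?"))
```

The two parameters Reid mentions are visible here as independent knobs: you can raise `n_chains` (more thinking time) on a fixed model, or hold it fixed and swap in a stronger underlying model.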
REID:
And it’s one of the things that, even when you get to, because, you know, Anja was also clear that there’s a set of technological areas where China’s in the lead, because it can deploy those technologies, because it has a hardware manufacturing edge relative to us on some very important vectors, drones and others. But this is one of the areas where it’s like, “No, no, no, that’s still an area where we’re clearly, so far, ahead.” But one of the things I commented on in my book, Blitzscaling, is that the only place I’ve gone in the world that makes me feel Silicon Valley is moving slow is China. And it’s part of the reason why that speed of motion, and going with intensity and clarity, is actually in fact so important. The only place I’ve seen large companies work at the speed that startups work in Silicon Valley is China, not Silicon Valley.
REID:
Silicon Valley has other ways to stay competitive and do stuff, but they don’t work at the same speed that a Xiaomi, or a Tencent, or a Baidu, or others tend to operate. And so, anyway, I think o1 is under-commented on because of the amazing amount of, call it, professional and cognitive capability it adds across a wide range. And so for predictions for 2025: obviously, I think we’re going to see an intense amount of work on how coding is accelerated. I think we’ll see all of the major players, not just Microsoft and Google and OpenAI, but also Anthropic and others, working intensely on coding. And coding is not just important for its own sake, because there’s kind of an almost infinite demand for software engineering, but it’s also the lens onto how a whole bunch of other work gets amplified.
REID:
One way of conceptualizing that lens is that part of what all of us, every single professional, you know, Aria, you and me, are going to have is a coding copilot in our phone, in our PC, in terms of how we’re operating. Also, once we’ve solved some of the coding acceleration, it gets to slightly less clear fitness functions, things like law, medicine, and other areas, which can also essentially make that work. And so part of the acceleration of coding, which I think we’ll also see from o1 and other things, will be this acceleration of all of these other areas. And so those are, I think, some of the key things to watch and track in 2025, and as those accelerations happen, we can then see what other accelerations will be coming in the near future after that.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates and Paloma Moreno Jimenez. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, Thanasi Dilos, Melia Agudelo, Parth Patil, and Little Monster Media Company.