This transcript is generated with the help of AI and is lightly edited for clarity.
ANJA MANUEL:
If everything breaks our way, the U.S. and China will find a way to get along as two great powers. And think about what would happen. Here’s an alternative future: both countries are comfortable enough with each other that we can pull back our military stances, we have to spend less money on our militaries. That frees up resources for pensions, for healthcare, for collaborating again on innovation, and science, and technology; all of the things that are going to help all of humanity. Goods flow more freely between the U.S. and China, and the rest of the world, and we get on a pretty positive path. It feels even almost naive to say that, but I think unless you put a big positive goal out there, you’ll never achieve it.
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know how together we can use technology like AI to help us shape the best possible future.
ARIA:
We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.
REID:
This is Possible.
ARIA:
The year is just getting started, but already some uncertainties are present in the conversation: political division, economic security, and geopolitics. And people seem concerned. In late December of last year, Gallup reported that the majority of Americans anticipate more conflict, not less, in these three areas in 2025.
REID:
And at the same time as AI continues to advance, it’s clear that this technology will deeply intersect with those three areas. International security, political division, and economic stability all influence the advancement of AI both inside and outside the U.S. That is to say, if you are concerned about those things, AI should also be on your mind. So where do we stand and what should we be looking out for on the foreign policy and international stages?
ARIA:
Few understand these topics better than Anja Manuel, a former diplomat, author, and one of the leading voices in AI and foreign policy. Anja has had a remarkable career thus far. She’s the co-founder of a strategic consulting firm alongside leaders like Condoleezza Rice and Robert Gates. And she currently leads both the Aspen Strategy Group and the Aspen Security Forum–two of the most influential spaces driving the conversations on AI and its impact on national security.
REID:
Anja brings a wealth of knowledge on how AI is reshaping the geopolitical landscape. Whether it’s the global race for semiconductor supremacy, or the potential threat of autonomous weapons, she has the unique ability to bridge the gaps between technology, policy, and diplomacy. Her critically acclaimed book, This Brave New World: India, China, and the United States, explores these global dynamics on a grander scale.
ARIA:
In today’s episode, we’ll explore her insights on international AI governance, China, potential threats of bio-terrorism, and the strategic decisions that the U.S. and its allies must make to navigate this complex landscape of innovation and regulation.
REID:
Here’s our conversation with Anja Manuel.
ARIA:
Anja, let’s start with a little bit about your own background because it is pretty unique. You grew up in a diplomatic family–you spent the better part of your childhood in Pakistan near the border of China. Do you have a story from your upbringing that you think helped shape your view of the world?
ANJA MANUEL:
That is such a great way to start. Yeah, I did grow up–I’m German–but I grew up partly in Pakistan, actually near the border with Afghanistan, and we had an Afghan refugee camp in our backyard because it was during the Soviet conflict. And I also grew up with a German grandfather and an American grandfather who fought on opposite sides of World War II. And so that is what got me interested in diplomacy and foreign policy, and thinking that war is the dumbest possible way for humans to resolve their conflicts, and it’s that simple. That’s why I got into this world.
REID:
So let’s dig in. You’ve said, on AI development, that countries with the technological edge tend to lead the world. Where would you put the U.S. in the great AI race? Where does China stand, and how do you see the jockeying for AI leadership between these two countries playing out?
ANJA MANUEL:
Well, the race is on, oh my gosh. And actually, when I write about technology and countries leading, I mean it even more broadly than that. So if you start with Roman roads, Chinese gunpowder, British steamships. If you look at what countries end up being the most powerful in the world, they tend to be the ones that are technologically ahead. And as the United States, we, with our friends and allies, have been the undisputed leader for the past three, four decades. And now the Chinese are giving us a run for our money. And maybe let’s start on AI, but I’m happy to talk about some of the other technologies too. So on AI–I know you all follow this really closely–I think everyone was a little bit shocked when the latest DeepSeek model came out. Apparently, and all of our friends who are testing it say, it performs extraordinarily well. On par with some of our own best foundation models in the U.S. And it was trained using much less compute, using the chips that were just under the export-control line, and not export controlled [laugh], and much less expensively than what we’re doing with our training runs here. So, never count out the Chinese.
ARIA:
And can you say a little bit more for folks who haven’t been totally paying attention? Tell us a little bit more about the DeepSeek model, and how are they able to do this, you know, more cheaply? Is it just because they’re ahead of us? What is the state of that game?
ANJA MANUEL:
Yeah, I think people are still figuring it out. And of course DeepSeek isn’t the only one–it’s a brand new company, I think it was created in 2023. They have only about a hundred employees and people were really quite surprised. There are also other fabulous Chinese AI companies–Zhipu, you know, all of the big Chinese tech firms are also racing in the large foundation model world. But I just want to take a step back, because the podcast we’re doing today is around AI and national security, technology and national security. Everyone focuses on the foundation models and the race that’s on there, and that’s super important. But in my view, when you look at how AI intersects with other technologies, in all of those key areas that could be dual use or even military use, the Chinese are doing great. They’re amazing at computer vision, they’re unbelievable at surveillance, they’re unbelievable at sensing, they’re unbelievable at autonomy, and drones, and advanced manufacturing. So even setting aside what’s happening on the foundation model side, they’re doing pretty well.
REID:
And what do you think are some of the things that it’s really important that we do as the U.S., as America? You know, obviously we have some pretty strong assets in Silicon Valley, large scale tech companies, and some other things. But on the full stack, as you’re kind of going through, kind of call it, the modern world of technology broadly, we have a lot of deficiencies. What do you think it’s important for us to do?
ANJA MANUEL:
You know, AI is what everyone is focused on, it’s front and center. It’s amazing. It’s so fun to live in Silicon Valley right now with this Cambrian explosion. But there are other technologies that matter. And different people that you ask may do the ranking slightly differently, but for me, here are the technologies where we, as the United States with our friends and allies, need to remain world class and/or in the lead: so obviously semiconductors, because they’re the building block for everything; artificial intelligence–we’ve already talked about general purpose AI, it’s going to drive all of the innovation of the future; 5G and 6G, because those are the rails on which everything is already running–the internet of things, self-driving cars, lots of back and forth with the AI models themselves; bio doesn’t always get put in that group, but I think it’s very important, especially in how it can intersect with national security; and then finally, I would put on that list–partly because I work with some of these innovative FinTech companies–financial technologies, and the economic rails on which value gets sent.
ARIA:
Well, can I actually step back and ask, and maybe this is a naive question, but I can imagine some of our listeners are thinking to themselves, why does the U.S. have to win? When we talk about Silicon Valley and technology, we often talk about nonprofits like Endeavor or Kiva, which are building entrepreneurs around the globe, and we want so many places to be technologically advanced so that, you know, there’s innovation happening everywhere. So it might be a simple question, but why is it important that the U.S. stays in the lead for all these technologies?
ANJA MANUEL:
It’s important, first and foremost, because if we are in the lead, we control the values, and that was really automatic. You know, when you had the beginnings of the internet in the nineties–you know, when Reid and I were in college in Silicon Valley–it was just automatic that the U.S. was in charge, and it was this very optimistic view of, “Oh, the internet is going to open the world, it’s going to create democracy.” And even in the early days of social media, people were like, “Well look, we’re helping the 2011 revolutions in the Middle East. Our technology is going to bring democracy to the world.” Well, it turns out that all of these technologies in themselves are neutral, and they can be used for good and for bad. And so the more we are in the lead–and also, by the way, Aria, it’s a really good question; I don’t think we need to be in the lead in absolutely everything, that’s why I try to be a little bit narrow about the key ones–but I think on those that really intersect with national security, it’s very important that we drive the values.
REID:
I completely agree with everything you just said and it’s, you know, part of it is not just kind of military power, not just economic power, but also kind of the question of the values that are engaged in the technology that provides the kind of the infrastructure of life, right? Like, how do you think about the world? How are people enabled? How are individuals enabled? You know, a classic kind of question is, “What’s your view of human rights? Or the human condition, for that matter?” It’s not just the pure, kind of, thing of being ahead in what I’m calling the cognitive industrial revolution–which I think is very important for the prosperity of our country, our children, generations to come–but also the question about what kinds of societies flourish in the global world and how the human condition evolves. And now to return to the question of, well, what should be the kinds of things that we as thoughtful leaders in, kind of call it, western democratic traditions should be doing? Whether we’re business people, technological innovators, government folks, you know, intelligent strategists like yourself, press people–what should we be doing to try to advance the ball?
ANJA MANUEL:
So we talked a little bit about why it’s important for us to stay in the lead on some of these technologies, on the soft power side, right? The hard power side is we are really in an absolute race, with the Chinese in particular, but to a lesser extent with the Russians, around technology being used for hard power. So we’ll get back to that in a second. But what should we do to win on both of these? Look, I give both the Trump and the Biden administration a lot of credit. I know it’s hard to say that, I’m being bipartisan [laugh]. The Trump administration really called out that the race was on, and that the race was on with China, and that we needed to think about it. And they started the beginnings of trying to put a moat, if you will, around some of our technologies.
ANJA MANUEL:
You know, they were tough on SMIC, which is a Chinese semiconductor manufacturer. They took the first steps to bring TSMC, which is another big semiconductor manufacturer, to Arizona. The Biden administration really doubled down, and they did it in two ways. One: hobbling the other guy so we can run faster. So that’s all the export controls you’ve seen, you know–another huge swath came out just this fall, and it’s not just on the most advanced chips, but it’s on semiconductor manufacturing equipment, the components of that; they tried to be tough. I think that approach was important, but it’s reaching the limits of what it’s going to be able to do. So to really answer your question, Reid, now it’s about us running faster, and thank God we have some of the most innovative people and companies in the world.
ANJA MANUEL:
I mean, the billions and billions of dollars that are being invested in R&D–that’s AI specific, that’s semiconductor specific, all of these things–is amazing. However, we could do more to coordinate our efforts. We could do more in the government. I’ll give you just some examples for government. Permitting for electricity–nightmare. Takes years. Where are we ever going to get the electricity, and the data centers, and the GPUs to do all of these training runs? Now, people are already talking about how they’re going to be faster, cheaper, and as you saw with DeepSeek, clearly it can be done, but that is one place where I would start with a new administration: how are we going to run faster? A second piece is something that the U.S. government can do, really, frankly, better than any other government on earth: fund basic research and development, basic science.
ANJA MANUEL:
In the Cold War we were spending up to 2% of our GDP on that, depending on how you count. Now, it is just a fraction of that. And Congress did an amazing thing, in a bipartisan way: they passed the CHIPS and Science Act. All we ever heard about was the chips part–because that got all the lobbying–but the R&D portion of that was supposed to put $170 billion to work over five years in basic R&D. And that has been chronically underfunded. So I hope that this next administration will go in, and starting with those two important things, they could do a whole lot of good. And I’ll give you one more. And that is if you think just about the labs creating the foundation models–it’s an imperfect analogy–but I sometimes think it’s as if multiple companies, mostly American, but some Chinese, some in the Middle East, one in France, all created nuclear energy at the same time. It can be used for good and it can be used for bad. And it’s not really controlled, or controllable, by governments. So one thing that our government should be doing is helping our companies make sure that their cybersecurity is up to snuff, that they can look for insider threats–researchers who might be spying for foreign governments; we know it’s happening–and then a super basic one that amazes me: there are laws that mean if one of our companies fires someone because they’re an insider threat, it can’t tell its competitors–so that guy just gets hired across the street two weeks later. So those are really pretty basic brushing and flossing things that we should be doing.
ARIA:
And so are there other ways–I think, you know, when people talk about the dangers of AI, a lot of people actually go to defense. They say, “Oh my gosh, we’re going to have these autonomous drones, AI is going to be doing all these things,” and people get nervous. Yet at the same time, there are so many opportunities for technological innovation, or just for getting rid of the red tape and bureaucratic waste that exists right now in our government and the Department of Defense. Are there examples you’ve seen of AI being integrated right now into national security work? Are they using these AI services in the Department of Defense, or what would you like to see?
ANJA MANUEL:
We’re starting, but it’s a trickle, not a flood. You know, we could be doing a lot better. There are some really cool pilots happening within DOD. One, that I thought was hilarious, is they’re using AI just to analyze the, you know, enormous stack of regulations that they have to follow for every single thing they do. And the AI says, very quickly, “Oh, you can do this or you can’t do this.” It’s sort of ridiculous, but [laugh], but that’s a good one. And as you know, there’s the Replicator program, which has been put in place under the leadership of Kathleen Hicks. That’s going enormously well. From basically a standing start, we’re now doing a lot to both manufacture and acquire inexpensive autonomous weapon systems, and integrate them with everything else we’re doing. So we’re starting to do it both in the tooth–which is what we in the national security community call it–in the weapon systems, and sensing, multi-sensing, “How do you react faster? How do you command and control better?” All of that is starting to happen, and in the back office, making production more efficient. Here’s another one for you–just using AI to understand when you need to do maintenance, so you don’t do it too early and you don’t do it too late. I mean, there are just so many processes. This is a huge lumbering organization. Over 3 million people, $850 billion budget. Oh my god, could AI make it more efficient.
ARIA:
I love how the things that need to be done, to your point, are like some of the boring stuff. Like, “Any bureaucracy can be made more efficient and AI will help us do that.” What are the impediments to doing it? Is it legislation? Is it financial? Is it just stasis? What do we need to be able to move faster in this regard?
ANJA MANUEL:
So Aria, you asked about the impediments: there are too many to count and list. And it’s everything. This is an organization that is extraordinary–the people in the U.S. Pentagon are patriotic, they are kind, they are good human beings, and they have the welfare of the United States at heart. And they’re not on the cutting edge of technology. And I know, Reid, you tried [laugh] at the Defense Innovation Board. I’ve tried at the Defense Policy Board. There’s just problem after problem. Very hard to get technologists in there, in the acquisition side, in the user side. Very hard to get people trained up. Very hard to get people allowed to experiment and use AI. And then it’s impossible to get money to do it because the procurement processes are slow, and painful, and there’s a lot of red tape. And so it’s just problem after problem. So when you look at some of the young, cool defense tech startups that are making inroads, what they tend to do–and this gets really national security nerdy–is they go to one of the combatant commands, like basically the special ops command, because they’re more tech forward, and they get small contracts with them, and then they try to get into the mothership. But every Secretary of Defense that I’ve ever interacted with has tried to reform that process, and I know the Trump administration is going to try, and, oh boy, it’s not easy.
REID:
And part of this is this question which we’ve talked about before, including at the Aspen Security Forum, which is how do we get kind of the tech forward companies and industries that the U.S. excels in better connected to the Department of Defense’s mission and engagement? Because the procurement process is a terrible way of doing this. The prime contractors are so far behind in software, I don’t know what decade they’re in, and that of course goes to everything else. And so, what do you think about–you know, it’s the startup contenders, it’s the scale up contenders like Anduril and Palantir, it’s the hyperscaler companies and other startups–what are some of the ways that, if historical constraints weren’t an issue, you would be saying, here’s how we can be bold to try to leverage these American assets for American capability?
ANJA MANUEL:
Right. So Mike Bloomberg and a small team are just out with a report on exactly that. They have some really good recommendations. My out-of-the-box one, that I always tell the DOD–and they sort of smile and they’re not going to do it–is why don’t you hive off a super tiny part of the acquisition budget–the R&E. So the acquisitions–we spend $280 billion a year as the Pentagon buying things–about half of that is things you need for soldiers: housing, food, transportation. The other half is research and equipment, so that’s the actual arms, okay? Of that, let’s say you take 1%, that would be $1.4 billion, and you treat that like you would a venture portfolio, and you tell the people who are in charge of that, “Make some investments, buy some stuff. We’re going to assume that nine out of ten of these things are not going to succeed, and that one in a hundred is going to be a total game changer.” And just get the muscle going for people to think a little bit differently. So, you know, we’ll see. Maybe the new team coming in will push on that. I’m actually pretty optimistic that some of the people who are going into the Pentagon are people who’ve been in industry, and been in tech, so hopefully they can drive a little bit of change.
REID:
You know, part of the question will be, how do we get the organization to rate itself based on some amazing successes, not on the average result? And what do you think is necessary to get that org culture evolution, even for that small part, into the right shape?
ANJA MANUEL:
My guess is, and you and I can riff on this all day long, you need to hire some new talent, and put them in with the current talent. Like in a way that when you saw DIU version 1.0–
REID:
Pi, could you explain?
PI:
In the context of American government, DIU stands for the Defense Innovation Unit. It’s an organization within the Department of Defense that helps the military adopt new commercial technologies more quickly. It was founded in 2015, and has been called the Pentagon’s Innovation Experiment.
ANJA MANUEL:
They brought out, to Silicon Valley, a bunch of really well-meaning smart folks from the Pentagon, who had no clue how the valley worked. And then they moved to version 2.0, and they had people who were in the valley, and who were making inroads. And now we’re kind of on version 3.0 and it’s working pretty well. But I think it’s small, like all cultural change in organizations–you’ve led lots of big companies–it just is small and incremental, and you get there by notching up a few wins, and then driving leadership from the Secretary of Defense level. And I would tell you, Bob Gates, who’s my business partner–we’ve been having lots of conversations about DOGE, you know, the Department of Government Efficiency, and the new things happening at DOD–and his view, which I think is very wise, is, you can drive change, but it has to happen from the Secretary of Defense level, and you have to bring your organization along. You can’t just shove it down their throats because otherwise you will just build too many antibodies.
REID:
So, you know, obviously part of navigating this technology is that there are both huge opportunities, but also risks, and things to navigate. And so, what’s your kind of–we’ve been addressing some of the opportunities, and some of the things we can do–what are some of the risks that you particularly think are important, especially within the national security arena, and the kind of U.S. position in the world, and what are the things that we need to do to potentially start thinking about navigating them?
ANJA MANUEL:
Just as AI is going to intersect with a lot of other technologies to do an enormous amount of good in the world, it will intersect with other technologies in ways that are deeply worrisome from a national security perspective, and let me give you a couple of examples. Cyber, you’re already seeing it happen. Soon, if not now, the best AI models will be able to write their own malicious code. They’re already doing that. What you’ve seen in the last few months is an insane uptick in how good, and tailored, and specific phishing messages are. I mean, I feel like I get them every single day from fake DocuSign–[laugh], don’t click on those [laugh]–and, so that’s kind of the beginning. And, you’ll also then of course see the countervailing, the defense side. I know some of our cyber companies, Palo Alto Networks and others, are already totally revamping their product suites, so that the cyber defense is AI enabled, just as the offense and the bad guys are AI enabled.
ANJA MANUEL:
So that’s one. Two, how AI intersects with chemical and biological weapons. Whenever I say this, people say, “Well, don’t you still need a wet lab? And it’s hard.” Yes, but the difference here is it’s not like someone tinkering and looking up a Google search of, “How do I make a bio weapon?” It’s already like having a PhD student perched on your shoulder, and it allows real novices, non-state actors, really unsavory folks, access to knowledge and technology that would have been unfathomable two years ago, frankly. So that’s chem and bio. Then autonomous weapons–we started talking about it in the positive way, and I do think this is in large part really positive–you are seeing an unbelievable race. Like, I’ve never seen a scramble happen so quickly. From when you first started seeing it–the poor Ukrainians were using essentially hobbyist drones, [laugh], and then the Russians were using electronic warfare to get those out of the sky, and so they’re using better ones–the amount of innovation that has happened in the autonomous weapons space in the last year, or 15 months, has been just unbelievable. So that’s all good, and the U.S. in particular should run faster on that. It also has a really dangerous downside, because if the AI is always going to be faster than a human, the incentive is always to escalate. So if the AI fighter pilot is faster than our fighter pilot, just shoot them down. Don’t wait for the human in the loop. So it’s very escalatory. So that’s three. Then of course we’ve all talked previously about models jailbreaking their own safeguards. That’s an issue not just for national security, but just in general.
ANJA MANUEL:
And then people always ask about nuclear. I have to tell you, I don’t know why, I worry slightly less about this. AI being used in command and control of nuclear weapons–that’s what people are terrified about. And deterrence breaking down because of these sensors, and because you might be able to get at vulnerabilities in other people’s nuclear weapons systems. There are people who are very smart, and who worry about this all day long, in the bowels of the Pentagon. And there are people who are very smart and very worried about this in China and in Russia. And I think one of the great last things that President Biden did is he reached an agreement with Xi Jinping, just last November, in 2024, saying on nuclear command and control, there will always be a human in the loop. So that’s one important area where we’re actually talking to the Chinese.
ARIA:
So I’m asking the obvious question. I hear about cyber, I hear about bio weapons, and you know it makes your brain go to terrible places. What can we do? What are the ways that we can fight back, or use AI, to make us safer against all of these things that AI makes more dangerous?
ANJA MANUEL:
Yeah, very good point. So A, of course, we’re going to be working on good AI to defeat all of these negative things. I think we can do two really important things. One: narrow, national security focused testing of the largest foundation models. This is what the U.K. AI Safety Institute did so effectively. The U.S. now has a very nascent, fledgling, little AI safety institute. I would hope that the new administration coming in keeps that in some form. Look, tear it down, rename it, call it the “National Security AI Institute” or the “Anti-Woke AI Institute,” but do super basic national security focused testing on these issues that I just ran through. Can it jailbreak? What does it do with chem and bio weapons? What does it do with respect to cyber? Is it malicious? Is it tricking, [laugh], the humans? Right? Those types of things. And frankly, the large model companies have been really impressive. Almost all of them have voluntarily made their models available for testing, and the testing was happening quickly, in days or weeks. So this isn’t like an FDA process that’s going to slow innovation for months and months. This is fast, and we should keep it going. Second point: we have got to talk to the Chinese. That’s really hard, and whenever I say this, people say, “Well, won’t they just be milking us for information?” A little bit. But, I’ll give you an example. The U.S. was first to the nuclear race. We dropped bombs on Hiroshima and Nagasaki in 1945. In 1946, the United States came out with what’s called the Baruch Plan–it didn’t go anywhere–but we basically said we need to start having conversations about how to control, and think carefully about, the safety of this amazing new technology.
ANJA MANUEL:
I think that’s kind of the era we are in now. Nuclear–it’s a very imperfect analogy–but you already have the very beginnings of this. You have scientists speaking to each other. Stuart Russell, and some others, have what’s called, in our world, a Track II–so non-government scientists talking to each other. Reid, you and I are a part of a dialogue with the Chinese about AI. There was a government-to-government dialogue. So far these dialogues haven’t gotten anywhere, but that’s okay, because in the beginning you just build trust. You make sure that the right people are at the table. You see that the other side has people of goodwill, and then gradually you move to ideas about how you might make sure that neither of you destroys the world.
ARIA:
I think that’s good. I think that’s a good place to get: neither side destroying the world. Love it. You talked a little bit about this, that the U.K. is sort of ahead of the U.S. when it comes to regulation and safety. Do you think the U.S. needs to catch up to where the U.K. is? Is there a fear of overregulation in the face of, “Well, if the Chinese aren’t regulating and we are, then we’re actually falling further behind?” How do you see those trade-offs?
ANJA MANUEL:
Yeah, so what I described with the U.K. AI Safety Institute is so narrow. This is not like the E.U. AI Act–which I think in some ways is good, and in some ways it probably goes too far–it’s just focused on where AI intersects with other technologies in a way that could really harm people in the physical world. This has nothing to do with human rights, it has nothing to do with discrimination–all of those things are probably important–but I’m just saying that, from my perspective, as a baseline that’s important. I think it’s fine that the U.K. is first. They’re better funded than we are. We should probably scale up our effort. There have been 10 or 12 little copycat AI safety institutes–even the Chinese have a little baby fledgling one. All those guys are now talking to each other, and that is all positive. We should just be doing more of that, more rapidly, funded better.
REID:
Well, I think part of what, very fortunately for us and for the world, the U.K. AI Safety Institute deliberately kicked off was a global exchange–so, you know, after the summit in Seoul, the next one is coming up in February, in Paris. I know that they’ve had great collaborative relationships with the U.S. and others, so I think it’s the right thing, and it’s kind of saying, “Hey look, let’s just try to create a very trusted environment to make sure that the safety side of these things, at the high end, is really well managed.” I mean, one of the challenges in international treaties in this arena is it’s really hard to validate. It’s really hard to actually know and monitor what the other side is doing. I mean, we instituted under nuclear a whole bunch of different protocols to try to keep countries aligned. Which worked really well in nuclear, seemed to not work quite as well in bio for exactly this reason. So what’s kind of the current edge of thinking about making treaties work here?
ANJA MANUEL:
Look, I’m an international lawyer by training. I’m a big believer in treaties, international agreements, executive agreements, even if they are sometimes abrogated. It’s better to have a law and have some people violate it rather than having no law. However, I think we are so far from a treaty. We just are not there. You know, back to my imperfect nuclear analogy: if this is 1946, the first real limitation on nuclear weapons didn’t happen until 1963–and we had to go through the Cuban Missile Crisis, and really terrify ourselves [laugh], before we got there. So what I’m proposing is super narrow. There are just the safety tests, very limited. There is talking to each other about safety and how you would monitor safety. How do you do ongoing monitoring of the models–and there are some really interesting technological ways that people are thinking about that. And then just having the scientists, in particular, and some of the policy makers, get to know each other and trust each other. That sounds really fuzzy, but it’s really important. Ash Carter, who was a mentor of mine, always talked about how, when he was a young scientist, he would go to the Soviet Union to do these safety talks around nuclear weapons. And part of what happened is the people started liking each other. And so they would call and say, “Hey, we’re worried because this is going wrong. Can you help us with this?” And that’s what we’re trying to establish here. It’s small, it’s piecemeal, but I think it’s unfortunately where the world currently is.
REID:
I think one of the things that may be helpful for people is to kind of get some sense of why the soft connections can actually be so important to ultimately doing this. This is one of the reasons why you’ve led a number of Track II dialogues–you’re doing some now–and why I participate. Say a little bit–because it may seem a little odd to your average person–about the importance of Track II dialogues, and how they function.
ANJA MANUEL:
Yeah, let me, actually–that’s a really good question–let me back off and talk about how the relationship with China currently is, because these dialogues are only important because the relationship is so terrible. Because we went from knowing each other pretty well–you know, five, six years ago, all of us were in China constantly. We knew a lot of the senior leadership on the political side. We knew a lot of the senior leadership on the economics and tech side. And so you had fewer misunderstandings, because people were just exchanging all the time. And if you don’t have misunderstandings, you’re less likely to come to blows. Okay? During the height of the Cold War, the U.S. and the Soviet Union always had really good military to military communications. So, when something went wrong, as it inevitably does, everybody knew who calls whom–from the theater commander all the way up the chain, to the Secretary of Defense, and to the leaders of the countries. And that worked pretty well. We defused a lot of problems.
ANJA MANUEL:
The Chinese are famously very hesitant to do that. And when we bring it up, they’ll say, “Well, if we do that, you’ll just take more risks. We just want you not operating near our shores. We want you out of the South China Sea. Out of the East China Sea. Out of the area around Taiwan. So stop doing that. And then we don’t need to communicate.” Our view is, this is freedom of navigation, we’re going to do it anyways, we need to communicate. And that’s one example on the military side, but it carries over into everything else. If we, on this AI Track II–or the three or four that are going on–can start understanding where the Chinese science is going, what they’re pushing towards, what they’re worried about, can we help them understand that we are not trying to threaten them? And then can they help us understand how they’re not trying to threaten us? Because right now, man, if you read the Chinese press, the vitriol against the United States is extreme. And here it’s not quite as extreme, but boy, you never hear anything good written about China.
ARIA:
If five or six years ago, we were in a better place with that relationship, do you think it’s possible to get back there, to get to that good relationship with some of the countries that aren’t considered allies anymore?
ANJA MANUEL:
You know, in international relations, personalities matter, especially when you have an authoritarian regime. And you have, in Xi Jinping, the most authoritarian leader of China since Mao Zedong. And I firmly believe that if China had continued to have leaders more in the mold of Hu Jintao, the previous one, or some of the ones throughout the 1980s, who were–you know, they were authoritarians. [laugh] Don’t get me wrong, they did the crackdown on Tiananmen Square–but they were opening the Chinese economy, and they were allowing some rule of law to happen. And I’m going to be really careful here, because the U.S. foreign policy community has really beaten itself up over the last five years saying, “Well, we just assumed the Chinese, if they were open economically, would declare democracy, and they’d become just like us.” I think a lot of people didn’t actually think that. But if China were a country that’s open economically, even if it were an authoritarian regime–light, like, you know, Singapore, [laugh], the United Arab Emirates, there are a lot of regimes like that that we deal with–we would’ve been fine, and we would’ve had a much more positive outcome.
REID:
But actually, one of the things also, in addition–like, just on the optimistic side on AI, in addition to, you know, “What are the capabilities, how does it make us stronger?”–I think there are a couple things I might add in, and see how you reflect on them. One is that part of what’s actually very helpful for global stability is having good intelligence and good monitoring, and so obviously AI can add a bunch to that, even within the military sphere. I presume that would be something you’d go, “Yeah, yep, that’ll work.”
ANJA MANUEL:
Yeah, there is about to be, or already is, an AI revolution in intelligence gathering–so both the collecting, because now sensors are so ubiquitous, between satellites and civilian sensors and everything you have, and then analyzing what has been collected. So that’s really great for our intelligence community. It is also really convenient for authoritarian regimes who want to surveil, or rein in, their own people, right? So here we have again the yin and the yang. And it’s the same technology: it can be used for really positive things, and, kind of, for darker ones.
REID:
Agreed, and it’s part of the reason why being the technology leader is so important for the course of humanity throughout the globe. Another one is, I actually think it’s very important for us to win the various parts of the economic race, because too often–in, you know, kind of national security and defense, especially within the hallowed walls that you and I have tread on occasion at the Pentagon–it’s considered as material, physical, you know, hard power, et cetera, and not as much kind of the global economic sphere. And I actually think that the fact that we’re leading this revolution on the economic side is actually, in fact, a very good thing from a national security point of view. And I’m curious also for your commentary, kind of the pros and cons and the detail of that.
ANJA MANUEL:
It is a hundred percent important that we’re leading on the economic side. Look, sometimes when you read our press, they paint a very dark picture of what’s going on in the United States. We’re the fastest growing large economy on earth [laugh], right? Our innovation system is going gangbusters, right? Not everything is perfect, but by and large we’re doing really pretty well. And if we can translate these huge gains that we’re getting in innovation into labor productivity–there’s a really good report out, by, I think, Accenture and Microsoft, that talks about just how much more labor productivity they think we will get, and how much it will add to GDP; I think they have something like $3 trillion over 10 years, like, enormous–all of that is great because, frankly, you can’t lead with military power alone. You lead by being the beacon of the world in your values, in your innovation, and your economy. And all of that I think is deeply positive.
ARIA:
Well I think related to that, as of this recording, there’s a new administration who has come into office. And so what is the advice that you would give to them, for how to sort of look forward and deal with this new intersection of AI and national security?
ANJA MANUEL:
We talked about it a little bit already–and I think this is advice for the new administration and the past administration–our most innovative companies could use some help running faster, and doing that in a way that is actually safe and secure. So we talked about one: electricity permitting. Two: making sure that the data centers and the GPUs are mostly either in the United States or in friendly countries. Bringing swing states along–let’s come back to that–but I think the Biden administration made a good start on that. We could probably do more. And then on the flip side: helping our most innovative companies–they don’t have to be shut down. I’ve heard someone write, you know, “You should nationalize the biggest AI companies.” That’s a terrible idea [laugh]; I’m more of a small government person–but helping them think through, giving them real advice, on insider threats, on people spying, on information sharing without instantly having legal problems, on thinking carefully, frankly, about the antitrust strategy–rather than just having that be, “Oh, they’re too big, therefore we must break them up”–we just need to think about all of these things from a whole of society perspective, rather than in those narrow silos.
REID:
I completely agree on everything you were saying, because in part it’s like, how do we harness these positions of leadership to the benefit of America, the American global order, and a set of things? And I think that engaging with having these large tech companies deploy in ways that are positive intersects with something you gestured at, which I think is very important: also winning the alliance, the political support, the hearts and souls of the swing states. And I think that’s a key thing to do, and so could you go a little bit more into that?
ANJA MANUEL:
So we’re doing really well in AI. The Europeans, much less than we would’ve hoped, frankly–I would hope that there’s more innovation there. But who’s doing well? The United Arab Emirates. The Saudis are beginning to. They’re putting in a huge amount–I mean, if you look at the numbers, they’re just staggering: a hundred billion in a new AI fund in Saudi Arabia. More than that in a new AI fund in the UAE. The UAE has a very good foundation model in Falcon–when I was just there a month ago, you know, all of their best and brightest want to work for G42, which is their big AI company, or want to work at the new AI ministry–and they’re really impressive. And then there’s India. India doesn’t have as much money to throw at the problem, but boy, do they have some good engineers, and they have some people in the government who are thinking very strategically about it. So for example, the Indians have put together, or are in the process of putting together, a national compute resource in the way that we’ve talked about here, so scientists can also have access to chips and GPUs. The Indians are actually doing it. So there are some good things happening in all those places, and you could imagine a situation where, with our friends and allies, we say, “This is going to be one interoperable ecosystem and we’ll work with you.” So let’s say there’s a huge amount of free geothermal electricity in Iceland. Can you put some of the data centers there? Can you protect them? Maybe. Do they all have to be in the U.S., where permitting is terrible? You know, you could imagine some coalitions that are really positive. You have to be a little careful, on the flip side, that the swing states are not playing China and us off against each other to get more goodies, which you see in some cases. And so let’s cooperate, let’s just not be naive.
ARIA:
And just for our listeners who think of swing states as Michigan, Pennsylvania, and Nevada, how would you define a swing state? Is this a country that has some innovation, but has not yet sort of chosen a side? How would you define that?
ANJA MANUEL:
Oh, that is a very good point. I would define swing states, in the national security context, as ones that want to have close diplomatic and economic relations with both the U.S. on the one side, and Russia and China on the other side. And by the way, there are a lot of those. Whenever I hear, “Oh, this is a new cold war,” in our press, you know, between the U.S. and China, I cringe, because this is actually very little like the Cold War with the Soviet Union. I think that is a very oversimplified analogy, because the Soviet Union was self isolating. They just traded with themselves and the other Warsaw Pact countries, and they had some good scientists, but they were all employed by the government. There wasn’t really a natural innovation ecosystem. The Chinese are the opposite of that. They’re the largest trading partner of something like 120, 130 countries around the world. They’re much better at trading than we are. They are unbelievably innovative and impressive–as evidenced by the fact that for the past year we’ve all been talking about how far behind they are in AI, and it turns out they’re not really behind [laugh], right? So I would never count them out. And that means–I’m so glad you asked about swing states–swing states are countries that haven’t quite made up their mind who they’re with, and frankly, in a lot of cases, don’t want to. They’d like to be able to work with both of us.
ARIA:
And is it in our interest not to force them to, because if we force their hand, they might not choose us? Are we okay with having partners, whether it’s India or the UAE, that are sort of choosing both sides? Or do you think we have to make them make a decision?
ANJA MANUEL:
Not in all things, but in certain very narrow things that impact our national security–I’m sorry, you’re going to have to choose. If you want our most advanced chips, and you want to build models there, then you should not be leaking those chips to China. And, you know, just like we were talking about safety for our companies–by safety I mean safety from spying, cyber, all these kinds of intrusions–I think companies in allied and friendly states should also be taking the same precautions. I mean, I’ll give you a perfect example. My son, who’s in high school, spent last summer working at the Pasteur Institute in Paris. Every researcher was Chinese. Which is great–on the one hand, that’s wonderful. You want basic science interactions. But is there also a little bit of checking what’s going back, what shouldn’t go back? It’s a hard problem.
REID:
Yeah, I mean, I do think that the question of the Track II and building bridges, et cetera, is extremely important. It’s actually even more important with the global swing states. Given your international lawyer background, I think I’ll kind of refocus on the cybersecurity thing, because to some degree my optimism about an AI global treaty probably would be preceded by optimism about a global cyber treaty. So, what would be–if we were going to try to do a cyber treaty–what would be the key things to try to make it work? Because I think most people don’t realize that essentially there’s almost a state of piracy and war in the cyber environment, and it’s weird just because it’s not visible.
ANJA MANUEL:
The cyber environment is very unusual because the incentives are all to use cyber weapons all the time, at a low level. Because attribution is so difficult, you don’t ever quite really know who’s doing it. Often the people who are targeted by it don’t want to admit that they were targeted. We don’t have very good information sharing. Or if we do, then we punish companies for saying that they’ve been hacked. So it’s a particularly difficult one. You know, if I were running the world [laugh], what would be in a cyber treaty would be everybody abides by the Geneva Conventions. So the Geneva Conventions, very famously, are these international law treaties put in place about a hundred years ago, really after the end of World War I, to talk about things that are beyond the pale. So killing civilians is beyond the pale. Not returning POWs, shooting POWs, is beyond the pale. Bombing civilian infrastructure, when those people aren’t combatants–it’s things like that. And those, by the way, in cyber war, everyone’s violating all the time.
REID:
So if you were able to wave a wand, how would you try to make that work?
ANJA MANUEL:
I think we’re just not there yet. It’s just too difficult, and actually, now I’m going to sound really tough: I think we haven’t really brought the fight to our adversaries who are attacking us all the time on this cyber side. I think we are much more hesitant to do what’s called “offensive cyber operations” than they are to us. And maybe if they get thumped a few times, they’d be more willing to come to the table.
REID:
Now let’s move to rapid fire. Is there a movie, song, or book that fills you with optimism for the future?
ANJA MANUEL:
Yes, and in fact Kim Stanley Robinson wrote it: The Ministry for the Future, and I saw that he was recently on your podcast. That is an amazing book. It is all about how humanity does everything wrong with climate change, and then we get our act together, and using technology that’s currently available, we actually solve it. And it’s particularly lovely because cryptocurrency and blockchain kind of save the world, and the Salesforce Tower in San Francisco plays a big role. So anyway, I love that book.
ARIA:
Rapid fire question number two, what is a question that you wish people would ask you more often?
ANJA MANUEL:
The one you all asked right up front: you know, what is it about my background, or the background of anyone you’re talking to, that motivates them to do the things they do? Because there usually is something, and for me it really is as simple as: I wasn’t personally affected by war, but I saw war very up close as an observer very early, and it really impacts the future of your life. So that’s why I like to be involved in diplomacy.
REID:
Where do you see progress, or momentum, outside of your industry that inspires you?
ANJA MANUEL:
So, I am not in the AI industry, and the progress and the momentum I see there is so unbelievable. I’m going to just give you two examples. My husband works at Google X. What they are doing at the intersection of AI and clean tech! They have an amazing AI that looks at the grids–outdated, very large machines, our electricity grids all around the world–and makes them way more efficient, way more resilient, much clearer about when you need base load and when you don’t need base load. So that’s totally amazing. And then, I mean, I don’t need to tell you all because you’re steeped in this ecosystem, but how AI has democratized science? So my high school kid is literally working on identifying new proteins–and I don’t even understand what he’s talking about because it’s not my world–but you know, he’s a high school kid who’s playing around with AI models and building some of his own to think about the biologics of the future, and that’s really fun. So if you’re a young person going into science and technology today, what an amazing time. I think there’s never been a better time.
ARIA:
Well, can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years, and what’s our first step in that direction?
ANJA MANUEL:
If everything breaks our way, the U.S. and China will find a way to get along as two great powers. And think about what would happen if–instead of this pretty dark path that we’re currently traveling down, with ever greater distrust, harming each other in cyberspace, saber rattling, and everything else–here’s an alternative future, okay? Both countries are comfortable enough with each other–we’ll never be friends–but we’re comfortable enough with each other that we can pull back our military stances. We have to spend less money on our militaries. That frees up resources for pensions, [laugh] which need to be shored up in both countries, for healthcare, for collaborating again on innovation, and science, and technology–all of the things that are going to help all of humanity. It frees up an opportunity to really work on climate change together–currently those efforts are really pretty moribund. And it allows us–in a way that protects our workers, and thinks about how we can prosper, all of us, not just those of us in Silicon Valley–to let goods flow more freely between the U.S. and China, and the rest of the world, and we get on a pretty positive path. So that’s what I’m working towards. It feels discouraging now, and it feels even almost naive to say that, but I think unless you put a big positive goal out there, you’ll never achieve it.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Rellis. And a big thanks to Janet Om, Tiffany Freeza, Melia Agudelo, Clean Cuts, and Little Monster Media Company.