This transcript is generated with the help of AI and is lightly edited for clarity.

GINA RAIMONDO:

Right now I assess that most Americans are more scared about AI than they are excited about AI. When you talk to the everyday American about their life, their job, their job security, will their kids be able to get a job?—the anxiety’s pretty high. Even if AI could replace every lawyer―and I’m not saying it could―how does that serve our country?

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way. What can we possibly get right if we leverage technology like AI, and our collective effort, effectively?

ARIA:

We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.

REID:

These conversations showcase another kind of guest. Whether it’s Inflection’s Pi, OpenAI’s GPT, or other AI tools, in each episode we use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

Even if you aren’t a hundred percent up to speed on artificial intelligence, you know that AI as a technology is evolving quickly. Let’s talk about what that means. The processing power of AI systems has been doubling every six months since 2010. And the same amount of processing power is going further than ever before. Every nine months, better algorithms essentially double AI systems’ processing efficiency. That’s twice the power, going twice as far, on very short timelines.

REID:

When we think of industries known for rapidly adapting to technological innovation, government is not generally at the top of the list. Yet our government plays an incredibly critical role in shaping our future. American businesses depend on clear guidelines to compete on a global stage. Our workforce depends on equitable rules and regulations. And our national security depends on staying ahead of, or at least in line with, the international curve.

ARIA:

Today we are so excited to be sitting down with U.S. Secretary of Commerce, Gina Raimondo, who works to shape America’s AI policies on both the domestic and international levels. With a career that spans law, venture capital, and politics, Secretary Raimondo has been instrumental in driving AI initiatives, including the recent expansion of the U.S. AI Safety Institute leadership team.

REID:

Adept in navigating the intersection of technology, commerce, and public policy―she’s overseeing the implementation of President Biden’s executive order on AI. Her work is focused on ensuring that AI development is safe, trustworthy, and beneficial for all Americans.

ARIA:

And now here’s our conversation with Secretary Gina Raimondo.

REID:

Secretary Raimondo, it is awesome to have you on the Possible podcast. Tell us a little bit, to kind of set the landscape: What’s the federal government’s current approach with regards to AI? What do you foresee as the next major action from the federal government on AI? And how might everyday Americans experience the impact of these developments?

GINA RAIMONDO:

I think, when we think about how the federal government’s approaching AI, what I’m trying to do is make sure that we have as much focus on enabling innovation as we do on keeping a lid on what’s dangerous. And those are the two buckets of our work. So on keeping a lid on what’s dangerous―I’ll just throw out a few things and we can go wherever you want to go―here at the Commerce Department, I’m right now in the process of standing up an AI Safety Institute, staffing it primarily with scientists and engineers, whose job is to really get into the science of AI, to figure out: What is adequate red teaming? What is adequate watermarking? What are best practices for how AI should be developed? You know, that’s kind of one bucket of quote unquote safety. I also spend a lot of time thinking about, “How do we keep our most valuable national assets―whether that is models, model weights, or most advanced chips―from getting into the hands of our competitors and adversaries so that they can’t overtake us?”

GINA RAIMONDO:

So those are the two things I would put in the, you know, “keeping us safe” bucket. But as you can imagine, my own view is that the United States leads the world in AI right now, and we’ve got to stay there. Right? We can’t just play defense. It’s all about playing offense. So what does that mean? It means we can’t be too restrictive, because we have to have allies around the world. It means we have to be creating standards that enable adoption of AI and use of AI, because that’s the way, you know, to more innovation. It means we need to make chips in the United States, because AI runs on chips. I’m also running a new initiative called Tech Hubs. We’re going to be looking for places all around the country―outside of Silicon Valley and New York City and Austin, Texas. You know, places like Chicago and Denver are putting a ton of money into quantum. Well, we want to encourage that, and maybe one of those will become a tech hub where we put money into that. So the bottom line is: simultaneously keep a lid on the risk, but also provide the resources and the standards and the guidelines and the job training―I could talk all day about the job training―to make sure we run faster than anyone.

REID:

On, you know, both innovation for the future and safety, a lot of your focus is on how this helps the hundreds of millions of Americans. Obviously it’s through industry and various kinds of interactions, but say a little bit about how each of these maps to kind of what your hopes and goals are for American citizens broadly.

GINA RAIMONDO:

Right now, I assess that most Americans are more scared about AI than they are excited about AI. I was in St. Louis yesterday; I was in Illinois last week. When you talk to the everyday American about their life, their job, their job security, will their kids be able to get a job?―the anxiety’s pretty high. Parenthetically, my niece just graduated from Columbia Law School. I went to her graduation. It was all exciting. You know, that’s pretty fancy, Columbia Law School, and she’s like, “Auntie, am I going to have a job with AI?” So I think that’s high anxiety. And by the way, even if AI could replace every lawyer―and I’m not saying it could, but let’s say even if it could―how does that serve our country? Right? People need to have good jobs. We can’t maintain a democracy without people having an opportunity to have a good job.

GINA RAIMONDO:

So what’s my point? My point is that the greatest challenge of our time, from this point forward―and I believe this―is figuring out how we use AI to discover medicine more quickly. To bring healthcare to people in their home more effectively, at a lower price. To educate children at the highest level, no matter their income level, or whether they live in rural or tribal America or New York City. You know, how do we do all that exciting stuff without endangering people’s way of life, making sure that people have a good job, and without, you know, some of this truly scary stuff that I was talking about? And we’re in the early innings. You know, right now, as we move towards an election, there really aren’t any penalties for deepfakes. Deepfakes exist today, but AI takes that to, you know, the 10th level. There aren’t penalties. There’s not a lot of enforcement. So I worry that misuse of AI will get out of control before the government has any kind of real, you know, regulation in place. And that will sour people’s experience. And if it sours people’s experience, it’s tough to get adoption. And if you don’t have adoption, then we don’t have innovation.

ARIA:

So I love how you laid it out with those two different sides of the coin, because I think when most people think of government, they think regulation. They think, “Oh, okay, great. She’s in government, she’s doing the regulation.” And that is amazing for keeping us safe; there are some, you know, really scary things happening that we have to deal with. But I love, again, the focus on, “Oh no, we can also create all of these things that help everyday Americans.” And so I think a lot of people are asking, “How does the U.S. government’s approach to AI advancements differ from its handling of other digital innovations over the past 30 years?”

GINA RAIMONDO:

I’m not sure we’ve gotten it right with, say, social media. Like, on the one hand, I’m glad we’re not Europe. I’m glad we have great tech companies. I’m glad we haven’t had a set of policies which have stifled innovation. I travel all over the world. The fact is we have the most innovative ecosystem in the world, the deepest capital markets, some of the best entrepreneurs, and the most leading tech companies in the world. We need to jealously guard that, invest in that, preserve that, and grow that. And so, you know, whatever policies we can have to make that happen are important. That being said―and as I say, Europe is an example where they have, you know, faltered. Ten years ago, their GDP and our GDP were roughly similar, and now we’re twice as big. There’s a lot of reasons for that.

GINA RAIMONDO:

But you take my point. That being said, I think we can hopefully, probably all agree that the United States needs federal privacy legislation. It’s a problem that we don’t have that. We should have had better regulation of social media, because we’ve seen the bad effects on kids’ mental health. It’s undeniable. So what’s my point? We’re in the early innings of AI―very early innings. Let’s not wait as long as we did with social media and privacy, such that bad things happen before we legislate. But let’s also not, you know, put a lid on innovation in the first or second inning of this game. And that’s not easy. And that’s why I think everything we do has to be deeply science and tech based, deeply collaborative, and really, truly a partnership between the public sector and the private sector.

GINA RAIMONDO:

And that is hard to do. So, for example, in building our AI Safety Institute here at the Commerce Department, we’ve recruited a guy named Paul Christiano, who was at OpenAI. We added a guy named Adam Russell, who’s at USC. Rob Reich from Stanford. We’re hiring a bunch of engineers from industry. We have a consortium of private sector partners with over a hundred companies, from the biggest to the smallest. We have disability advocates, civil society, universities—that’s what it’s going to take to really get this right. And by the way, I don’t just want businesses to come here and talk their book.

GINA RAIMONDO:

They’ll do that. That’s good. They have to do that. I get that. I don’t begrudge that. But come in here and help us get this right for the good of humanity and, quite frankly, for the good of your business. Because if this gets over-regulated or wrongly regulated too soon, that’s bad for everybody. And also, if it doesn’t get regulated at all and spirals out of control and has pernicious unintended effects, then there’ll be regulatory backlash. And that’s bad for everybody. So I do feel like this is kind of a, “Everybody, you know, get religion, that this isn’t business as usual. This isn’t any old technology. This is kind of the whole game.” And let’s break out of our traditional ways of operating and thinking as between the private and public sector, and lead the world in innovation and in governance of this stuff.

REID:

One of the great things you guys are doing at Commerce is managing the CHIPS Act. And you know, when you look over the last three decades, U.S. companies have increasingly outsourced labor and manufacturing. And partially in response to this, you’ve said you want to make building hardware sexy again. We love that [laugh]. The administration is aiming to make 20% of the world’s advanced chips domestically within the next six years. Massively ambitious. So where are we starting?

GINA RAIMONDO:

So I have to tell you something. That, as you might imagine, was not in my pre-approved talking points. I was giving a talk. That’s how I feel. That’s what I said. My 17-year-old son happened to see a clip of that on social media. He calls me, he’s like, “Don’t ever say that again” [laugh]. He was, like, totally mortified. I was like, “Tommy”―that’s my son―“it’s how I feel, man. I love it.” Look, you guys know as well as I do―when I was in the venture business before I got into public service, which wasn’t that long ago―not that long ago, if you were in venture in Boston, you were investing in hardware. And not even just Boston. You know, Cisco was a startup. There were lots of router companies, networking companies, hardware companies―pretty much no longer, right?

GINA RAIMONDO:

I mean, now it’s software, algorithms. Once again, not to date myself, but when I was in the business not that long ago, the cool thing to be at MIT was double E [electrical engineering]. Now half of undergraduates are computer science. Not bad, except we do need to make things. So it’s an ambitious goal, Reid, but we have to hit it. And by the way, I set that goal maybe a year and a half ago, before the world was obsessed with AI data centers and NVIDIA chips. And AI has changed the game on that. AI runs on chips. It’s untenable to buy 93% of our leading-edge chips from Taiwan―from one company in Taiwan―and the other 7% from Korea. That’s literally the situation we’re in. So it’s an ambitious goal, yes. Are we going to hit it? Yes. In the past six months, we have announced deals with TSMC and Samsung and Intel to build, you know, huge expansions in the United States.

GINA RAIMONDO:

And they’ve committed to it. TSMC is going to do three fabs at scale in Arizona. Same thing with Intel―Intel’s building an AI cluster in Ohio. Samsung in Texas. The pitch to them is, “Your customers are here.” Apple’s here. NVIDIA’s here. AMD is here. AI companies are here. Microsoft, the hyperscalers—we need you to be here, and we’re going to incentivize you to do it. But this is what I say: you can’t go about your everyday business and think it’s ho-hum. Just pause for 10 seconds. You are experts in AI. You know what AI could do for us in five years, not just today. It all requires compute. Literally all of the compute that you want is made in Taiwan right now. That’s a pretty scary thing if you think about it deeply. So everybody who cares about United States national security, United States tech leadership, U.S. competitiveness in AI better get religion and help me figure out how to be successful in getting to 20%. Because I do think, you know, so much depends upon it. But anyway, I think we’ll get it done. We’ve got the deals in place, we’re working well with the companies, and failure is not an option.

REID:

I a hundred percent agree. It’s not just the U.S. AI industry, because the AI industry is going to be powering key aspects of every industry. So this is about the U.S. industrial position across the board—right?—which is part of the reason why your work here is so important.

ARIA:

Sure. I mean, it feels like our moonshot moment, your point about getting religion. This is a time to be patriotic and to say, you know, “The U.S. needs to step up so that we can, you know, sort of lead in all of these situations.” And from everything you’re saying―national security, where our supply chains are―there is global competition, but there’s also global cooperation. And so, you know, you said yourself, you’ve been traveling all over the world. We’ve heard about the U.K.’s AI policy implementation around safety. You’ve traveled to Kenya, Southeast Asia. Tell us about some of your recent trips. What does international cooperation look like on AI? Is there a global coalition to be had? How can we do this together with the rest of the world?

GINA RAIMONDO:

You know, that’s a really good question, and it relates to the prior question one of you asked about how we regulate in the U.S. The other exciting thing here is that no country has gotten out so far in front of any other in terms of AI regulation. With social media and other kinds of technology—privacy, you know—we all came at it in different ways at different times, different countries. With AI, because we’re so early, we have an opportunity to collaborate with Europe, the U.K., Singapore, India, Japan, Korea―to together set standards, regulations, IP protection, et cetera, for AI. I was just in Singapore, for example. We talked about aligning Singapore and U.S. AI standards. It was a great session. We’re doing that. I had a team in Korea a few weeks before that. I’m optimistic that we can, because we’re all starting from scratch.

GINA RAIMONDO:

I mean, that’s the cool thing. We’re all starting from scratch. Reid and I were together at the U.K. summit at Bletchley. Everybody flew in to that summit because we’re all looking to collaborate from day one. So I hear that there’s an appetite to do that. Look, in terms of threats and the like, I think it’s legitimately scary to think about AI as applied to nuclear weapons, bio-terrorism, surveillance. And we’ve got to be careful, more than careful, about China. We’re ahead of China, and we have to stay ahead of China. So working with our allies to protect all of us―to keep Russia, China, North Korea from getting this tech to do bad things―is equally important.

REID:

So I completely agree with an earlier comment you made, which is, “Look, we need to invent and create new forms of public-private partnership in order to get both the innovation and the safety.” And that’s just one of the things that you and the administration are doing a very good job of. You mentioned one thing that you thought was very important for business to do, which is to bring a kind of broader mindset of how we solve this problem together, not just, you know, the narrow entrenched interest. What are the things that technology leaders, business leaders, technology companies can do to help with this innovation pattern, to, you know, help co-create this potentially, possibly awesome future?

GINA RAIMONDO:

Because there is so much anxiety among the average American about their job security, we have to get really serious about, you know, confronting reality. And I think this is an opportunity for companies―industry by industry, like a cluster within an industry―to work together to figure out how jobs in that industry will change due to AI, and what those companies, perhaps with government help and money, are going to do about it, so that people really, truly are retrained in ways that they can keep their job or keep a job. And Europe does a pretty good job of this. I think there’s a place for labor unions to be involved in this; it’s a way that labor unions can work constructively with business, almost like, I don’t know, industry associations. So take an industry―any one you want: accounting, lawyering, industrial car making, whatever―get those companies together, do a proper analysis of the next few years and what kinds of jobs will be disrupted by AI, come together—federal government, state government, companies, labor—and find a solution. Stuff like that, I think, goes a long way, because then people relax. They have a sense of security, which they deserve. They have a job, which they deserve. And then everyone else is a bit more open to pushing forward with the possibilities of AI.

ARIA:

And can you talk about—again, we’ve been thinking about the scary things that could happen, and we’re also talking about the innovation that can happen—what are you excited about when it relates to AI? What are the positive things that can happen for everyday Americans?

GINA RAIMONDO:

I mean, everything. So, one of the things I’m working on at the Commerce Department is investing $50 billion to make sure every American has the internet, has broadband. 25% of Americans who live in rural America don’t have broadband. Think about that for a second. Imagine trying to get past nine o’clock in the morning at your house without being able to get online [laugh], and then do that every day, all day. My point is, I therefore spend a lot of time in rural America, tribal America―you know, with folks that don’t have this. It’d be amazing if people could have access to a physician, a really excellent, high-quality physician, at their kitchen table in deeply rural America. Or if we could use AI to equalize the gaps in education—you know, both Reid and I have been involved in improving public schools in America; that’s actually how we first met, I think.

GINA RAIMONDO: 

AI could be an unbelievable equalizer. You know, bringing new drugs—when I talk to, like, the CEO of Pfizer and biotech companies, and they explain to me the impact of AI on the speed with which they can bring new drugs to market, and the speed with which they can innovate, that’s super exciting. You could literally go on all day. I mean, I just think it’s fantastic to think about it. You know, people with disabilities, for example—this is a great example. I’ve had sessions with disability advocates. And on the one hand, they’re scared, and I fully understand that―all the bias that’s in the world today, but on the steroids of AI: super bad, we can’t let it happen. On the other hand, it’s humongously enabling for people with certain disabilities. So anyway, I think overall there’s a million possibilities that make it very exciting.

REID:

What would be the way that you would want industry to help relieve the job fears and anxieties? Because I agree with you, that is the most central thing to help with that transformation. And what would be―as it were―the call to arms?

GINA RAIMONDO:

I think businesses need to make certain commitments to put people at ease. And I mean something like: “Your job may go away due to AI. If that happens, we will make sure you get trained for a different job, assuming you want to, you know, stay with us and learn the skills required.” And that’s why I said I think the way to do it isn’t so much company by company, but in an industry. Because if I’m running General Motors, just to pick a name out of a hat, I might not be able to say, “Okay, hey Reid, I promise you a job here.” But, you know, Ford, Stellantis, GM, Tesla together: “We know this category of job might be going away in the next five years. So here’s what we’re going to tell you:

GINA RAIMONDO:

We together, plus the government, are funding a training initiative. We want you to go through that training initiative. This new category of job is opening up in our industry, and we’re going to make sure you get one of those jobs.” So I think it’s something like that, a kind of grand bargain or social contract that we have to provide to people. The final thing I’ll say is this. You do hear people talk about, you know, human extinction due to AI. And I guess my one final comment is this: at the end of the day, any technology exists to serve humanity. And shame on us if we allow AI to be so runaway that it would lead to human extinction. Maybe it can, but how stupid are we if we let it?

REID:

One of the things that these critics don’t realize is that AI may also be instrumental in preventing human extinction—like against pandemics, or figuring out what to do if we have an asteroid issue, or other kinds of threats. And that’s part of why I think the dialogue on this is malformed. But I a thousand percent agree with you.

GINA RAIMONDO:

People can’t open their ears to the goodness until we help them―all of us―get a little calmer about the scary stuff. But when I hear people talk about the runaway train of “nobody will have a job, human extinction”—like, we won’t let that happen. Technology exists to enhance humanity and health and education and the human experience. So let’s just make sure we have guardrails so that―I guess I’ll end where I began―we maintain our competitive lead in the world and innovate in a way that makes people smarter, healthier, and more prosperous, not extinct.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Karrie Huang, Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Sarah Weinstein, Charlie Andrews, and Little Monster Media Company.