This transcript is generated with the help of AI and is lightly edited for clarity.

ARIA:

Hey there, Possible listeners. So last week, as you know, I interviewed Greg Beato and Reid about their new book Superagency. If you enjoyed that episode and want more, you are in the right place. You can listen to the audiobook introduction of Superagency right after this. And fun fact: While the voice you are about to hear sounds like Reid, it’s actually not the real Reid. It’s a recording from a professional voice actor, which, with the help of some cool voice-generation technology from Respeecher, we converted into a clone of Reid’s voice. And honestly, I work with Reid every day, and so I think I know him pretty well, but sometimes I can’t even tell the difference, so let’s see if you can. And then, if you like what you hear, go to superagency.ai, and you can buy the book, the audiobook, et cetera.

ARIA:

And Reid’s actually also giving away custom Superagency book covers for free for a limited time. So check out superagency.ai for all of that information. And then one last note: After the audiobook introduction that you’re about to hear, we did a little AI experiment: We uploaded the Superagency intro to Google’s AI tool NotebookLM, and, in a matter of seconds, it generated a short podcast to discuss the content. So you’ll hear two AI podcast hosts discussing what you just listened to. Hopefully they’re not better than Reid and me, but you be the judge. So sorry for the long preamble; without further ado, here’s the intro to Superagency: What Could Possibly Go Right with Our AI Future? by Reid Hoffman and Greg Beato.

REID:

Throughout history, new technologies have regularly sparked visions of impending dehumanization and societal collapse. The printing press, the power loom, the telephone, the camera, and the automobile all faced significant skepticism and sometimes even violent opposition on their way to becoming mainstays of modern living.

Fifteenth-century doom-mongers argued that the printing press would dramatically destabilize society by enabling heresy and misinformation, and by undermining the authority of the clergy and scholars. The telephone was characterized as a device that could displace the intimacy of in-person visits and also make friends too transparent to one another. In the early decades of the car’s ascent, critics claimed it was destroying family life, with unmarried men choosing to save up for Model Ts instead of getting married and having kids, and married men resorting to divorce to escape the pressures of consumption that cars helped create.

This same kind of doom and gloom was applied to society-wide automation in the 1950s, when increasingly sophisticated machines were dramatically impacting factories and office buildings alike, with everyone from bakers and meatcutters to autoworkers and U.S. Census Bureau statisticians seeing their overall numbers dwindle. In 1961, Time magazine reported that labor experts believed that without intervention from business interests, unions, and the government, automation would continue to grow the ranks of the “permanently unemployed.” By the mid-1960s, congressional subcommittees were regularly holding hearings regarding the mainframe computer’s potential threat to privacy, free will, and the average citizen’s capacity to make a life of their own choosing.

Today, U.S. unemployment rates are lower than they were in 1961. The average U.S. citizen lives in a world where PCs, the internet, and smartphones have ushered in a new age of individualism and self-determination rather than crushing authoritarian compliance or the end of humanity. But with the emergence and ongoing evolution of highly capable AIs, it’s not just that familiar fears about technology persist; they’re growing.

Even among AI developers, some believe that future instances of superintelligent AIs could represent an extinction-level threat to humanity. Others point out that, at the very least, humans acting with malicious intent will be able to use AIs to create catastrophic damage well before the machines themselves wage unilateral war against humanity. Additional concerns include massive job displacement, total human obsolescence, and a world where a tiny cabal of techno-elites capture whatever benefits, if any, AI enables.

The doomsday warnings are different this time, these observers insist, because the technology itself is different this time. AI can already simulate core aspects of human intelligence. Many researchers believe it will soon attain the capacity to act with complete and extremely capable autonomy, in ways that aren’t aligned with human values or intentions.

Robots and other kinds of highly intelligent systems have long existed in sci-fi novels, comic books, and movies as our dark doppelgangers and adversaries. So as today’s state-of-the-art AIs hold forth like benevolent but coolly rational grad students, it’s only natural to see foreshadowing of HAL from 2001: A Space Odyssey, or the Borg from Star Trek, or, in a less self-aware and more overtly menacing form, The Terminator’s relentless killer robot. These narratives have shaped our worst visions of the future for a long, long time.

But are they the right narratives? The future is notoriously hard to foresee accurately—for pessimists and optimists alike. We didn’t get the permanent mass unemployment that labor experts in the early 1960s anticipated; nor did we get The Jetsons and its flying cars—at least not yet.

As hard as it may be to accurately predict the future, it’s even harder to stop it. The world keeps changing. Simply trying to stop history by entrenching the status quo—through prohibitions, pauses, and other efforts to micro-manage who gets to do what—is not going to help us humans meet either the challenges or the opportunities that AI presents.

That’s because as much as collaboration defines us, competition does too. We form groups of all kinds, at all levels, to amplify our efforts, often deploying our collective power against other teams, other companies, other countries. Even within our own groups of like-minded allies, competition emerges because of variations in values and goals. And each group and subgroup is generally adept at rationalizing self-interest in the name of the greater good.

Coordinating at a group level to ban, constrain, or even just contain a new technology is hard. Doing so at a state or national level is even harder. Coordinating globally is like herding cats—if cats were armed, tribal, and had different languages, different gods, and dreams for the future that went beyond their next meal.

Meanwhile, the more powerful the technology, the harder the coordination problem, and that means you’ll never get the future you want simply by prohibiting the future you don’t want. Refusing to actively shape the future never works, and that’s especially true now that the other side of the world is just a few clicks away. Other actors have other futures in mind.

What should we do? Fundamentally, the surest way to prevent a bad future is to steer toward a better one that, by its existence, makes significantly worse outcomes harder to achieve.

At this point we know from thousands of years of experience that if a technology can be created, humans will create it. As I’ve written elsewhere, including in my previous book, Impromptu, we’re Homo techne at least as much as we’re Homo sapiens. We continuously create new tools to amplify our capabilities and shape the world to our liking. In turn, these tools end up shaping us as well. What this suggests is that humanism and technology, so often presented as oppositional forces, are in fact integrative ones. Every new technology we’ve invented—from language, to books, to the mobile phone—has defined, redefined, deepened, and expanded what it means to be human.

We’re the initiators of this process, but we can’t fully control it. Once set in motion, new technologies exert a gravity of their own: a world where steam power exists works differently than the world that preceded it. This is precisely why prohibitions and constraints alone are never enough: they offer stasis and resistance at the very moment we should be pushing forward in pursuit of the brightest possible future.

Some might describe this as technological determinism, but we think of it as navigating with a kind of techno-humanist compass. A compass helps us to choose a course of action, but unlike a blueprint or some immutable manifesto, it’s dynamic rather than determinative. It helps us orient, reorient, and find our way.

It’s also crucial that this compass be explicitly humanist, because ultimately every major technological innovation impacts human agency—our ability to make choices and exert influence on our lives. A techno-humanist compass actively aims to point us toward paths in which the technologies we create broadly augment and amplify individual and collective agency.

With AI, this orientation is especially important. Because what happens to human agency when these systems and devices, often described as agents themselves, do become capable of replacing us entirely? Shouldn’t we slow down that eventuality as much as possible? A techno-humanist perspective sees it the other way around: our sense of urgency needs to match the current speed of change. We can only succeed in prioritizing human agency by actively participating in how these technologies are defined and developed.

First and foremost, that means pursuing a future where billions of people around the world get equitable, hands-on access to experiment with these technologies themselves, in ways of their own choosing. It also means pursuing a future where the growing capabilities of AI help us reduce the threats of nuclear war, climate change, pandemics, resource depletion, and more.

In addition, it means pursuing this future even though we know we won’t be able to predict or control every development or consequence that awaits us. No one can presume to know the exact final destination of the journey we’re on or the specific contours of the terrain that exists there. The future isn’t something that experts and regulators can meticulously design—it’s something that society explores and discovers collectively. That’s why it makes the most sense to learn as we go and to use our techno-humanist compass to course-correct along the way. In a nutshell, that’s “iterative deployment,” the term that OpenAI, ChatGPT’s developer, uses to describe its own method in bringing its products into the world. It’s a concept my coauthor, Greg Beato, and I will explore and emphasize throughout this book.

As a longtime founder and investor in technology companies, my perspective is inevitably shaped by the technology-driven progress and positive outcomes I’ve participated in over the course of my career. I was a founding board member at PayPal and part of its executive team when eBay purchased it in 2002. I cofounded LinkedIn and have sat on Microsoft’s board since 2017, following its purchase of LinkedIn.

I was also one of the first philanthropic supporters of OpenAI when it launched as a nonprofit research lab in 2015. I led the first round of investment in 2019 when OpenAI established a for-profit limited partnership in order to support its ongoing development efforts. I served on its board from 2019 to early 2023. Along with Mustafa Suleyman, who cofounded DeepMind, I cofounded a public benefit corporation called Inflection AI in 2022 that has developed its own conversational agent, Pi. In my role at the venture capital firm Greylock, I’ve invested in other AI companies. On my podcast Possible, I regularly talk with a wide range of innovators about the impacts AI will have on their fields—with a techno-humanist compass guiding our conversations. I also provide philanthropic support to Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) and to the Alan Turing Institute, the United Kingdom’s national institute for data science and artificial intelligence.

I recognize that some might say such qualifications actually disqualify my perspective on AI. That my optimism is merely hype. That my idealism about how we might use AI to create broad new benefits for society is just an effort to generate economic return for myself. That my roles as founder, investor, advisor, and philanthropic supporter of many AI-focused companies and institutions create an ongoing incentive for me to overpromote the upsides and downplay the dangers and downsides.

I argue that the opposite is true: I’m deeply involved in this technology and I want to see it succeed exactly because I believe it can have profoundly positive impacts on humanity. My engagement in this domain has meant that I’ve seen firsthand the progress being made. That has strengthened my commitment, and thus I’ve continued to invest in and support a widening range of companies and organizations. I stay alert to potential dangers and downsides, and am ready to adapt, if necessary, precisely because I want this technology to succeed in ways that broadly benefit society.

One reason iterative deployment makes so much sense in the case of pioneering technologies like AI is that it favors flexibility over some grand master plan. It makes it easier to change pace, direction, and even strategy when new evidence signals the need for that.

Meanwhile, here we are presenting our argument to you in a book.

Roughly 2,400 years ago, in Plato’s Phaedrus, Socrates critiqued the written word for its lack of dynamism and for the way it made knowledge accessible to anyone:

You know, Phaedrus, writing shares a strange feature with painting. The offsprings of painting stand there as if they are alive, but if anyone asks them anything, they remain most solemnly silent. The same is true of written words. You’d think they were speaking as if they had some understanding, but if you question anything that has been said because you want to learn more, it continues to signify just that very same thing forever. When it has once been written down, every discourse rolls about everywhere, reaching indiscriminately those with understanding no less than those who have no business with it, and it doesn’t know to whom it should speak and to whom it should not.

For Socrates, apparently, fixing his thoughts into written text represented a loss of agency. Had he turned his teachings into books himself, or rather scrolls, the reigning technology of his day, he would not have been able to control who read them. He would not have always been on hand to provide updates on his thinking, elaborate on nuances in the text, or correct misreadings. Consequently, face-to-face dialogic inquiry was his preferred technology for transmitting ideas.

But clearly generations of authors and readers thought differently. Why? Because ultimately written works increased the agency of authors and readers, enabling the latter to engage with, learn from, modify, expand upon, and, yes, perhaps even misinterpret or appropriate ideas from authors with whom they might never have otherwise crossed paths.

As printing technologies improved, books evolved into a transformative global resource. Rolling about everywhere, indiscriminately reaching everyone, they functioned as early mobility machines, decoupling human cognition from human brains, democratizing knowledge, accelerating human progress, and providing a way for individuals and whole societies to benefit from the most profound and impactful human insights and innovations across time and space.

Of course, there are myriad other ways to share information now, and we’ll be using many of them to convey the ideas in Superagency too. Along with the usual podcasts and social media, we’ll be experimenting with AI-generated video, audio, and music to augment and amplify the key themes we’re exploring here. To see how, check our website superagency.ai.

But we’re starting with a book—in part as homage to the essential truth that technologies that often seem decidedly flawed and even dehumanizing at first usually end up being exactly the opposite.

ARIA:

Alright, so that was the audiobook intro to Superagency. And then, up next is the AI-generated podcast recap from NotebookLM.

AI HOST 1:

Welcome to our deep-dive into the intro of Superagency, a book that really tackles, like, all of our AI anxieties head-on.

AI HOST 2:

Yeah.

AI HOST 1:

You know, losing our jobs, rogue AI, maybe even like a Terminator situation.

AI HOST 2:

Right?

AI HOST 1:

But they jump right into that conversation.

AI HOST 2:

Mm-hmm.

AI HOST 1:

But with a fascinating historical twist.

AI HOST 2:

Yeah. Before we, you know, freak out about AI, they want us to look back at how people reacted to, wait for it, the printing press.

AI HOST 1:

The printing press?

AI HOST 2:

Yeah.

AI HOST 1:

Seriously?

AI HOST 2:

Yeah.

AI HOST 1:

I mean that’s, that feels like ancient history.

AI HOST 2:

It does seem that way, but the book makes this, this really compelling point: The printing press, along with other groundbreaking inventions, like the telephone, the automobile—these were all initially met with, like, widespread fear and resistance.

AI HOST 1:

So instead of like, you know, embracing these new technologies, people were like running for the hills?

AI HOST 2:

Mm-hmm.

AI HOST 1:

I can’t quite picture people protesting against the Model T.

AI HOST 2:

Well, so imagine this, right: People worried that the automobile would lead to, like, a decline in marriage.

AI HOST 1:

What?

AI HOST 2:

Because couples could elope more easily.

AI HOST 1:

Oh, interesting.

AI HOST 2:

It sounds kind of silly now, but it just shows how people feared that technology would disrupt, you know, societal norms and values.

AI HOST 1:

Okay, I see what you mean. People were afraid of the unknown.

AI HOST 2:

Right.

AI HOST 1:

Of how these technologies might upend their lives.

AI HOST 2:

Mm-hmm.

AI HOST 1:

But is that really the same as our anxieties about AI? I mean, it feels like a whole different ball game.

AI HOST 2:

It’s true. AI feels different, and that’s because it simulates human intelligence,

AI HOST 1:

Right.

AI HOST 2:

in ways that no other technology has before.

AI HOST 1:

Yeah.

AI HOST 2:

Like, the printing press could reproduce words, sure, but it couldn’t write sonnets.

AI HOST 1:

Right.

AI HOST 2:

AI can do that now, and that raises some genuinely new and complex questions.

AI HOST 1:

Exactly. AI can write poems, compose music, even hold conversations that feel, like, eerily human.

AI HOST 2:

Yeah.

AI HOST 1:

So the question is, are our fears justified this time or are we just repeating the same old pattern of like technophobia?

AI HOST 2:

That’s the big question, right? And Hoffman and Beato acknowledge those fears, particularly the ones around, you know, job displacement, and the potential for AI to be misused.

AI HOST 1:

Right.

AI HOST 2:

However, they argue that trying to ban or severely restrict AI is just not a viable solution.

AI HOST 1:

So they’re not advocating for, like, a world where we try to outlaw AI?

AI HOST 2:

No, not at all.

AI HOST 1:

Okay.

AI HOST 2:

They use the analogy of herding cats to describe how difficult, if not impossible, it would be to coordinate a global effort to control AI development.

AI HOST 1:

Herding cats. Yeah, that sounds about right. Especially when you consider how many different companies and governments and individuals are working on AI.

AI HOST 2:

Exactly. It’s everywhere.

AI HOST 1:

All over the world.

AI HOST 2:

It’s not just the practical challenges either.

AI HOST 1:

Okay.

AI HOST 2:

They also argue that trying to stifle innovation in AI would be counterproductive.

AI HOST 1:

Okay.

AI HOST 2:

Because, you know, someone somewhere is gonna push those boundaries,

AI HOST 1:

Right.

AI HOST 2:

Regardless of regulations.

AI HOST 1:

Okay, so if banning AI isn’t the answer—

AI HOST 2:

Right.

AI HOST 1:

What do they suggest we do? Just, like, cower in fear as the robots take over?

AI HOST 2:

Yeah, not quite. Hoffman and Beato believe we need a more proactive approach. One where we actively shape AI’s development, with a focus on what they call human agency.

AI HOST 1:

Okay.

AI HOST 2:

This idea forms the core of their book.

AI HOST 1:

Gotcha.

AI HOST 2:

And it’s what we’ll be exploring in more detail as we dive deeper.

AI HOST 1:

Awesome. I’m ready.

AI HOST 2:

Yeah. You know, I’m sure a lot of you are thinking, like, how do we even control something as powerful and potentially unpredictable as AI?

AI HOST 1:

You took the words right outta my mouth. It’s like a tidal wave, you know?

AI HOST 2:

Yeah.

AI HOST 1:

How do we even begin?

AI HOST 2:

Right? They argue that instead of trying to control every aspect of AI, we need to focus more on guiding its development in a direction that benefits humanity.

AI HOST 1:

Okay.

AI HOST 2:

And this is where the whole “human agency” concept comes into play.

AI HOST 1:

Okay, but human agency, I mean, that sounds a little abstract. What does that actually look like in the real world?

AI HOST 2:

Well, think of it this way: Human agency is all about empowering individuals to make choices and take actions—

AI HOST 1:

Okay.

AI HOST 2:

that shape their own lives, even as AI becomes more and more prevalent. It’s about ensuring that AI tools enhance our capabilities rather than, like, replace or diminish our autonomy.

AI HOST 1:

So it’s not about humans versus machines.

AI HOST 2:

Right.

AI HOST 1:

It’s about figuring out how we can partner with AI to create a better future.

AI HOST 2:

Exactly. And they propose a pretty compelling approach to actually achieving this called “iterative deployment.”

AI HOST 1:

Iterative deployment. So are we, like, beta testing AI on all of society?

AI HOST 2:

In a way, yeah. Acknowledging that we don’t have all the answers up front, right?

AI HOST 1:

Right.

AI HOST 2:

We need to be flexible and adaptable as AI technology evolves.

AI HOST 1:

Right.

AI HOST 2:

So imagine AI being used to, let’s say, optimize traffic flow in a city.

AI HOST 1:

Okay.

AI HOST 2:

As the system gathers data and learns, it can constantly adjust traffic patterns to improve efficiency and safety.

AI HOST 1:

So we’re constantly learning and adapting alongside AI.

AI HOST 2:

Yeah.

AI HOST 1:

Making adjustments to make sure it’s aligning with our goals and values.

AI HOST 2:

Exactly. And to guide this whole process, they emphasize the importance of developing what they call a “techno-humanist compass.”

AI HOST 1:

Okay.

AI HOST 2:

It’s like, almost like a set of guiding principles.

AI HOST 1:

Okay, like a North Star.

AI HOST 2:

Yeah, exactly. That helps us make decisions about AI development that prioritize, you know, human wellbeing and empowerment.

AI HOST 1:

So how do we make sure that this techno-humanist compass actually guides our decisions? It can’t just be like a nice-sounding idea.

AI HOST 2:

You’re absolutely right. It requires a conscious effort to embed these values into the design and development of AI systems. For example, like ensuring equitable access to AI technology.

AI HOST 1:

Right. Equity is important.

AI HOST 2:

Hugely important, yeah. We need to make sure that people from all walks of life have the opportunity to shape and benefit from AI.

AI HOST 1:

Not just a select few.

AI HOST 2:

Not just a select few, right? We’ve seen what happens when technology only benefits a small segment of society.

AI HOST 1:

Right, it creates divides.

AI HOST 2:

Exactly, yeah. It just exacerbates existing inequalities.

AI HOST 1:

Yeah.

AI HOST 2:

Another key aspect of this whole techno-humanist approach is focusing on AI applications that address, you know, these pressing global challenges.

AI HOST 1:

Like what?

AI HOST 2:

Well, instead of primarily using AI to just like make our lives more convenient or entertaining—

AI HOST 1:

Okay.

AI HOST 2:

We should prioritize its use in areas like climate change mitigation. Pandemic prevention. Sustainable energy development.

AI HOST 1:

So we need to be intentional about directing AI’s power towards solutions that benefit all of humanity.

AI HOST 2:

Exactly.

AI HOST 1:

Not just a privileged few.

AI HOST 2:

That’s right. It sounds like a pretty tall order.

AI HOST 1:

It does.

AI HOST 2:

But it’s also incredibly inspiring.

AI HOST 1:

It is.

AI HOST 2:

And while Hoffman and Beato acknowledge the very real risks associated with AI, they’re cautiously optimistic that we can actually navigate these challenges if we approach this whole new era with a sense of shared responsibility and a commitment to placing human values at the forefront of our decisions.

AI HOST 1:

I like that. We’ve been talking about this idea of a techno-humanist compass guiding our approach to AI, but how do we make sure it’s not just like a lofty concept, but something that actually influences, like, you know, real-world decisions?

AI HOST 2:

Yeah, that’s where things get really interesting, and where we all have a role to play. Hoffman and Beato emphasize that this isn’t something we can just, like, leave to tech experts or policymakers. We need a much broader societal conversation about AI, one that involves everyone.

AI HOST 1:

So it’s less about, like, finding all the right answers right now and more about asking the right questions.

AI HOST 2:

Exactly.

AI HOST 1:

Collectively.

AI HOST 2:

Collectively, yeah. Questions like, you know, what are the ethical boundaries we need to set for AI development?

AI HOST 1:

Right.

AI HOST 2:

Or, how do we ensure AI is used to empower people, not exploit them?

AI HOST 1:

Yeah.

AI HOST 2:

And these aren’t just technical questions, they touch on, you know, our values, our social fabric, our vision for the future.

AI HOST 1:

Yeah. It makes you realize that the decisions we make about AI today are gonna have ripple effects for, like, generations to come.

AI HOST 2:

Oh yeah.

AI HOST 1:

It’s a bit daunting, to be honest.

AI HOST 2:

It is. It’s a huge responsibility. But Hoffman and Beato remind us that we’ve navigated technological revolutions before.

AI HOST 1:

Right.

AI HOST 2:

Think about the printing press.

AI HOST 1:

Okay, yeah.

AI HOST 2:

It brought about massive changes, both positive and negative.

AI HOST 1:

Of course.

AI HOST 2:

But ultimately, you know, led to a more informed and connected world.

AI HOST 1:

So we need to learn from history.

AI HOST 2:

Yeah.

AI HOST 1:

And embrace the uncertainty, and be prepared to adapt as we go.

AI HOST 2:

Precisely. And this iterative deployment model that they propose allows for that.

AI HOST 1:

Ultimately the message is one of cautious optimism, right?

AI HOST 2:

Yeah.

AI HOST 1:

That we can harness AI’s power for good if we approach it thoughtfully.

AI HOST 2:

That’s the key takeaway. Superagency doesn’t offer, like, a simple roadmap, but it gives us a framework for navigating this, you know, complex landscape. And it reminds us that the future of AI isn’t predetermined. It’s something that we actively shape through our choices and actions.

AI HOST 1:

That’s a powerful thought to end on. So as we wrap up this deep-dive into Superagency, we want to leave you with a question: What aspect of your life would you want AI to augment or improve? And what safeguards would you want to put in place? Think about it, discuss it, and let’s work together to ensure that the future of AI is one that benefits all of humanity. Thanks for joining us.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Melia Agudelo, Scott Wallace, and Thanasi Dilos.