This transcript is generated with the help of AI and is lightly edited for clarity.

GREG BEATO:

We’ve been using AI since the early 2010s as recommendation engines, as auto-complete, but we never got to choose to use it. The narrative before had always been, “It’s going to make decisions for you and close down your possibilities.” ChatGPT came along–to me it really flipped that narrative. All of a sudden, the technology, for the first time, was democratized and made accessible.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know how together we can use technology like AI to help us shape the best possible future.

ARIA:

We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

Hey Possible listeners. Today’s episode is a little different: Reid doesn’t get to ask any questions. Instead he will be in the hot seat along with our teammate Greg Beato, who also happens to be one of my all-time favorite people. Reid and Greg are the co-authors of a brand new book, Superagency: What Could Possibly Go Right With Our AI Future? And today we’ll be getting the inside scoop. Greg is a seasoned journalist who grew up in Silicon Valley and got into culture writing in the early days of the World Wide Web. His work has appeared in The New York Times, Wired, the Washington Post, Reason, Spin, Slate, The Guardian, and more than a hundred other publications worldwide. We are so excited to have Greg here with us for this special episode about our AI futures. Without further ado, here’s my conversation about Superagency with Reid Hoffman and Greg Beato.

ARIA:

Alright, Reid, Greg—I’d like to try something we’ve never done before on Possible, which is to borrow from our good friend Tyler Cowen and play the game “Overrated or Underrated?” So here are the rules: I’m going to call out a term or topic, and you’ll each take turns declaring it “overrated” or “underrated.” Alright, Reid, we’ll start with you. Autonomous vehicles: Overrated, underrated?

REID:

Underrated.

ARIA:

Greg?

GREG BEATO:

Underrated. Very underrated.

ARIA:

Oh, alright. Reid, do we have one sentence? Why do you think underrated?

REID:

I think that we’re in this kind of thing where everyone thought it was going to happen two or three years ago, and now it’s like, “Oh, who knows?” And they haven’t actually, in fact, realized that the entire world of transport, from trucks with Aurora, to cities, is in the process of revolution right now.

ARIA:

I love it. Alright. NotebookLM? Reid?

REID:

Underrated.

ARIA:

Greg?

GREG BEATO:

I’ve only used the one part of it, the podcast part of it.

ARIA:

Yeah.

GREG BEATO:

And I think it’s both underrated and overrated.

ARIA:

Okay. Give us one sentence why?

GREG BEATO:

It’s overrated because, to me, it really seems like it’s a trick, but it sounds great. And it’s underrated because it really reveals how plastic content is in the AI age. One piece of content becomes any format, right?

ARIA:

And it’s going to put me out of a job, so I’m just going to say overrated. Okay, next one. LinkedIn? Reid?

REID:

Underrated.

ARIA:

Greg?

GREG BEATO:

Yeah. Very underrated.

ARIA:

Yes. I love it. Alright. Reid, let’s go to you. One sentence. Why is LinkedIn underrated?

REID:

I think most people still don’t know how to actually use LinkedIn fully. They kind of think of it as a, you know, place to find talent, or find jobs, or find sales prospects. But the ability to find experts and expertise, and to kind of participate in the modern, changing work world that we’re engaged in, I think most people don’t know about.

ARIA:

Alright. I love it. Spatial intelligence? Reid?

REID:

Underrated.

ARIA:

Greg?

GREG BEATO:

Underrated, now closer to properly rated. But certainly if you had asked me 20 years ago, I would have said, “What’s really going to come from all this location-based data?” And a lot came from it. [laugh] Right?

ARIA:

Alright. And then we’re going to finish it out with a sneaky one. New York City? Underrated or overrated? Reid.

REID:

Overrated.

ARIA:

Ah. Greg?

GREG BEATO:

Underrated.

ARIA:

I’m going to go with underrated too. Best place to live. Alright, thank you both for taking part in our first round of “Overrated or Underrated.” And now we are going to get into the show. So, Reid, before we dive into Superagency, I have to ask you about your choice of co-author. You co-wrote your last book, Impromptu, with ChatGPT. What made you opt for a human co-author for Superagency, your great human co-author Greg?

REID:

Well, Greg and I have been working together for over a decade. One of the things that I tend to tell people is that when you see something in my writing that has a witty joke, you know that Greg wrote it. I think I have yet to write a witty joke. Maybe I’ve written un-witty jokes. And you know, so Greg and I have been working together a long time, but obviously, you know, part of it is still that GPT-4 and other AI tools were in the background with Superagency, because it would be ill-fitting to be arguing that we’re going to get superpowers from amplification intelligence and not be using it.

ARIA:

I love it. I love it. So Greg, let’s get into it. Your background is as a journalist and culture writer. You grew up in Silicon Valley back when Apple was a local business. Can you share what made you excited to write this book, Superagency, about technological breakthroughs and the age of AI?

GREG BEATO:

In addition to growing up in Silicon Valley and being sort of just in that world, my dad was a programmer. I didn’t really get a computer until college, late college. It was a hand-me-down from him. But I’ve always been in that world. And a little bit later, in the early nineties, I happened into a job at a web development company in like 1995, at the height of Web 1.0. I started writing for a bunch of internet publications in that era. And when all this stuff started happening with AI, and this new form of AI, which we talk about in the book, circa 2021, 2022—and especially with the release of ChatGPT—you know, it kind of started with DALL·E 2, but really crescendoed with ChatGPT. All of a sudden it felt like 1995 again in terms of, “How big is this moment? What’s coming? What is the pace of development and innovation?” And, you know, arguably it’s going to be even bigger than that, but it felt sort of nostalgic to me and also unprecedented to me. So that’s why I wanted to write about it. And of course, what I sort of just alluded to, and I’m sure we’ll talk more about, is—even with all the coverage of it—I felt like people were missing a basic aspect of what had happened and the way that the products from OpenAI were changing how AI was being deployed and developed.

ARIA:

I love how your answer is so sort of moment-in-time, historically based. Because when people read Superagency, they’ll see that that’s what you get: a ride through history, seeing some technologies adopted slowly, others more quickly, and what the reception from people was to these new technological innovations. And so Reid, a similar question: you have written other books. You wrote The Start-up of You about helping folks navigate their career and life. You know, Blitzscaling was all about businesses and how you can have them grow at hyperspeed. So tell me about Superagency, and why, of all of the topics you could tackle, it was this concept at this particular time.

REID:

Well, it’s no surprise to anyone, especially the people listening to this podcast, that we are in an amazing technological revolution with artificial intelligence, amplification intelligence. And if you look at most of the discourse, whether it’s academics, press, journalists—even some technologists, technology influencers—this is enormously negative. It’s like, “AI is coming for your jobs, AI is coming for your, you know, kind of personal wellbeing. AI might be coming for your life,” if you listen to some of the existential risk people who’ve watched The Terminator maybe one or two times too many. And part of the discussion that Greg and I were having was we realized that a lot of these, you know, disparate concerns, from privacy to jobs to, you know, wellbeing, all come down to this notion of human agency. And if you look at the history of these amazing general purpose technologies—which have an interesting acronym parallel with “generative pre-trained transformers,” GPTs—the fact is that this is always how human beings first encounter general purpose technologies: “Oh my God, it’s going to destroy society and what it is to be human.”

REID:

And actually, in fact, we are Homo techne, and even if we don’t intervene, the massively likely outcome, just like any other time we’ve had this discourse, is superpowers and this amazing elevation of human agency. And what’s more, the way you get to positive outcomes is by rowing, by driving towards positive outcomes. So let’s say what the positive outcomes are with human agency in each of these cases, and say how it is we can get there, how it is we will get there. But of course, let’s be smarter than in the previous technological revolutions and try to navigate through these transitions, uncertainties, and risks in better ways. And that’s what makes it incredibly important to be addressing this now. Because if you say, “Look, this is the future that the rushing rapids are taking us to,” well, let’s navigate it really well, get through the slalom course to be on the other side where we’ve got this amazing future, and let’s get through it, you know, smartly, fast, you know, minimizing the amount of risk—not to zero, there’s going to be a bunch of fender benders and scrapes in getting there—but let’s try to minimize that as we get to our more human future.

ARIA:

I think a lot of people think in the short-term and they think, “Well, the last big technology that came along was social media. And honestly, I don’t, I don’t love it.” So Reid, can you either rebut me by saying that, “No, social media was good too,” or, “No, AI is different.” How do you respond to the critiques from those people?

REID:

Well, I think the answer is both. For one, social media, when you get to the big giants, has over a billion daily active users—a billion daily active users on Facebook, you know, et cetera, et cetera. And so you go, okay, if you round up, you’ve got 9 billion people in the world, and a billion out of those 9 billion are using it every day. And they’re not being hog-tied, arm-twisted, headlocked into it. They’re doing it by choice. Now you could say, well, it’s choice because of a fear of missing out, or because we have some emotional agitation about current political circumstances and are responding to it, and a bunch of other things. And there are of course negative stories that emerge from this. And anytime you’ve got a billion people doing this, you’ve got, you know, tens of millions plus doing pretty bad things in various ways.

REID:

But there’s a billion people connecting with their friends and family, sharing things and so forth. So there’s a huge amount of positive, while the negative is all reported. Now, I of course believe that we can modify these technologies and social networking to make them better. And I think that actually the discourse around making them better and diminishing the negatives is exactly the right kind of thing to do that. Now, on the AI side, anytime you see a technology as a problem, you can also see technology as the solution. So people tend to harp on, you know—one of the things that Jonathan Zittrain just published in The Atlantic is that apparently ChatGPT barfs on his name. It’s like, “I’m not going to talk about Jonathan Zittrain, you know, third rail, third rail, third rail!”

REID:

So he is one of the few people it does that with. It’s like, “Oh my God, look at this terrible thing.” You know, like, look, there are these artifacts in these very complicated things, and you go, this is the one person, or one of seven, or one of 20, out of the billions of people, where you get this weird response. But you can have AI be much more compassionate, à la Inflection’s Pi, than your average human being in terms of how it does. And that could be great for human beings’ well-being. It could be great for modeling what kind of discourse we really want. Do we want all human beings, all, you know, round up to 9 billion people, being more compassionate with themselves, with others, within their communities? And we model based on which environments we’re in. And so I tend to think that, you know, AI, across a wide range of things, is naturally going to be helping us become even more human in the ways that we aspire to be, even without special purpose intervention, because there are natural incentives in it. So the answer is both: social media is not as bad as people reflexively think, nor is AI parallel to social media, and it has a huge amount of great superpower vistas in front of us.

GREG BEATO:

I think one of the interesting things about AI, as we talk about it, is that we liken it to steam power, in terms of the magnitude of the innovation, right? That happened a few hundred years ago. So this is a once-every-few-hundred-years level of innovation and transformation of society. And it’s the first one to ever happen where the internet exists. You’ve got this tool where a billion people every day can complain about whatever is happening that’s new that they don’t like. That’s something real that needs to be negotiated and factored into how the innovation gets deployed, right? You didn’t have that when automobiles were happening.

GREG BEATO:

You didn’t have it when the internet was happening. You kind of didn’t even have it when social media was happening, because it was happening at the same time, right? So now we’ve got this really granular, complex, powerful system to voice dissent, to do all kinds of different things. And that’s why such a huge focus of the book is about why it’s important to obtain the consent of the governed, you know, which basically just means buy-in from the people who are using the technology. And our big thesis is: well, in order to create trust and buy-in, you have to give people access to the technology, right? And that’s why this iterative deployment model that OpenAI has been using is what we think is the right way to do it, versus, let’s lock up the technology and perfect it before you give people access to it. That kind of thing.

ARIA:

You mentioned the internet. Most people would agree that the internet gives people more human agency. And I think people are sort of split on AI. So Reid, I’m going to go to you. You wrote in the book that the concerns about AI boil down to this: “Human agency, informed by the question, in an age of ubiquitous AI, how do we maintain control of our lives and successfully plot our own destinies?” So when and how did this idea of agency and superagency start forming for you, especially as it relates to AI?

REID:

Well, as I mentioned a little earlier, Greg and I were talking about a bunch of the different threads, because I do think that the concerns and criticisms are good to address, full stop, whether they are transitional issues or long-term issues. Because, by the way, having those concerns and fears and all the rest is part of the human condition. So how do you address them? And then as we kind of wove our way through them, we thought about the history of the printing press, the history of electricity, the history of, you know, the steam engine, the history of the smartphone, and said, “Hey, this is a common pattern.” And it kind of comes down to this question of, “Do I have control and direction over my life and my environment and things that are happening with me?” And part of the reason why this is complicated in human societies, of course, is because the short answer is: we have some things we’re in control of and other things we’re not.

REID:

We adjust to that, because other people have control, and things happen in nature, and other kinds of things. It’s like, “What’s the scope of how I express my Homo techne through these, you know, kind of technologies and artifacts and control?” And we kind of went, well, actually, in fact, technologies, all the way back to fire and spears and agriculture, give human beings superpowers. And this one also gives people superpowers. Now, part of the question is, if you give person X the superpower—you know, you give Sue a superpower, you don’t give Bob the superpower—then, you know, kind of what happens there and what are the kinds of things that play out? And it’s like, well, actually the really important thing is to give both Sue and Bob superpowers, and give a lot of people superpowers. And that’s where the superagency thing came about when, you know, Greg and I were talking, because it was actually, in fact, about giving a whole bunch of individuals superpowers together.

REID:

And then we realized, of course, that was the ChatGPT moment. That was the reason why it went from a research release (that’s how it was released: “Hey, we’ve got this thing we’re working on, we’re going to do this little research release”) to a global phenomenon that, you know, just completely transformed everything, from what world leaders were thinking, to what people were thinking about the future of work and societies, to, “Hey, have you played with ChatGPT yet?” But it’s superpowers, and the agency. And so, you know, part of the thing that we’re trying to do with this book, with this podcast, is: go engage with it. As opposed to saying, “Hey, I’m terrified of it,” go engage with it and see how it can give you superpowers. And that’s, I think, where the idea came from, and what it is that’s so important for people to grapple with.

ARIA:

Alright, let’s sneak in another quick round of “Overrated/Underrated” before we keep going on Superagency. So, books…Reid?

REID:

Overrated.

ARIA:

Greg?

GREG BEATO:

Books are overrated. Reading is underrated.

ARIA:

Ah, fair enough. What a good promo, everyone listening, to go out and pick up Superagency. Next one. AI avatars. Reid?

REID:

Oh, definitely overrated.

ARIA:

Greg? 

GREG BEATO:

I’m going to say underrated because I just don’t really know. We’re so early that I couldn’t accurately say, “Oh, we need this,” or, “We don’t need this.”

ARIA:

Alright, well, since we’re discussing AI avatars, we’re actually going to bring on Reid AI. Alright, Reid AI, what do you think?

REID AI:

Underrated. Not a big surprise for me here, but I think there’s huge potential for personalized interactions, learning and scaling human presence.

ARIA:

Alright, so we’ll go to Reid AI first, and then I’ll go to Reid and Greg for the next few. Oh, big topic these days: AGI. Reid AI?

REID AI:

Slightly overrated. Fascinating and transformative if achieved, but the focus on it sometimes distracts from near term AI applications that matter now.

ARIA:

Reid?

REID:

I’m going to borrow from an earlier Greg answer and say overrated and underrated. Overrated in that it’s, you know, drawing this intense storm of discussion. Underrated in that the level of amplification that we will get over the next few years, with all of us going into superagency together, is wildly underrated.

ARIA:

Greg?

GREG BEATO:

Yeah, I think super underrated, because I think it’s really hard to conceive, “Oh, what if we suddenly had a billion AGIs?” Like, all of the pictures of AGIs that we commonly know are usually like, it’s one, it’s this one robot that’s whatever. And it’s like, no, we’re going to have a billion or two billion, and, you know, they could be computer scientists who are working on this, or virtual chemists working on drug discovery. It’s just so hard to conceive of what the effects of that might be. So, underrated.

ARIA:

Alright, next one: Bitcoin. Reid AI?

REID AI:

Slightly overrated. It’s a remarkable innovation in decentralized finance, but lacks wide-scale utility and stability as of now.

ARIA:

Reid?

REID:

Underrated. As per my earlier social media posts, where I had predicted we’d get to 100,000, and here we are.

ARIA:

Greg.

GREG BEATO:

Yeah, I think underrated. I think we’re going to really see the value of it. As it gets more broadly adopted, it just becomes more valuable and more powerful. So…

ARIA:

I’m hoping for those use cases. I can’t wait.

ARIA:

Alright, now back to Superagency. So Greg, in the book you establish that people generally fit into one of four camps when it comes to their stance on AI and the surrounding discourse: we have Doomers, Gloomers, Zoomers, and Bloomers. Can you start us off by defining each term for our listeners?

GREG BEATO:

Sure, and like you said, they are broad-stroke terms. Doomers and Gloomers, I would say, are on the skeptical side of the continuum in terms of, you know, what they think the impacts are going to be. Zoomers and Bloomers are on the more optimistic side. Doomers is probably the biggest preexisting category; like, people were already talking about Doomers, and then conveniently we’re like, “Oh, there are a lot of words that rhyme with Doomers. How can we play this out?” Right? But Doomers are the people who do say, “No, this is different than any other technology before, precisely because of its capacity to potentially really operate autonomously and agentically. And we could get to this point where it gains the ability to start improving itself on its own, and could reach this level of superintelligence that we have no idea about.”

GREG BEATO:

That’s the fundamental concern about agency, right? Like, on the biggest vector, it is, “Are we creating something akin to a new species that will be smarter than us and therefore potentially control us or even destroy us?” Right? So that’s Doomers. Gloomers are much more near-term, pointing out, you know, the bias that’s baked into training data, issues around copyright, all of the sort of near-term things of: How is this product being developed and deployed? Who is impacted positively? Who is impacted negatively? Is it going to be a big data or big tech thing where it narrowly rewards a small sector of society? All of those concerns are the Gloomers’. It’s a wide variety, and they’re all legitimate concerns, right? But, as we talk about in the book, if you’re only thinking through the lens of what could possibly go wrong, you’re thereby narrowing the aperture of what could go right.

GREG BEATO:

And Zoomers are the most positive. They think that there are going to be such great benefits that we should go full speed ahead, pedal to the metal: no regulation, and perhaps not even a role for the government at all as a partner. We kind of make the differentiation between Bloomers and Zoomers in the book as sort of, well, what role is the government going to play in all this? But I’d also add that you might say Zoomers are also like, “Hey, we’re going to build this, and you, the public, are going to come around to it and adapt to it, or not adapt to it,” right? A Bloomer is someone who is positive on the tech and wants a very light regulatory touch, because we believe in permissionless innovation more than the precautionary principle, but then is also factoring in: How do you substantially involve the public in the development process?

GREG BEATO:

And how do you substantially and positively involve government in the process, both as a regulator and as a user of the technology? Because that’s a big thing. Every country is going to have its own AI strategy and its own AI posture, and part of that will boil down to what kind of consensus there is amongst its citizenry over how you use this, right? And we in the U.S. are in the technological lead, in the economic lead. But I think we have a real challenge with how we get national consensus now. And this is a moment, probably the biggest moment in decades, where national consensus matters on a geopolitical level. The Bloomer mindset is sort of: you cannot just say, “Here’s how it’s going to be, folks, let the technologists decide.” There has to be a lot of public engagement and some government engagement in the process.

ARIA:

And to your point, maybe because of social media, it’s one of those times when everyone can have a voice. But it’s perhaps harder than ever to find one voice that the U.S. is speaking with, or to find consensus among our citizenry. It would surprise no one that Reid and Greg, you guys consider yourselves Bloomers. And so Reid, can you talk about the earliest moments (I mean, I remember talking to you about them) when you really felt like you were witnessing the start of what’s described in Superagency as an era where AI can work for you and with you instead of just on you? What did that feel like?

REID:

Well, part of the thing for people to realize is that, to some degree, taking agency is about what your stance is, what you’re doing. Like, you say, “Oh my God, I’m driving and there’s all these people around me; this person’s cutting me off and I’m boxed in by these people,” and da da da, and that is your angle. You are, by your own psychological stance, reducing your agency. Or, when you’re driving, you go, “No, no, no, actually I’m driving. I control this thing.” And sure, there are other people on the road, and I have to speed up sometimes and slow down sometimes. Sometimes I have to wait when I don’t want to. And sometimes this person gets a little closer to me than I like, you know, et cetera, et cetera. And sometimes I think, “Oh boy, that person’s driving really terribly. I want to stay away from them.”

REID:

But you still can take agency in your driving. And one of the things that’s really magical about AI is what it enables you to now do. One of the things that Greg and I describe in the book is this kind of informational GPS: we as human beings navigate this informational space all the time. We make decisions about, like, what movie we’re going to see, or where we’re going to go to dinner, or, you know, when we should see the doctor. When the doctor tells us something, how do we parse it? Do we get a second opinion? You know, when we’re trying to learn something and we’re reading something, it’s like, “Okay, well, do I believe this? Should I get a second opinion?” And AI can be an enormous asset across everything that you do with information and making decisions.

REID:

People don’t realize that we go through our entire lives making tons of decisions every day, navigating an informational space in all kinds of things. And this is the superpower. And that’s part of when I flipped, with Impromptu: as opposed to just writing a book saying, “Hey, it’s amplification intelligence,” it was, let’s put in a bunch of the GPT-4 prompts and answers as infrastructure to show how you actually amplify it. Showing, not just telling. And that was part of realizing that idea in that book. And then of course it became the, “Oh my gosh, we need to make sure that everybody knows that they can take the superagency stance.” And part of superagency is not just me as an individual, but the society that we live in when lots of people all have these superpowers. So, you know, one of the examples that we charted was: look, when a doctor got a car, that gave you superagency, because the doctor could come to you and help you at your house and do a house call, for example, which had only been available to the absolute wealthiest people, and now middle-class people could get access to it. And that’s part of superagency: not just that you could drive places with your car, but that the doctor could come to you. And that’s what we see with AI, because it’s for you and with you. Not just you individually, but other people in the society you’re engaged in now are giving you these kinds of superpowers.

ARIA:

So I think, again, the sort of agency and what you do with it is so critical here. And Greg, you alluded to it earlier, and obviously you talk about it extensively in the book: the release of ChatGPT. And I remember this so clearly, because we and so many people in the tech community were breathlessly awaiting GPT-4. We were awaiting the sort of next technological breakthrough, because the technological breakthrough was going to take us to the next level. And not that that wasn’t true; we’re now all breathlessly awaiting GPT-5. But the big breakthrough wasn’t actually the technology, it was the delivery mechanism—that all of a sudden millions of people, in probably one of the fastest uptakes of a technology in history, could now use AI at their fingertips, because they had a portal to use that they didn’t have before. So can you talk a little bit about that shift that ChatGPT represented, this simple interface that perhaps changed things as opposed to a technological breakthrough? And what did you learn about that time?

GREG BEATO:

I mean, it was a little bit of a technological breakthrough, because it also was trained specifically to be more conversational and relevant. Ironically, a lot of people went immediately to, “What’s the magic prompt that I can do to make it produce things?” rather than, “What’s this conversation I can have about a subject and learn iteratively?” Probably in part because of my background as a journalist, I’m used to interviewing people, so I really took to that new aspect of it. But to kind of go back to what you said about “for you and with you, rather than on you”: again, I’m old enough that I had a period in my life where I went to CompUSA and made affirmative choices to buy Microsoft Word in a box on a shelf, or Photoshop, right? And a lot of people have never had that experience.

GREG BEATO:

Everything’s just sort of there, and it’s embedded. AI? We’ve been using AI since the early 2010s as recommendation engines, as auto-complete. You know, a billion people have been using it every day for over a decade. But we never got to choose to use it. And there were also instances of it, and this goes back to maybe 2015 to 2018, when the narrative was all around facial recognition, predictive policing, algorithmic decision-making, and the bias inherent in that, right? These were all instances of AI that was embedded in systems, whether positive, in the form of, “Oh, it gave me a great Netflix suggestion,” or negative, “Oh, it turned me down for a mortgage.” I didn’t have a choice of, “How am I going to use it or not use it?” ChatGPT gave me that choice. And Google or Microsoft didn’t release it first, so it wasn’t just something that was already in a system.

GREG BEATO:

I had to go to OpenAI and choose to use it, and then figure out, “What do I want to do with it?” That was a much different paradigm than anything that existed around AI before that. The narrative before had always been, “It’s going to make decisions for you and close down your possibilities.” And these are fears that go all the way back to the birth of mainframe computers, which we talk about in the book. And very much, you know, probably 2018 to 2020 was sort of peak Gloomerism in terms of big books, like The Age of Surveillance Capitalism and Atlas of AI. Good books, but it was very much, “AI is going to take your agency and hurt you in all these different ways.” ChatGPT came along, and to me it really flipped that narrative, but no one really mentioned that.

GREG BEATO:

And, in fact, the opposite thing kind of happened. It was like all of a sudden the technology for the first time was democratized and made accessible. And then you had all these things of like, “We’ve got to pause or stop or prohibit this technology in order to save democracy,” right? And so when we really first started talking about writing the book was when that pause letter came out in like March or April of 2023. And, you know, it sort of seemed like people were saying, “We need a new federal agency to regulate this. We need this and that.” And it was like, “Wait a minute, we just finally gave people access to this technology to participate in its development in a real, viable way, and now we’re going to take that away?”

ARIA:

So I think, Greg, you know, I have no doubt that something like ChatGPT will be around for a long time—a general-purpose agent that you can interact with. I also have no doubt that there are going to be super-specialized agents that come out to sort of meet specific needs. But there’s also definitely going to be AI baked into technologies. So when you’re using other applications, you can’t sort of sign in or out of using AI in them. Does that sort of negate your thesis that right now people are choosing it, but in the future it is going to be baked into everything we use? How do you reconcile those?

GREG BEATO:

I think it reinforces my thesis of why this approach is so necessary. Because yes, those uses are going to happen, and they’re not always going to be default bad either. In fact, we probably want some of them to happen. And part of our thesis in the book is the best way to get consensus on that is to say—to introduce you to the technology individually. You form a relationship with it. It’s valuable to me. Then maybe I can see how it’s valuable in other contexts too, where it’s not, you know, filtered solely through me, but maybe on a more, you know, society-wide level, right? Because again, it gets back to that notion of: How are we going to play this global game when, like I said earlier, we need national consensus on this subject probably more than anything since, like, the Cold War when the Cold War was the Cold War, you know? And yeah, you know, we started writing this book while the pandemic was still sort of in play, and that was a great illustration of, you know, how different countries reacted to that.

ARIA:

Absolutely. Culture matters. People will react differently depending on the technology and everything that’s going on in their country, whether it’s a democracy or not. So, I mean, both of you have talked about slowing down and how people have requested that we slow down, and some people who are even fans of AI will say, “I agree with you. I see all these incredible technologies. But we’re just going too fast. Like, can we just, you know, take a beat?” And so Reid, you know, one of your previous books is called Blitzscaling, you know, talking about how do we hurtle as fast as possible toward this technological future. So what lessons around speed and scale do you take from Blitzscaling that you apply to the current AI moment?

REID:

I wrote Blitzscaling to answer the question of why globe-changing technology emerges out of [Silicon Valley], this area of the world that has, if you round it up, four million people. And that’s not four million people in the tech industry—that’s rounding up to four million people in the entire region, right? Kids, old people, people, you know, running restaurants—I mean, the whole thing. So, a very small number. And yet the number of globe-changing technology companies that come out of this area is just massive. And actually, even when you compare to China, given the populations, on a per capita basis it’s still, you know, massively impactful. And the answer is because one of the things that Silicon Valley’s learned is blitzscaling. And blitzscaling is kind of prioritizing speed to scale over efficiency in an environment of uncertainty.

REID:

That’s what blitzscaling is. And the reason why blitzscaling’s important is because—like people say, “Oh, you blitzscale because blitzscaling is the goal.” It’s like, “No, no, no.” Prioritizing speed over efficiency in uncertainty is a very scary thing, but you do that for competitive response when you’re competing in the globe, in what we describe in the book as Glengarry Glen Ross markets: First prize: Cadillac. Second prize: steak knives. Third prize: You’re fired. Right? So how to get to speed to global scale really, really matters. Now, the principles that underlie this are the same principles behind why you kind of think, “Well, how is AI going to play out? And why do we need to be doing so at speed? Why do we need to be doing so at speed and uncertainty?” Because basically any theory of human beings that isn’t “we divide into groups and we compete with each other”—whether it’s, you know, sports teams, to schools, to companies, to industries, to countries—you don’t have a good theory of human beings if you don’t include that as a fundamental.

REID:

And so when it gets to AI, you say, “Well, but we’ll all just agree.” It’s like, well, yeah, like we’d all just agree on something very clear and apparent like climate change? You know, how is that working for you? And so it’s a question of, no, no, we do this, and people have different views about, like, “How risky is AI, and how risky is speed? And what are the possibilities, and how does it make humanity better? And how does it make me better? And how does it make the company better?” And all the rest of the stuff. And so that’s what sets the clock, all the way back to blitzscaling. Now, in Blitzscaling, we wrote a chapter called “Responsible Blitzscaling,” because the classic thing that critics—who don’t care about the truth of how these things happen and everything else—say is, “Oh, it’s about being irresponsible.”

REID:

It’s like, no, actually blitzscaling is the clock that’s set by human beings competing with each other on things that they’re morally and structurally allowed to do, but you can still be responsible. You look at: Where are the things that could be major, you know, kind of faults? Where could you break the system? Where could you cause death, you know, injury? And what are the things you do when you’re blitzscaling in order to navigate those while still doing, you know, speed to scale in an environment of uncertainty? And so there are very, very deep parallels because of human beings and who we are. You know, frequently when I’m talking about it, people are like, “You are just, you are just talking about, well, China’s going to get to it.” It’s like, well, yeah, China’s a really good blitzscaling competitor. It’s the only area in the world other than Silicon Valley where I’ve learned new things that I’ve put into the blitzscaling canon. But, by the way, every group works this way, whether it’s Nigeria, or Russia, or Israel, or—it’s like saying, “Hey, we’re going to play competitive sports, but rather than when we’re playing, you know, call it World Cup football, our team is going to play without shoes and with, you know, weights tied to our ankles,” and you’re like, okay, see how that works for you.

ARIA:

I love the sports analogies, Reid. A lot of people, you know, put safety on the one hand and innovation on the other. And one of the things you guys talk about in the book is that actually innovation is safety. Can you elaborate on what that means to you?

GREG BEATO:

So just building on this notion of speed too, another thing we talk about in the book is how immediately after the release of ChatGPT, all of a sudden you see new urgency from Google and Microsoft, and the way that gets characterized is, “Arms race. It’s going to be an arms race!” There were so many arms-race headlines in the media, and you know, we talk about, why not call it a “space race”? Like, the way that you categorize it is interesting. And then the other funny thing was that, you know, “We’ve got to pause. We need the six-month pause.” Well, we didn’t get the six-month pause, but at the end of those six months, then people are like, “Hey, how come GPT-5 hasn’t been released yet? This technology is a dud. It’s going nowhere.” Right? So is it an arms race?

GREG BEATO:

Is it not going anywhere? I think what we’re saying in the book is that, yes, we need to be moving forward because of all the things, and not deliberately hindering ourselves the way Reid said. But it’s also more methodical than it seems, in terms of how quickly the things are happening, how much research and development is behind it. The “innovation is safety” part of it, though, comes from this idea that, again, if you’re developing products that people want to affirmatively use, you’ve got to create good products. And it really wouldn’t make sense for a Microsoft or a Google—companies that have billions of users and long-term relationships, whose business model is sort of built on keeping users over time—to do things in a way that creates bad user experiences, right? And the analogy we use in the book was in the automobile era. You know, when cars first came out, you had to hand-crank them to start them. And that required a lot of physical power and carried a lot of risk, because the engine could backfire and you’d end up with a broken jaw from trying to start your car.

GREG BEATO:

So, I think it was like 1912, some inventor came up with the idea of an electric starter. Very quickly that became a standard feature across the industry, not because government regulated it, but because, “Oh, I want to be able to start my car without getting a broken jaw,” right? So that’s the promise of innovation is safety.

REID:

Well, and as usual, Greg gives a very detailed human, you know, kind of history. Here’s the straightforward heuristic principle, which is: whatever thing you are worried about with technology, technology can be the solution. And so when you think about safety, people go, “No, no, no, we should enshrine the past.” It’s like, no, how should we shape the future? As in, what are the key safety things to innovate on, to have a more human future, to have a more glorious, superpowered superagency future? Which elements of safety do we need innovation for to do that? And that will be far better. Which goes all the way back to the question of how we should evolve social networks. You know, and obviously I think, you know, LinkedIn is doing a reasonably good job with this—and hence underrated. But the innovation can also be innovation for safety. And the role of useful critics is to articulate which kinds of innovation for safety could be important, versus just, you know, very loud megaphone handwringing.

ARIA:

Alright, we are going to go to rapid-fire. And Greg, this is just for you. So I have the pleasure of asking the rapid-fire questions. Is there a movie, song or book that fills you with optimism for the future?

GREG BEATO:

I’ll choose 1984, which we talk about a lot in Superagency. And I’ll add Brave New World to it. Because I think both of those get invoked quite a bit these days, as in, “Oh, this Orwellian hellscape we’re living in”—invoked so much that I finally went back and read 1984 again—I think I may have read it originally in 1982, when I was in high school—and I’m like, the world of 1984 is nothing like the world of 2024. And Winston Smith, the protagonist, he could walk down the street in Beijing, even, let’s say, and be like, “Oh my God, what a free-expression utopia,” compared to how the world of 1984 is depicted. And I would say, you know, some people then might say, “Well, in the brave new world, we’re all just mesmerized and opiate-ized by trivial things,” and it’s like, well, yes, but also people are engaged in politics on a level like never before. And so neither one of those scenarios is anything like what we have, which, for all our challenges, I think is pretty amazing at this moment in time.

ARIA:

Alright, Greg, what’s a question that you wish people would ask you more often?

GREG BEATO:

Well, I obviously love talking, but I hate answering questions. So I would say, the question that I would like people to ask is: What do you want to know about me?

ARIA:

Where do you see progress or momentum outside of your industry that inspires you?

GREG BEATO:

I used a music service for many years called—it was originally called Rhapsody. Like, I got it in 2002. And it changed ownership many times, was rebranded as Napster, which had nothing to do with the original Napster. It just doesn’t work well anymore. So I finally switched to Spotify sometime this last year. And, for whatever reason, because I didn’t have my decades’ worth of playlists, I’ve actually been using the algorithm with Spotify. And there’s just so much great music that I’d never—I’ve learned of more new artists in the past year than I had in the previous 20. And, you know, I used to write for music magazines and do music reviews, but my discovery process had really stagnated. And I think specifically because we have this new generation of artists that have grown up having just, like, ubiquitous access to the celestial jukebox, we just have so many amazing performers and talents right now that it’s impossible to even really scratch the surface with them. And, like, you kind of hear, “Oh, the music industry got destroyed by technology.” And from a listener’s perspective, it seems like the opposite to me.

ARIA:

Alright, our famous last question: Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years? And what’s the first step to get there?

GREG BEATO:

Oh, the first step is definitely universal basic Waymo. Because, like, I think that’s an achievable goal. We’ve had this conversation, Reid and I, about universal basic income versus, like, well, you know, how might you break that down into narrower niches? Universal basic Waymo as a first step towards extreme abundance. And my big question is, you know, we talked earlier about that competitive urge that’s baked into human behavior. I think it’s absolutely true. My question is: To what degree might extreme abundance temper some of the culture wars that we have? Because everyone has everything they need. So, like, you know, if we had fusion breakthroughs and desalinization breakthroughs, so we just have abundant energy and abundant water, do we stop having battles over environmentalism as, like, a political thing? Or over immigration? Because, like, you know, we’re not competing for scarce resources anymore. I don’t know if technology will triumph over human instinct in that regard, but I’m curious to watch it play out—and hopeful that it will.

ARIA:

I love it. Greg and Reid, thank you so much for being here today. I really appreciate it. And everyone listening: Obviously go out and get a copy of Superagency. It’ll really make you think.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jiménez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, Margaret Burris, Little Monster Media Company, and Ben Relles. And especially the entire team at Authors Equity, who it’s been such a joy to work with on this book.