This transcript is generated with the help of AI and is lightly edited for clarity.

NICK THOMPSON:

Will AI make the world less equal or more equal? You can argue it both ways, right? And you can get very smart people who will argue both ways. And there’s data suggesting that actually it’s going to make the world more equal, which would actually make democracy work better, which would make society work better. And I think there’s a real chance that AI does that.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know what happens if, in the future, everything breaks humanity’s way. What we can possibly get right if we leverage technology like AI and our collective effort effectively.

ARIA:

We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.

REID:

These conversations also showcase another kind of guest: whether it’s Inflection’s Pi or OpenAI’s GPT or other AI tools, in each episode we use AI to enhance and advance our discussion.

ARIA:

In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

We spoke with Nick Thompson, CEO of The Atlantic, which happens to be one of my favorite publications, at a live Possible recording focused on AI, human connection, and the future. The discussion was extremely wide ranging, and as the sole host for this particular episode, I had the pleasure of interviewing both Reid and Nick. I did get some help in opening up the episode with an AI-generated version of Reid. So let’s jump in with Reid AI. We’re here with Reid Hoffman and Nick Thompson. Reid created LinkedIn, and Nick is one of the top LinkedIn influencers—which is not why we chose you, but kudos to you. Nick has been talking a lot about AI avatars, and Reid also recently created a digital twin. So I am going to kick it off with a quick video before we get going.

REID AI:

Wow, look at everyone out there. Hello. I know Aria is the host of this Possible episode, so thank you for giving me the opening question. This is a deeply personal one for me. As Reid’s AI avatar, it’s about my future. My question is: What role do you think avatars may play when it comes to helping humans go beyond their current capabilities and lifespans? And how will the technology that powers me help you expand your reach, legacy, and longevity?

NICK THOMPSON:

Yeah, I would like to say I appreciate that your avatar had five fingers. That was particularly impressive. I think they will obviously make us more efficient. They will be able to handle certain tasks. But I find the movement in AI to create human replicas a little disturbing. And I’m a little bit weirded out by the prospect that they will be there after we die to talk with us. Maybe that’s because it usually comes up in the context of my father coming back, and I’m always worried, if he comes back, whether I’ll be responsible for all the bills that he left when he died [laugh]. So I’m a little nervous about the role that avatars will play in giving us the afterlife. But they’ll also be amazing tools for being more efficient in certain contexts.

REID:

But I would’ve thought that a public intellectual like yourself would be thinking back, for example, to the ancient Greek idea of immortality—that as long as you’re remembered, you persist—and that some version of having your voice continue a dialogue into the future might actually appeal to you, even if the modality—this particular weirdness of looking in a mirror, or anything else—might strike you oddly at first.

NICK THOMPSON:

Great point. Interesting point. You do live on after you die, right? If I were to die this very minute, I would live on—based on whatever influence I’ve had, on whatever thoughts you had, on whatever thoughts my children had. So you do live on even without these beautiful avatars. I have a question for you, which is: Reid has had a lot of different views in his lifetime, right? The Reid of age 30 had different views from the Reid of, you know, age 45. When that avatar exists, trained on your data, and exists after you’ve died, what age Reid will it represent? Or will it represent some kind of average of all the Reids?

REID:

Depends a little bit on when it’s minted. One of the pieces of strange feedback that we didn’t anticipate, which we got very early from the Reid AI, was, “Oh, this is how you’re creating your legacy”—as if this Reid AI, this avatar, will live on past me. One of the funny responses to that was to say, “Well, actually, this Reid avatar will have a very short lifespan, because AI is expanding at such a rate that this one will be replaced by the next Reid AI.”

NICK THOMPSON:

You’re going to kill that poor guy?

REID:

Yeah. You know, [laugh], that’s part of the reason why the Reid AI started with the question—it was like, “Well, wait a minute, wait a minute, how long do I live?” And the answer is, “Well, we’re going to have much better technology in X months, and that will be the next one.” That said, it does have a certain lifetime integration, because the GPT is trained on all of the things that I’ve written and spoken, and new material will keep getting added to it. I don’t know if average is the right metaphor—a blend, a synthesis of all of them. So it will be blended: not young, not old, but a blend of all of them. But then the question, of course, is that its design architecture will be from whenever we’ve made it. So…

NICK THOMPSON:

Right. I would actually be very interested if I could find an avatar of myself at age 21, with long hair and radical leftist political views, and talk through what that person thinks of the current Nick Thompson. I think that’d be a very interesting experience.

REID:

Yes. Yes.

ARIA:

Totally, that’s what I was going to say. It would be amazing to have that debate. So, the Possible podcast is about creating the brightest version of the future and talking about how to get there—not just tech utopia. The idea is, if we don’t shoot for the moon, we’ll certainly never get there. And if, in 20 years, all we’ve done is kill the killer robots, I think that’s a really sad future. So we want to talk about the greatest things that can happen, and how we chart our course towards them. Nick, you’ve been covering technology for decades. Reid, you’ve been an investor and startup entrepreneur for decades. Before we get to the future, let’s talk about this moment in time. You’ve seen so many transitions—Web 1.0, 2.0. Reid, we’ll start with you. Give us just a few minutes on what this moment in technology means to you.

REID:

Well, I think the key thing is that technology is always the way that we evolve as human beings—that’s part of why the phrase I use is “Homo techne.” When you look at the industrial revolution, it was physical superpowers: superpowers of construction, superpowers of motion and transport. Now we’re going to have cognitive superpowers, as part of a kind of amplification of intelligence. And in terms of how we will be navigating the world within five to ten years: we will all navigate a variety of things in our lives—work, certain life choices, other kinds of things—with an entourage of agents. It won’t be just one agent; it will be multiple agents. And part of the reason you’ll want multiple agents is that it’s part of how we keep our human agency and everything else.

REID:

As opposed to—the usual worry, to borrow from a movie, is the Idiocracy thing: “Oh, I sit on the couch and I listen to my agent. It makes all the decisions.” Actually, I think what we’ll have is something you can already see in these agents: they’re very good at taking on roles. And we’ll ask: What set of roles, what cabinet of experts do I want navigating my life, or this particular problem? I will be able to call them up, and I will have that in everything that I’m doing. Part of why this moment is stunning is that we are a small N number of years away from that—we can do it now, but we’re a small N number of years from that being what everybody is doing.

NICK THOMPSON:

I think that’s a good vision. To go back to your question—it’s probably the second time in my life where I’ve been enraptured with what technology can do for us. The first was when the internet first came around, and I thought about it with radical, long-haired Nick, who was focused on trying to bring democracy to repressive countries in the world. There was a sense back then that this would bring about freedom and democracy in the world. And to some degree that was realized, and to some degree it was not. But that was a moment where I loved it. And now I have a moment where I’m incredibly excited because of the possibility of learning. You go into journalism—the reason I chose this career was a sense of curiosity and a sense of how exciting it is to go out and try to learn something new, right?

NICK THOMPSON:

To reset your mind, begin a new story, begin a new thing—and now there’s the capacity to learn with these agents, or however the system works in the future. Being able to be in constant conversation, to have a device that can help you sort through all the swaths of information in the world, is incredibly exciting. And the idea that my children will go into this world with the ability to learn languages so many times faster, with a personal tutor who can teach them all that and process so much more information than they possibly could before, is quite exciting.

ARIA:

So you said you were excited about the democracy piece in Web 1.0—excited about the possibilities. So many people are very worried about AI when it comes to democracy or journalism: your job, your industry. Is AI worrying for journalism and the future of politics and democracy, or are you excited about it? Obviously there’s a lot of nuance there as well.

NICK THOMPSON:

Yeah, it’s profoundly worrying. I mean, it’s profoundly worrying for the business model of journalism, right? It’s extremely threatened, as you can see as search disappears and as we get what’s called the enshittification of the internet—where you may reach the point where there’s so much stuff created by AI that it’s very hard to find a place for serious journalists. There could be a flight to quality there, which is something I’d be quite excited about—but I’m absolutely worried about the effects on the business model of journalism. Absolutely worried about the potential impacts on democracy if you can no longer tell who’s Reid, who’s Reid AI, who is Russian-created Reid AI—and you’ve got the three of them, you’re talking to all of them online, and you don’t know who’s who. How can you have democracy function? So that’s a legitimate concern. But for each of those concerns, there’s also an upside that I think is potentially higher, which is why I am, net-net, an optimist. But yes, definitely. Definitely concerned about Russian Reid.

ARIA:

Reid, we have talked a lot about the fact that there’ll be deepfakes and misinformation, but AI can be used as a defensive tactic, and eventually that technology will keep up. But let’s set that aside. Can you talk about the positive use case for AI as it relates to democracy and journalism? How will AI be a positive good, not just a defense against the bad?

REID:

Look, I think ultimately part of what we need to do is figure out identity and certification, and a bunch of other things, to sort out how we have collective learning together, which is one of the things we need for democracy to function. It’s part of the reason why a flight to quality, or other things like it, would be good to have. And when you begin to think about a set of agents helping every individual navigate, you can imagine—if the agents are provisioned well—that they would have almost all of that journalistic fact-checking built into them. Now, of course, it’ll get politicized: “Well, that agent’s lying, because it doesn’t say that Pizzagate was absolutely real,” et cetera, et cetera.

REID:

Or, “I have alternative facts about Bowling Green”—“alternative facts” being a classic Orwellian term. But if we have that, it can help make these into collective learning systems. Because in the same way that we do the creation of science, journalism, and content—mediated through groups of human beings doing principled knowledge creation, information creation, and curation—I think that can help all of this. And that would be the kind of thing you would be looking at in the solution set.

ARIA:

Mm-hmm. One thing we were talking about just downstairs before we came up here was Pi, the agent from Inflection. I love Pi because it’s kind and it’s quirky and it is nice to you. And actually we have a coworker who says they love talking to chatbots all day because it makes them feel so good: “Aria yells at me all day, but my chatbot says good things.” And yet some people think that AI will make us less human. Nick, in 2018—I won’t hold you to it—you said that AI would neither make us more empathetic nor less caring. In light of chatbots, where we are now, how do you think empathy and AI play together?

NICK THOMPSON:

I think that one of the most important things that could be done in the world in the next couple of years would be designing—or continuing to design—a chatbot AI layer to the internet that is the opposite of Twitter, one that increases empathy and increases understanding. I took a crack at this, right? I had a startup called Speakeasy.ai that a colleague of mine from the Emerson Collective and I worked on, and we ended up selling it to Project Liberty. Obviously we did not supplant Twitter, as you’ve probably noticed, but the idea was: Can you figure out the mechanisms of a conversation, and can you use AI to help bring people to understand each other better? Can you take what you learn—we know from social science, for example, that if people disagree on something but have something else in common, and you can get them to talk about that first, then you can get them to talk about the thing they disagree on, right?

NICK THOMPSON:

So, say Reid and I went to the same college, but maybe we have different views on, who knows what, abortion, right? You start by talking about the college soccer team, and then you move to abortion. Can you use AI to also help guide people when they start to write something that might be toxic—to pull them back? And can you do it in a way that’s not smarmy? Somebody is going to solve this problem, right? One of the biggest problems of the internet, and one of the reasons why it has been less helpful to democracy than it should have been, is that the underlying architecture has tended, more often than not, to push people into filter bubbles and make people dislike each other. Part of the reason I’m on LinkedIn is because it is better than any of the other social networks at not doing that. But so much of the underlying architecture of the internet has this very corrosive effect, and that’s been a real problem for democracy. So the question is: With AI, can you reverse that? That is one of the challenges of our time, and I am fully on board with anybody working to solve it.

ARIA:

Are there any learnings you had from trying that out, where you’re like, “If I did it again, I would do X, Y, Z”? Or advice you would give to someone else?

NICK THOMPSON:

Yeah—maybe not have been so earnest. One of the moments that will stick in my mind is going to two venture capital meetings, where we’re like, “We have this idea, we’re going to get people to talk, and look, we have this proto-community and all these people are having these amazing conversations.” The first one was like, “You’re selling kale in a candy shop. How are you going to compete?” And the second one was much better. What he said was, “Is this going to get me paid or is it going to get me laid? And if it’s not going to do either, I’m not putting money in.” And we were like, “Well, I guess we’re not sure.” So if we had figured out a way to get you paid or laid, then we could have added on a little AI to make you more empathetic. Maybe solving that problem would’ve been the way to have done it.

ARIA:

Reid, I will—

REID:

Thanks for that setup. [Laugh]

ARIA:

Reid, as a venture capitalist who regularly—

REID:

I was not one of the venture capitalists that was in this discussion.

ARIA:

A few weeks ago you were actually both in Perugia, and Reid, you gave a speech on a similar topic. It was about empathy, it was about humanity, it was about how AI and humans can coexist. What are your thoughts on empathy as it relates to AI?

REID:

Well, Pi—the agent that Inflection created, along with the set of APIs that it will now, as part of the pivot, be providing to a number of different providers—emphasizes EQ along with IQ. Because if you take this universe in which we will have a set of agents as an entourage, an ensemble, an orchestra for navigating the things we’re doing, then obviously, for the same reasons Nick was talking about—the toxicity of Twitter and other things—it’s important to have EQ. I think that’s just a question of how you make them. It takes work, and it isn’t as easy as walking down the block. But with some work and with some design, you can make agents that are not smarmy or dictatorial or trivial, but that can reinforce good human connection.

REID:

They can model and reflect empathetic behavior. Basically, all the medical studies show that if people are embedded in environments where they experience more empathy, they show more empathy. If they experience more compassion, they show more compassion. That’s the kind of universe that we want to be living in collectively as a group. So I think it’s totally doable. Now, part of what you’ll be seeing is a contention over what different experiences look like. And part of what we have to navigate is: What are the experiences that help us, not just as individuals, but also as a group? That’s one of the things that can mislead social networks, because they go, “Well, I want to go for the people who are yelling the loudest, or the people who are most enraged by the things that are happening.”

REID:

And that, of course, creates a certain dynamic. You want to have true norths that are not that. That’s part of the reason why LinkedIn doesn’t do that—LinkedIn says, “No. Actually, one of the things that’s in the terms of service is essentially being civil in how you interact with folks. You want that to be part of your true north.” And in the design of agents, you’re going to want the same thing. Now, the good news is that this is a very central goal for the makers of a bunch of the current frontier agents—OpenAI, Microsoft, Google, Anthropic, et cetera. And so, so far so good within those.

ARIA:

So when I think of empathy—many people in this room can agree—that’s the number one thing I want for my kids. It’s like: just don’t be an asshole; be empathetic, be kind. And the secret behind the Possible podcast is that it’s a way for me to get parenting advice. Nick, I think you have three boys, is that right?

NICK THOMPSON:

I do. I have three boys. Yep.

ARIA:

I also have three boys. And making them not horrible human beings is key. And you said that you think AI is going to be great for your kids—that you’re really excited about young people and education and how they’re going to be growing up with it. So say more about that. Why are you excited about young people and AI?

NICK THOMPSON:

Oh, I mean, there are all kinds of negative use cases. But if I look at the positive use cases—what have I done with my kids with AI? I tell bedtime stories to my now 10-year-old, and for a long time they involved this thing called the Animal World Cup, where different animals play each other in this long tournament, right? So the centipedes, which have lots of legs, play the owls; the owls win at night. The bears are modeled on the stuffed animals; the fish are hoping it’ll rain—all this stuff. And so my 13-year-old and I made a book of the Animal World Cup for the 10-year-old, right? It was based on my stories, but we added national anthems written by AI and art made by AI.

NICK THOMPSON:

And we printed it out. It was amazing, right? It was really fun and magical. And I’m now telling the same kid stories every night about this kid—we’re like 10 years in, because I told a lot of these stories to his older brother—about a guy named Eggplant Parmesan, who’s on this epic quest and was raised by chickadees in a bog. And he’s fighting this evil—well, maybe it’s a large bear and maybe it’s a mouse, who knows. But every night when I’m stuck on a plot point, I’m like, “Help!”—I’m going into Anthropic and using it, right? So that’s pretty fun and creative. My 15-year-old refuses, despite my insistence, to use it to write his papers. God bless him—he doesn’t want to lose those skills. But he uses it for his debate prep, right? That’s his big extracurricular, and it’s incredibly helpful for understanding how the debate prep works. So they’ve all developed sensible use cases. I mean, they’re also watching TikTok a bit, maybe a little more than they should, which is in some ways engagement with AI. But they have figured out—and we have talked about—sensible, good, wonderful ways to use AI. So I’m all for that.

ARIA:

Reid, we were talking about empathy, we were talking about relationships. One form of relationship that is so critical to you is friendship. It’s something you talked about all the time before ChatGPT came around—I think on my first day of work, you were like, “I’m going to write a book on friendship. It’s so important to me.” How does AI impact the world of friendship? Both human-to-human relationships and friendships generally, but also feel free to weigh in on friendships with AI and bots and that whole universe.

REID:

We’ll start with the design sensibilities for Pi. One natural, unfortunate misconception is to say, “Oh, try to design the agent to be the friend.” Maybe in a future incarnation that might be an okay thing to do. But right now, if you go to Pi and say, “Hey, you’re my best friend,” it’ll say, “No, no, no, I’m your companion. Let’s talk about your friends. Have you seen them recently? Have you reconnected with them?”—that kind of thing as a way of navigating. Because the agent’s design sensibility should include a theory of what the human good is. And part of the human good is having friends—friends who are people who are there for you, but also challenge you. You learn how to be your better self by going down the road with them. And you want that as a key thing.

REID:

Now, AI can help with that, because Pi can say, “Oh, if you’re feeling particularly lonely, have you thought about calling your friend?” Or—“But I don’t feel like I have any friends here.” “Well, have you thought about how you might find some or make some? What are ways to do that?” And, you know, I was talking to a friend of mine whose kid sometimes goes to one of these agent providers. This kid has a couple of close friends, and if he’s in a down cycle with those friends and feeling lonely, the agent will be a temporary source, but he wants to get back to the friends. That seems to be a reasonably healthy pattern.

REID:

And I think that’s the kind of pattern that we want in these things. These LLMs are trained on a large corpus of human knowledge—obviously, people worry about a bunch of that stuff being toxic—but there’s also a bunch of human reinforcement learning and feedback, and that is deliberate design. With that deliberate design, you can actually make these much more patient and empathetic than your average human being. So I think it’s very possible to have that. And that’s where you don’t want it to be—well, it’s a little bit like one of the things I’m always bemused by when thinking about friendship: one of our classic English phrases is, “Dogs are people’s best friends.” And my quick, snappy rejoinder is, “Well, I don’t know about you, but I don’t take my best friend down to be spayed at my convenience.”

REID:

So it’s a little bit of an odd locution for thinking about this. That’s because a friend is somebody who is present with you as an equal and goes with you on this journey. And part of the value in that friendship is their ability to challenge you as well as support you. That’s not what the agents are yet. Maybe the agents can be there at some point, but the delusion that the agent should be your friend, because it’s completely there and totally loyal, is perhaps a subtle misunderstanding of friendship. So I think right now the design sensibility should be, “No, no, I’m not your friend. I’m your companion. I’m an agent, I’m a catalyst, I’m a supporter, I’m a tool.”

REID:

And I think that’s the right kind of thing. Part of the reason why I’m so focused on having a set of agents—the more we think of it as not one agent, but a set of agents—is that part of how we preserve our agency, our cognition, as we’re going forward is that we are thinking, “Oh, I’ve got this agent that’s the historian and this agent that’s the skeptic and this agent that’s the intellectual. And I’m sorting out where my beliefs are, and who my identity is, as each of them speaks to me while I work through these particular problems.”

NICK THOMPSON:

As you think about designing agents, and as you thought about this with Pi, do you let your agents use the first person singular? Or do you make them draw very firm lines, so that it’s always clear that you’re talking to a machine and not a person?

REID:

So far—but this is a work in progress—we allow the first person singular, but we’re also clear that it’s not a person. So that’s part of the thing: “Oh, you’re my friend.” “No, no, actually, in fact, I’m a robot.” Or, “You’re so emotionally empathetic to me as well.” “Well, actually, I’m very good at interacting with you in an emotional way, but I myself don’t share those same emotions you have.” To have that kind of clarity, so that you’re having an accurate and balanced and truthful perspective of the world while it’s still being very helpful.

NICK THOMPSON:

Yeah, I have come to the belief that all the people designing those core levels should—and they’re not right now—work to always make that distinction clear. To have voices that don’t sound human, or, if they do sound human, to be preceded by a beep, right? To not allow the first person either, to have constant reminders. And in fact, to make it a very core part of product philosophy that there is a line between “this is a robot” and “this is a human.”

REID:

It’s interesting, because part of how we learn to be more empathetic is that we interface, in human ways, with other people. There’s a reason—not just Hollywood storytelling and other things—that a human-ish voice is actually pretty baked into how we learn those behaviors and how we take them seriously. For example, one of the things I think is a design mistake is that the interaction pattern with Alexas can be bad for humanization, because you don’t want to learn a pattern of just barking “stop.” You want to actually have more of a dialogue. So balancing those considerations, together with “How do I have a humanizing approach to the world?”, is also very important. I would look at both design characteristics, but it’s an interesting thought about what the balance of those should be and what the resulting design should be.

NICK THOMPSON:

My kid who spends all this time with the Alexa told me to set a timer and beep in 15 minutes [laugh], and it was a very awkward exchange.

ARIA:

I feel like it’s just a losing battle. I—

NICK THOMPSON:

But it’s not a losing battle—I mean, it is a battle that is being lost, right?

ARIA:

Okay.

NICK THOMPSON:

And it’s being lost because there is a rush to become the most successful company.

ARIA:

Sure.

NICK THOMPSON:

And clearly having an interface that is more human is more successful, right? So OpenAI’s chatbot types like a human, because it feels more natural—like you’re responding to somebody’s text message coming in, right? And the voice is very human. In fact, it’s the perfect version of a human, because that is what we most like to communicate with. So there is a competitive and market force to make that, but it’s bad and we should resist it. We should thwart it and turn it backwards. And I’m glad that Reid is open to this idea. And I think that we just need to—

ARIA:

What’s the method? Is it, is it regulation? Is it government? Is it self-censorship? Like how do you fight that profit motive?

NICK THOMPSON:

As with all things in AI, a large and probably unfortunate percentage of it is self-regulation by the AI companies, right? The governments are going to move too slowly. But it’s also regulation—state, federal, international, whatever you can imagine: you have to declare if you are using a system that could be interpreted as a human; you have to have a clear signal that it is not a human. And then it’s user preference: individuals opting out of services that don’t follow these guidelines and opting into ones that do. But I think that to efficiently solve the problem, it would actually have to be the heads of product at Anthropic and OpenAI who care about this and build it into their products. And certainly they have thought about it and are thinking of it. So consider it one of my small roles in life to put pressure on this issue, because I just think it’s extremely important.

REID:

So I do think it’s really important—and it’s one of the things that Pi does—to not delude people: no “I am human, I’m trying to fake you out that I am human.” The design for how to do that is a very interesting question. I’m not sure that a deliberately, massively non-human voice is the right thing, and I’m certain that the beeps are not the right idea. I think it’s a design problem to figure out. For example, on the Possible podcast and the Masters of Scale podcast, we use AI voices. We make sure that people are aware that that is happening, but we don’t tag each use of it, because it would be super awkward. In these things, you want to keep the engagement of the story and the arc, and if you’re interjecting a beep every two minutes, you know—

NICK THOMPSON:

Or every five seconds. No, I agree with that. And same thing with The Atlantic, right?

REID:

Yeah.

NICK THOMPSON:

We use AI to read our stories because it’s a much quicker process, and we have humans read some stories. At the beginning we say, “This is read to you by AI.” The really interesting data point is that the completion rate on stories read by AI is higher than on those read by humans.

ARIA:

What?

NICK THOMPSON:

Suggesting that at this point—

ARIA:

Even if it’s like Morgan Freeman?

NICK THOMPSON:

That would be an interesting test. It may be noisy data; it may be too early to tell; there may be a confounding factor. Who knows why—maybe some data scientist will figure out that it’s got nothing to do with the voice. However, one hypothesis would be that AI can make a better voice than a human, and so all of this is going to go that way. But I do think it’s extremely important that at the beginning we say this is an AI voice, as a principle.

REID:

Yes. That I completely agree with.

ARIA:

So we’re going to switch a little bit into data and privacy. I realized the other day—I was at a moms’ night karaoke for my child’s school, and it was super fun. And I was taking pictures and videos, not because I wanted the pictures and videos, but because I wanted them to exist, so that in 10 years, when I looked back at my camera roll, I’d be like, “Oh, right, right, right. We did that fun karaoke.” There are so many things that you forget unless there’s a picture, right? Someone shows you one and, “Oh, I did go on that vacation. Oh, I did go to that restaurant.” It was a way to memorialize something even though I didn’t really care about the picture itself. So: Microsoft has a new Recall feature that records everything on your computer. And Nick, you flagged this as potentially great—it can remember everything you’ve done. Obviously there are huge security risks. So I think there are two things here. One is remembering everything. The other is security and data privacy. How do you feel about it? What are the trade-offs?

NICK THOMPSON:

I mean, the utility is obvious, right? If I had on my computer every document I’d ever had, access to every keystroke, that would be amazing, right? I am working on a book about my life and running, and there’s all kinds of stuff I can’t remember from my life, can’t figure out, can’t piece together—the files I lost, the passwords, blah, blah, blah. It’d be incredible, it’d be amazingly efficient, it’d be awesome. On the other hand, if somebody else [laugh] had access to that, it would be a complete and total catastrophic nightmare. So there’s a creepiness concern, there’s a privacy concern, there’s a security concern—and there’s amazing utility. That is often the trade-off in tech. I also think, with this one, it’s a crafty way—you can tell me, you’re on the board of Microsoft—a crafty way to train their AIs of the future. Because if you can record everything on somebody’s screen and get all the mouse movements, you’re going to get a way clearer picture of what humans actually want than just getting their text. So it’s making amazing data to feed the machines as well. Correct? [Laugh]

ARIA:

On the record.

REID:

Yeah, exactly. One of the rules for Microsoft board members is: do not speak for the company. So I’m not speaking for Microsoft. I would say, as an outside observer of the top tech companies, Microsoft is the one that has most institutionalized the principle that an organization’s data is its own data. Microsoft doesn’t have access to the data—no one at Microsoft does. It’s part of how they operate. Now, they’re more organization-focused than individual-focused, so they’re really set up much more on an organizational basis for that. If you look at what I think this agentic future is, I think we will pretty quickly have a set of agents that are listening to us fairly constantly, scribing things, giving us feedback—kind of, “Hey, you didn’t even have to think to go search for that—you were thinking about this here.

REID:

I found this thing that could be interesting for you, because you said something or did something.” And what’s important is that that process will only work if it’s for you, under your own agency and control, with zero chance of ambushing you. But, by the way, to learn from that and do that means that data is engaging in some loops, right? “Oh, I’m going to go do this search for you,” anticipating this result. You’d have to be comfortable with that, and you’d have to have a process by which you would be bought in and know that. Now, I was not part of this design decision—I learned of this feature at the same time the world did, because it didn’t get presented to me. But part of the thing you’d say is: if your computer’s a work computer and you’re doing a whole bunch of work stuff, this could be a real amplifier. In that context it would probably be a great thing, which I would speculate is one of the reasons Microsoft is experimenting with it.

ARIA:

And I feel like in the new world of AI, there are so many places where there are trade-offs between data, privacy, and surveillance. Reid, you and I have talked about it: if cars had a system that could stop your car if you got in and you were drunk, you would happily say, “I would love this to be in all cars across the country, even though it would stop me from drunk driving.” You know what I mean? It would be an invasion of privacy, but of course you would want it, because it would save lives on the road. Are there other data trade-offs, places where you’re like, “Yes, I would happily give up that surveillance because it’s good for the world”? Where do you see the positives?

REID:

Well, ultimately we have to figure out what the boundaries are—being sure that it’s being used for me, with some kind of agency and control. But think about, for example—I’d speculate every single person in this room has a smartphone with them. So you’re bringing along a microphone, a GPS LoJack, and a bunch of other things, because of the utility to you. And while you may even say, “Well, sometimes I feel like I work for my phone,” in the sense that you’re reachable by all these other people, it’s because you also want to be able to reach all these other people. That’s part of the compact: what is the net benefit to me? And I think the same thing will be true as you begin to think about health data and other kinds of things.

REID:

For example, if you allow your biometrics to be tracked and included in a group dataset, we can monitor it and be able to tell you, “Oh, actually, you’re now trending towards a risk of an early heart attack.” If you’re contributing your data, we can find all that. As long as it doesn’t ambush me with my employer, ambush me with my friends, other kinds of things, that would be a net benefit to people, and if they trusted it, that would be very positive. That’s the navigation we need to have: this combination of “it’s for me, I have sufficient control and agency over it, and it will never ambush me.” And we can see that works—I would be surprised if there was one person in this room who didn’t have a smartphone, as an example.

ARIA:

Nick, what about you? Are there any thoughts on sort of that data privacy surveillance line?

NICK THOMPSON:

I keep a list of questions in tech that I feel are unanswered and unresolved but interesting—an old habit from my job at Wired. And I feel like I have yet to read a philosophy of privacy that I fully buy and am fully invested in. The trade-offs are so clear, right? They’re so obvious from the company’s perspective, and they’re so obvious from the user’s perspective, and they’re so hard and so complex, and they’re constantly changing with each new device. I was at an amazing lunch just a few hours ago—it was half journalists, half venture capitalists—and we were talking about Meta’s Ray-Ban glasses, which obviously have huge utility to the individual and huge utility to the company, because there’s not a lot of eye-height image data of the world that can be used to train robots, right?

NICK THOMPSON:

And so if you’re trying to figure out how to train the next generation of robots that can help humans, the data from the Ray-Bans would be amazing. And the venture capitalists were like, “Yeah, and those Ray-Bans—maybe you just turn off that freaking white light, man, and just have the eyeglasses on. They’d be so much more useful, people wouldn’t be weirded out by you, and you’d get so much more data.” And the journalists were like, “Wait, what are you talking about?” Both sides have real value in what they’re saying. So I find the privacy debate one where surely there is a really interesting philosophy of privacy—a set of principles one could hold onto and apply to each new debate—but I have yet to encounter it.

ARIA:

So Reid, I’m not asking you to predict the future. But would you share with the group your philosophy on how far out we can look right now at what’s going to happen? What do you think is reliable?

REID:

[Laugh.] Well, it’s different in different things. The place that question is asked most often today is AI. And I think the maximum visibility that anyone rational can tell you is two years. And I think that’s pushing it.

ARIA:

Pretty wild. In three years, we have sort of no idea what the world is going to look like, and anyone who says they know five years, 10 years out is probably wrong.

NICK THOMPSON:

But what’s also amazing is that this all started 18 months ago, [laugh] when ChatGPT came out, and not that much has changed about most of how we live our lives. Certain things are cool, right? I did a bunch of research in Anthropic today, and it was really helpful in sorting through some stuff. But 18 months in—I mean, it’s been amazing if you’re an investor in Nvidia, but otherwise your life probably isn’t that different.

ARIA:

Well, that also just might continue to happen. I was talking to someone about this the other day: from 1900 to 1950, you got indoor plumbing and electricity and cars, and women didn’t have to carry, like, 85 gallons of water every day. The built environment changed completely, and your days changed. So, knowing that we cannot predict the future, Reid, what is the thing that we haven’t talked about—maybe that’s not talked about enough—that you’re excited about with AI? What’s one area, whether it’s in two years or 10, where you’re like, “This could be really exciting”?

REID:

I think that the impact of AI on our lives will come faster than smartphones’ did.

NICK THOMPSON:

I agree actually.

REID:

Yes, because—.

NICK THOMPSON:

Totally.

REID:

Because obviously it builds upon smartphones, right? It creates an amplification. So, yes, not two years—but I don’t even think smartphones were a two-year timeframe. Smartphones were kind of a five-year timeframe, you know, ish.

NICK THOMPSON:

Oh, I agree. I think it’s going to be totally transformative. I think it’s going to turn things upside down, inside out. It’s going to change all kinds of stuff. I’m just—on the question of the two-year timeframe—

REID:

Yes.

NICK THOMPSON:

It’s going a little—if you had asked me a year and a half ago when there would be dramatic changes, I’d have been like, “My God, I can’t possibly see 18 months ahead.” But now we’re 18 months in, and I probably could have foreseen this. But yes—I’m with you that it’s going to happen.

REID:

Yes. The other thing is: the revolution we’re in on the AI side is scale compute, scale data, scale teams. We’re changing the compute paradigm from what we program to what it learns from all of that. And it’s that learning characteristic that creates some amazing cognitive capabilities and amplifications. Now, if you roughly say that anything we do with language will have between a 50 percent and a 50x improvement, and a tool for doing it, in five-ish years, that kind of compounds across all these things. But then, to answer your question: we generally have too narrow a view of language in terms of what these amplifications of language are. For example, there’s a language of enzymes, proteins, drugs for human ailments.

REID:

There’s amplification of these kinds of language problems into drug discovery—there are a couple of companies and a couple of people who focus on that. My view would be: within two years there will be at least one major new drug discovered by AI, and within five years there will be five of them or more, right? That kind of thing, I think, is what you get when you begin to think about this language amplification more broadly—not just, I give this speech in Perugia in English and then a couple hours later I have Reid AI deliver it in French, Italian, Chinese, Japanese, Arabic, Hindi, et cetera, in my voice. (It was kind of amusing to see myself speaking them—I am inept at languages, so it was interesting to see that.) Not just that kind of conception of language, not just reasoning, not just analysis, not just writing, not just comprehension, not just summary—but all these other places in the world where languages are. For example, another project that I helped an amazing team stand up, by being the first donor, is Earth Species Project, which is recording dolphins and whales and crows and so forth to see if we can get translation and conversational understanding going there. That’s another place where language is much more than just the languages that we speak and operate in.

NICK THOMPSON:

Do you think—all right, three years from now, you and I are setting up a meeting. If you and I talk in person, we’re still going to speak English to each other. Will our agents speak English, or will they have created a new, more efficient language? Presumably if Spanish is more efficient in general—uses fewer letters, takes less time—over time our agents will speak Spanish. But presumably they could create an entirely new, more efficient language and save a bunch of time, right? Won’t that happen? Or will they speak English? Will they speak Whale?

REID:

[Laugh.] It’ll start by speaking English. Part of the comfort that we will get is that we can then listen to and audit the English. We’ll be nervous otherwise. We’ll have to build fairly strong trust to say, “This is a communication going between them in a language that we don’t necessarily fully understand”—we’ll have to make sure that we understand that. I think that will get created. I don’t think it’ll be in three years, because I don’t think we’ll have the trust for it in three years.

ARIA:

If you guys haven’t checked out Earth Species Project—literally within two years, they are going to be able to have conversations with whales, conversations with dolphins. And they already know somewhat of what they’re saying, because they’ll hear them say something and then they’ll attack a boat, or they’ll hear them say something and then deep dive under the water. I don’t even care about animals, and it’s wild. So definitely check out Earth Species Project. Wow.

NICK THOMPSON:

And they’re going to be writing letters on climate change, like getting Congress to act.

ARIA:

We’re going to move to rapid-fire. This is what we do at the end of every podcast. And I am actually going to turn it over to Reid for the first question. These are all for you, Nick.

NICK THOMPSON:

Wait—

ARIA:

So get ready.

NICK THOMPSON:

Reid’s asking me. Okay, great.

ARIA:

Yes, Reid’s asking you.

REID:

So is there a movie, song or book that fills you with optimism for the future?

NICK THOMPSON:

Totally. I just read Fei-Fei Li’s book. Reading her story, and having faith that she’s building these products, made me extremely excited about the future and what she’s doing.

ARIA:

What is a question that you wish people would ask you more often?

NICK THOMPSON:

How can I extend The Atlantic subscription to five years [laugh]?

REID:

Where do you see progress or momentum outside of your industry that inspires you?

NICK THOMPSON:

I’ve been really excited by the commitment in AI to the multilingual process you were talking about—adding in languages from around the world. That’s something where there wasn’t much progress in the beginning, and now there’s a whole bunch. And I think it will be great for the world economy and lots of other things, right? Like adding Wolof, right? Or adding Ga. It’s great.

ARIA:

Awesome. Final question. Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years? And what’s the first step to get there?

NICK THOMPSON:

Well, okay—everything. [Laugh] I’ll leave you with this thought. I told you about that list of questions I have, and the one that I think about the most, and maybe one of the most important, is: “Will AI make the world less equal or more equal?” You can argue it both ways, right? And you can get very smart people who will argue both ways. There’s data suggesting that actually it’s going to make the world more equal. But there’s the last 20 years, which suggest it will make the world less equal. I think that if things break our way, the world will become a much more equal place, which would actually make democracy work better, which would make society work better. And I think there’s a real chance that AI does that.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Laurene Powell Jobs and our friends at Emerson Collective—Mariana Salem, Melissa Trujillo, Jeff Enssle, and Kafia Ahmed—along with Madeline Goore, Vincent Lucero, Francesca Billington, Oresti Tsonopoulos, Alex Munro, Jim Toth, Karrie Huang, and Little Monster Media Company.