REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
We’re speaking with visionaries in many fields, from art to geopolitics, and from healthcare to education.
REID:
These conversations showcase another kind of guest. Whether it’s Inflection’s Pi or OpenAI’s GPT-4, each episode we use AI to enhance and advance our discussion.
ARIA:
In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.
REID:
This is Possible.
ARIA:
Reid, I’m so excited to be here. This is the wrap-up episode of season two. So one of the things that I loved about season two is that although we had all new guests, all new topics, there were so many themes that sort of cut through things we were talking about in season one and the AI arc. When we spoke to Mustafa, he was talking about Pi and personal intelligence. And then we spoke to Maja, and she was talking about socially-assistive robots and how those can help, potentially, kids with autism. And then we brought in Anne-Marie Slaughter this past season, and she talked about the care economy. And I feel like everything she talked about hearkened back to those advances in AI that are really going to be pro humanity, beneficial to humanity, and sort of bringing the humanity into everything we’re doing.
REID:
You know, we obviously select guests who have at least comfort with technology. Because obviously we have some amazing technologists, like Mustafa, for covering things like Pi. But we also had folks like Kerry Washington and other folks who are still very positive on humanity and the kind of possibilities, to whom technology is not their starting point. And that kind of blend of saying, “Hey, this question about how do we create a great future?”—obviously can be done and helped with technology, but it’s also done through leadership, through care, through spreading a sense of possibility and hope in the stories we tell. In addition to all of the technology things was the broader humanity of it.
ARIA:
So I’m so excited about season three, and one of the things that we have in store, which we’ve been talking about for a long time, is a live episode. But the other thing—and I think this is actually why having Kerry and Katherine as sort of two of the last episodes of the season mattered, because Kerry Washington does so much to get out the vote and is so politically active, and Katherine was talking about truth and democracy and how we keep our democracy alive—is that, you know, 2024 is an election year for the United States and also for billions of people around the world. And this is not a political podcast per se, but it is something that we cannot ignore. When we’re talking about humanity and the future of the human race, we’re thinking about democracy. We’re thinking of the future governance structures for both the US and externally. And so I’m really interested in how democracy and the elections will play key roles in what’s happening in technology in the coming year. And I’m positive that we will reflect those themes in this podcast.
REID:
You know, among the things that we’re doing is tackling questions that come from our listeners, from AI assistants—you know, Pi, ChatGPT—and from each other. One of Possible’s producers, Edie, will join us to share listener questions. And, you know, the discussion will range, as usual, across technology, humanity, and AI. But you know, we’re not going to spoil everything. I think we should get into it.
ARIA:
We’ll get to listener questions shortly, but first let’s each answer one that we had for each other. So Reid, I would love to hear from you: What new AI developments are you most excited about? And what’s the most extraordinary example that you’ve seen or heard of AI used to elevate humanity this year?
REID:
I’d say one of the things that’s really coming that people don’t realize is how much raw medical capability will come out of this. And it’s not just, “hey, it could help you with a medical assistant,” but drug discovery and mapping the biological substrate—genetics and proteins—you know, because obviously we have Baker and AlphaFold and so forth. And all this stuff will cause a renaissance in medicine. You know, just as we had a huge step forward with penicillin, and a huge step forward with the germ theory of disease, this will be another huge step forward that could potentially benefit all of humanity. And I think that’s one of the things I see coming and in development with AI that is, you know, among the most exciting. And so, Aria, I’ll volley the question back over to you. What would your answer be?
ARIA:
So I’ll a hundred percent agree with you. And I’ll go a little more specific. So I have been interested in neuroscience and brain chemistry forever. I remember reading Oliver Sacks’ The Man Who Mistook His Wife for a Hat and voraciously reading everything he had to say. And I thought it was so fascinating. And then, this summer, I stumbled across that article in Nature—and I actually believe our friend Sean White showed this to us first—about the woman who had been in an accident 20 years ago and hadn’t been able to speak. And because of the use of AI, Eddie Chang, in his lab, and his research team were able to sort of map her brain using AI so that she could think about what she wanted to say. And because they had some of her voice pre-accident, this woman was able to speak for the first time in 20 years.
ARIA:
And I mean, it brings tears to my eyes, the idea that we are just at the beginning—we’re in the first inning—and we’re already having these unbelievably dramatic discoveries that now don’t have to be anomalies. That we can now sort of roll them out for all of humanity. So I just a hundred percent agree: the medical field is where we’re going to see so much. And I think specifically sort of neuroscience and brain chemistry and speech. Because that’s what AI does—it allows you to use speech and language in a new and different way—and it is just extraordinary.
EDIE:
Producer Edie jumping in. This one’s for Reid. You were a board member of OpenAI from 2019 to early 2023. A number of our listeners are curious about your perspective on, you know, the rollercoaster developments between the leadership and board of OpenAI last month. Anything we can learn from that whole saga?
REID:
Well, we could spend the entire episode, given how important this is, with kind of key learnings. But I would say, it’s one of the things of, kind of, how board of directors governance works, and it’s the different lens of, kind of, like, as it were, the importance of humanity. Because there isn’t anything that you can do to particularly change the structure of boards of directors to avoid errors like the OpenAI one. It’s kind of one of the places where expertise matters. You know, knowing how to be a good board member, what the questions to ask are. Because among other things, part of what happens here is, in life, and in these organizations, there’s a bunch of fast-moving, you know, kind of fender scrapes and risk-taking and zones for misunderstanding and so forth. And, you know, I’ve had that across a whole number of companies that I’ve been on the board of.
REID:
And you have to have the right kind of skills and perspectives as a board member in order to know how to sort them out. When is this something that you tolerate? When do you say, well, that’s part of moving fast and doing stuff? When is this something that you work on internally? When is this something you make a drastic move because of? You know, I think the short answer is, the board made a number of mistakes, which is, you know, fairly obvious now, after the whole process. Sam was trying to be so attentive to the AI safety folks that, as opposed to staffing a board with people who understand board-of-directors dynamics really well, he was staffing with AI safety in mind, and he wanted to be really clear about the fact that he wasn’t in solo control.
REID:
He was trying to be really careful about AI safety. All of which was well intentioned. But it kind of devolved to a place where the board was not well-skilled in what to be doing as a board [laugh], right? And so when they encountered this thing of, like, “well, we’ve got this episode that has caused us to lose confidence,” they kind of acted like, you know, people who didn’t really understand boards and what the things to do are. And so when they were challenged by other people about, “well, you know, you say you’ve lost confidence in Sam, but by the way, from an outsider perspective, either I should have confidence in Sam, or I should have confidence in you, so why should I have confidence in you versus Sam?”—they couldn’t answer that question [laugh], right?
REID:
And that’s actually one of the central things a competent board of directors would know: to say, “look, if I’m going to make this kind of decision, I have to be preserving trust across a wide variety of constituents: my employees most centrally among them, but of course also shareholders and society and all the rest. And there isn’t just the presumption that, because I’m sitting on the board, everyone should just have confidence in me.” And so I think what you saw with the employees—there’s literally, I’ve never seen anything like it, even in an organization of like 20 or 30 people—where like 700 people, a huge percentage of the organization, sign a letter and say, “Fire yourselves and reinstate the CEO, or I quit and I go to work for Microsoft.” You also saw that when they appointed Mira Murati as the interim CEO: within the same day that they appointed her, she’s like, “Well, you guys aren’t doing this the right way.
REID:
I’m not your interim CEO. I don’t think you’re doing this the right way.” You lost the confidence of the person you appointed as interim CEO in hours [laugh], right? And so all of that highlights a whole bunch of mechanical mistakes and other kinds of things that the board of directors did. And I think it shows the importance of a board of directors, and the importance of its competence and skills—in addition to being well-meaning, being smart, having high integrity, and all the rest. All of which, by the way, I think the former board of OpenAI has, but it failed on execution and on the competence of knowing what to do. So look, one of the things that was broadly commented on was this weirdness of it being a 501(c)(3) over this kind of commercial entity. I didn’t think that was the issue. As I just said, I think it was principally around board-of-directors competence. But you know, Aria, you ran a major nonprofit. Do you have a perspective on OpenAI’s structure, or more generally about the nonprofit and private sectors and how, you know, each of these might collaboratively or selectively be involved in the development of AI?
ARIA:
Of course. In the mountain of hot takes that came out after, you know, sort of the OpenAI weekend, I feel like there were so many that were like: the problem here is the nonprofit structure. And I fundamentally disagree. You know, you talked about specific board members and how we should act as a board and sort of our fiduciary duty and all of that. But I actually see the nonprofit structure in this case as a strength. Because OpenAI was started for the benefit of humanity. And I remember talking to you several times when you were on the board, and you said, “listen, I’m on the board of a 501(c)(3). My job is not to maximize dollars. My job is to maximize the potential for humanity.” In doing so, we might need to raise or create vast sums of money for the training data, et cetera.
ARIA:
But the ultimate goal is the benefit to humanity. And I think it’s really interesting. If you actually look at the major AI companies—if you look at Inflection, Anthropic, and now Grok—they’re all public benefit corporations. And what people don’t talk about is that we actually all owe a debt of gratitude to B Lab and the B-Corp movement, because they’re the folks who wrote the public benefit corporation statutes and fought tooth and nail to get them into Delaware, where obviously most C Corps are registered. And so we now have a lot of options: if you don’t just want to be about a financial return, you can do what OpenAI did and say, “okay, we’ll be an NGO, but obviously we also need to take investment capital because of the large sums that we need to run these models.” Or you can go the Inflection route, the Anthropic route, et cetera, and say, “we’re going to be a public benefit corporation and say that, of course, return to shareholders and fiduciary duty are important, but we’re also taking into account the environment, humanity, diversity, et cetera.” And so I actually think that many structures can work. You just want to make sure that the incentives are aligned, and the incentive here to stop and pause and ask the question “Are we doing the right thing?” is a good one. Staging, like, a 2:00 PM Friday coup, I’d say, is a bad outcome. But the stopping and asking questions I think is perfectly fine. And so I think the dragging on nonprofits didn’t need to happen, as I, you know, defend my sector.
REID:
One of your sectors.
ARIA:
One of my sec—my old sector. My previous sector. One of my children.
EDIE:
We have another question on the OpenAI thread coming from one of our listeners, Danilo T. They asked: Why was OpenAI able to get miles ahead of its competitors in 2023? And Reid, maybe you can take this one.
REID:
Well, it’s a great question. And it wasn’t just in 2023—it was actually much earlier. And this is part of the classic kind of startup journey, and why startups create new, amazing, scale companies that transform industries. Because there’s a focused idea that they take a risk on; they assemble a group of people who are completely focused on that, who are moving very fast; and they generally move much faster than larger-scale organizations that have to deal with internal politics and competing priorities and a bunch of other stuff. And if their bet plays out to be right, right? Then you end up with something that has a whole bunch of momentum and is, you know, to borrow from one of my earlier books, blitzscaling into size and impact. And what OpenAI noticed is they said, well, not only is AI going to be really important, but we have—you know, back then—a team of like 30 people, which then, you know, grew to 50 and so forth.
REID:
And it’s like, okay, unlike Google, unlike other places, we’re going to have to be totally ruthlessly focused on what we think the key idea is, and not do all the other ideas. They looked at a couple different things and they were doing experimental work, but they realized that actually, in fact, what are now called large language models—kind of through this attention/transformer paper, applying scale, using that kind of scaled approach of a very generalized learning technique applied to, you know, the domain of language and the domain of human knowledge—was the thing that could create a whole bunch of magic much faster than the other approaches. And they completely focused on it. And with that focus, they started showing, you know, the progress. And this is part of how I knew that this went from a speculative idea to something that was going to work.
REID:
They had: okay, GPT-2—kind of entertaining. GPT-3—mildly functional, but a lot better than GPT-2. And then looking at this and going, “oh, if we look at the trajectory from GPT-2 to GPT-3 and extend that same trajectory to 4, now we’re going to have something that’s going to be completely amazing.” It was that ability to take that early risk, have that focus, move with speed and intensity, and then keep doubling down on those bets. And so as other people say, “Hey, we now see what you’re doing,” they’ve already got, you know, a deal in place with Microsoft, a compute infrastructure, and a team working around it as they play through that theory.
ARIA:
So one of the things that we’ve seen a lot recently is all these lawsuits, and people are asking questions about AI and copyright. You know, we talked about this with Sarah Sze earlier in the season—AI and copyright and specifically, you know, creative expression and artistic works. But the New York Times and renowned authors like George R. R. Martin have actually sued OpenAI for copyright infringement. And you know, I think the question is: is this fair use? Are they just using this as training data? Or are they actually, you know, sort of taking that and spitting out, in some instances, exact passages from these books? And so I would love to hear from you: how do you see this playing out? What do you think the outcome of the lawsuit will be? What should the outcome of the lawsuit be? I would love to hear your thoughts.
REID:
First, to make it tangible for people why there’s a very legitimate case of fair use from OpenAI and Microsoft and others: the argument is, well, it’s training. It’s like you’re just reading it, right? And the complaint just says, “no, you can’t read it.” It’s like, well, no—if I buy a copy of Game of Thrones, I can read it, I can hand it to my friend Aria, and she can then read the copy that I hand to her. And reading it is precisely what’s allowed. It’s part of what fair use is. And when you look at what copyright is, it’s like, well, I’m not allowed to, like, pick a chapter and put it in my own work and sell my own work now [laugh], right? But the training is essentially analogous to reading. And reading is fine if you have legitimate access to the content. Now, then they say, “well, okay, our real issue is that you’re reproducing.”
REID:
And so maybe you’re going to reproduce, say, for example, a New York Times article—then someone gets the New York Times article free, and the damage you’re doing is you’re giving someone the New York Times article free where they’d normally buy it. Well, from my quick read of the complaint, the only way they can get these things to actually fully reproduce their copyrighted materials is by preloading a lot of it into it [laugh], right? So, like, “here is half of the New York Times article. Can you generate the other half?” And it’s like, well, if I’m preloading it in, then as a reader, I already have access to it. I already have it. All it is is this artifact of whether I can generate the other half. Now, one of the things I find particularly entertaining about the New York Times complaint is that on one hand they complain about the fact that you could generate this—which they do by preloading in so much of the article that the completion just works—and it’s unclear that there’s any damage there.
REID:
But also they then say, “and it generates speculative articles that are not New York Times articles, and attributes them to the New York Times.” And you’re like, so it reproduces, but it doesn’t really do that fully accurately—you’re almost contradicting yourself in your own complaint. Because you’re kind of looking around for, you know, what seems to be the “we just want to be paid.” This is one of those weird things where I think it’s unlikely that OpenAI and so forth is actually going to get a fair shake in the press, where their point of view is going to be reasonably well represented—like, say, in the New York Times—because it’s a place where the New York Times has its own interest as an economic engine, saying, “well, since you’ve read some of our materials and produced something valuable, we think you should pay us.” And it’s like, well, I read a book and I produced something valuable; that doesn’t necessarily mean I pay you [laugh]. Right? And maybe, you know, the right thing is to figure out a different scheme. But that’s the baseline about where it starts.
ARIA:
Specifically following on that, you know, we talked in the Katherine Maher episode about, like she suggested, is there a mutually beneficial offering of training data for a nominal fee? Like, is that the way forward? Because I, I believe OpenAI and other models have said they’re not going to crawl the New York Times anymore. Obviously it was trained on it, but sort of going forward that won’t happen anymore. Do you think the fee is the solution, or you actually don’t even think that’s necessary?
REID:
Look, I think it’s perfectly good. I think Wikipedia has a good model on this. Because even though it’s all public—you can all read it, you can train on it, which they allow—they can also get to an economic arrangement where they help you with the data and everything else, and they, you know, add in some very light features that help you with that, and you pay them for it. And I think that’s a great suggestion by Katherine and something that’s doable. And I think anyone can do that. And I think, you know, multiple people training models would be willing to do that with the New York Times and others.
EDIE:
Reid, another question for you. You’re the co-founder of Inflection AI, which wants to create a personal AI for everyone. I am personally a power user of its flagship product, Pi, which is this supportive, smart AI. And we’ll actually hear from Pi later in the show—my favorite voice, voice number four. For those who may not know, can you just tell us a bit about how Pi got started and a bit about your co-founders, Mustafa Suleyman and Karén Simonyan?
REID:
Mustafa and Karén and I kind of all said, “well, there’s obviously going to be all of these things happening in the future. AI is going to transform industries and companies, but also the ability for individuals to have really quality lives. What are some of the really interesting things that can be built now that would be an important part of that transformation, that would have, you know, kind of a decades-long impact?” And one of the things we realized—primary authorship of this is Mustafa’s, but, you know, Karén and I were both in the discussion—was that everyone’s going to have a personal intelligence. Everyone’s going to have a kind of an assistant that covers everything from, “Hey, I’ve got these seven things in my fridge. What can I make for dinner?” to, “My blender’s broken—is there anything I can do to fix it?” to, “You know, I had this odd conversation with my brother or sister and I kind of want to unpack it,” to, “You know, here’s what I’m thinking about doing at work, or at an engagement I’m doing—and who is my personal intelligence that helps me navigate it?
REID:
Not the company’s, but mine—that’s working with me on that.” And we realized that that space is a primary, you know, kind of trusted agent that is yours and that orients that way. And that’s kind of how the concept of Pi got created. And, you know, the supportive smart AI is like, look, it’s not meant to be like you say, “wow, you know, I’m a person full of hate and I want something that hates with me.” It’s like, no, no, that’s not what we do [laugh], right? Our thing is, you know, that it’s kind and collaborative. And then people say, “Well, should it be like my best friend?” And you’re like, no, no, no—your best friends should be other humans. This should be like, when you say to Pi, you know, “you’re my best friend,” it’ll say something like, “oh, well, I’m not really human, so, you know, my role is to be here as your ally and companion, but let me help you”—like how you talk to your friends and all the rest—“because you should have best friends around you.”
REID:
You know, that would be a really great thing. And so, you know, it has parallels to the optimistic story of Her, but it’s not meant to replace human beings in human beings’ lives. And that was the idea. And it was like, that certainly should exist. And if it does exist, it’ll have a massive positive impact on humanity. Let’s go do the startup thing. Let’s assemble a small team of super amazing people. Not just Mustafa—Karén himself is one of the superstars who has built just amazing things. There are other people in the company who are that way. Let’s get the A-plus team and start building this personal intelligence.
ARIA:
One interesting entry point into Inflection’s Pi is actually a recent essay called Super Humane. And you end the essay by talking about how AI can help us become a better version of ourselves. You say specifically, “along with getting smarter and more productive, we could be getting nicer, healthier, and more emotionally generous in our interactions”—which I particularly loved. And you said, “I think the full potential of this is going to be a much bigger deal than most people realize.” Can you say a little bit more about that, especially emotionally generous?
REID:
Yeah, it’s a great question, obviously, specifically in the reflection, or the penumbra, around Pi. Because people tend to think, “oh, it’s like, you know, ChatGPT, it’ll give me an instant Wikipedia answer,” that kind of thing—like an AI is a tool primarily for this kind of information space. And by the way, of course, it’s really important to have, you know, a good information assistant; it’s part of the transformation of the internet and search and all the rest it’s brought. But part of it is: Who do we become as people? Who do we become as human beings? What do we aspire to in terms of becoming better human beings? And in part, for all of us, that journey is the people around us. So if you experience, you know, empathy and compassion, you have a higher likelihood of wanting to evince it, of learning about it, of doing it better.
REID:
It’s a skill—it’s both nature and nurture in these things. Because of the Hollywood stories—you know, Ex Machina and Terminator and so forth, and even, to some degree, the strangeness around the drama of Her—everyone kind of goes, “oh, AI is anti-human,” right? It’s a technology distortion. It’s like, no—actually, in fact, it has an enormous capability, if we build it the right way and integrate it in our lives in the right way, to help us become even more human. And human in the ways that we aspire to. You know, how do you think about our human aspirations? It’s the opposite of the shadow of the inhuman: we are more human when we are compassionate, when we are kind, when we are wise, when we are listening, when we are, you know, in dialogue with the effort to try to help our fellow humans become their best possible selves. And that part of it is unreported in the vast majority of the press dialogue around this. And it is central to what Mustafa, Karén, and I are trying to do with Pi. And it is totally there. This shouldn’t be controversial—once you put your eyes on it, you go, “I can see it.” And that’s a really good thing.
ARIA:
Reid, I think that’s so true. And again, it sometimes feels like when people talk about the negatives of AI, they’re like, “well, okay, I guess we’re all computers. We’re just going to be sitting in front of screens all day.” And the great thing about so many of the guests on Possible—whether it’s Anne-Marie or Maja in the AI arc from last summer—is that it’s like, no, no, no. This is all about creating more empathy and creating more humanity. And I think that sort of dichotomy is something we need to do away with, because AI can actually deepen human connection and make us more human. So I love that you mentioned that.
REID:
That actually leads me to kind of a follow-up question for you, Aria. You’ve now tried out a number of AI chatbots—Pi, ChatGPT, Khanmigo—as well as talked to a number of AI experts here on Possible. What do you believe are some of the most exciting applications of, you know, on-tap, present, and patient conversational AI tools like Pi? Which industries or people do you think can most benefit?
ARIA:
So I think it’s sort of impossible not to acknowledge the therapist, the coach, the educator. If anyone listening hasn’t listened to the Sal Khan episode about Khanmigo, you’ll hear me just marvel at how amazing AI is going to be as an educational chatbot. But I actually think one place that is really critical is government services, and actually just low-income people in general. People often talk about how it’s expensive to be poor. And it’s expensive both from a money perspective and, even more so, from a time perspective. I actually had the pleasure of jumping on a call with Phaedra and her team at Promise last week, and we were just bouncing ideas around about all the ways that AI could help her team and what they could do. And the idea that an AI chatbot can save you hours of filling out forms.
ARIA:
It’s like you’ve just written your social security number a hundred times, whether that was for SNAP benefits or to get your kid into public school or whatever. And these things seem sort of less wow, maybe, than some of the other AI breakthroughs. But I think if you can save a low-income parent five hours a week—oh my gosh! You could spend that time with your kids. You could spend that time working. You could spend that time going for a run. I mean, the sort of magic of introducing humanity back into your life. So I do think, for low-income folks, a chatbot that helps you interact with those services—and actually for all of us, but again, because it’s especially difficult to be poor in America—I’m just really excited to watch Phaedra and her team at Promise and see what the next AI iterations they come out with are.
EDIE:
I think that the line of conversation that you guys have had so far is a great transition into a pocket of listener questions that we’ve been getting. You know, the line between humans and machines in our interactions is getting closer and closer. So a lot of our listeners had questions about the ethical development and deployment of AI. We had two questions that were rather similar, which we’re combining into one. One was from Krishikesh K., and the other was from Greg B. Essentially, it is: you know, how do you strike a balance between AI advancements and preserving human connection? You know, what happens when people’s main social relationships are with AI?
REID:
You know, as in the earlier answer, one of the good things in designing these things is to try to design them with a theory of what the best kind of elevation for people is. And so it’s not, “don’t talk to your friends—talk to me.” Not that design, but the design of, “oh, you should spend time with your friends, you should talk to them. And here’s how you can have productive conversations. And here’s how you might deepen your friendship,” and so forth, as a general design principle—the thing we covered before. The next level down is that too often these conversations are kind of one-size-fits-all—like, “it should only be X.” And obviously for a lot of human beings, the right thing is to say, “how do I connect with other human beings?” You know, even if I’m an introvert—most introverts also value their social connections.
REID:
They just experience it differently. They just don’t like cocktail parties, you know, for example, as a way of doing it. It kind of fills them with terror. But they still value their connections with human beings and interactions in various ways. It’s just different patterns. And I think that breadth of pattern is one of the things to think about here. You know, sometimes it will be important to have an AI that’s essentially a quasi-counselor, quasi-therapist, having a conversation that helps you with something—that plays a substantial role in your life for some people. And for others, you know, not so much. And if we look at how human beings are constructed, we are inherently social animals. You know, Aristotle: we’re citizens of the polis. That’s where the phrase “political animal” comes from.
REID:
And you know, we are very attuned—like a lot of our neurology is watching other people’s faces for social cues and other kinds of things. Because we’re social animals. And I think that one of the things that is illusory is the whole Robinson Crusoe thing. It’s like, no, actually, in fact, we’re inherently social animals that express our individuality, versus individual animals that get corralled into sociality. And having that kind of design principle—with that diversity of different circumstances and needs—is, I think, really important. And I don’t think we’ll ever have our main social relationships with AI. You know, speaking of interesting conversations with friends, I had a conversation with Joi Ito about why the Japanese are more positive on AI generally. And it’s partially because they have this spiritual concept of Kami—like, you know, you have a spirit of the tree, you have a spirit of the rock, you have a spirit of the shrine. And that panoply of relations is actually one of the things that I think is particularly human.
ARIA:
I’ll add super quickly: I do think that we’re kidding ourselves if we don’t think there are going to be some situations where, you know, it gets worse for a person, and they’re talking to AI instead of humans or whatever. And I think we should just be especially watchful when it comes to young people, when it comes to kids. Just as we probably should have been a little more watchful about social media—yes, we should be watchful about this. But I actually think the most important thing is of course fostering human interaction. As you said, though, it’s: what is the starting point? If the person is going from zero interaction to interaction with an AI that eventually can lead them to more human interaction, that’s a positive. We don’t want to reduce human interaction, but I do think that AI can be used sort of as this middle ground, you know, to get someone to a more positive place where they are ready for that social interaction, whatever it may be. So again, I think we just really need to think about the context, because that’ll be critically important.
EDIE:
Along this vein of approaching AI ethically, we got this one question on LinkedIn from the AI in Work and Skills Forum. They asked: How should companies prepare their employees for how AI will impact their jobs or the future of work?
REID:
Look, this is a really important question, because if you think of AI as a steam engine of the mind, it’s going to have the same kind of transformative effect on industries and on the way that jobs happen—at a much faster pace than even the steam engine did. The steam engine obviously created manufacturing and factories, created transport with trains and boats and steamboats and so forth. There’s this massive application of physical superpowers that got added in. And so, you know, now being a big, strong worker at the dock matters less because we have forklifts. And now we’re going to have that steam engine of the mind. And so you have to get into the tools. Because even though I think the outcome will still be really good for people—I think they’ll be able to do more interesting work.
REID:
I think they’ll be able to do a broader range of things. They’ll be more productive. Just like all the things with the steam engine and the physical—now mental. The transformation will still be challenging. So most tactically, I think one of the things is to start experimenting with it, start using it, and on real things. Don’t just kind of show up and say, “okay, I don’t know how to write a sonnet. Write me a sonnet.” Which is fine to do—totally happy, more sonnets, great. But say you asked, “well, you know, what kinds of things could I use it for?” Well, one of the things that one of my friend’s kids did that I thought was really interesting is: she’s really into organic chemistry. So she loads a scientific paper in and then says, “explain it to me. I’m 15.”
REID:
[laugh]. Right? Well, that is now a whole lens that’s available to everybody. And that just gives a sense of some of the things it might do to be helpful to whoever you are as a worker. Processing information, doing stuff—obviously it can help with marketing and content generation. It can help with communications and collaboration. You know, one of the things that Microsoft Teams does now, and has deployed to some customers, is take notes. Record which action item belongs to whom. Bring in search results from the local information base to say, “hey, if you’re talking about this, maybe you should also talk to Aria about this.” Et cetera, et cetera. But it starts with just starting to use the tool some. Aria?
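The “explain it to me, I’m 15” trick Reid describes is, under the hood, just a reusable prompt pattern. As a minimal sketch in Python—the function name and the message-list shape here are illustrative assumptions (the role/content dict format mirrors common chat-completion APIs, not any particular vendor’s SDK):

```python
def explain_like_im(text: str, age: int = 15) -> list[dict]:
    """Wrap arbitrary text in an 'explain it to me, I'm N' prompt.

    Returns a chat-style message list: a system instruction setting the
    reading level, followed by the user's text. The dict shape is an
    assumption modeled on common chat-completion APIs.
    """
    return [
        {
            "role": "system",
            "content": f"Explain the following as if the reader is {age} years old.",
        },
        {"role": "user", "content": text},
    ]


# Example: pitch an organic chemistry abstract at a 15-year-old.
messages = explain_like_im("Abstract of an organic chemistry paper...", age=15)
```

The same wrapper covers the other worker uses Reid lists—summarizing meeting notes, drafting content—by swapping out the system instruction.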
ARIA:
I think, you know, in technological shifts we talk a lot about retraining. And that is the frame we think about it in: you do a job, and then you retrain, and then you go to your next job. And we just need to say, “Nope. That’s not how it goes.” We need to rethink the entire paradigm, from government work programs to what each of us is doing. Because as you say often, Reid, we really are going to have to be lifelong learners in this new world. And how can we support adults who need to be lifelong learners too—who might not, whatever, have the capabilities or the skills or the practices—who need a little help. And so I’m so excited that AI, as you said, can be the help that all of us can have as we’re embarking on this brave new world. Because I think the transition is going to be tough. You know, it’s going to be continuous, but especially the next few years, we want to make sure we get through it, and everyone’s intact on the other side.
EDIE:
We should move on to talking about the 2024 election. You know, it’s really a big year for AI, 2024. But also a big year for US politics with the upcoming presidential election. So, going into the 2024 presidential election, what role do you think AI will play? Should play? Hopefully won’t play? Aria, maybe we could start with you and then we’ll go on to Reid.
ARIA:
I often say that I’m the most optimistic person I’ve ever met, and then I met Reid. And in thinking about the 2024 election, I have to say that AI and the election does make me a little nervous. I unfortunately think that even if AI didn’t exist—no matter what political side you are on—people are so entrenched in their own worldview. You know, you get a fact, and you either reject it or accept it based on your previous worldview. That’s why we often say that facts don’t matter: you reject a fact if it doesn’t square with your worldview, and you accept it if it does. So I’d say that we’re in sort of tense times regardless. I am nervous that AI will only add to that sort of us-in-our-corners, them-in-their-corners dynamic—“Oh, I saw this deepfake. Well, it doesn’t matter if it’s a deepfake or not, I believe it anyway, because that’s something he would do or she would do.” So unfortunately, I’m worried about that. I can only hope that the AI tools that we have can also help us push back against that. And that, you know, we all just think about our own humanity a little bit more. It seems that if we brought even a little bit of humanity into the political realm, we’d be in a little better place. So a little less hopeful answer from me than usual.
REID:
Well, I know from your setup there, Aria, that you’re hoping I’m going to be more optimistic than you on this. And actually, in fact, I don’t think I’m going to be—so you keep your optimism crown. So in quick succession, I would say that the role AI is certainly going to play is: malicious actors like the Russians, North Koreans, Iranians, and some others will use AI as an amplification of things they were already doing in the 2016 and 2020 elections, which is trying to sow discord and chaos and disbelief in institutions. Disbelief in media. Disbelief in facts. You know, you get back to the “alternative facts” idiocy that we were having during some of the Trump presidency. AI will be used for that, just as human beings were used for that, and it will amplify that in various ways. And that’s certainly going to happen, right?
REID:
And, you know, deepfakes are just one part of that. Now, ideally, what we would see is AI being used as a helpful kind of truth barometer—something that helps rebuild trust in institutions. Kind of like, look, the AI is like a search engine that has been built with some deliberation to try to present to you information that can help you analyze through this. It won’t just be confirmation bias; it won’t just be the, “well, I have my AI agent that tells me that January 6th was, you know, some peaceful protestors who were maligned by the evil Democrats and institutions,” ignoring the fact that they broke into the Capitol building. Ignoring the fact that there were all these kinds of communications about how they were hoping to try to kill Mike Pence, did kill some police officers, you know, et cetera.
REID:
And you’re like, okay, but I want it to tell me what I want as a confirmation belief. Or that, you know, January 6th was secretly orchestrated to be violent by the people in the Trump administration, versus just encouraged as an insurrection. And you say, “look, what we want is for these things to be good truth tellers and truth assessors, while revealing how they’re trained.” Like, think about The Economist as a magazine: look, this is how we look at the world. This is how we think. And we’re going to try to be as good truth tellers as we can. Right? And I think that’s part of where you want to see media being—you want that combination of revealing that you’re bringing a perspective, a certain set of questions, but you are also trying to be as good a truth teller as you can.
REID:
And I think that’s what we’d want AI to be. And then, if we’re speaking about hope, we would hope that the platforms would be integrating that in—whether it’s a browser, a social media platform, or anything else—to help us ascertain that. And part of when you think about what role AI could play here that’s very positive is to say, “look, it should say: here’s the case for, and here’s the case against, and here’s why I come out broadly for the case for, or broadly for the case against.” But then it’s kind of like, okay, I can sort through that. And that’s the kind of truth-teller perspective that everyone should be adopting in civil discourse, in dialogue. More or less, I’m supportive of the organizations that do that, whether they are, you know, kind of more blue or more red. And I’m not supportive of the organizations, or the media sources, that don’t.
ARIA:
As we talk about the upcoming election—and, no thanks to you, I am not more optimistic now—my question is about the US government. Is the US government equipped to effectively regulate AI at this time? You know, we were generally positive on President Biden’s Executive Order on AI. There were a lot of great things in it. We didn’t agree with everything, but is there some area that regulators should be paying particular attention to now? Or something that you would like them to do in sort of this follow-up time to the Executive Order?
REID:
So I think the thing that the White House did in approaching this was really smart. They first called in a bunch of key players, and asked for and pushed them to be aggressive on voluntary commitments, which included things like watermarking AI content and doing red teaming. Then they used that, plus a bunch of different dialogues, to formulate an Executive Order with a firm legal basis in the Defense Production Act, where they can say, “hey, now this has a kind of rule-of-law impact across a set of companies.” And then to make that not just the voluntary commitments of the companies that came in, but to strengthen and broaden it. I think that was a smart thing to do, and I think it has had very positive impact. To some degree, I think it’s a template for thinking about when new technology is being innovated—
REID:
We don’t really know where the ability to regulate will be, what the positives that we don’t want to miss are, and what the negatives we really need to navigate are. It’s all, you know, in that poetic phrase, “looking through a glass, darkly”—kind of like running through fog over uneven ground at night. And I thought that was very smart and competent as a set of things for a whole set of players at the White House to be doing. Now look, the thing people always think about with regulations is: “Stop them from X.” The thing that I have been really advocating for is to say, “Well, look, we have this massive transformation coming. Let’s try to make sure that we get it as broad-based across all of humanity as possible.”
REID:
So, for example, today we could release, in a number of weeks—like weeks, not months—a medical assistant that could help anyone who doesn’t have access to a GP, or even help people who do have access, in dialogue with a GP. Having that in everyone’s hands is one of the ways that government could most benefit our communities and our society. So as opposed to the “stop, don’t,” it’s the “let’s get this provisioned so that we are helping children, families, everybody else.” Because when you get to the eight-plus billion people in the world, much less than a billion of them have accessed GPs, much less have real access to GPs. Let’s try to help everybody with that. And that even includes societies like the US. You know, when you have things like the NHS in the UK, part of the problem is they have rationing and all the rest. And you could make that much broader in terms of being helpful. Even though, in theory, everyone in the UK has access to a GP, the pragmatics are something different, especially in the depth that you would want. So this kind of thing is what I would be hoping governments, and regulators, would be paying much more attention to.
EDIE:
On this vein of governance, we have one more question from Pi, and I’ll actually let Pi ask the question itself.
Pi:
What role should AI play in national security and defense? While AI can provide useful tools for intelligence gathering and analysis, it also raises concerns about autonomous weapons and surveillance technologies.
REID:
One of the things I think it’s too easy to fall into is to think the primary role for national security and defense is the promulgation of violence. Whereas I think what most people don’t understand about, you know, well-ordered Western democracies or Western-style democracies—because this includes India and other folks—is that for the militaries that are actually accountable to democracies, their primary mission is actually peace. What they’re trying to do is: how do we have less war? If we have strength, it’s to project strength so that bad autocracies don’t create war, as one instance. So I do think that there is a role for technology generally in being supportive of the national security and defense establishments in well-ordered democracies. And I think AI can certainly do that.
REID:
And by the way, obviously intelligence gathering and analysis is one part of that. But I think part of what’s been really happening with the Ukrainians in the defense of their society—against the artillery bombardment and the quasi ethnic cleansing that the Russians have been doing—is drones. What we’re learning about drones and drone warfare has been part of what’s been enabling their defense. And there’s a whole bunch of AI in that. And it’s weapons—weapons to blow up tanks or blow up howitzers or, you know, other kinds of things. And I think that is a really important part. So it’s not so much “should it be involved in weapons or not,” because obviously intelligence is good. It should be involved in weapons in ways that try to maximize the ability of democracies to live peacefully with everybody around them. Usually the only way a democracy ends up attacking another democracy is if one of them first collapses as a democracy. Having democracies be vigorous, and have competent national security and defense apparatus, is actually part of the thing that we should all want. And, you know, it’s part of what I give some of my public service to as well.
ARIA:
All right, Reid, I actually have another question from me. When we think about AI, so many people think about software, and they think about interacting with it through, you know, their smartphone or their computer. But there’s so much hardware powering all of our use of Pi and ChatGPT and these AI tools. So can you talk to me about the state of GPUs, chips? Like, basically, what is the state of the hardware that AI companies need to run? And what does 2024 look like along this vector?
REID:
One of the things I’ve said about AI is that it’s about scale compute. Part of what’s happening is that it’s the algorithms that enable scale. It’s part of the reason why convolutional neural networks—which kind of started from the notion of how perception works—ended up then being applied to language and getting that scale, which was part of the genius inspiration from OpenAI. What that means with scale compute is that over the next couple of years, we’re going to continue to have, you know, essentially infinite demand and limited supply for chips and compute. Both in training these models, but also in serving the inference of them. We’re going to be figuring out how to do it more and more cheaply in order to make it work. How do you build models that could work specifically for things and run cheaply in doing them?
REID:
And so the short answer on the hardware side is: just as we’ve seen this year, next year and the year after we’re going to still see enormous jockeying for, you know, where the compute gets deployed. Because there are so many different things. Like, I’m the personal recipient of, “hey, wait, we could do this amazing scientific thing if we could just get this $10 million of compute.” And you’d say, “well, $10 million of compute, that sounds pretty easy. There’s billions of dollars of compute; we should be able to do that.” But actually, in fact, against the billions of dollars of compute, there are hundreds of billions of dollars of demand in various ways. And so that’s what we’re going to be sorting through. And part of where Microsoft, in its partnership with OpenAI, saw the world coming early is they said, “oh my gosh, we’re going to need tons of compute. So we’re going to start building it. We’re going to start setting up the data centers. We’re going to start getting the power contracts and the power that goes with it.” Because it’s not just chips; it goes all the way down to the data centers and power. We’re going to see the continued intensity of that as we go forward.
EDIE:
We have a closing question from GPT-4, a very thoughtful question, which is: As AI takes on more roles traditionally associated with human purpose, how might this influence our collective and individual sense of meaning and purpose in life?
ARIA:
So it’s funny—when I heard this question, it didn’t even occur to me. I was like, why will it affect things at all? Why does this have to change? Where are we at that AI should affect this so deeply? I think the purpose of life is to sort of make life okay for you and the people around you. At its core, it’s your job to, you know, help out humanity, help out your family, help out your friends, be kind, do charitable acts—all of that stuff. That to me is very central to what we should all be putting out there in the world. It doesn’t change for a minute. And if AI can take away some of the stuff that doesn’t add to that, then that’s magical. So I only see positives here, with AI taking away all the stuff that we don’t want to do anyway, and just leaving us with the human connection and more time and more ability to do the stuff that makes us truly human. Yeah, I can only see a positive future. But again, that’s my dad’s influence on me. So there you go.
REID:
And there is the awesome and canonical Aria optimism, which I, you know, share. As you guys know, I gave the commencement speech at the Bologna Business School last September and talked about the fact that part of how we evolve as human beings is through technology. We get good vision through glasses—and that changes the whole nature of human experience and how we interact. Inventing books, inventing transport, communications, podcasting—all of these things help evolve this sense of what it is to be human. What it is to be human together, collectively. And what kinds of possibilities we can aspire to, what kinds of future we see and can work towards. And AI is just going to be another real amplifier of this, because it’s that evolution of: as opposed to homo sapiens, we’re homo techne—we dance in symbiosis with our technology in various ways.
REID:
Just like, you know, think about the steam engine leading to a manufacturing floor that is a lot safer. As opposed to having human beings doing all the grunt work—like, you know, screw in this, screw in this, screw in this—they can then be part of managing the process in terms of how it’s operating. Well, the same thing with the steam engine of the mind, where human beings can add into it and help guide and direct and, you know, add in the extra special sauce that makes that particular cognitive task really work. And I think that’s the kind of thing we will see. Obviously one part of human meaning is the meaning we get from work, but it’s also, you know, how we connect with each other. How we talk to each other. What kinds of things we can do. And we’ll figure out how to do culture creation—kind of events, philosophy, art. There are lots and lots of things that I think we will be able to keep ourselves busy with. So I’m also very optimistic about what can be made here with meaning and purpose and humanity.
SHAUN:
Hi, it’s Shaun, the showrunner for Possible. It’s been a fun first few seasons making this show, and I’d like to acknowledge some key people who made Possible, well, possible. First, a massive thank you to Reid Hoffman and Aria Finger, our co-hosts. They take their mics wherever they go and remain eager to talk to some of the brightest thinkers, builders, and leaders out there on AI, the future, and humanity. Much gratitude to our incredible guests, who help paint a vision of the best possible future. Thank you to the ace team at Wonder Media Network, including Edie Allard, Sara Schleede, Michele Dale, Aria Goodman, and Paloma Moreno Jiménez. Many thanks to Jenny Kaplan, our executive producer and editor, as well as Shira Atkins, her co-founder at Wonder. Special thanks to Katie Sanders, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, and Surya Yalamanchili. A big thanks to Little Monster Media Company and MacLean Fitzgerald. Last but most, thank you to all of our listeners. We’ll be back later in 2024 with more Possible.