This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
Typically we ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
Humane’s AI Pin recently shut down. And people love to dunk on companies. It raised $241 million, you know, from all the big investors: Microsoft, OpenAI CEO Sam Altman, et cetera. And maybe you disagree, but they were trying to do something—they were trying to do something big. It didn’t work out. They shut down. And now it’s been acquired by HP for $116 million. And so, you know, a lot of people would say, “Well, see, they tried something in the hardware space, they tried something in the wearable space, and it didn’t work out.” I know you don’t like predicting the future per se, but if you had to give a guess—and maybe do the one-year time horizon or 10-year time horizon—what will the commercialization breakthrough be for wearables in AI? People love to sort of theorize: “Oh, it’s dead,” or, “Oh, it’s happening tomorrow.”
REID:
Yeah, because classically it’s, you know, how can you soapbox, posture, and—
ARIA:
Exactly.
REID:
You know, and that—whether it’s the press, or people in social media, or everything else—is kind of the way of doing it. And you know, mostly you would say it’s clear that wearables will be spectacular and will be there. So, for example, you know, glasses that can help parse your world. If you are kind of staring at the washing machine in your Airbnb going, “How does this work?” Right? And it goes, “Oh, you look like you’re staring at this thing. Are you trying to figure out how to make it work? Here’s how: for a washing load, you do this,” and you know, dah dah dah. And that obviously could be very helpful. You’re walking down the street and you’re trying to figure out things.
REID:
One of the things we heard from Boz was the amazing thing about the Ray-Bans, and other devices, being used by blind people, because it helps them solve some problems that were otherwise hard to solve—like, “Where’s the door?” And you know, all of those things. That’s just clearly the future. And then I tend to think, for a lot of these wearables, they’ll start actually in more of a, kind of a professional circumstance. I think we’re going to want nurses and doctors and firefighters and policemen and community workers and everyone else to be wearing them. And I think it’ll make the whole thing better. Now, the last part of that is the timing question, which is frequently the venture investing question. And it’s a reason why, as a venture investor, I tend to orient deeply to things that are within software. So like, co-founding Inflection, or co-founding Manas. You know, one is a chatbot, one is drug discovery. But they’re both intensely within the “How do we use bits to make atoms better?” And, you know, going into atoms directly is always a much more fraught investment, and much more fraught on timing.
ARIA:
I mean, all of us children of the eighties and nineties can remember the one parent’s friend who had the car phone, and you were like, “You must be rich!” Little did we know that, 20 years on, it would be, “Like, who doesn’t have a smartphone? What are we doing here?”
REID:
Why doesn’t the 12-year-old have a smartphone?
ARIA:
Right. Exactly. What’s going on? I do think it’s interesting, though, to think about the ways we can use this now that AI can see, whether it sees from your smartphone or from your glasses. There might be so many things we can do where it sees from your smartphone first, because that’s sort of the easier way to make it happen. And there’s all sorts of computer vision. And it can see your computer screen. And sort of all those other things that can get us there. That being said, one thing that I think is interesting is, we talked a lot about how everyone, when they think about wearing glasses, thinks about the, “Oh my God, if it could just tell me that the approaching person is someone I met a year ago and, you know, their name is John.” But the thing we don’t think about is, actually, the fact that people with poor vision have a much harder time remembering people, because they can’t see their facial features as well. So there is actually such a democratizing thing that we can do with glasses that isn’t just, “Oh yeah, it would be nice to remember people’s names better.” It actually does sort of help equalize for things that people need.
ARIA:
One of the, sort of, advances that I have to admit truly boggles my mind—I try to wrap my head around it—is Microsoft’s recent announcement of the Majorana 1 chip. This is quantum computing. This is just a whole new realm of innovation that I think a lot of people wouldn’t have guessed we were going to get to in 2025. And I feel like quantum is all the rage these days. People are talking about it all the time. Can you break it down for us? Why is quantum so important, and is this going to be tangible for us in the near future—for us to see the benefit?
REID:
Here’s why quantum is important: there’s a whole bunch of problems that are really great to solve—that, by the way, AI does help us a bunch with, but that are still better with quantum computing, and probably with quantum computing and AI. And, you know, most of the dialogue tends to be, people tend to think, “Oh well, actually, you need quantum security, and what happens with Bitcoin with quantum?” There’s this notion of logical qubits—i.e., logical quantum bits—and probably, to get into the security realm, you need, call it, 2,000 to 5,000 logical qubits to really change that. Interesting. And there’s ways to do quantum security. And when you think, “Well, the current quantum computers are like 70 or 80 qubits,” you’re like, well, that’s a ways away. There’s definitely some smart people who think it will be tangible in the near future. I still tend to think that we may be a few more years out than the loudest proponents. But at a hundred-plus qubits—maybe call it 150, 200—you begin to be able to solve problems in quantum that are hard for traditional computing (AI makes it much better, but not perfect), such as small-molecule problems. So that could be drugs. That could be materials, semiconductors, you know, other kinds of things.
ARIA:
I mean, there’s many reasons why AI is so exciting. But it’s also accessible in the way that yes, I’m not going to go out and discover new drugs with AI tomorrow, but I can go and use Pi, or ChatGPT, or Claude, and I, just as a consumer, can go, “Whoa, the benefits of AI are truly mind blowing.” Are there going to be ways that consumers can understand the benefits of quantum? Or is it mostly going to be in the scientific realm for accelerating those things?
REID:
Well, I don’t know if there’s, like, a consumer ChatGPT of quantum. I mean, that’s kind of a, you know, maybe that’s a Schrödinger’s cat problem, and when we evaluate it, the cat’s either alive and kicking or, you know, not dead yet. But take the derivative benefits: for example, you say, hey, we can suddenly accelerate—just like Siddhartha Mukherjee and I are doing with Manas, using AI to accelerate drug discovery for curing cancer. To make that much closer to reality, AI is going to be a great accelerant. Quantum could be another great accelerant. You put the two together, you might even get to something that’s massively more accelerated, and that could be very good. And consumers will get the benefit of what the drugs are, even if they’re not carrying around, you know, qubit processors on their smartphones.
ARIA:
They don’t know that quantum was helping them be cured of cancer. No, that makes sense. Sort of thinking about AI in the global context: this question, I feel, is one that people really come back to in order to criticize AI. I don’t think you agree, but I would love to hear more. So the U.K. government recently launched a consultation process on proposals to give creative industries and AI developers clarity over copyright laws. I certainly agree that clarity is key for any business environment; we want to understand, sort of, what the rules of the road are. And the proposals included an exception to copyright law for AI training for commercial purposes. Nobel-Prize-winning author Kazuo Ishiguro said we’re at a fork-in-the-road moment regarding creative works, asking why, at the dawn of the AI age, it is just and fair to alter our time-honored copyright laws to advantage mammoth corporations at the expense of individual writers, musicians, filmmakers, and artists. So when people ask you, time and time again: Is allowing LLMs to train on authors’ work theft? How should governments strike the balance between protecting creative content and giving tech firms the freedom to innovate that they need?
REID:
Look, the clarity thing I totally agree with, because a lack of clarity creates uncertainty in all areas. Now the problem, of course, is that one can make very compelling arguments about how this is fair use under copyright. Because, for example, I can take Ishiguro’s work, I can hand Klara and the Sun to someone. I can have that person—I can teach them. They can learn to write. They can learn ideas from it. They could be inspired to do other things. They could generate other creative work after having read it. You know, et cetera, et cetera. And so there’s the whole question around, “Well, what does it mean when it’s a machine reading?” And, of course, the critics try to say, “Well, but because it can reproduce it!” Right? And it’s like, well, but what if it doesn’t reproduce it at all?
REID:
Or in the case of, like, the New York Times lawsuit, it’s like, well, the only way you could reproduce the article is when you put in the first half of the article and say, “Now please complete this,” and it goes, “Okay, well, I presume you’re referring to this article, so I’ll do that.” And you could obviously train it not to do that, but the presumption is you’re not actually doing the harm, because if you had the first half of the article, you probably had the whole article. So whatever way you got the article, by purchasing it or anything else, is the way you got it, because articles are not handed out in halves. And so it’s like, okay, the fact that something is learning and training on this isn’t necessarily theft from any particular person. Just like when I’m reading Klara and the Sun, I’m not stealing from Ishiguro. I bought my copy: I bought my copy on Kindle, I bought my physical copy, et cetera, et cetera.
ARIA:
Yeah. Think of a thought experiment where, yes, let’s assume it absolutely is legal and we just think it’s good for innovation. We’re happy that it is legal and that it’s good for innovation for LLMs to be able to use this data and these copyrighted works. Are there any downsides? Any, “Yeah, no, I think this is right, but in four or five years we’ll have to worry about XYZ”? Or do you think the critics are overblown?
REID:
Look, I think the underlying, unspoken thing—the reason why it goes to something more sloganistic about theft—is somebody going, “Oh, does the value of my creative work suddenly go way down, because now a whole bunch of things can be created by this new machine, which is partially enabled by the work that I’ve done before?” And I think that the notion of, “Will my creative work be valued?” is a muddy issue that we tend to navigate poorly. A current one in the music industry tends to be that the vast majority of musicians now benefit from live concerts, merchandise, et cetera, because streaming and other changes have made the economics very different. And they tend to say, “Well, that’s a huge tragedy, because the previous economics of selling CDs was really good, so that’s unfortunate.” It’s like, well, but it’s not clear that selling CDs is the thing that should last through eternity, right?
REID:
Change is a thing. But we want to have these laws respecting creators, when that kind of stream of creative output is something we value as individuals in a society, so that we have the right kind of incentive loop. So I do think figuring out how that works matters. Now, of course, part of what I think about AI tools is that right now everyone’s like, “Oh my God, end of the world.” That’s part of the reason I wrote Superagency with Greg. But I think what’s going to really start happening is you’re going to start going, “Oh, this will really enable me to do so much better, faster creative work.” You know, you’ve been part of this journey with me too—I’ve been trying to say, “Okay, can I get the various AI tools to help me write some of the science fiction that I’ve been thinking about?” And it’s not very good. So it’s like, well, it might get to be a competitive threat to the very good science fiction writers, but not right now. By the way, one of the things that’s interesting is, even as it gets there, the good science fiction writers using it suddenly might be able to write so much better, or so much faster. One of the things that I always find frustrating is, I find a series I really like, and you go, “Okay, I got to the last book, how many years until the next book?” [Laugh] It’s like, I’m in it right now! I’m in that universe right now. And so you’d be like, “Well, actually, in fact, if I could be doing this, I could be producing a book for this series every month,” right? And as I’m going down the journey with it, I think it could be enormously beneficial to some creators. But I understand the first reaction of, “Oh God”—like, for example, you take someone as amazing as Ishiguro—which is, “I have done this really hard thing of creating these masterpieces. I’m one of the world’s most celebrated authors doing this, and now you’re changing the game?” Like, I get that as an “Ah.”
ARIA:
And I would be remiss, because if Greg was here—your co-author on Superagency—he would also point out that on the music front, CDs were technology, and before that you couldn’t even make a living as a musician. Before that we had radio for a little bit, but before that you were just strumming alone in your basement. And so technology also has actually enabled a lot of this amazing creativity. We don’t even have to go back to the printing press to get to the fact that CDs opened up this whole new world—and then ringtones for, like, a minute made a lot of money. And then we came on to: how can even more musical artists make a living? And so I think, to your point, this is the right way to go, and how can we navigate so that people can honestly have agency to make their careers better, make more beautiful art, and sort of do all the things that they want to do, just in a new technological context?
REID:
And the transitions will be painful. You can’t stop the future. But what you can do is you can try to navigate to what the better futures are.
ARIA:
Alright, so, to end our episode, we have a special guest with us on Possible today, Parth Patil. He is one of the creators of Reid AI, and was the first data scientist at Clubhouse. And as you all know, we talk a ton about AI with Reid and many of our guests. And Parth is one of those resident AI experts. And so Parth, I’m so excited for you to come on Possible, and help us break down some of the most recent AI happenings. Hi Parth.
PARTH:
Thanks Aria. Glad to be here.
REID:
So I want to share with you a recent experience I had, then ask you for your, kind of, diagnosis of it, and then where this goes in the future. So I was recently hanging out with Atul Gawande, and I was like, “Have you tried Deep Research?” And he’s like, “No, I don’t know what you’re talking about.” I’m like, “Okay, like, what’s a book you’re working on?” He was like, “Okay, I’m working on the following thing that has a chapter on an anesthesiologist.” And so we pulled up ChatGPT o1 Pro Deep Research, and we pulled up Gemini, and we asked them questions to get answers. And let me run you through the discovery we made that was really interesting, and then this will be the diagnosis of, “What is the current state of Deep Research, and where are we going, and how does this play?”
REID:
Which is: ChatGPT generated something, and he’s like, “Oh my God, this is amazing. This just saved me thousands of hours with my research assistant!” And so he cut and pasted it and fired it off to his research assistant, and then we did Gemini. It was like, “Well, you know, this is much less inspiring, but okay, fine, we’ll fire that off too.” Now what the research assistant came back with was, “Well, on the ChatGPT answers, 90% of them were inaccurate.” Right? Like, the quote where the surgeon said, “Anesthesiology helped me in my policy,” that quote doesn’t exist. That source isn’t there the right way, it’s kind of misquoted, et cetera. So that was a problem. But the thing that was interesting was it pointed me to interesting documents. It was almost like a guide to where to look to find the kinds of things that we want. Doing the research cross-checking actually, in fact, did save many hours, because I went to a bunch of different sources which actually had some of the stuff that could be interesting.
REID:
And so you had this thing where Gemini didn’t have any factual inaccuracies, but it was less exciting and interesting, and so could have been used a little bit more; it was just kind of flat. And the ChatGPT one was, like, if you just quoted it, you would’ve been like, “Oops, I wrote something as fact that was wrong.” But it was a doorway into the right things. What does that make you think about the current state of Deep Research tools, how people should be thinking about using them, et cetera?
PARTH:
Yeah, so I’ve been working on tools similar to Deep Research for over a year now. And my early experience was like, wow, it can do a lot of work. But then you realize: if the work isn’t high quality, it creates work, because now you have to go and verify all the things that it’s coming to you with. And I think it means there’s a downside to asking certain types of questions. Like, you should not expect facts in the response—you know, the LLMs can be confidently presenting information, but we should always take that with a grain of salt right now, while it’s hard to verify. On the other hand, the way I like to use Deep Research is more for subjective initial exploration into a space, usually for something that I wouldn’t have the time or energy to do anyways, right?
PARTH:
So, if I’m brainstorming a concept for a new app, and I’m like, oh, where do the people who are interested in this fandom exist? What are they talking about? Then Deep Research can go find where on the internet I might find the answers to those questions. So I think you’re right there—I think it’ll get better. But the real magical feeling is that it can do, in 10 minutes, what would otherwise take me a couple of days to do. That kind of accelerated information synthesis is actually really valuable as it gets more and more high quality—as the reasoning kicks in, and the quality of the response starts getting higher.
REID:
As you know, I describe you to other people as the person who has not only taken the red pill, but is bathing in the red pill. What are some of the current, like, “oh gosh” things, the stuff where the future is already here, it’s just unevenly distributed? What use of AI has caught your attention in the last month?
PARTH:
I think for me it’s an idea that I’ve been hacking on for almost two years now, and now a lot of other people are starting to experience this red pill moment of—I think [Andrej] Karpathy calls it “vibe coding”—where you kind of just lean into the exponentials and the general awareness of these models and use them as programming assistants. And you’re like, “Oh, why don’t I just make this, build this feature, think of a game idea.” And you just let the model generate a lot of the code, and you shift more to, like, speaking. And a lot of people use [the AI voice-to-text tool] Superwhisper, so they’ll literally press a button and describe the app that they want, and then they let the model make the first version of it. Claude 3.7 Sonnet came out; it’s one of the best coding models out there right now. And more and more people are realizing this. And you can tell, because you have non-technical people that are like, “Oh my God, look at this game that I made with AI!” And then you have experienced technical people that are like, “Oh, but it’s not robust and scalable.” And in my mind, the fact that we can even just create software by describing it is the magic. And yes, the models aren’t perfect, but this is exactly the direction we should be going in. And the reason I say that is because my mom is a programmer, and 15 years ago she developed carpal tunnel, and I thought it was crazy that every button she presses is actually deteriorating her hand. And it hurts, right?
PARTH:
But that’s your career. And so 15 years ago, she got the company to pay for Dragon NaturallySpeaking, which is transcription software. The software cost $500. Like, high-quality transcription was expensive back then. And then she would give it blocks of code, so she would be like, “Write a for loop. Write an if statement.” And those predetermined blocks of code would be inserted into her code while she spoke. And that was the first time I got this idea of: What if we just talked to the computer, and it wrote the code? And of course we didn’t have language models then, so it was really rudimentary. But now we have Whisper technology, and we’ve got language models that can write a lot of code very quickly and at higher quality. I came back, and I showed her, and I connected Whisper to these code-gen models. Take a hundred lines of code at 51 characters per line: that’s around 5,000 button presses, but now you can just talk to the program, and it just exists. So I’m excited for more people to experience vibe coding, even if it’s not the same as normal coding, because I think in certain ways it’s just a hundred times better.
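[Editor’s note: below is a minimal sketch, in Python, of the voice-to-code loop Parth describes: speech transcribed by Whisper, then handed to a code-generation model that drafts the program. The model names, file path, and prompt are illustrative assumptions, not his actual setup.]

from openai import OpenAI  # assumes the OpenAI Python SDK and an API key in the environment

client = OpenAI()

# 1. Speech to text with Whisper: dictate the feature you want.
with open("feature_request.wav", "rb") as audio_file:  # placeholder recording
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )
spoken_request = transcript.text  # e.g. "Write a for loop that sums a list of numbers."

# 2. Text to code with a chat model (model choice is illustrative).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You write concise, runnable Python."},
        {"role": "user", "content": spoken_request},
    ],
)
generated_code = response.choices[0].message.content

# The output is a draft: review it before running, since, as Parth notes, the models aren't perfect.
print(generated_code)

[Where Dragon NaturallySpeaking could only paste predetermined blocks like a for loop or an if statement, the model here writes the body of the code itself, which is why reviewing its output still matters.]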
PARTH:
Alright, Reid, I’ve got a question for you: when we were talking about code generation early on, when we first met, I was like, wow, this thing can write perfect SQL. Doesn’t that mean that conversational data analytics is basically here, where, instead of an analyst writing queries by hand, your analyst should just be talking to an analytical agent, and it should write the queries? And then you made a comment that was like, “Yeah, but, you know, that’s really scripting. It’s not programming.” And that’s because, I think, at the time the models were kind of limited. But I think we’ve come a long way since then, and I’m curious what your thoughts are on coding copilots and what this means.
REID:
Well, I definitely think that—and I know we’re aligned in this—this year, all the major shops are working intensely on increasing the coding capabilities for copilots, for press-a-button, get, you know, an active software-engineer agent. And one of the things that people always think is: oh, what is that going to mean for software engineers? Now, in parallel to data scientists, I actually think that there’s still infinite demand for software engineers. They just may be deploying with a set of agents, in terms of how they’re operating. And so I think the same thing is now true, and I think the coding capabilities are way up. But the thing that your question also gestures at—which I know you think about in depth too—is that actually, in fact, every professional is going to have not just a set of agents working on the thing they’re doing, but also some of these agents, or all of these agents, having coding capabilities. And eventually, as you get beyond scripts and other things, those coding capabilities get very deep, and the most prominent programming languages, you know, won’t be C++ or Pascal, or anything else. They will be English, Chinese, in terms of how we’re generating code; we’re all going to be using it. Now, the reason why there’s still room for a lot of human activity—data science is a parallel—is because of the way you think about it and the things you do. It’s one thing to say, “Hey, I’d like you to run a query to say how active all of our users who’ve been here for over a year are,” where you could just ask the thing yourself and kind of generate it. But if you say, “Well, what are the different ways that we should try to understand churn?” then actually, in fact, you—or a data scientist—working with these tools would say, “No, no, I can actually generate a whole bunch of stuff that’s really interesting to you,” that you as, call it, “the general manager,” might not have actually, in fact, known exactly which kinds of questions and analyses to run through.
REID:
And so I think we’re making great progress, although—you know, just like with writing and other things—I suspect we’re still some ways away from where you should take a large block of code from an AI copilot and just check it in without looking at it.
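[Editor’s note: below is a self-contained Python sketch of the conversational-analytics loop Reid describes: a plain-English question like “How active are all of our users who’ve been here for over a year?” turned into SQL and executed. The table schema, sample data, and query are hypothetical; in a real agent the SQL would come back from a language model, and be reviewed, rather than being hard-coded.]

import sqlite3

# A made-up users table standing in for a product database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        signup_date TEXT,           -- ISO date the user joined
        sessions_last_30d INTEGER   -- a crude activity measure
    )
""")
conn.executemany(
    "INSERT INTO users (signup_date, sessions_last_30d) VALUES (?, ?)",
    [("2022-06-30", 0), ("2023-01-15", 22), ("2023-03-09", 14), ("2024-11-02", 5)],
)

question = "How active are all of our users who've been here for over a year?"

# In a real agent, `question` plus the schema would go to a language model,
# and the SQL below is the kind of answer it might return.
generated_sql = """
    SELECT COUNT(*)               AS tenured_users,
           AVG(sessions_last_30d) AS avg_sessions_last_30d
    FROM users
    WHERE signup_date <= DATE('now', '-1 year')
"""
print(conn.execute(generated_sql).fetchone())

[The harder, still-human part is the one Reid points to: deciding which questions and analyses, like the different ways to understand churn, are worth asking in the first place.]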
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.