This transcript is generated with the help of AI and is lightly edited for clarity.
REID:
I am Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way.
ARIA:
Typically we ask our guests for their outlook on the best possible future, but now every other week, I get to ask Reid for his take.
REID:
This is Possible.
ARIA:
Anyone who’s been paying attention to the news has seen the explosive story from Jeffrey Goldberg of The Atlantic about the Signal group chat that discussed the bombings in Yemen. And so there are so many questions related to national security, data privacy, data security, having records that will disappear after a week. And I think there are two separate things here. One is more the government side, and the other is the future of digital technologies. Getting back to the issue at hand for the government: Do you think the government should be using Signal chats to communicate? Because it is a method that can easily be misused—you invite someone accidentally, the records aren’t kept. Where do you see that line?
REID:
I actually think, generally speaking, using Signal is good. Part of the choice of it is that it’s one of the safest, most secure, and most independent encrypted apps. And unlike a lot of technology—which the government tends to try to RFP—it’s live and constantly upgrading, and so safe and secure from, for example, Russian hacking. Now, Signal is designed for individuals, not for groups, so it doesn’t have all the features that, say, Microsoft Teams would have, like making sure that anyone you’re including is in your organization, et cetera. Because, you know, Signal is about the freedom of the individual, in a way. So it doesn’t stop you from what I presume is user error: including the editor-in-chief of a prominent magazine on the thread where you’re detailing your war plans. And, perhaps most comedically, saying, “And our OPSEC is good.” OPSEC, for people who don’t know, is operational security, which means protecting yourself from intrusion and spying—when actually, you’ve invited the journalist in. And so that’s just more, you know, department-of-government-incompetence kind of activity. But I actually think, generally speaking, using Signal is good. It just takes people not being incompetent.
ARIA:
Right. So let’s talk about another technology story that’s been in the news. There’s been a lot of buzz recently because a London National Health Service hospital, they claim to be the first in the world to use AI to perform instant skin cancer checks. And so they fed photos of moles into their app, and the hospital had a 75% reduction in clinician time, and they still had a near-perfect diagnostic record. I mean, this is sort of the future of medicine that we’re so excited about. What do you think are going to be the biggest barriers to leveraging AI as a diagnostic tool? We have this amazing story. How do we scale this?
REID:
So, I mean, just to elaborate—imagine that all this stuff is coming to a smartphone near you, that the camera will be used for this. And there are already apps for doing parts of that, and I think that’s part of where we will see an amazing revolution in human health and the elevation of humanity. And so there’s a whole range of things—for example, advice from Pi or ChatGPT or Copilot or Gemini to you, all the way to doctor amplification. You know, today, if you could have an AI or a radiologist reading your film, wouldn’t you rather have both? And that kind of thing. And then this app, which is running on an iPhone, can bring all kinds of diagnostics.
REID:
I mean, just think of it: OK, that mole had particular characteristics, but also imagine rashes or skin tone. You can imagine sensors where we begin to go, “Here’s your breathe-on-it sensor.” Because with most medical conditions—and cancer is an obvious example—the earlier you get to it, the earlier you understand it and can do something about it, the much better your health outcomes are. And so just imagine all of that being brought to you by AI. And then, of course, the moonshots of curing cancer, therapeutics. And obviously Manas, which I’m doing with Siddhartha Mukherjee—our moonshot target, part of that video we did, is that in addition to sending men to the moon, we can tackle big things like cancer. I think it’s a small N of years before we start getting more and more amazing results.
ARIA:
Yeah. I have to admit, I’m really bullish on the technology. The thing I do worry about is—I don’t want to say government getting in the way. But I was at a GP appointment last week, and I got a notice, and it said, “Hey, your doctor wants to record the visit. They’re going to feed the data into Microsoft, using Microsoft Azure and Copilot, and we’re going to improve health outcomes.” And I was like, “Oh my God, this is amazing. I am so excited.” And then the next line was something like, “Oh, by the way, if you sign this, HIPAA no longer applies to you, and you don’t own your data, and it could go anywhere.” And I was like, “Ooh, okay.” It seems like we need a new regulatory framework, something between “I want my data protected” and “You can use the data.” And perhaps our political system is going to have to catch up. Do you think that’s going to be a barrier in the U.S. and U.K.?
REID:
A thousand percent. Because most people imagine the unknown fears of what happens with the data more than what they get as collective benefit. Whereas the collective benefit is massive, and the unknown fears are rare.
ARIA:
Yep.
REID:
So actually, everyone—which I presume you did—should sign that kind of thing. Because, for example—whether it’s cancer or anything else—we are going to get a massive benefit from the collective. Now, what the regulation needs to be is not “You can’t use the data.” The regulation has to be “You have to take appropriate care to protect Aria’s safety and the sanctity of her data”—so it can’t bleed out to other places where she might get discriminated against in her workplace. Think about how much we could accelerate everybody’s health and cures if we could just take all of the current health data, upload it all, and use it right now. We would be so much better off. All of us. And the only thing that really stops it—in addition to just general regulation—is there are some areas where you say, “Okay, we have to think this through and make sure that we don’t do damage to individuals as we do it.”
REID:
But I think that’s actually relatively straightforward. I mean, for example, I would have no worries—and not just because I’m on the Microsoft Board—about signing that. Because I know that all of these major players treat data very, very seriously indeed. I got asked this question at a couple of events when I was doing the Superagency book tour in London: “Well, should I trust the data that I’m putting into ChatGPT or Gemini, or Claude, or…?” And the answer is: Most of the time when you have a paid account, you have a contract, and it’s managed and safe and all the rest. But it’s also that these companies know their ongoing value comes from maintaining brand safety with you. They have a massive economic incentive for that. It doesn’t mean they can’t f*** up. Anything can f*** up. Your doctor can f*** up, you know, da da da—but they have a massive incentive not to. And so I think, generally speaking, we want to go very fast down this path. But we do want to watch for specific considerations, and that’s, as you know, the entire Superagency thing.
ARIA:
Absolutely. Alright, so switching gears a little bit: no matter what the new technology is or what the new thing VCs are investing in, you’re always going to have some people saying it’s a bubble and some people saying it’s amazing. So recently, Alibaba chairman Joe Tsai said, “I’m still astounded by the type of numbers that are being thrown around in the U.S. about investing in AI. People are literally talking about $500 billion, and I don’t think that’s entirely necessary. I think in a way people are investing ahead of the demand that they’re seeing today, but that they’re projecting much bigger demand that ostensibly isn’t here yet.” And then on the other side, you have the NVIDIA CEO, Jensen Huang, who just announced a new chip—the forthcoming Rubin GPU, which offers 3.3 times more power than its predecessor. And he is predicting an inflection point where the world will see a hundredfold increase in computing demand due to advancements in agentic AI systems. I don’t necessarily think those are in conflict—it’s a question of when the demand will come. But where do you stand on this? Are we going to be using this forthcoming compute that’s coming from these chips? Or is the demand just not there yet?
REID:
Well, obviously I would have been voting wrong on the Microsoft Board if I were not voting for the view that there will be massive demand. Now, part of the whole question of navigating this is: You could be a little wrong, but you don’t want to be massively wrong projecting demand. Because otherwise it can be very expensive, you miss a cycle, et cetera. So roughly speaking, you want to be potentially months wrong, not years wrong. Like, if you’re ten months wrong, no problem. Ten years wrong, big problem. But you look at this and you say, “Hey, if you can actually add intelligence”—if you can add intelligence to research and analysis; you can add intelligence to navigation; you can add intelligence to education; you can add intelligence to navigating legal things; you can add intelligence to, like, literally everything.
REID:
Then the presumption—which I think is very smart—is that we have the same kind of market demand for it that we do for electricity. At a certain price, we have infinite demand for it. Right? Just for example, as electricity stays cheap, we do more air-conditioning, we do more heating, we do more of these other things. And that’s part of the reason why it’s very important for the energy to be green—we want to factor that into the energy price. But with green energy, make it as cheap as possible, because there’s infinite demand for it. And I think that same infinite demand is going to apply to intelligence. So, as you said, they can both be right. But I would be betting the way the U.S. tech sector is betting, which is that this is going to be massively important, and getting there early and iterating really, really matters.
REID:
The notion is: We’re early in something that’s moving slowly right now—because AI adoption today is actually slow. Part of what I’m trying to do with Superagency and this podcast is get that to be faster, because I think that’s useful in shaping it. You know, “Humanity Enters The Chat” is chapter one of Superagency—and I think that’s the thing we want to get to. But once it starts going fast, it’ll go very fast.
REID:
So it’s kind of slow, then fast. And part of what will trigger the “When fast, and how?” will be competitive things. So if you’re an engineer, and you look over your shoulder and your fellow engineer is using Copilot to suddenly code a whole lot faster and better, you go, “Oh shit, I should do that too.” And then not just individuals, but groups. And groups within companies. And then not just groups within companies, but companies versus companies, and then industries. And individuals in everyday life too! It’s like, “Hey, I made a great thing for my dinner party because I consulted ChatGPT on my recipe.” And so anyway, that’s the acceleration that we will see very, very soon.
ARIA:
And there has been some demand. I mean, a hundred million people are using ChatGPT. Anthropic has billions of dollars in revenue. So there will be companies that fail—there always have been. That’s what’s actually great about technology and VC and companies: there are a lot of people trying things. And it’s going to be really exciting to see what’s successful. And what we’re seeing right now is really an explosion in image generation. I mean, our internal Slack channel has just been, you know, popcorning with new images, new pictures of San Francisco. There is so much excitement because of the new launch from OpenAI. And Sam Altman just posted, quote, “we think people will love it, and we are excited to see the resulting creativity.” Why are hyperscalers sprinting so hard on image generation in particular?
REID:
Well, they’re sprinting on a lot of things. I think image generation just happens to be like a particular highlight of this month. And it will be a highlight of future months, and it was a highlight of past months. But it’s like this month: Image generation. Next month: Chain-of-thought reasoning. Month after that: Coding—maybe two months for that.
ARIA:
Do you think it’s just fun? Do you think one of the reasons is because people just like making pictures?
REID:
Yeah. Well, you know, a picture’s worth a thousand words. And so you kind of get to, “Hey, I can do Deep Research, but I can also create a picture.” And I think it’s also that technologists respond when they get a bunch of feedback. For example, I think part of the update was, “Boy, it puts words in pictures very poorly”—which you and I have experienced, because we’ve tried to create images that would mean something for people and then put words in them, because we are linguistic creatures, and the words come out: “Blah blah blah.”
ARIA:
Not great thus far.
REID:
Not great thus far. You know, part of what I think we’ll see because of AI is a revolution in communication with images. Not just in the words, but in how I use an image—a picture—to actually help amplify and convey a point in an efficient, emotive, connective kind of way. And I think we’re going to see a lot more of that. Again, it’s all of these areas amplifying our expression and our speech.
ARIA:
I love it. Reid, thank you so much. Always a joy to chat.
REID:
Always a joy.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.