This transcript is generated with the help of AI and is lightly edited for clarity.
SEAN WHITE:
If you’re just thinking about IQ and EQ, well, that’s the cognitive world, and us being able to have discussions. AQ is really being able to then take action. It’s as simple as, all right, I’d like to actually take something from my sales list, and I want you to email those three people and then let me know what they say back. It should be able to do that. But that’s what I mean when I say something that is agentic, is its ability to take action. And in some cases even take action that is complex and not just one single thing.
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way: what we can possibly get right if we leverage technology like AI and our collective effort effectively.
ARIA:
We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.
REID:
These conversations showcase another kind of guest. Whether it’s Inflection’s Pi or OpenAI’s GPT or other AI tools, each episode we use AI to enhance and advance our discussion.
ARIA:
In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.
REID:
This is Possible.
REID:
What does it mean to amplify human intelligence with AI? How do you create truly empathetic AI? Today we’re talking about the intersection of LLMs and personal intelligence. Our guest hails from a company you may be familiar with. I’m certainly very familiar with it, as I co-founded it with Mustafa Suleyman and Karén Simonyan.
ARIA:
Our guest, the CEO of Inflection AI, brings decades of experience to some fascinating challenges, decoding animal languages, categorizing the Smithsonian’s plant collection, running R&D at Mozilla, and pushing for advances in neuroscience, creative expression, and virtual reality—to name a few. Now, he leads—actually, you know what? Since Inflection is behind our favorite chatbot Pi, that actually feels like a good reason to have Pi take the floor. Hey, Pi, could you please introduce our wonderful Possible listeners to today’s guest, who also happens to be our friend—and, in a manner of speaking, your boss—Sean White?
PI:
Ah, yes, Sean White. He’s the current CEO of Inflection AI, an AI company focused on developing personalized intelligence for everyone. Sean White is a seasoned leader in the tech industry with decades of experience working at the cutting edge of technology, research, and business. He has previously held high-level positions at Mozilla, Bright Sky Labs, and Nokia, among others. Sean White is known for his expertise in neuroethics as well as his work in machine learning, user experience, and augmented reality.
REID:
All right, I think that does it. Thanks Pi. Here’s our conversation with Sean White.
ARIA:
So Sean, for those who don’t know, in March 2024 Inflection made headlines when it announced a deal in which Microsoft would license Inflection AI’s models, as well as hire many of the startup’s employees, which is why Reid called you and said, “Hey, do you want to come aboard and do you want to join as CEO?” And so that put this company at a unique crossroads. So can you tell us, what does Inflection look like today? What is it focusing on? And let’s clear up the confusion for people who think Inflection equals Microsoft, because you have pivoted it to a new trajectory. So tell us about that.
SEAN WHITE:
Yeah, happy to. So first, this is not Microsoft. We are not part of Microsoft in any way, shape, or form. We are our own company and going off in our own direction. And a large part of this is what I know Reid has referred to in the past as “re-founding,” right? That is, you have an organization that is building towards something, and then you’re going to make a pivot, a shift. And that’s what we’ve done. And so where previously a lot of the work was really focused on the consumer usage, the consumer aspects, and, frankly, how you build up to a world-class LLM, how you build the technology out, in some ways this is a translational stage for the company, which is to say: that’s fantastic, it can do so many things. Now, how do we apply that?
SEAN WHITE:
How do we actually make work-life better? And that’s been a large part of our pivot: to spend time with folks who are doing more commercial-oriented kinds of builds. So, for instance, one of the things that the company didn’t have before is an API, which we’ve now opened up for usage. And that’s been incredibly exciting for me because you get this great long tail of creativity, right? We had about 13,000 organizations that had signed up to use the API, and that’s been everything from, you know, someone who wanted to explore what it means to talk to your spreadsheets, to someone who is helping kids learn how to read books (a super cool company called Ello), to mental health companies that are exploring what it means to have an aid in any of the many different ways in which we talk about wellness and health and that dialogue there. The list goes on.
SEAN WHITE:
The other part of it has been for us to then work with some of these larger enterprises. My background initially is in HCI, which really is about being human-oriented in the things that we do, and thinking about that interaction. Pi, and really the Inflection technology that we’ve been developing, is perfect for that. And so a lot of what we’ve been doing is the equivalent of, you know, these fantastic “Voice of the Customer” meetings where we bring people in, and that ranges from, say, a car manufacturer or an airplane manufacturer to a large bank or an insurer, and really talk to them. Not just say, “Hey, here’s what you guys should do, because we’re the experts, we know.” Instead we say, “Let’s impedance-match on the things that you are trying to achieve.”
REID:
So say a little bit about, one, what Inflection’s Pi model’s special sauce is. Two, which pieces of the B2B enterprise pivot are ones where enterprises and partners would go, “This is where it could be especially important.” And then we’ll branch off from there to, like, what’s happening in the field of AI and how people should understand the world’s changing because of it.
SEAN WHITE:
Part of this, I think, is a conceptual-model thing that has been a shift for me, which is, I started off really thinking of AI as a computational technology. And what I mean by that is, you know, early on some of my PhD work involved computer vision or, you know, looking at video, things like this. But it was really about processing things or finding insights, machine learning. And I held onto that for a very long time. But the thing that has been interesting about LLMs is that they bridge a computational technology with a user interface technology, because it’s about language. And so the thing that has been really powerful and different first about LLMs is that they are new kinds of interfaces. And, you know, we can talk about dialogue and things that we are used to. And some part of that is because we’re comfortable with this technology, right?
SEAN WHITE:
We invented language a hundred thousand years ago, yay us. And, you know, it meant that, for the first time recently, the computers have come to us as opposed to us going to the computers. And I love that, right? Now I, and my kids, and really anyone can talk in the same ways that you and I are talking right now. It’s collaborative in a really interesting way. And that’s where Pi is really unique and interesting and distinct in terms of its technology. A lot of the technologies and LLMs you see, you know, for that set of folks that are in the club as we are, where you’ve got over 300 billion parameters, right? These really large language models can do great book reports, and they represent an incredible amount of knowledge, right?
SEAN WHITE:
What Pi does, and what it was fine-tuned to do, is actually be in dialogue. And be in dialogue in a particular kind of way, which means, one, trying to learn more; two, being kind, for some definition of kind, right? And really, in some ways, make you feel good about the conversation. Now, on the one hand you might say, well, that’s fine, but, you know, do enterprises really need that? Do they care? They just need to get something done. And if you talk to enterprises, you know, one of the challenges is that they have tens or hundreds of different applications that frankly mostly suck. They are really hard to use. They don’t work together. You have to learn different interfaces for all of them. And one of the things that we have been working on building, then, is using speech, using this as an interface to aggregate a bunch of those things.
SEAN WHITE:
That means that the enterprise cares, because in some ways it’s accelerating the things that they can do. The standard cognitive tasks that we would talk about, right? Like, oh, it’ll help me be more creative, or it’ll help me work through a hard conversation, and I can role-play that and practice it. And so maybe the last part, because you were talking about the future. One of the things I get really excited about is this notion that, you know, Pi in particular is a great transducer to knowledge, or to a collection of people, or to a collaboration. And what I mean by that is, okay, so decades ago I was working with this art collective called Spruce. And we were doing this project where we were trying to give voice to the Cuyahoga River. If you remember, this is the river that caught on fire.
SEAN WHITE:
It had lots of interesting aspects to it. And so the art collective was putting microphones in so you could hear the flow of the water, interviewing people in the different communities. And in aggregating that, you get the voice. We actually can be in dialogue with and have the voice of data. And that means you can have a dialogue with this massive collection of data, instead of just seeing it as a dashboard, and with what it represents. And so if this is representing your 400 million users, you could be in dialogue with those 400 million users in ways that you just couldn’t be before.
ARIA:
So Sean, this is a perfect segue to thinking more about bringing Pi into the workplace. So I think, you know, at first when I thought about Pi, I was like, oh my God, of course this is consumer. This is one-to-one, I’m so excited. But as you said in the beginning, we were thinking about LLMs for IQ in the workplace, and then when you actually think one layer deeper, it’s like, what everyone needs in the workplace is soft skills, is more empathy, is how to communicate with people, is how to talk to people. Like, that is going to be actually the coin of the realm going forward. IQ is, of course, important, but EQ might be just as important or maybe surpass it. So let’s see if Pi is feeling up to it today. I would love to make this more concrete by bringing Pi into the conversation. So can you walk us through a demo, for example, of how you would use Pi to navigate a tough situation in the workplace? And if you’re using your phone, just make sure to put your phone up to the mic so that we can hear Pi’s voice, if that works.
SEAN WHITE:
Alright. Hey Pi! This is Sean.
PI:
Hey there Sean, how’s your day going so far? Anything you need help with or just want to chat about?
SEAN WHITE:
Yeah, it’s pretty good. I had a question for you. I’m looking for some help with a conversation I’m having with a board member. He and I have worked together before. He is really great.
PI:
A conversation with a board member sounds important. What kind of help do you need? Is it about preparing for the conversation, structuring your thoughts, or maybe even practicing some possible scenarios? I’m all ears, well, virtually anyway.
SEAN WHITE:
[Laugh]. Well, how about practicing some role-playing with this? Because it might be a tough conversation. And we’re trying to decide on how we do some compensation structuring.
PI:
Alright, let’s dive in. For our role-playing exercise, let’s imagine that I’m the board member and you’ve just brought up the topic of compensation and structuring. To start things off, I might say something like, Sean, I understand the importance of discussing compensation, but I’m hesitant to make any major changes at this time. Can you explain why you believe adjustments are necessary?
SEAN WHITE:
Oh, yeah, no, that’s great. So, yeah, part of this is, I think we have one of our team members who is just really stepping up, and what I’d like to do is increase their compensation and increase their equity in the company, because I think that will align the incentives in a way that will have impact on our success and also just make us work better as a team.
PI:
Hmm. I appreciate your perspective, Sean, but I’m still cautious about making these adjustments. I’m concerned that increasing compensation and equity for one team member might lead to a sense of inequity or even resentment among the other employees. How do you plan to address these potential issues?
SEAN WHITE:
That’s a great question. Really, what I’m going to do is make sure that everybody sees that value, and that we’re all in agreement about this. We try and be pretty transparent about all the things that we’re doing.
PI:
Transparency is definitely important, Sean, but I’m still worried about the impact this decision might have on team morale. How can we be sure that everyone will truly see the value in this change and that it won’t create any unintended divisions within the team?
SEAN WHITE:
That’s a good question. You’re the expert. What would you suggest?
PI:
Well, one approach could be to gather feedback from all the team members before making any final decisions. This could help ensure that everyone feels heard and understands the reasoning behind the proposed changes. Additionally, perhaps we could consider a more holistic review of compensation and equity across the entire team to ensure that any adjustments are fair and balanced for everyone.
SEAN WHITE:
Oh, that’s a great idea. I appreciate your advice.
REID:
It’s useful. I certainly don’t feel threatened in my job anytime soon.
ARIA:
I mean, still, every time I hear something like that, I do have to remind myself, like, Aria, there is not a person on the other side of Sean’s phone. Like, this is an AI — it is so interesting to hear that. And, sort of like you said, sometimes you just need more of the, I just need someone to listen to me. I just need someone to bounce off. I just need someone to try it out with. And even if, you know, they’re not giving you the top-notch, hundred percent responses, just being able to practice with someone who’s going to listen is incredibly valuable.
SEAN WHITE:
We always talk about this as not being a human; it’s a tool. It’s not meant to replace humans in any way. But if someone feels better after talking with Pi, or if someone gets something done after talking with Pi, or they’re able to, you know, bounce off an idea or grow their ideas or role-play, any of these things, that doesn’t mean that the effect isn’t there. It’s similar to a placebo effect; it’s still a very powerful and positive thing.
REID:
Like, for general principles, like, you know, how would I approach a board member to raise the topic the right way? It’ll be very good at that. When it gets to specifics, those are things it won’t generally be as good at. And so that’s kind of the variation. And I think the thing that Inflection adds in is an emphasis on being trained in EQ as centrally as IQ. So the IQ is extremely good. But then there are those dynamics of both how you interact with Pi itself, but also how Pi will recommend or shape how you interact with other people, because that’s, you know, what it’s trying to be extremely good at. And, you know, to some degree these things are trained, so what they’re trained for really matters. But so far I don’t think anything is trained nearly as well at EQ as Inflection and Pi have done.
SEAN WHITE:
That’s right. We actually had a Berkeley student; she did this really interesting thing, right? Because the race is so constantly around IQ. But what she did is she took some of the EQ tests that we use with humans and gave them to a bunch of different LLMs. And I’m proud to say Pi came out way on top. In some ways this isn’t a surprise, in part because I don’t think folks are focusing on it; a lot of the organizations really are focusing on the IQ. And my sense is that for a lot of these things, we already have sufficient IQ. It’s where we need to broaden to EQ. And, if we talk about the future, to some extent to AQ, or how we take actions based on some of that.
ARIA:
And Sean, we were talking about that the other day and you were talking about IQ, EQ and AQ. Can you just go further, like what does AQ mean to you and how will that be important in the future?
SEAN WHITE:
In some ways there’s a simple version, right? Which is, if you’re just thinking about IQ and EQ, well, that’s the cognitive world and us being able to have discussions. AQ is really being able to then take actions. And you hear a lot of discussions now about agentic flows or agentic AI. And for us, it’s really trying to get some things that work, that have meaning for enterprises. It’s as simple as, we have these great demonstrations of this, where it’s not only having that discussion like we just had, but if I say, “All right, I’d like to actually take something from my sales list and I want you to email those three people and then let me know what they say back,” it should be able to do that. But that’s what I mean when I say something is agentic: its ability to take action, and in some cases even take action that is complex and not just one single thing.
REID:
Speculate a little bit about the near-term and the medium-term futures and how our lives will change. How should people be thinking about, you know, what kind of future?
SEAN WHITE:
So there are lots of different versions of this, right? I mean, AI actually is not a single monolithic thing, and agentic isn’t either. But I think some useful ways to think about it are: within what environments do we want it to take action, and how do we want it to take action? But I think the places where it gets more interesting are, one, if you’re not just helping somebody in the workforce enjoy their work more, where they don’t have to do all of these rote things because some of that is just taken care of, but actually letting us focus more on some of the cognitive things while all the little details get taken care of, that feels like a good aspiration. And then I think one of the places where, if we push it even further, it would be useful for us is, let’s say we start off with being able to do one action within some small place.
SEAN WHITE:
That’s great. Now we can cover a heterogeneous set of applications within an enterprise. That’s great. The part where it gets really interesting in my mind is when we start to look at what we tend to think of as executive functions and executive thinking. That is, I have an intention, I have a set of goals, I have a set of things I want to do that are both immediate and over the long term, and how is it helping me with that? It turns out, by the way, that these systems, certainly ours, are much better at figuring out intention than a lot of our previous heuristics-based systems, right? So if you think about how search works, you know, search cares about your intention. I want to know what you are looking for, you know, and disambiguate. And in the way that we built this, we figure out the intention, and then it might go to a different model depending on the intention that you have for what you want to talk about.
SEAN WHITE:
And that actually is very powerful if you think about, like, you know, someone that is helping you plan out or think about or even execute on your long-term goals. Even actually sometimes knowing, or at least asking you, if what you really intended to do is different than what you said, right? Because, you know, oftentimes we’ll express an intention, which is, “Well, you know, I want to buy this screwdriver,” when what I really want to do is overhaul and fix this door. And if it can actually extract that intention, that’s much more powerful than just answering the basic question that I have. And so you can imagine that both in our personal lives and also in the commercial world.
REID:
That rabbit hole goes very deep indeed. What do you think will be some of the evolving skill sets, like how people will need to conceptualize their work? How they’ll need to be able to use these things to make their work, and that part of their lives, more effective? What’s going to be the ongoing table stakes for people engaging with these agentic technologies to strategize and navigate their career and work trajectories?
SEAN WHITE:
I think there are a couple of different models here. What you really want to be able to do is stay in the flow of the things that you’re doing and be able to have that skill so that, in the best of possible worlds, a lot of the interfaces disappear. And you’re just focusing on the things that you want to do and the things that you want to accomplish.
ARIA:
One of the things I love, Sean, about talking to you is that I am so obsessed with neuroscience and you’re always giving me the latest. I’m like, “Wait, Sean, tell me, what’s the newest thing?” And so I feel like that’s another area where wellbeing and health can really be positively affected by AI. So can you just tell us what’s going on? Like, what are the interesting and innovative things that you’re seeing in that field right now?
SEAN WHITE:
Yes. So recently there was a project that Steve Faraone is doing right now that is ADHD-focused. These issues tend to be problematic, and there’ve been a lot of alternatives to medication to try and approach this. The thing that was interesting about this project was that they use a small system that is effectively reading EEG. And we’ll talk about that in a second, but, you know, it’s looking at what your brain is doing. It can detect, through the use of AI, whether you are focused or not. And in the game, that means, you know, in the first level of the game, your character runs faster. So the more focused you are, much like meditation or something else, the better you play the game. And then there’s this whole other set of activities that you do that build on that, that are directly related to clinical studies about how you can, you know, effectively change and work on your own ability to attend and to focus.
SEAN WHITE:
I think this is just neat because it’s a closed-loop system. It couldn’t have been done before, because we didn’t have the sensor systems for it and we didn’t have the AI to actually really grok what was going on in the brain. And so, you know, seeing some of the demonstrations of this yesterday in the studies, it was really exciting, right? Because it’s another tool in the arsenal. Maybe one last one that’s sort of related. There is a surgeon and professor up at UCSF, Eddie Chang. I just love his work. There’s been this previous work looking at the motor cortex, and if I imagined that I was drawing out the letters A-R-I-A, it could tell that I was thinking of your name, right? Because it was looking at the motor cortex and using a neural net to sort of predict.
SEAN WHITE:
But what he did was look at an overall fMRI of the entire brain and, again, using a generative model, was able to almost predict, based on the activity in your brain, what you were thinking. And it wasn’t perfectly accurate, but it was pretty close, and it was the closest I’ve seen to having someone who is trapped in their head and can’t speak, you know, suddenly given a voice. And, you know, I think I said this earlier, but that’s one of these things that excites me, right? It’s this: giving things a voice.
ARIA:
So when people talk about what makes us human, obviously it’s, you know, is it your personality, your intelligence, the way you look, sort of all of these things. And one of the things you’ve talked about, Sean, is the idea of ownership and owning your own intelligence. And I think especially with AI, you know, we’ve talked on this podcast before, everyone is concerned about privacy and owning their own data. But also, Reid and I talked to Daphne Koller about the tragedy in some medical and health situations where, actually, if we could just get all the data together and aggregate it, you know, obviously respecting PII, we’d be able to make enormous breakthroughs. And I feel like that becomes even more sensitive in the intelligence realm. It’s like, oh my gosh, own my own intelligence, that’s what is me. So I would love to hear you talk about, like, in this AI future, what does that mean for people to own their own intelligence or control their own intelligence or control the data around it?
SEAN WHITE:
Yeah. So there are a couple of framings for that. Actually, I’ll start with the way that we have talked about it with consumer Pi, and in part this is about owning it, but also being in control of it. And then we’ll talk a little bit about the enterprise. So one of the things that we had announced, which, you know, I didn’t think was going to be particularly controversial, was that we were working with this organization, DTI, the Data Transfer Initiative, so that there was interoperability across different personal AI systems. And, you know, this was sort of a first salvo on the idea that, oh yeah, well, you know, this was your discourse, your dialogue, your knowledge, you should own it. And so if you would like to have it and move it somewhere else or do something else with it, you should be able to do that.
SEAN WHITE:
That feels right. We did this in part, and that was the initial intention, because what we create has value. Maybe it’s just value to us, but it really has value. It’s what makes us different and sometimes unique. So when you come to the enterprise, then oftentimes there are things that they have that can’t leave the premises, right? And that might be for business reasons, that is, there are some trade secrets they don’t want to share. Or that might be for regulatory reasons. And so a part of what we have been doing, and this is one of the reasons to be able to license your code out, is to say, “Well, look, no, actually, if that’s important to you, well then you own that, and, you know, you can control it, you can run it where you want to run it.” And there’s a set of companies, particularly as we have learned more and more from them, that really care about that.
SEAN WHITE:
I think there is an interesting question, because you were bringing this up, about that tension between the general public good and personal privacy, [laugh] you know, your ability to sort of control and own all of that. And, you know, we saw this when I was at Mozilla. One of the places where we saw this was around speech and language. We had a project, DeepSpeech, which is an open technology for doing speech recognition, and then Common Voice, which was really an effort to let any language, and representatives of that language, collect and create their own data set. A good example of this: we see this a lot actually in the medical data and the neuroscience data, right? Which is, for instance, EEG data can be incredibly personal, and certainly at the point where we start gathering more data, you know, there are folks who are working on things like wearable fMRIs, which then could translate to language.
SEAN WHITE:
So knowing what you think. It is something where there’s a lot of neuroethics work going on right now. We did some with the OECD, and different states are trying to sort through what this means. Being able to associate that with different health outcomes or different attitudes or different behaviors, it’s incredibly powerful. And so, you know, for that, I think there’s still a lot of room for us to actually explore ways in which we do privacy-preserving but federated knowledge. I think we’ll see more and more of that, particularly where we see that there’s a trade-off between things that are public good, but also the desire to maintain privacy.
ARIA:
And I actually think so often there isn’t; there’s only a tension in trust. That actually there’s almost no tension between, okay, you want to keep your medical records private, and you need to share them for this cancer study, for this whatever study. But because we have had bad actors throughout history, sometimes government bad actors, because we have this lack of trust, especially within specific communities, of course people don’t want to participate. And of course people see that as a real tension instead of something where we can do both. And so, to your point, we need to make sure to manage one to get the benefits of the other.
SEAN WHITE:
No, I think that’s exactly right.
REID:
You know, in our last minutes, before we get to rapid fire, let’s also go a little more broadly afield. I know that one of the projects that you and I work on together is the Earth Species Project, where these generative AI models may help us understand other sentience, other consciousness, other… Say a little bit about what you’re doing with the Earth Species Project.
SEAN WHITE:
Earth Species Project is a fantastic project. It started off with the seed of an observation. We had these machine translation techniques that we had started using that were unsupervised, which is to say, it used to be that if I wanted to translate English to Spanish, then I would have to label and say, “mom equals madre.” But when we looked at large corpuses of data from both languages, we started to see that there were structures, almost geometric structures in an n-dimensional space, a bit like constellations, right? And we could say, “Oh, that’s interesting. The Big Dipper here looks like the Big Dipper here.” And what that means is that madre and hija are the same over here in terms of mother and daughter, right? With that, I mean, most of the machine translation systems now don’t need labeled data. They start to capture some of the interesting idiomatic aspects of it.
SEAN WHITE:
And so that was great. We figured this out. The observation from Aza Raskin and Katie Zacarian, who are the co-founders of this, was, well, that’s cool. What about all of these other species that we believe have some form of intelligence and are certainly communicating with each other? You know, for instance, we have evidence that there are pods of whales that have been around for centuries that pass culture from one generation to the next. And so, can we look at the ways in which they communicate? And again, use this as a transducer so that we can actually understand what it is they are saying, and can we speak back to them? That’s the overall goal and focus of the project. And you can imagine, with generative techniques, right, I can, as an example, kind of autocomplete a sentence or really a whole paragraph with just a little bit of context.
SEAN WHITE:
So you can do some of those same kinds of things. If you want to autocomplete a bird song, historically what we would do in some of the studies is mimic something, play something back. But that’s a little bit like talking in a mirror, and sometimes that gets a little awkward. I mean, if, Reid, you and I were talking, but it was constantly Reid talking back to you and saying the exact same thing that you said, you’d just be like a little kid: stop doing that, stop repeating what I say. But if you could really use these generative techniques to talk to other species, both to generate it and to understand it, that would be really powerful for several reasons. One, it actually gives us insight into the epistemology of these other species. Two, we can learn from them.
SEAN WHITE:
I mean, it’s really curious to understand a culture that has been going for hundreds of years in the ocean and has actually been passed along. I want to know that. But three, and maybe this is the most interesting, is it does start to ask the question of us about what our relationship is to them, what humanity’s relationship is to all these other species. The Earth Species Project, to me, has been really exciting because it is potentially one of these world-changing moonshots in how we understand ourselves and our relationship to the world. And if I didn’t say this, I suspect Reid might add it in: if we ever encounter species outside of what we know and understand here on Earth, it gives us a head start on how we might communicate with them as well.
REID:
Yes. Wow. [inaudible]. Alright. So rapid fire: Is there a movie, song or book that fills you with optimism for the future?
SEAN WHITE:
There’s a song called “Familiarity” by the Punch Brothers, and it is as much about the way it makes you feel as some of the lyrics. And it is a complex song, almost in three parts. And it’s almost confusing in terms of its genre. And every time I listen to it, you know, I feel hopeful about our future and where things are, and somehow comforted in the cycle of the universe.
ARIA:
Sean, what is a question that you wish people would ask you more often?
SEAN WHITE:
There’s a research professor, Hirokazu Kato; he’s a professor at the Nara Institute of Science and Technology. I used to love his talks at the augmented reality conferences because he would always start them off by asking folks, “Is it possible?” And then go on. Maybe this podcast aside, we don’t actually ask that enough. Is it possible? Not, you know, can we do it, but is it possible?
REID:
We a thousand percent agree. Where do you see progress or momentum outside of your industry that inspires you?
SEAN WHITE:
It’s going to sound a little weird, but material science. In part it’s because it enables so many other things: within food security, within all of the medical field and things that we look at, even within AI, so I guess related to us. But we are getting so much better both at understanding what happens at a very, very small scale and at being able to guide and construct that into new materials that have new behaviors. Sometimes they are biomimicry, like the surface of a scramjet that actually models the surface of a leaf with stomata, or, you know, threads and yarns of carbon nanotubes that are used, you know, suddenly to be able to put something through our veins that is, again, nanoscale in size. Every time I see this, it is only hopeful, because it’s, again, this giant space in which we can be creative and make progress.
ARIA:
Last question. Can you leave us with a final thought on what you think is possible to achieve in the next 15 years if everything breaks humanity’s way, and what’s the first step to get there?
SEAN WHITE:
I think the first step is for us to relax a little bit. That is, for us to actually use these technologies so that all of the breakthroughs that are happening around Alzheimer’s and dementia and the brain, we can test these and we can try these out. And what I mean by relax a little bit is: we have, rightly so, leaned so hard towards concern about the things that could go wrong, but I want us to lean towards the things that could go right. There was a professor from UCSF at a recent Stanford Medical School talk, and she highlighted this great thing around AI in particular. She said, you know, there’s all this discussion right now about what could happen, but I want you to know that someday, I believe, it will be unethical not to use AI to save someone’s life. I think for us to really get there, we have to focus on those things. There are always the nuts and bolts of doing the right thing, bending the arc in the right ways, and we do that. But, you know, we need to relax a little bit as we get there.
ARIA:
I love it so much. That’s awesome. Sean. Thank you so much.
SEAN WHITE:
Thank you.
ARIA:
We really appreciate it.
REID:
Always a pleasure. Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.
ARIA:
And a big thanks to Karrie Huang, April Dewart, and Little Monster Media Company.