This transcript is generated with the help of AI and is lightly edited for clarity.

FEI-FEI LI:

Humans are capable of creating God-like technology so that we can improve our medieval institutions and rise above our paleolithic emotions, or channel our paleolithic emotions into creativity, productivity, and benevolence.

REID:

Hi, I’m Reid Hoffman.

ARIA:

And I’m Aria Finger.

REID:

We want to know how, together, we can use technology like AI to help us shape the best possible future.

ARIA:

We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.

REID:

This is Possible.

ARIA:

Reid, it is so good to be back. Happy New Year, even though we’re already halfway through the month, we can still say that. And of course, welcome back to our loyal listeners. Thank you so much for listening. If you caught last month’s featured miniseries called Artful Intelligence, we hope you enjoyed it.

REID:

Yes, happy New Year, Aria, and great to be with you. The turn of the calendar year is always a good time to reflect and reset. For me, the end of 2024 actually triggered a good deal of reflection on the state of AI. November marked two years since ChatGPT was released to the public, and it’s hard to overstate just how much has happened in the world of AI development since then.

ARIA:

I could not agree more. And as we look ahead, especially with respect to AI, I think it’s valuable to put into context how far we’ve come in such a short time.

REID:

Well, lucky for us, we’re joined today by the godmother of AI. In addition to being a brilliant and world-renowned computer scientist, entrepreneur, professor, and humanist, Dr. Fei-Fei Li is also a friend. I’m excited to have her with us to talk about her pioneering work in AI, including in her current role as co-founder and CEO of the spatial intelligence startup World Labs. We’ll get into the mechanics and value of training machines to develop human-like spatial intelligence, along with her views on public-private partnerships, regulation, and the importance of empathy in AI innovation. These are themes Fei-Fei touches on in her moving memoir The Worlds I See—which details her rise as one of the world’s leading AI experts, as well as her childhood in China’s middle class and the difficult transition she and her family faced when they immigrated to the United States.

ARIA:

Fei-Fei is also the creator of ImageNet, the database that revolutionized computer vision nearly two decades ago. As Stanford’s inaugural Sequoia Professor of Computer Science and co-founder of the Human-Centered AI Institute, she has a long track record as a leader in ethical, inclusive, and human-centered AI innovation. She has been named one of TIME’s “100 Most Influential People in AI.” And honestly, that’s just the tip of her biographical iceberg.

REID:

Here’s our conversation with Fei-Fei Li. Fei-Fei, it’s excellent to see you. Welcome to Possible.

FEI-FEI LI:

Likewise. Great to see you and Aria.

REID:

What gave you the idea for ImageNet? Like, what was the “ah, we need to do this” moment?

FEI-FEI LI:

It’s very hard to pinpoint the exact moment, but it centered around 2006, when I was deep in research using machine learning algorithms to try to understand objects in images. And no matter where I looked, I couldn’t escape the fact that there is a mathematical concept called “overfitting” with machine learning models. This is when the model complexity and the data the models are using don’t quite match—especially when the data—and it’s not just the data volume, it’s the data complexity and the data volume—doesn’t really drive the models in an effective way. And of course, not all models are created equal. We now know neural network models have so much higher capacity and representational power. But that jargon aside, there is definitely an interplay between data and model. And everywhere I looked, people were not paying attention to data.

FEI-FEI LI:

We were only paying attention to models. And that really was the moment I had the insight: “I think we need to not only look at models—in a way, we’re looking the wrong way. We need to look at data and use data to drive models.” And of course, at that point I had moved to my first faculty job at Princeton, and I came upon the work called WordNet. WordNet had nothing to do with computer vision, but it was a wonderful way to organize concepts in the world. And I liked the name WordNet. One thing led to another, and ImageNet came about because of the need. I so passionately believed in the need for big data and a diverse representation of the visual world.

REID:

We started at kind of a midpoint in your AI career with the amazing ImageNet, and now we have World Labs. So I want to draw the line from ImageNet to World Labs. What’s the idea with World Labs? What’s the thing that you are building towards that is a key portion of where we’re going—and how should we understand this, both as World Labs itself and as a trend in AI?

FEI-FEI LI:

Yeah, Reid, we talk about this, right? It’s our favorite topic—where technology is going. One thing I have been obsessively thinking about all my career, especially post-ImageNet, to be honest, is really what is intelligence and how do we make intelligence happen in machines. And it really, really, to me, boils down to two simple things if you look at human intelligence. One is that we say things. We use linguistic communication as a tool to talk, to organize our knowledge, and to communicate. But there is another half of intelligence that’s so profound to us. And it boils down to: we do things. You know, we do it by cooking an omelet. We do it by taking a hike. We do it by friends having fun and really enjoying each other’s presence in ways that go way beyond any word we say. Just the way we can comfortably sit in front of each other.

FEI-FEI LI:

Hold a beer can. All of this, right? It’s part of intelligence. And that part of intelligence is really grounded in the ability to make sense of the 3D world we live in, to perceive it and to translate it into a set of understandings and reasoning and predictions so that we can do things in it. That ability—which in my opinion is called spatial intelligence—is really the fundamental native ability that embodied intelligent animals like humans have: the ability to deal with 3D space. So ImageNet happened because I was on the quest of putting labels on the pixels in 2D images. And 2D images, for humans, are projections of the 3D world. So you can see it was a baby step towards understanding the fuller visual world we live in. And that baby step was critical, because whether it’s humans or animals or machines comprehending those objects in images, labeling them is a key first step.

FEI-FEI LI:

But now that, gosh, more than 14 years have passed, I think we’re ready for a much bigger quest—almost a home-run quest—to unlock the other, most important half of intelligence: the question of spatial intelligence. Now, what makes spatial intelligence really interesting is that it actually has two aspects: one is the physical 3D world, another is the digital 3D world. And we never really were able to live in between. But spatial intelligence can now be a unifying technology that makes sense of both the grounded physical 3D world as well as the digital 3D world.

ARIA:

So when I think about the promise of spatial intelligence—you know, if you went back to 1880, with horse-drawn carriages and unpaved roads, you would think, this is a totally different world. But if you go back to 1980—okay, people are driving different cars, but they’re living in the same buildings, they’re still driving cars. The mechanics of the real world are pretty much the same. Do you think this other half of intelligence will change that over the coming decades? That we’ll actually see in the real world the great transformation that we’ve seen in the digital world over the past few years?

FEI-FEI LI:

I think so, Aria, and I think the line between real and digital will start to blur. For example, I think about me driving on the highway, and if I have a flat tire, I have a feeling—despite the fact that I’m a technologist—I have a feeling I’m going to have a hard time with that flat tire. But if I can wear glasses, or even point my phone at the car that’s having the flat tire, and just collaborate with that potential application to guide me through the process of changing that tire—whether it’s through visual guidance or some dialogue or a mixture—I think that is a very mundane daily-life example that really breaks the boundary between the physical 3D world and the digital 3D world. And that image of technology like this empowering people, whether it’s changing a flat tire or performing cardiac surgery, is really exciting imagery for me.

ARIA:

And so you say you use LLMs all the time to teach yourself things, which I always think should be inspiring. My kids are always like, “oh, I’m so good at math, I don’t need to learn anymore.” And I can be like, “no, no, Fei-Fei Li is using LLMs to learn. I think you have some more to go.” But what do you see when you’re talking about large world models versus LLMs? How do you explain that difference to people, and how do you think that’s going to play out in the future?

FEI-FEI LI:

Well, fundamentally, like I said, one is about saying things. The other one is about seeing and doing things. So they are fundamentally different modalities. For large language models, the basic units are lexicons—whether it’s a letter or a word. And in our world models, the basic units are pixels or voxels. So they are very different languages. I almost feel like language is the language of humans; 3D is the language of nature. We really want to get to a point where AI algorithms allow people to interact with the pixel world, whether it’s virtual or physical.

REID:

Your answer is reminding me of another quote you’ve used, which is quoting sociobiologist Edward O. Wilson saying, “we have paleolithic emotions, medieval institutions, and God-like technology, and it is terrifically dangerous.” So given, you know, kind of reasoning, language of nature, education of people, how do you invert that? And what is the opportunity upon us for humanity in the age of AI?

FEI-FEI LI:

Yes, I still believe that. And because I believe that, you and I and our friends started the Human-Centered AI Institute. So if I were to reverse it, I would almost reverse the sentence: humans are capable of creating God-like technology so that we can improve our medieval institutions and rise above our paleolithic emotions, or channel our paleolithic emotions into creativity, productivity, and benevolence.

REID:

What would you say is the key thing in how we build technology to help us realize our aspirations? Is it a focus on compassion? Is it a question of the human-centered approach and the symbiosis of the interaction? What’s the thing that you would build as the next step—having technology and AI help us realize our better selves?

FEI-FEI LI:

I can see why you are a Sym-Sys major, Reid—the combination of philosophy and technology in you. I agree. And, you know, in the previous quote, we almost use paleolithic as a negative word, but it’s actually not a negative word. It’s a very neutral word. Human emotions, or humans’ self-image of who we are, are deeply rooted in evolution—in our DNA. And we’re not going to change that. And the world is simultaneously beautiful and messy because of that. So thinking about technology and the future of technology’s relationship with humans, I think we need to respect that. We need to respect some of the most fundamental, truly paleolithic roots of who we are. There are a couple of things that technology development really needs to respect, and the more we respect them, the better off we are. One is respect for human agency. I really think one of the public communication issues of AI is that we too often use AI as the subject of a sentence, as if we’re taking away human agency. Sentences like, “AI will cure cancer.”

FEI-FEI LI:

I’m sometimes even guilty of saying that. The truth is, humans will use AI to cure cancer. It’s not AI curing cancer. Or, “AI will solve fusion.” The truth is, human scientists and engineers will use AI as a tool to solve fusion. And what is even more dangerous is, “AI will take away your job.” You know, I think we really need to recognize this technology has so much opportunity to create opportunities and jobs, to empower human agency. And that is a very important first principle I care about. A second important first principle is respect for humans. Every individual wants to be healthy, wants to be productive, wants to be a respected member of society. And no matter how we develop or use AI, we cannot lose sight of that. Losing sight of that is dangerous and counterproductive. And I think these two things alone are critical in guiding our development of this technology.

REID:

Fei-Fei, I love you zooming in on the concept of human agency—which as you know, is at the center of my new book, Superagency. Can you talk a bit more about agency as well as the importance of making AI human-centered? What does human-centered AI mean? And how should technologists and companies be thinking about this?

FEI-FEI LI:

Yeah, Reid. I mean, look, you and I started on this journey together before the establishment of the Human-Centered AI Institute at Stanford. Talking about this is really rooted in the deep belief that the “so what” of any technology, any innovation, is to be benevolent to humans. And that is the arc of human civilization: every time we created a tool, we wanted to use that tool for good. Of course, it’s a double-edged sword. We could misuse the tool; there will be bad actors who use the tool. So even looking at the dark side of technology and tools pushes us to want to double down on how to make it better—how to make it human-centered. And that was truly the fundamental principle of the Human-Centered AI Institute: we—you and I, and our friends at Stanford—see AI as such a powerful tool. It’s a civilizational tool. We had better put a framework around it as early as possible that puts humans and human benefit at the center of this. And one of the absolute most critical aspects of human-centered AI, and how I believe it should guide every company, every developer, is really this concept of empowering people.

ARIA:

You have been working in AI for such a long time, in many different capacities, and I feel like some people are just getting up to speed on AI now. What do you think about this moment in AI innovation—both in terms of where we are and what developers are facing? What do you think we need to do to get to the next level and solve today’s problems?

FEI-FEI LI:

It is a phenomenal moment. The reason I think this is absolutely the inflection point of a revolution is because of the applications. AI now can be used by everyday people and businesses. And many of the dreams that we, the early AI pioneers, thought of in the early phases of our careers have been realized or almost realized. For example, the famed Turing Test that the general public praised is pretty much a solved problem. Now, the Turing Test itself—I wouldn’t call that the be-all and end-all test of intelligence, but it was a yardstick that was so hard that it was a legitimate yardstick, and it’s a solved problem. And cars driving by themselves, right? It’s not fully solved, but it’s so much more solved than in 2006. So I think because of the power of these models being productionized into the hands of everyday people and businesses, this is a phenomenal phase of the AI revolution. But I’m also keenly aware, Aria, that we live in a Silicon Valley bubble, because I still think the entire global population is still being brought up to speed on where AI is. But we do see the future and where the future is going.

ARIA:

So I think so many Possible listeners can resonate with what you’re saying—this could be an enormous human amplification, it could be an enormous positive, and we do have to worry about the negative consequences. We want to be able to steer it in the right direction. From a development perspective, what do we need to do to make sure that AI goes in this positive direction? And if there’s government or cross-sector collaboration that you think is needed, I would love to hear about it.

FEI-FEI LI:

Honestly, I think there’s a lot we can do, and I think we should have been doing it yesterday—and it’s not too late. We should just really commit to doing this. One thing I think we should do is base all of this on science, not science fiction. There has been so much hyped rhetoric about, you know, the extinction of humanity because of AI, or world peace because of AI. Either side [laugh] is more science fiction than science. So when we think about how we approach AI policy and AI governance, basing it on data, basing it on scientific facts, basing it on scientific methodology is so important. And second, I really believe that, just like with many other technologies and tools, putting guardrails around the application—where the rubber meets the road, where humans are being impacted—is the right place to focus our governance energy, rather than stopping upstream development.

FEI-FEI LI:

Think about the early days of the car. It wasn’t very safe, you know. There was no seat belt; at the beginning, it didn’t even have doors. There was no speed limit and all that. And then we did have lessons that cost human lives. But what happened is we didn’t go to Ford and GM and say, “shut down the factory.” We created regulatory frameworks for seat belts, for speed limits, and all that. Today, AI is similar. It is a profoundly empowering technology, but it comes with its harms. So what we should look at is: when AI is applied in medicine, how do we update our FDA regulatory measures? When AI is applied to finance, how do we put up regulatory guardrails? The application is where we should focus governance energy. That’s the second thing. Last but not least, to me, is that we need to understand that a positive future of AI comes from a positive ecosystem.

FEI-FEI LI:

And that ecosystem needs the private sector. I think the private sector—both in terms of big companies and in terms of entrepreneurship—is very important. But we also need the public sector, because the public sector produces public goods. And in my opinion, there are two forms of public goods. One form is curiosity-driven innovation and new knowledge—whether it’s using AI for fusion, using AI for curing diseases, or using AI for empowering our teachers. All these different ideas—a lot of them come from the public sector. ImageNet came from the public sector, you know [laugh].

ARIA:

Yeah.

FEI-FEI LI:

Then another form of public goods is people. We need to educate more and more youth and the public about this technology. And the public sector—from K-12 to higher education—shoulders the bulk of society’s duty in education.

ARIA:

Yeah.

FEI-FEI LI:

So I think these are the different aspects of AI governance and policy that I care a lot about.

REID:

Actually, on what you just spoke about—I think one of the things you should also emphasize a bit is AI4ALL, because one of the other things you’ve been doing is this work to make sure that AI is not just the province of amazing, you know, professors from Stanford with PhDs in physics from Caltech, right? But everybody else. Say a little bit about AI4ALL and what the mission and contribution is there.

FEI-FEI LI:

So AI4ALL is a nonprofit organization that I co-founded with my former student and colleagues. The mission there is really to give K-12 students from diverse backgrounds the opportunity to get into AI through summer programs at universities, and through internships. The idea is to try to get to the public good—the education part of AI. We know AI will change the world, but who will change AI? We want a more diverse group of people to come and be inspired to use this technology, to develop this technology, for all kinds of great causes. So we have been focusing on women and on students from rural, inner-city, or historically underrepresented communities and backgrounds to participate in these summer programs. And it’s just so inspiring to see these young people using AI or studying AI—from improving ambulance dispatch algorithms to assessing water quality in rural communities. It’s still a small effort, but I hope it keeps growing, because this goal of including more, and more diverse, people in AI is a very important one.

REID:

So one of the things that you’ve also worked on is healthcare. And one of the areas that people should also track, in terms of the elevation of humanity and the human condition, is what AI can do in healthcare. So say a little bit about that—some of the work you’ve done, and some of the work you’re hopeful for, for AI and healthcare looking forward.

FEI-FEI LI:

Yeah, Reid. As my book also describes, I am passionate about AI’s application in healthcare for many reasons. You know, healthcare is absolutely at the very core of human-centeredness. Healthcare is a very vast industry. It goes from the basic biosciences of drug discovery and diagnostics all the way to clinical diagnosis, clinical treatment, healthcare delivery, and public health. Right? So the exciting thing is that at every point of this system, we’re seeing AI could be of great help. My own area that I love and focus on is healthcare delivery. That’s where humans are helping humans. We have far fewer nurses in America than our patients need. The job is grueling. We have strong nurse attrition. There are some stunning statistics. For example, in one shift, some nurses average more than four miles of walking just fetching medicines and equipment and all that.

FEI-FEI LI:

In one shift, our nurses can do up to 150, 180 different tasks. In the meantime, we have patients who fall from the hospital bed because they lack enough care. We have a lot of issues triaging very sick patients versus not-so-sick patients. So healthcare delivery needs a lot of help—let alone our elderly living at home alone, you know, facing risks on their own or dementia deterioration. So my work in the past 10-plus years has been looking at using smart cameras—cameras that are non-invasive and no-contact—to help our caretakers pay attention to our patients. If they’re in the hospital bed, pay attention to their movement to prevent falls. If they’re at home, pay attention to their behaviors, or loneliness, or nutrition intake, so that we can track what’s going on. If they’re in the surgery room, pay attention to the instruments that the nurses have to count every single minute so we don’t lose them in the patients’ bodies. This kind of smart camera technology, which we call ambient intelligence, is intended to be helpful to our doctors and healthcare workers so that we can collectively improve the quality of care for our patients.

REID:

Amen. So now, you know, AGI is a term that’s bandied about a lot. I think you might have said somewhere, “I’m not even sure what AGI means,” as obviously so many people mean different things by it—it’s kind of their own Rorschach test. So say a little bit about this AGI discussion: what it is, and maybe what it should mean, right? And what would make it a little bit more rational versus a set of scattered, you know, “it’s great, it’s terrible, it’s going to destroy all jobs, it’s going to help all humanity,” et cetera.

FEI-FEI LI:

I know, Reid. I mean, this is both a fun but also a frustrating conversation. I genuinely don’t know what AGI means, because I think the term came from the commercial side of the world around 10 years ago, when AI was coming of age and there was a lot more commercial interest. The initial intention of the term—which I respect—was to add the word G, general, to AI to really emphasize that the future of AI is more generalizable capabilities rather than very narrow ones. For example, today’s self-driving car is much more generalizable than just a camera that detects trees, right? So that narrow focus on one specific task, versus a powerful technology that can do a suite of tasks, is real. The reason I have never felt a hundred percent clear is that if I go back to history, to the founding fathers of AI—the John McCarthys and Marvin Minskys of AI—and if you go to their dreams, the hope starting in the summer of 1956, that has been their dream all along: to make machines that can think and help people make decisions, and that eventually can even do things.

FEI-FEI LI:

Nobody in their original dream of AI said, “we dream of extremely narrow AI tasks that detect trees.” The birth of this field as artificial intelligence is thinking machines. So from that point of view, well, we share the same dream, we share the same scientific curiosity, we share the same quest, which is machines that can perform extremely intelligent tasks. So from that point of view, I don’t know if I should call it AI or AGI, to me it’s the same thing.

ARIA:

Well, in thinking about AI that can do things, like you said—it feels like recently, with the new improvements in voice and with agentic AI, we’re creeping closer to a point where you’re like, oh, I’m just having a normal conversation with my AI, and we’re getting close to it doing things for you. Are there ways that you use agents now in your life that you find particularly helpful? Or aspects of the promise of agentic AI and voice in the next few years that you think are going to change things?

FEI-FEI LI:

I definitely think the natural language way of sharing knowledge, allowing people to be able to—whether it’s to search, or just to ideate, or to learn something—is a very powerful tool. Even for myself, I use LLMs to try to understand a concept, to try to understand a paper, to try to just ask a number of questions I didn’t know the answers to. What really excites me the most is seeing people and kids using it as a tool to improve their own learning. I do want to stress that no matter what, let’s make sure we keep the agency, the self-agency, in people, and give them good tools to learn and to empower them. And I think as we deepen these powerful tools—and I myself am working on that—we’ll see more and more of that collaborative capability that allows humans to use these tools to do things more precisely. And I would be excited to see that happening.

ARIA:

I think it’s not only important because, of course, it’s the right thing to do, but also because otherwise you get the narrative that, “oh, these people developing AI are trying to replace humans and get rid of them. And I don’t want to look at a screen for 10 hours a day.” And listen, there’s no one who wants to look at a screen for 10 hours a day less than me. I think human interactions are so critical and so important—for everything: for teaching, for community, for empathy. And one story you told in your truly beautiful book, The Worlds I See, was about your high school math teacher, Mr. Sabella, and it just showed that it is important to have human interaction. So can you say a little bit more about that, and perhaps the memorable advice that he gave you?

FEI-FEI LI:

The book really acknowledges my early days as an immigrant kid who came to New Jersey at the age of 15 and landed in a public high school not speaking English. And that was the beginning of my journey. Very luckily for me, very soon I met a math teacher called Bob Sabella, who really treated me with that sense of respect and unconditional support. He not only was my math teacher, but also became my friend during very difficult teenage years as a new immigrant, and throughout my life until his passing. But what he taught me was not through words. He never sat me down and said, “Hey, Fei-Fei, AI is going to take over the world. Let me tell you to be human-centered with AI.” I don’t think that word was even, you know, in our dictionary.

ARIA:

Sure.

FEI-FEI LI:

He just taught me through action that, at the end of the day, the meaning of our society, of our lives, is the kind of positive things we do for each other, and the kind of beliefs we hold, and the beacon of light we chase. Through his actions, I came to appreciate that respecting and lifting up other humans is a beautiful thing—even if that’s a clueless kid who didn’t speak English and didn’t know what she was doing in a new country. So I think that kind of generosity, fundamental kindness, and compassion is the core of what being human is about. And the biggest learning from him, for me, is: put humans in the center.

ARIA:

It’s beautiful. Thank you.

REID:

Yeah. Beautiful. So rapid fire. Is there a movie, song, or book that fills you with optimism for the future?

FEI-FEI LI:

My Neighbor Totoro is one of my all-time favorite movies [laugh]. Yes, I can hear the music. I’m not going to attempt to sing because I’m lousy, but yes, it’s so simple, so beautiful—I’m a sucker for beautiful pixels—and yet so profound. I have the excuse to watch it with kids, but honestly I don’t even care how they feel. I just love watching it [laugh].

ARIA:

That’s awesome. That’s awesome. So, Fei-Fei, what is a question that you wish people would ask you more often?

FEI-FEI LI:

I wish people would ask me more often, “How can I use AI to help people?” Because I can talk for hours about that, and I can think of so many of my amazing colleagues at Stanford and worldwide who are doing that. Even if I don’t know exactly what they’re doing—because they’re their own genius experts in their disciplines—at least I can point people to their work.

ARIA:

Absolutely. There’s so many people doing amazing things right now, and we, we need to inspire more people to do the same.

REID:

Where do you see progress or momentum outside of your industry that inspires you?

FEI-FEI LI:

I actually think humanity’s focus on energy really does inspire me—because, I guess, I just cannot completely untether myself from AI. Even AI’s development is pushing against this very real question of energy, right? Electrical power. And I think that addressing the changes in the environment—and also the democratization of energy for the global population—is so critical. And, you know, we cannot always rely on fossil fuels. So a lot of these advances, and just the global movement in the field of energy, are exciting.

ARIA:

So for our final question, always, can you leave us with a final thought on what you think is possible in the next 15 years, if everything breaks humanity’s way? And what’s the first step to get there?

FEI-FEI LI:

I want to see a global increase in knowledge, well-being, and productivity, especially with an emphasis on shared prosperity. The reason I want to emphasize that is: on the technology side, I am an optimist. I know that technology can help people. I know that if we use it right, it can discover new knowledge, it can help us innovate, it can increase our well-being. But I think it’s so important—and we are learning this lesson over and over again—that when that happens, we need to recognize that we need to share that prosperity. We need to democratize that benefit.

ARIA:

Absolutely. I hope so too.

REID:

Fei-Fei, thank you, as always.

FEI-FEI LI:

Thank you.

REID:

Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.

ARIA:

Special thanks to Jessica Williams, Aki Shurelds, Harini Sreepathi, Russell Wald, and Little Monster Media Company.