This transcript is generated with the help of AI and is lightly edited for clarity.
ANDREW BOSWORTH:
Every person: Superhuman vision, superhuman hearing, superhuman memory, superhuman cognition. That is the vision that we have for these wearables. A tremendously equalizing technology. Who has incredible memory? Incredible vision? Incredible hearing? Some people are born that just have those talents. The vagaries of the bell curve and the ranges of human capabilities are beyond just the demographic ones. They exist in terms of fundamental capabilities. And how interesting will we be as a society if everyone has full access to those faculties?
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know how, together, we can use technology like AI to help us shape the best possible future.
ARIA:
We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.
REID:
This is Possible.
REID:
It’s no surprise that innovations in tech are changing the way we interact with each other. But now we’re living in an era where devices and programs are changing the way we interact in the physical world. And they’re doing so through advancement in augmented reality, virtual reality, and, of course, AI.
ARIA:
The future is already here, and it is quickly being distributed more evenly. And while we talk a lot about software, it’s happening increasingly with wearables—namely, electronic devices designed to blend seamlessly into our everyday lives, like watches, clothing, and glasses. However, these aren’t just fun gadgets, but devices that could refactor how we navigate our world.
REID:
Our world is soon to be one where glasses don’t just help you see, but where they enhance what you see. Now add AI to the equation. The glasses could potentially guide you, talk to you, and troubleshoot your problems, while keeping you rooted in the physical world. The possibilities and applications in this space continue to grow.
ARIA:
One of the key figures shaping the future of this technology is Andrew “Boz” Bosworth, Meta’s Chief Technology Officer and the Head of Reality Labs. Boz has spent nearly two decades at the forefront of digital innovation. He joined Facebook back in 2006, where he developed pivotal features—like News Feed, Messenger, and Groups—those functions playing a crucial role in transforming Facebook into the tech giant that it is today. But his influence doesn’t stop there.
REID:
Boz’s journey in tech runs deep, from his early days as a teaching fellow for an AI course at Harvard, to working on Microsoft Visio, to later founding Meta’s AR and VR division, which became Reality Labs. Today, as Meta builds towards the Metaverse, Boz is focused on the next frontier in mixed realities. His latest project, Orion—a next-generation pair of AR glasses, still in development—aims to revolutionize how we interact with the digital and physical world while striving to make this groundbreaking technology accessible to a wider audience.
ARIA:
In this episode we’ll be diving into the intersections of Boz’s career, Meta’s ambitious AI hardware projects, and the broader implications of having AI and mixed reality in our day-to-day lives.
REID:
So let’s get into it. Here’s our conversation with Boz.
REID:
So you were part of the California 4-H program, where you raised livestock and essentially grew up on a farm—which, by the way, I went to a high school on a farm, so I’m very sympathetic. I’m curious, how did that experience influence the person you are today?
ANDREW BOSWORTH:
People are often surprised by the fact that, you know, I grew up on a farm, and my family’s farming from way back, and still is farming to this day. But actually, if you know about farmers, there are three things that are important to know about them: Number one is they’re governed by time—daylight, and seasons. They have X number of daylight hours in which to get the work done, and they gotta get it done, because the seasons are moving forward. And that forces two other things. The first one is: Every one of them is an engineer. You gotta fix that tractor now because you gotta get that crop done. You gotta mend that fence today. You don’t have time to be dealing with getting these livestock back in the pen. So they’re all engineers, not scientists. I mean real, like, get-it-done-with-what-you-have-on-hand kind of engineers.
ANDREW BOSWORTH:
And the second one is: They’re entrepreneurs. You know, my family runs a horse ranch. Well, a horse ranch means you got a lot of manure. Well, you got two ways to handle that. The one is, you can pay someone to pick that manure up and haul it off, and that’s going to cost you money. The second one is, you can market that manure as fertilizer, and you can make a little money. That’s a two-for-one swing. And so my cousins who are still farming and my uncles who are still farming, it’s either a cost or an opportunity. And the margins are slim, so you gotta make those opportunities work. You know, in my experience at least, it’s not as big a stretch as people thought. But I do want to shout out National 4-H organization and California State 4-H. I learned how to program in 4-H. It’s not just cows and cooking, as we like to say. The first person who taught me to program was a fellow 4-H-er—and got me into computers that way. So it’s a great program, not just from an entrepreneurial and engineering standpoint, but also directly programming computers.
REID:
Was there anything additional in farming that also shaped how you think about the interface between the digital and physical worlds?
ANDREW BOSWORTH:
Well, it’s interesting. I think—the point is that humans are busy. They’ve got a lot of things on their plate, and the tool either has to work for them, or it’s just not worth it. I really think that when we think about how we build these tools, a lot of times we’re beset by the tremendous value that we see—especially in our industry. We see how great it could be. And this isn’t new. This goes back to Douglas Engelbart, right, who invented the mouse. To some degree, his vision at SRI failed because he had such a complex idea. He wanted humans to do a ton of work to get skilled enough to unlock the full power of the machine. And my understanding is, even in his retirement, late in the nineties, he lamented the fact that we took the easy exit. We took the computer mouse, and we ran with it and just did this point-and-click stuff. Whereas he wanted to replace the keyboard with the chorded keyset—he had all these ideas. And I think that’s a lesson we keep relearning, which is: It has to be so easy. Like, the big pot of value at the end is great, but it still has to be so, so easy to get at that you can lead people down that path. If they have to take a course to learn how to do it, they’re not going to do it. It doesn’t matter how valuable it is. And that’s got nothing to do with anything but just how humans are. That’s true in a wood shop. That’s true in an auto shop. The tool that you can pick up and go, “Yep, I see it, I get it, I use it, it works”—that’s the tool people reach for time and time again, not the super elaborate, complex one. I think about that all the time. That was the truth growing up on a farm. You just had to get it done. You didn’t have time to be trying to learn a new thing. You had to get it done.
ARIA:
So I love that way to start because when people think about technology, at least lately, they think so much about the digital world, and it feels like there’s nothing more “real world” than working on a farm. And using tools to fix your equipment. And making sure the cows get milked. And all that stuff is in the real world. But we’re in this new era—this new ecosystem of device wearables and smartphones, and, of course, glasses. And so how do you think about this new world, where we’re actually going to navigate and engage with the physical world through digital in a way that we used to interact with digital spaces?
ANDREW BOSWORTH:
Yeah, this is the construct of the Metaverse—which I think has been pretty broadly misunderstood, or at least it’s understood differently by different people—is this idea of blending of the digital and the physical together. Actually, let’s keep with the farming thing because it’s great. Farmers are pioneering some amazing work. You know, autonomous driving is not as far along as autonomous farming is: You know, tractors that are able to plow the fields automatically, drones that are out there doing reconnaissance on what fields need what kind of treatment—that is actually pretty advanced technology. And it really is very much about blending the physical and the digital capabilities together. And so we had this amazing explosion with the internet and software, and I think we took software as far as it could go within the construct of a phone, a laptop. It’s like we just took it to the absolute limit.
ANDREW BOSWORTH:
And what’s exciting me now is these really physical manifestations where through advanced sensors—whether it be audio mechatronics, whether it be drones or robotics automation—and then ultimately, I think, through wearable devices, we’re getting to a new plateau of hardware that can further allow software to breathe and expand. AI is such a fun example because it’s so in vogue right now. And if people had a chance to ignore all the hype and really try to use these tools in their daily lives, there are some areas where it’s mind-blowingly useful. You know, I’m doing a little home automation project, and I’m debugging. And there are these obscure Internet of Things devices that have unlisted APIs, and it would take me a long time to build a fuzzer to discover those. Man, you can do it in minutes with these tremendously useful AIs.
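The kind of endpoint fuzzer Boz describes can be sketched in a few lines. This is purely an illustrative sketch, not Meta code: the wordlist, paths, and the `fake_device` responder are all invented stand-ins for a real IoT device.

```python
import itertools

def candidate_endpoints(words, max_depth=2):
    """Generate candidate API paths from a small wordlist,
    e.g. /api, /api/status — the kind of unlisted routes an
    IoT device might expose."""
    for depth in range(1, max_depth + 1):
        for combo in itertools.product(words, repeat=depth):
            yield "/" + "/".join(combo)

def probe(fetch, words):
    """Call fetch(path) for each candidate and keep paths that
    don't 404. In real use, `fetch` would wrap an HTTP GET
    against the device's local address."""
    found = []
    for path in candidate_endpoints(words):
        status, _body = fetch(path)
        if status != 404:
            found.append(path)
    return found

# Simulated device standing in for a real network call.
def fake_device(path):
    known = {"/api": (200, "ok"), "/api/status": (200, '{"on": true}')}
    return known.get(path, (404, ""))
```

Running `probe(fake_device, ["api", "status", "config"])` discovers `/api` and `/api/status`—the point being that an AI assistant can generate this sort of scaffolding in minutes rather than hours.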
ANDREW BOSWORTH:
Of course, there’s a ton of things that they’re not good at. But I still find the interface to them very awkward—whether I’m going to my phone, or I’m going—whether it’s voice or text, it’s this very transactional thing. And what’s so funny, I find myself, I’m like the cut-and-paste machine suddenly—like I’m doing a coding project here, and I’m like, “Okay, I got to cut this result, you know, the debugger output into here,” and then it gives me the answer, and I’m cutting and pasting back into the thing. This should all be integrated.
REID:
So let’s broaden out to AI generally. I mean, I’m enough of a geek, and the interface with the digital and the physical world, and how that transforms and what it is to be human and Homo technae—I could spend the entire discussion on this, which would be amazing. But there’s also AI more broadly. So what’s the Meta view of AI? And in use in products, use in the world, what is the AI philosophy?
ANDREW BOSWORTH:
So there are three layers to this in my mind. The first one is: We’re building these very exciting models. And I find myself actually sometimes at war with both sides of the AI discussion. I have a profound belief that these are hugely important, meaningful things that will meaningfully advance human capability. I kind of liken it to a word calculator, right? In the year 1960, a calculator was a person. In the year 1970, it was not a person anymore, it was a different thing. And at first they were like, “Ban them from school and get rid of them.” And I grew up in the, “You want to have a calculator with you at all times.” I have three calculators with me at all times. My high school teachers were wrong about that. I think of the AIs that we are building as word calculators—and image calculators, visual ones. They’re really complex calculators that have moved beyond the simple symbolic space of mathematics into this higher-order space.
ANDREW BOSWORTH:
I also don’t think it is even the kind of thing that is human intelligence as we understand intelligence—and agency and consciousness and thought. And so I have to fight both sides. Like, AI is both a huge deal and also not that kind of a huge deal. So that’s my first belief about it, which gives me tremendous confidence in how I use the tool. The second one is: We’re running into the information-theoretic limits of it. If you go all the way back to Norbert Wiener and his cybernetics—the first, you know, constructs of information theory—
PI:
Hi there, I’m Pi, and I’m here to add some context. Norbert Wiener’s cybernetics can be explained in simple terms as the study of how information flows and is used to control systems, including mechanical, biological, and social systems. His work on feedback loops and data processing laid the foundation for modern control systems.
ANDREW BOSWORTH:
This idea of, “How many bits can we pull out of something that are sufficiently generalizable bits?” And we’re finding out that even the entire corpus of human media ever produced is not enough. It’s not enough. We’re finding that in robotics. Robotics is an effort that we’ve kicked off recently inside of Meta, as kind of an adjunct to our Llama program. And no matter how many videos you have of somebody grabbing a coffee cup, you’re actually not getting the data you need, because you don’t know the proprioception of how much force is applied and how we detect, “Okay, this is a plastic cup, it’s going to deflect to a certain point. And there’s condensation on it, so I need to apply a little bit more force to counter the loss of friction that I’m experiencing.” We do that autonomically. There’s not a single conscious thought in our head when we’re doing those things. Aria, when you’re taking your phone out of your pocket, you don’t know what the angle of your second digit is, or how much force you’re applying with your thumb to avoid getting the keys.
ARIA:
Yeah.
ANDREW BOSWORTH:
To some degree, the things that we think of as intelligence—we’re talking about the higher-order functions of the human brain. That’s arguably the less impressive part of intelligence. The deep brain—the mammalian brain, the amygdala, that lizard-brain intelligence—that is wildly hard for us to capture in the modern era. So, as much as I’m excited about the word calculator, I really do believe in Yann LeCun’s vision that you have to do this pioneering work to break through to a world model that has common sense, that understands causality in a more substantial way—not in a statistical, kind of soup way, but in a model-based way. And then my third layer of this stuff is: I want it to be embodied. It almost goes back to J.C.R. Licklider, who was one of the first computer scientists to sit down at a terminal and do live programming.
ANDREW BOSWORTH:
And he believed in that vision. He really was the one who then funded Doug Engelbart’s lab at SRI, running the IPTO [Information Processing Techniques Office] at ARPA. I feel like we’re in that era. We’re in the terminal era of AI, and it doesn’t want to be like that. It wants to be everywhere. It wants to be ubiquitous. It wants to be in full context of what your life is, and with history of who you are and what your life is. I think about Douglas Hofstadter’s book, I Am a Strange Loop, and how we all have little, mini versions of each other’s consciousness—simulations of each other’s consciousness—running in our brains that allow us to collaborate effectively. And my AI obviously doesn’t have that. It has no idea what I want, what I’m about. It can’t infer anything from context. So for me, I love where we are. I am a huge believer. I also want us to invest in these world models. And I want to free it from the terminal.
REID:
And are the world models, and the freeing from the terminal—are those fundamentally new technologies, different from the scaled-up transformer? What’s the looking-through-a-glass-darkly kind of thinking about how this would be done?
ANDREW BOSWORTH:
I think the embodied part can do both. The embodied part will benefit a lot, and maybe help a lot, with world modeling—once you have these sensors out there and you have better, richer data. When you have robotics data, which will give you proprioception, which will give you friction—I think that is going to be a big unlock. It benefits the current models a huge amount to get that data and to be in that context. And it’s probably also some of the data that you need to start to understand what it takes to build the world model—which we appear to be born with.
REID:
Tell me about the Orion project. I had the pleasure and honor of coming by Meta HQ and playing with it and getting some of the detailed exposures. Tell me about what you see as the significant use cases, and where the Metaverse is moving towards with Orion.
ANDREW BOSWORTH:
So we’ve got—the Orion glasses are on right now. I’ve also got this wristband—this neural interface wristband. It’s got these little metal bumps on the back of it, all down the band, EMG sensors—electromyography—they’re measuring electrical impulses going down my hand. And what I’m able to do, I’ve got a screen in front of me, so I can do my email. I can do Instagram. We have little games that you can play. I have been caught playing the games in meetings before [laugh]. They weren’t my meetings, to be fair; I was just listening in. But I have been caught. And I can do it with my wrist at rest by using a small number of gestures. But I’m using eye tracking to try to direct things. And so we had to do a lot of tough problem solving on photonics and optics.
ANDREW BOSWORTH:
Some of the stuff we understand. We understand how to build apps. We understand how to do these things. And there’s some novel interaction design here with the wristband and eye tracking, but I think we’ve made that pretty straightforward. Doing the neural interfaces was super hard. You’re building an AI model of what the hand is doing based on these electrical impulses that you’re able to observe from the surface. And so you just need a lot of people to build a generalized model that works, so that anybody can put this on. And I think we had success with you, Reid. I think we’re well into the 95th percentile of people who can put this on, and right away we know what shape the hand is in, and that allows us to do these gesture-based controls. Even with your hands in your pocket, behind your back.
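The pipeline Boz outlines—mapping surface electrical signals to hand pose—is, at its core, a classification problem. A toy sketch of the idea, with invented features and a nearest-centroid classifier standing in for the generalized models Meta actually trains:

```python
import math

def features(samples):
    """Crude EMG-style features from a window of raw samples:
    mean absolute value (signal energy) and zero-crossing count.
    Real systems use far richer features and learned models."""
    mav = sum(abs(s) for s in samples) / len(samples)
    zc = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return (mav, zc)

def train(labelled_windows):
    """Average the feature vectors per gesture label — a
    nearest-centroid 'model', a stand-in for the generalized
    model trained across many people's wrists."""
    sums, counts = {}, {}
    for label, window in labelled_windows:
        f = features(window)
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f[0]
        s[1] += f[1]
        counts[label] = counts.get(label, 0) + 1
    return {l: (s[0] / counts[l], s[1] / counts[l]) for l, s in sums.items()}

def classify(model, window):
    """Pick the gesture whose centroid is nearest the window's features."""
    f = features(window)
    return min(model, key=lambda label: math.dist(model[label], f))
```

A strong "pinch" burst separates cleanly from a low-amplitude "rest" window even with features this crude; the hard part, as Boz says, is a model that generalizes across people on the first wear.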
ANDREW BOSWORTH:
These need to be regular glasses, first and foremost. When they’re powered off, you have to be able to just see my eyes, and I have to see your eyes—that human connection that we have is important. Otherwise I wouldn’t use the glasses. You have to do all of this in a comfortable, all-day-wearable form factor. It’s a lot of challenges. You know, this was 10 years in the making. And we thought when we started this program that we had less than a 10% chance of being able to build it. So the fact that it exists is a true testament to a vision that Mark Zuckerberg had, that the research team, Michael Abrash—our Chief Scientist—and his team had for a long, long period of time. Now we always thought, to your point about the Metaverse, that the first thing that would happen would be these holograms in the world.
ANDREW BOSWORTH:
And at first they’d be just referenced to you—so your personal interface—and then over time, they would be attached in the world. And then eventually you would have an AI that was doing it. What’s been surprising to us is the AI came first. We had the sequencing wrong. The AI showed up earlier than expected. So what’s been exciting is: There’s a lot of products now that are totally valid products that are easy to use between full AR glasses—which are spectacular, but will be expensive—and the Ray-Ban Metas—which are super affordable, but a little bit more limited in their functionality. That entire spectrum is now open to us, and it’s really, really exciting. And we have demos internally in our wearables.
ANDREW BOSWORTH:
And with the Ray-Ban Metas that are in market today, through our Early Access Program, you can use this tool called Live AI. Thirty-minute sessions—until the battery runs out, because it was kind of bolted on after the fact. But for 30 minutes, it can see what you’re seeing. It hears what you’re hearing. And the difference in how useful it is to me as I go about my day in the world—I was doing a film development project. And film developing is a real fussy business with what kind of chemical, at what temperature, for how long. Usually I’m doing it with a laptop next to me, and I’m trying to type into the laptop, “Okay, what’s the thing?” Doing it with the Live AI session was incredible. It just sees what you’re doing. It’s like, “Okay, you got 20 more minutes of doing that, and then you’re going to do this other thing, and why don’t you go ahead and prepare this now.” It’s stunning. So for me, it is about blending the physical and digital together in both directions: You have to give the AI access to the physical context in which you operate—and the digital context in which you operate on your desktop or on your phone. And conversely, you want to be able to bring those digital constructs into reality—through robotics, through automation. So I see really a very, very exciting decade ahead of us in that synthesis.
ARIA:
So I am someone who sees 90% of my vision out of one eye. Will I be able to use these?
ANDREW BOSWORTH:
Yes. So these are binocular. You’ll experience the same limited depth perception that you would experience normally. That’s right.
ARIA:
That I have now. Yeah.
ANDREW BOSWORTH:
So you’ll still be able to use these.
ARIA:
Okay.
ANDREW BOSWORTH:
In the future, I do expect—one of the real option spaces here is: What if you go monocular? What if you just had the display in one eye?
ARIA:
Yeah.
ANDREW BOSWORTH:
And there’s some challenges there. It creates some binocular rivalry. So for people who have full vision in both eyes, they might struggle sometimes to know, “Hey, my eyes are seeing different things looking at the same space.”
ARIA:
“Where should I look?” Yeah.
ANDREW BOSWORTH:
You would actually be better off with those displays.
ARIA:
Ah, I love it. Can’t wait. I finally have a superpower.
ANDREW BOSWORTH:
You would have a little superpower. That’s right. You’d have a little advantage that monocular displays would probably be great for you. Cheaper, lighter, and probably just as good. So I think you’ve got maybe a little inside opportunity on the rest of us.
ARIA:
I love it. I love it. Okay, so for some of the skeptics out there—I feel like this is an especially good question for you because you were essentially the person who created the News Feed. And for the “youngs” among us, they might not remember this, but the News Feed came out, and everyone was like, “No, this is horrible. What are we doing?” And now we’re like, “Oh, a News Feed, this makes so much sense. This is perfect.” We can’t imagine a world without it. And so some people might be saying the same thing. They might be saying, “Why are you creating these things that we don’t need?” What would you say to the skeptics? And then also, as you see people using it, what do you think some of the mainstream adoptions are gonna be?
ANDREW BOSWORTH:
Things like AI are a different kind of thing. These are different types of innovations. These are truly new tools. This is a new space. The smartphone was kind of a disruptive thing, where it was like, “Hey, you have a phone, and you have an iPod, and you like the web. We’re going to put all those in one place.” Right? That was the pitch. This will be similar to that. This will be like, “Hey look, you already have a phone that you like doing this on. You already like it. You already like Instagram. This is a better way of doing a thing that you already do.” And then, once you’ve established that beachhead through comfortable paths, then the huge opportunity presents itself. You know, you talked, Aria, about having superpowers. That is the vision that we have for these wearables. Every person: superhuman vision, superhuman hearing, superhuman memory, superhuman cognition.
ANDREW BOSWORTH:
That’s what I’m talking about. That’s what I really believe is going to happen here. A tremendous, tremendously equalizing technology. You know, we talk often in our society today about the vagaries of the birth lottery. And we’re right to do so. We usually talk about the birth lottery in the context of race and gender, or physical geography. And those are hugely important factors in how our lives play out. But we don’t talk about the other parts of the birth lottery. Who has incredible memory? Incredible vision? Incredible hearing? Some people are born that just have those talents. Incredible ability to think creatively. To pattern match. And there’s no reason we couldn’t all be Garry Kasparov in chess if we have the wearables, you know what I’m saying? Now, it probably takes the fun out of chess, so I’m not recommending it for competitive play. But my point is: the vagaries of the bell curve and the ranges of human capabilities are beyond just the demographic ones. They exist in terms of fundamental capabilities. And how interesting will we be as a society if everyone has full access to those faculties? That is the future I see. And I think that’s a pretty compelling pitch to people to get them to try a new thing.
REID:
Exactly. And I’m going to take this moment to ask you a question that I hate getting, so it’s kind of entertaining to be the asker of it, which is, you know, if you’re predicting out—call it three years—with wearables, and AI, and everything else, what kinds of things do you see in the future that will be the kind of thing that people will be doing? Is it kind of a, “It’s giving me a constant scan on my life?” It’s doing the proactive search of, “Oh, you’re looking at film developing,” or, “You’re looking at this thing? Oh, let me tell you some stuff about this.” What is your future prediction that might cause, call it semi- or non-technologists, to go, “Oh my God, that’s coming in three years”?
ANDREW BOSWORTH:
Yeah, I think what’s interesting—the hard part of this question is the timeline of it. I think you and I probably would do better in one year and 10 years.
REID:
Yes, exactly.
ANDREW BOSWORTH:
Three years is the most awkward time because you kind of—I know one year. I have a good sense of 10 years. Three years is tough. I don’t think we’ll be at the proactive place in three years quite yet. Early adopters will probably have always-on systems that are capable of it. The degree to which I think it’s going to be reliable enough for the average consumer—I think we’re probably a little further out from that, from a world-modeling and cognition standpoint. But for a decent portion of people—you know, tens of millions—we’re going to have people who are in regular conversation with AIs. Everything from—I cook all the time, I’m the cook in my family—“Hey, how many quarts are in a gallon? What’s the conversion, tablespoons to ounces of water?” to, “Hey, what did my wife say I need to get at the grocery store? Remind me, she told me yesterday,” to, “Hey, when I opened the fridge earlier, did we have that cheese that I like?”
ARIA:
That’s the best one.
ANDREW BOSWORTH:
And listen, there’s stuff that I really want to push for, but that we have to advocate for from a regulatory standpoint. I mean, I think one of the classic ones is the cocktail party problem—and Reid, you must run into this all the time: You know, you see somebody coming up to you, you recognize them—you know you know them. You don’t remember why you know them or how you know them. And right now we’re just like, “Hey guy, good to see you, friend.” And you’re just looking for clues to try to remember. I’d love to be able to have your AI whisper to you like, “Hey, this is who this is—this person, the last time you saw them was here.” And you’re like, “Oh, right, right, right, right,” and that kind of thing. But that one we need help with, right?
ANDREW BOSWORTH:
Right now there are regulations in Illinois and Texas—BIPA and CUBI—that make that kind of thing tenuous at best, if it’s doable at all. So there’s a bunch of very human problems where we could be a little bit more proactive—and I think comfortably so—in a totally privacy-safe way, but we’ve got to do work. So three years from now, I do think you’ll be in this kind of place where people at the leading edge—not the earliest adopters, not the bleeding edge, just the leading edge, just the early adopters—will be getting tremendous usefulness, cognition, and memory help from their assistants.
REID:
And where do you see the panoply of wearables? Are the glasses going to work with a watch, work with a phone? Or will there be some like, well, once you have the glasses you don’t need the watch as much anymore?
ANDREW BOSWORTH:
No, I think they want to work together. I mean we probably are more than 10 years away from having the efficiency of compute from a thermal perspective to have the glasses standalone. I think for the foreseeable future, you do want this to be a constellation of devices. And there’s a lot of value in being on the wrist. You know, the neural interfaces that we’ve developed here show that there’s a tremendous amount of additional incremental signal and control we can give consumers without having to hijack their eyes or make them reach up and tap something on their temple arm. I think there’s also a huge opportunity for the range of glasses. So if you have glasses that have no display, you want to pair them with a watch that has a display, or with a phone that has a display. If your glasses do have a display, okay, now maybe you just have a simpler band on your wrist, and you’re wearing maybe a conventional wristwatch on your other wrist.
ANDREW BOSWORTH:
We’re in this space of wearables. So a thing that we actually haven’t had to grapple with in the industry for a while is the luxury presentation—the identity that people want to bring to the world about themselves. They want to look a certain way. And so if I’m somebody who wants to look a certain way, I have to have the options to maintain that vision of myself while also being a part of the modern era. It’s funny, there’s been a lot of discussion about the app stores on Google and Apple devices. I really don’t have a problem with those. That’s not the issue I see. What worries me more is the degree to which these devices—which are the natural center of compute for a constellation of wearables—are locking third-party wearables manufacturers out of things like critical Bluetooth channels.
ANDREW BOSWORTH:
You know, the famous example that I’ll use is the AirPods. There are better headphones that you can get than the ones that happen to be manufactured by Apple. AirPods are great. It’s a great device. They have 70% of the market share, and they don’t have that because they have the best product. Certainly not the best-value product. They have it because they have a proprietary Bluetooth channel that makes it super easy to pair—and also, I think they’re easy to lose, so you have to buy a lot more. But it really bothers me. Like, it shouldn’t be that way. And listen, I’m a nineties guy, so I’m an old-school computer guy.
ARIA:
Right. Absolutely. I mean, I think about that all the time. I bought non-AirPods, but they were too hard to use. And so I was like, “Forget it, this is not working for me.” One thing you said earlier is that you tell your team over and over again, “What human life will be better? What human on this Earth will be using this?” So can you talk about a time—it could be for Orion or a different product—when user testing gave you that “aha,” when you saw something interesting that made you take a new leap? That’s just such a great way to operate.
ANDREW BOSWORTH:
There’s quite a few, but I’ll tell you one that surprised me recently, which makes total sense in retrospect. One of the most popular demographics purchasing our Ray-Ban Meta glasses are blind people.
ARIA:
Uh-huh.
ANDREW BOSWORTH:
If you watch them in the user research sessions, it makes total sense. And we have members of the team who are blind, and we have a partnership with Be My Eyes, which is a great service. You know, in my head, as a seeing person, I’m like, “Oh, I wonder if they have problems navigating to the restaurant or getting across the street.” They don’t. They have solutions for that. They’ve got Google Maps reading directions into their ears. They’ve got a stick, or a dog, or there’s a bunch of systems they have. They can get to the restaurant. You know what they can’t do? They can’t find the door. And so what they do is they ask the glasses, “Hey Meta, look and tell me where the door is.” And it’s like, “Oh, okay, it’s to your left.” And if they can’t get that kind of thing done, they can call into Be My Eyes, and now they’ve got a live video stream going to an AI agent at first and, if it fails, over to a human, who helps them out. And so there are these moments where you know it’s going to happen, because you yourself are a human, and you’re like, “Yes, I as a human am like other humans, and I would want this, so other humans will want this.” But there are also these really fascinating times where you build the capability, and this stuff comes out of the woodwork that you never saw coming.
ARIA:
And so obviously Meta, Yann LeCun—you guys are leaders in the AI space. And so what do you think are the things additionally that set you all apart? And what are you going to be focusing on for the years to come?
ANDREW BOSWORTH:
Well, we’re pretty proud of our open source stance with Llama. I think we were one of the earlier ones—and in my opinion, some of it is strategic benefit. Like, if anyone builds great AI, our products get better. But people building great AI doesn’t let them replicate our products. So we have this asymmetric benefit from AI, so there’s a strategically commoditized-complement construct here. But it really is deeper than that for us. And if you’ve spent any time listening to Yann, you’ve heard that. We really think that this is the best way to accelerate progress—you open source these things, and people learn from them, and you get back tenfold. You know, when we launched Llama 1, I think it was a matter of days before somebody had a version of it running on a laptop. And I think it was a matter of weeks before someone had a version of it running on a phone. Just from a resource standpoint—I’m sure we had the talent to do it—we weren’t going to. It would’ve taken us years to go do that, because we just had other things that we were doing. And so, wow, what a spectacular closed loop to have almost stumbled into with this powerful policy.
REID:
You know, look, it’s great when the open source stuff goes to startups and goes to entrepreneurs and academics and people building stuff. More challenging if it goes to rogue nations, terrorists, criminals. What’s the way to navigate that—making sure more of that innovation-benefit loop happens, and less of it with, like, North Korean hackers holding hospitals ransom or, you know, other kinds of things?
ANDREW BOSWORTH:
Yeah, for sure. Well, there’s two parts to this: I mean, I think, again, getting back to this kind of asymmetric construct of AI, information wants to be free, and I have absolutely zero faith that the most closed-source thing that exists isn’t actually widely available in the nation-states that are inclined to attack us. Whether it be through espionage, directly or indirectly, I believe that’s likely the case. But setting that aside, I still stand by this, because when you handcuff yourself to try to slow the progress of your enemies, the far bigger risk is that you just get lapped by your enemies. The people we’re discussing, especially China, are highly capable. They have a tremendously talented pool of engineers. They’re looking at the same thing we’re looking at. It is the race. This is our Space Race. This is what it looks like in our era. And there are very few secrets. And there’s just progress. And you want to make sure that you’re never behind.
REID:
I agree, by the way, that your two counterpoints are important points, Boz. But I do continue to linger on this—and I get it: look, we just accelerate, and we try to accelerate past what the bad actors might be doing. But it is important to also, to some degree, slow, contain, and limit bad actors. And I tend to think that there must still be some things we can do. So I’m curious—what’s your thought about navigating that?
ANDREW BOSWORTH:
Yeah, it’s funny. I think a lot of people in our space, Reid, have had this conversation around safety, and I think it’s an important one. You know, the ones that come up most often are bio, cyber, and nuclear. I think there is a model there with bio. The knowledge exists, and a lot of times the threats that I hear ascribed to AI fail “the Google test”: Can I Google for this thing? And very often I can, in which case the AI isn’t really the threat. The information isn’t the threat. It’s the fact that you can mail-order these things. And I have some friends and family who work in bio who are pretty consistently alarmed at what they’re able to acquire for their labs without any kind of control. So I think there are regulatory solutions in the bio space.
ANDREW BOSWORTH:
I think on the cyber side, I actually feel way more optimistic about AI’s ability to detect cyberattacks than to generate them. Of course it will generate more, but I actually think we have been struggling on the detection side, and there’s a lot of evidence of that. If you look at what nation-state actors have been able to do—the U.S. with the OPM [Office of Personnel Management] hack, with the hacks on some of the crypto exchanges—I think AI is a much more asymmetrically valuable tool in defense than it is in attack. I think we have been on the wrong side of that for a little while. So that one I’m more bullish on. To tell you the one that I’m actually the most worried about, it’s not any of those: It’s fraud. Good old-fashioned fraud. You know, I’ve already had the conversation with my parents: “Hey, if somebody calls you, and it looks like me, and it sounds like me, but they’re asking you for money, ask about a fact that only I would know.” Right?
ANDREW BOSWORTH:
There’s a real education that has to happen. I talk often with people about this one, and it’s a hard one for them to wrap their heads around, but I have to remind people that the period we grew up in was actually very unusual historically. Before the photograph, and before video, all media was presumed to be possibly fake, right? Like letters, the newspapers—you didn’t know the veracity of them. There was a very unique period, probably never to happen again, where you could produce a piece of media—a photograph or a video—and it was impossible to imagine faking it. It was just orders of magnitude more expensive to fake than to have it be real. And so these were presumptively true. That’s not going to be the case anymore. So we’re going to return to a pre-1900s relationship with media. The kids are already there, by the way. The kids are alright. They already know. We have a generation to look after—it’s us, it’s our generation—a generation that didn’t have the antibodies for that. So, for me at least, that’s a piece of education I would put a lot of energy into as a national policy endeavor.
ARIA:
And so I think that raises a question that a lot of Possible listeners have about the new AI world and how it relates to information accuracy. And sometimes that’s AI-created, sometimes that’s user-generated. How do you think about this? Obviously a lot of people have seen the new news out of Meta, and your changes to fact-checking and content review. What’s the positive case for what you did, and how do you think about that?
ANDREW BOSWORTH:
I think Community Notes is just a better feature than Fact-Checking was. It works at a larger scale. Like do you use an encyclopedia these days, or do you use Wikipedia? It’s just not that hard of a—this is kind of my broader thought, which is: I think it’s not surprising for us as an American company—I think it may be more surprising in other parts of the world—for me to say, “Yeah, people are allowed to say things, believe things that aren’t true.” And I think we’ve all learned a tough lesson that the nature of truth, and what is true, is also not as firm as we’d like it to be. COVID taught us some tough lessons there. We have to adjust to the fact that we grew up in this relatively golden era—where, by the way, the two political parties were effectively almost the same political party. We’re exiting that period into what is probably, more than we’d like to admit, a normal period of American—or global democratic—upheaval, of tension, and different ideas about the future of where things are going. And the technology is playing a huge role in that. And I think education about these tools is hugely important.
REID:
Going back to the startup landscape, one of the questions I often get is—you know, because of the importance of compute, size of data, et cetera—is AI a game that is only going to be won by the hyperscalers? And one of the things I think is very helpful about what Meta is doing with the open source stuff, this is in the positive category, is obviously making that a much more of a field where a lot of different people can play. What do you think are going to be the things that the hyperscalers—you know, Meta and the others—are going to be like, “These are areas we’re going to be deploying a bunch of stuff?” And what are the things that you think are some of the range of interesting things with startups, and how will that play out?
ANDREW BOSWORTH:
We’re super thrilled about the role that Llama has played in building up the ecosystem of startups and giving them a better shot to innovate. And we’re seeing that really play out materially as hyperscalers are forced to take on innovations that came out of these little startups, and obviously vice versa is happening. Listen, the wisdom in every generation—and you know this better than anyone, Reid—the wisdom of any generation is: These big companies from the last generation are obviously going to win the next generation. And it almost never happens. It almost never happens. And we never know why or how until it happens. And then we’re like, “Oh, obviously that’s why and how.” And so I don’t know why or how, but I suspect there is a lot of room for truly disruptive technologies. Now, I will say structurally we actually know a lot more about the challenges the hyperscalers face.
ANDREW BOSWORTH:
Google has a business model challenge, right? Like, are they willing to undermine and cannibalize one of the most successful business models, if not the most successful business model, of all time? Boy, they’ve got the technology, the capability. They’ve got this tension. That’s tough. Easier for us. This is all gravy for us. All of our products just get better. They just get better. It’s all good news for us. Microsoft, I think, is actually in a similarly strong position. Their products get better. The Office products consumers use get better. Having all the AI doesn’t make you able to build Office. But having Office with AI is better. So I feel like us and Microsoft win—kind of no matter what. With respect to the haters, I’m sorry to tell you, we’re there. I think Google’s got the tension.
ANDREW BOSWORTH:
I think Amazon is somewhere in between. AWS certainly could be helped tremendously, but is it a race to the bottom, where they’re just adding one more incremental service? So maybe it’s a no-op for them. They’ve announced their partnership with Anthropic. They have a huge investment in Anthropic. Alexa has got a huge footprint. Can they rejuvenate Alexa with this new program? You know, Panos [Panay] is there. I think he’s obviously a tremendous talent. So I’m rooting for them. I think it’d be great to have these AIs in homes in more interesting places. So ironically, what we have is a lot more visibility into the hyperscalers and the landscapes they face. The startups are a total wild card, and that’s what I love about them. You know, they come out of nowhere.
ARIA:
I love that state-of-the-state and reality check. It reminds me of that meme that’s like, “Yeah, I knew search was going to be big, so I invested in Yahoo. And then I knew smartphones were going to be huge, so I did Research In Motion. And then social was obviously huge, so MySpace was my biggest investment.” You know, it’s really hard to predict what’s gonna hit.
ANDREW BOSWORTH:
That’s right. And we underestimate at the last mile. We underestimate the interface design. We underestimate the use cases.
ARIA:
Absolutely. So on this podcast, Reid always gets to ask the questions about science fiction, and I’m always woefully behind. I would make a wager that, since you have young kids, you’ve probably either seen Wild Robot or read the three books, and so I’d love to ask you a question about your thoughts on it.
ANDREW BOSWORTH:
Of course, yeah.
ARIA:
You have this robot who has empathy, who is talking to the animals in the forest. Just talk to me about Wild Robot, and do you think it’s a positive depiction of AI and robotics? Negative? Warning? How do you see it? I’m sure your kids have seen it, and that’s one thing that’s shaping their vision of robots.
ANDREW BOSWORTH:
First of all, I’m a huge film fan, and I cried like three or four times when I watched that movie. And I watched it again and cried again at the same points. Which is unusual for me. And I’m a bit of a crier, and I got no problem with that. But, it was a touching film and really a film not about robots. Really a film about motherhood. Really a film about parenthood. And so touching on that. You know, if I’m being a critic here, I’m not exactly sure how Roz’s “Hey, let’s all get along and not eat each other” works out more than one season when the carnivores need to eat things. I don’t know, kind of unresolved in the film is how exactly the carnivores survive in Roz’s brave new world that she’s tried to craft for them.
ANDREW BOSWORTH:
So I think the morality of that part is a little heavy-handed for me, and I would’ve loved them to embrace more of the circle-of-life construct: “Yep, this is the natural way of things, and I’ve protected you as your mother.” But that’s neither here nor there. I’m not sure that would’ve worked with the fox anyway. As far as a treatise on robotics goes, it’s an infinite magical robot that has permanent energy and arms that extend to infinity and all these other things. What I’ve been thinking a lot about in science fiction is engineering-oriented science fiction. Andy Weir is the best at this, right? You know, Project Hail Mary. People are familiar with The Martian, which is a great fun one—and I do think the book is better than the movie, with all respect to Matt Damon. I think Project Hail Mary is even better. It’s near-future science fiction, so it’s tangible. It’s optimistic about humanity’s ability to engineer our way out of grave problems. Those I love, and those are the ones that I decided to start reading to my kids.
REID:
Alright, rapid-fire. Is there a movie, song, or book that fills you with optimism for the future?
ANDREW BOSWORTH:
Oh, wonderful. You know, it’s funny—one of the reasons I started getting involved with film a little bit is because I want more optimistic stories out there. I think I’m going to go with Star Trek. And by the way, the current series are fantastic, but you don’t have to pick the current series. It really is a beacon of optimistic science fiction in a landscape of otherwise relatively dystopian works, which I think are just a little easier to write, frankly.
ARIA:
That’s awesome. Boz, what is a question that you wish people would ask you more often?
ANDREW BOSWORTH:
Well, we’ve gotten to some of them today. It’s funny, people hear about my job, and they certainly want to understand: What’s happening right now? What is AI? Is it a big thing? Is it a bad thing? But they so rarely get into: What is the positive vision of the future? The thing that I don’t get asked often is like, “Paint me the picture of the beautiful future.” I’ve spent time doing that in the mixed reality and virtual reality space—this idea of people unbounded by geography. I talked earlier about the birth lottery, and where you’re born is a huge factor in it because it limits what opportunities you have. What if you didn’t have those limits because the metaverse enabled you to bring the full strength of your talents to bear? And how much would humanity benefit if people, no matter where they were born, were able to bring that brilliance to the forefront? And then I talk about AR, and what if everyone had that memory, the cognition, the hearing, the vision, the capabilities? How does society move forward when we all have those capabilities in equal measure, and at measures that are superior to what biology can provide us?
REID:
Well, here’s another positive question: Where do you see progress or momentum outside of your industry that inspires you?
ANDREW BOSWORTH:
Medicine. Man, it feels like the unlock we’re getting with cell models, the unlock we’re getting with AI being able to take on modeling tasks that were previously intractable. We built a custom AI at FAIR to do material exploration for the optics for our glasses. It’s one of those solution spaces that’s bigger than the number of grains of sand on the earth, and you have to look at every molecule and assess what properties it has. And it was short work to build this incredibly dedicated model that narrowed us down to like 20 possible solutions, of which it looks like two are going to work for our purposes. Incredible, incredible result. And so you do that, and you think about medicine—where you’re so often trying to do a certain kind of protein folding—and I just think the combination of these two things is going to produce an explosion in great health outcomes for people. And I’m really looking forward to that.
ARIA:
Couldn’t agree more. And Boz, you teed me up so perfectly for our final question on Possible, which is, can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years—and what’s our first step in that direction?
ANDREW BOSWORTH:
Ooh. I think in 15 years we’re looking at a significant shared blending of the digital into the physical. So people coming together wearing wearables, wearing glasses, and having rich conversations with people who are both present and not present. Collaborating on work, on models, with a true feeling of presence. A true feeling that they’re all there. And someone, by the way, who’s not on a wearable—they’re on a phone—but they’re being projected into three-dimensional space effectively through a Codec Avatar. Everyone is getting the most sense of presence they can through whatever tools they have, whatever modalities they have access to. We’re so much more in touch with the context of our lives than we give ourselves credit for. I talked earlier about the gap between the conscious and the subconscious. You know, the context in which I consume and exchange information, the subtleties of facial gesture and body posture—huge portions of our brain handle these. The inferotemporal cortex is an entire area of the brain dedicated to reading faces, and no one thinks video calling captures that, which is why, for certain things, we’re bound by geography. You have to be there. Now, I don’t think anything will ever be as good as being there. But we can get a lot closer than we have so far.
REID:
Boz, always look forward to our conversations.
ARIA:
Such a pleasure to have you on the pod. Thanks so much.
ANDREW BOSWORTH:
Thank you guys both for having me.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jimenez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Meta, John Iarla, and Tatin Yang.