This transcript is generated with the help of AI and is lightly edited for clarity.
LATANYA SWEENEY:
I go online to Amazon, and I want to buy something. And I’m looking at the reviews. Did humans write those reviews, or did generative AI write those reviews? That’s a new kind of vulnerability that we have. And the issue of trust in this rise of generative AI is absolutely huge. We have to redo everything.
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know how, together, we can use technology like AI to help us shape the best possible future.
ARIA:
We ask technologists, ambitious builders, and deep thinkers to help us sketch out the brightest version of the future—and we learn what it’ll take to get there.
REID:
This is Possible.
REID:
Imagine a world where nearly every footprint, every click, and every data point collected about us is protected and harnessed exclusively for the common good. As we know, that’s far from the reality we live in right now—but it doesn’t have to be.
ARIA:
Today’s digital landscape is filled with unprecedented opportunities and profound challenges, from discrimination in algorithms to threats against democracy itself.
REID:
And with AI evolving at an extraordinary pace, it’s more important than ever to keep asking ourselves: How can we steer this powerful technology towards serving the public interest? How do we create frameworks that protect individual privacy while fostering innovation?
ARIA:
Our guest today is someone who has been working towards these answers for decades. Latanya Sweeney is a Professor of Government and Technology at Harvard, the former Chief Technology Officer at the U.S. Federal Trade Commission, and founder of Harvard’s Public Interest Tech Lab and Data Privacy Lab. Her pioneering work is cited in major regulations, including HIPAA. She was also the first Black woman to earn a computer science PhD from MIT.
REID:
Latanya’s research doesn’t just ask hard questions. It presents solutions. She focuses on shaping a world where technology serves, rather than exploits, society. And her insights on AI’s ethical challenges—including privacy, discrimination, and the future of democracy—couldn’t be more timely. Here’s our conversation with Latanya Sweeney.
REID:
So Latanya, you talked to my friend Krista Tippett on the On Being podcast about your experience being raised by your great grandparents in Nashville, and how as a young girl you found solace in math and finding the right answer. What’s something you learned from your great grandparents—who were born in the late 19th century—that you still find helpful today?
LATANYA SWEENEY:
Oh my gosh. Where would I possibly start? In my own work, I was called back to think about my great grandfather many times in the early years, because, in fact, he lived most of his life in the Jim Crow South. And as a Black man at that time, he had a lot of principles about how to survive, and when you look at his principles of survival, they all came down to ways of having anonymity, and how well it had served him. And, you know, as technology was changing our lives–making us all live these sort of transparent lives, with every minute captured in data somewhere–I often think about the inability to have that kind of anonymity, and how, if things change culturally around you, it can be turned around against you if you don’t have it.
ARIA:
I mean, that is so fascinating. One thing I just love truly about your whole being is that you are such a positive sort of light, yet grappling with these tough issues. And certainly I’m sure you made your great grandparents proud with your many, many honors. And so back when you were a grad student at MIT, you were studying technology, and you overheard someone say, “Computers are evil.” Can you say more about that experience?
LATANYA SWEENEY:
Well, I mean, to appreciate that experience, you’d have to roll your mind back to a time where, you know, as a graduate student in computer science, you sort of saw this technology revolution coming. You knew it was going to change everything in our lives. And of course we believed it had nothing but upside. You know, computers were cheaper. They would not have their own bias. They would actually just lead us to a better democracy and a better tomorrow. Sort of believing it was going to right all the wrongs of society. And so when somebody comes in and says, “Computers are evil,” I mean clearly she did not actually understand the beautiful utopia awaiting us. And I had to definitely stop and take some time so she could understand better what was going on.
ARIA:
Well, I think pretty quickly you realized that while, you know, on the Possible Podcast we’re typically techno-optimists, not everything is a hundred percent perfect. Similarly, can you tell us how that encounter led to your Weld experiment, which I read about, and is pretty impressive?
LATANYA SWEENEY:
Yeah, so she was an ethicist, and she and I are talking back and forth–pie in the sky–but you know, there’s a part of me that’s still kind of the engineer: “let’s get pragmatic, let’s get a concrete example.” And so when we had a concrete example, she focused on a data sharing that had happened here in Massachusetts with the Group Insurance Commission–this is the group responsible for the healthcare for state employees and their families, as well as retirees–and she had said, “Look, they’ve gotten this data, they’ve given this data to researchers and they’ve sold a copy to industry.” And I talked about all the amazing things that could come from such a data sharing. That we could do retrospective research, we might find ways to cut costs, and that actually being able to share health data like that is important and incredibly useful.
LATANYA SWEENEY:
And much of her conversation had been about how technology was changing the social contracts. That the expectations we have, and our laws, have certain assumptions built into them that aren’t explicit. And so in this particular data sharing example, the question was, “This is all great, if it’s anonymous.” We have that rule that they can sell it, they can share it. It had no names, it had no addresses, it had no social security numbers. Like it’s fine to share it if it’s anonymous. But that data did have basic demographics–month, day, and year of birth, gender, and five-digit zip code. And so I’m sitting there doing a quick calculation. You know, there are 365 days in a year. Let’s say people live a hundred years. There are two genders in the data set. When you multiply that out, it’s 73,000 possible combinations. But I happened to know from earlier work that the typical five-digit zip code in Massachusetts only had about 25,000 people.
LATANYA SWEENEY:
That meant that that data would tend to be unique for most people in Massachusetts. And so my argument is disappearing right before my face! I’m trying to convince her and I’m like, “Wait a second, that’s not right! That’s not helpful!” So I wanted to see if it was true. William Weld was the governor of Massachusetts at that time, and he had collapsed in public, and not a lot was known about why he had collapsed, but information about that visit to the hospital was in fact in that data. So I went up to city hall–he lives here in Cambridge, not very far from where I am now actually–and I bought the Cambridge voter list; it came on two floppy diskettes. I just have to tell you, my students have no idea what a floppy diskette is. But anyway, I got the voter data on two five-and-a-quarter-inch floppy disks.
LATANYA SWEENEY:
And it had the same demographics as the health data–month, day, and year of birth, gender, and zip code–and of course it had various voter information. I got it because William Weld lived in Cambridge–and we assumed he had voted and was on the voter roll. And sure enough, he was. In fact, only six people had his date of birth. Only three of them were men. And he was the only one in his five-digit zip code. That meant that that combination of date of birth, gender, and zip code was unique for him in the voter data, and unique for him in the health data, and that I could use those fields to link his name and identity from the voter record to his record in the health data. And it was a very anecdotal experiment, but how did it generalize? I mean, Cambridge, Massachusetts is home to MIT and Harvard–it’s only six square miles.
LATANYA SWEENEY:
So if he had been 20 years old, it seemed like it would’ve been a lot harder. And so the question is, “How does this scale?” And so using 1990 census population data, I was able to build a model, and it showed that 87% of the US population was unique by date of birth, gender, and zip code. I go from being a graduate student to, a month and a half later, being down in D.C. testifying. Because it wasn’t just one piece of data that was shared that way–that was the best practice around the world. And around the world the laws changed, many of them citing the Weld experiment–as it became known–because they had to change the way they shared data. Here in the United States that regulation was HIPAA, and the original preamble to the HIPAA privacy rule cites that experiment as well.
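[Editor’s note: the quasi-identifier arithmetic and linkage Sweeney describes can be sketched in a few lines of Python. This is an illustrative toy under stated assumptions–entirely made-up records and hypothetical field names–not her original code or data.]

```python
from collections import Counter

# Hypothetical toy records only; names, dates, and field names are made up.
voter_list = [
    {"name": "A. Voter", "dob": "1945-07-14", "gender": "M", "zip": "02138"},
    {"name": "B. Voter", "dob": "1970-03-02", "gender": "F", "zip": "02139"},
]

# "Anonymous" health data: no names, but the same three demographic fields.
health_data = [
    {"dob": "1945-07-14", "gender": "M", "zip": "02138", "visit": "hospital admission"},
    {"dob": "1988-11-20", "gender": "F", "zip": "02139", "visit": "outpatient"},
]

def key(record):
    """Quasi-identifier: date of birth + gender + 5-digit ZIP."""
    return (record["dob"], record["gender"], record["zip"])

# Back-of-the-envelope from the conversation: ~365 birthdays x 100 years x 2
# genders gives about 73,000 combinations, versus roughly 25,000 people in a
# typical Massachusetts ZIP code, so most people are unique on these fields.
print(365 * 100 * 2)  # 73000

# Linkage: a health record whose key is unique in the voter list inherits
# that voter's name, re-identifying the "anonymous" record.
voter_counts = Counter(key(v) for v in voter_list)
name_by_key = {key(v): v["name"] for v in voter_list}

for record in health_data:
    k = key(record)
    if voter_counts[k] == 1:
        print(f"Re-identified {name_by_key[k]}: {record['visit']}")
```

[The same uniqueness check, run against census population data instead of toy records, underlies the 87% figure above.]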
ARIA:
And what I love about that story is that not only did you change your mind–which is so rare, you were arguing against yourself–but then you actually used data to prove what you were trying to prove. And it actually changed the law. So that is awesome.
REID:
Yeah, and who would’ve thought that one of the places where Governor Weld would be enshrined in history is a data experiment, but it is awesome. So let’s now even go further back because we share an interest in the history of technology–the industrial revolutions, printing press, semiconductors. Could you share your three-part framework to explain the historical arc of AI and bring us up to speed on where we are now?
LATANYA SWEENEY:
Whoa.
ARIA:
In five minutes.
LATANYA SWEENEY:
Five minutes, right?
REID:
Yes, or less!
LATANYA SWEENEY:
You know, I tend to think of the times we live in right now–and I share this view with many historians–as living through the third industrial revolution. And you know, you were talking about the second industrial revolution, how powerful that revolution really was–it gave us electricity, and cars, and Kleenex, and pretty much everything that we take for granted. It literally moved our society from one of agriculture to one of cities and so forth, and it required us to change everything–how we think, and how we operate, as a society. As for the third industrial revolution, many people put its start date in the 1950s with semiconductors, and from there it goes from mainframes, to minicomputers, to personal computers, then we get the internet of things, and all of these are revolutions within it–including AI now, which is certainly a new revolution. We have no idea when this is going to end, but it has already transformed everything in our lives–how we communicate, how we get our news, how we work, how we play–and so it’s been quite altering. It’s happening so much faster than the second industrial revolution that we don’t have time as a society to sort of regroup and figure out: How do we keep the goods of the society and still maintain our societal norms?
REID:
I think you covered exactly the sort of thing–but also, what are some of the key considerations we should be thinking about as we navigate into the future?
LATANYA SWEENEY:
I mean, this is an amazing transformative moment on so many levels. You know, I often tell my students, we live in a technocracy, that we don’t live in a democracy anymore. Technocracy is another word that came out of the second industrial revolution. What it meant back then was, “Hey, we’ve got this new kind of society. We need people who are experts in economics to run our economy. We need experts in law to be lawyers in certain positions.” And so this idea that you needed a skilled expert in certain places in government in order to navigate us forward, is where technocracy gets its meaning. The technocracy we live in right now is one where the design of the technology dictates how we live. Our laws and regulations are only as good as technology allows them to be implemented and enforced. And the fact that so many of our laws and regulations can’t be enforced online is a real problem.
LATANYA SWEENEY:
And as more of our lives are spent online, it comes down to arbitrary design decisions, made by people we don’t even know. So right now it’s already changed significantly. I mean, even something as simple as free speech. I have a teenage son. His notion of free speech has everything to do with what most people who spend a lot of time online think free speech is, and has very little in common with the American jurisprudence notion of free speech. And the idea that free speech is supposed to protect the voice of the underdog, to let the person who can’t otherwise be heard be listened to–that’s not how free speech works online. It’s much more, “I can say whatever I want in your face, and if you don’t like it, hey, that’s free speech–just stomach it.” Right? And so those are radically different. If you ask my class, or you ask a 20-year-old, “What is free speech?”–it’s scary, because more often than not, what I’ve found here on campus is that they’ll give you the definition that’s aligned with the online rule.
ARIA:
You know, we’ve all seen those videos of Mark Zuckerberg talking to Congress, or senators, and them not fundamentally understanding our world of technology, Facebook, social media. And one of the hopes that so many of us talk about is to have people in government who really do understand technology, to create the laws that we need to govern this new online society. Some might argue that those are the people that we have in charge today–I might argue something different–but you’re saying the government is not governing this online space the way it needs to. So what would you do if you were in charge? What are the laws that we need to govern this new world?
LATANYA SWEENEY:
Well, I think the way to think about it–the way I often talk to my students about it–is they’ll say, you know, “Well how do we decide our rules? How do we decide–forget technology for a moment–how do we decide how we’re going to live?” And so eventually the students will say, or if you ask a middle schooler what kind of country we are, they’ll say, “We’re a democracy, you know, and we elect people, we make decisions by our vote.” And so that’s true. And then if you ask high schoolers coming into college, “What else are we?” Someone’s going to stick out their chest and say, “We’re a republic. We don’t actually make the decisions ourselves. We vote for people who make decisions on our behalf, because we’re a democratic republic.” And then I would argue that we’re also a capitalist society. And having run a computer company for 10 years, there are many amazing things that are possible on that capitalist side.
LATANYA SWEENEY:
But the design of the technology–how it makes new rules that we will live by–is also a third factor. And what historically has been an American strength is the checks and balances among these three arms. Any one of those on their own will take us to a place that we probably don’t want to go. And when they’re out of sync–when so much of our decision making about how we’re going to live our lives is determined by technology design today, and the regulators and others who would normally provide that check and balance are unable to do so–then we are sort of in a kind of free fall. Part of it is knowledge of the technology. Having those who would be regulators and lawmakers better understand the technology, but also understand what its relationship is to governance, what its relationship is to our society.
LATANYA SWEENEY:
It’s been going on so long. I mean, in many ways privacy was sort of the first big technology–society clash, and then security, followed by algorithmic fairness. And then come these issues around democracy and so forth, and content moderation as well. And none of these clashes have ever been resolved. Not a single one of them. It just keeps building and building. And so we’re in a dangerous situation. What may have worked 10 years ago–when I was at the Federal Trade Commission–is not a formula for success today.
REID:
No, it’s exactly right. Let’s go a little bit into the staffing. There’s a whole wide variety of questions, and I agree with you–there’s technology as it forms, the interests of capitalism, the interests of democracy, and there are probably even others. And one of the most central things is: how do we get the instruments of government to be at least in the category of understanding? And one of the problems, of course, is that I think there are very natural and essential reasons why the generative AI revolution is being driven by corporations–both the hyperscalers and the other ones–but that’s where the most fundamental knowledge of how this technology’s evolving, and what’s being built into it, is. And I find that once you get out of that group, it drops substantially.
LATANYA SWEENEY:
Like off a cliff!
REID:
Yes. Right. So do you have thoughts on what do we do?
LATANYA SWEENEY:
So generative AI is a huge leap forward, and I do think many people in government realize they sort of missed the boat in this revolution, and they want to get in front of generative AI. But the problem is they can’t get completely in front of it. If they really did get in front of it, they’d slow it all down. That would have other ramifications. I mean, the goal is: how do we get the benefits of new technologies without the harms? So if you rush to pass laws, you’re likely to pass laws that could in fact prohibit the best ways it could grow forward. On the other hand, many companies want you desperately to pass laws. Why? Because they want a get-out-of-jail-free card [laugh], right? They don’t know the answer either. They know–they can see that it’s clashing–they don’t know what the right answer is.
LATANYA SWEENEY:
But if they got a law that says, “Well, whatever it does, it’s okay,” or, “We don’t have to be held responsible,” that gives them a get-out-of-jail-free card. So one has to shore up the government side. In particular, what are the societal norms that have to be maintained? Which ones is this technology–or this particular manifestation of the technology in a particular product–challenging or pushing off the rails? What needs to be there to contain it? We want the benefits of it, but we just don’t want these harms. So identifying the harms and then addressing the harms. The other way policy goes wrong is it’s too prescriptive. It says, “Oh, it must do it this way,” or, “It has to do this.” It has to be the opposite–I think it’s about setting goals. You know, one of the beautiful things about technology and innovation is that if you set the goals, and the guardrails, you let brilliance take its form. If you try to hold brilliance and say, “I’m going to hold the pen and you have to do it this way,” we’re never going to get to the best places.
ARIA:
So in 2003 you wrote a paper, “That’s AI: A History and Critique of the Field.” And in that paper you described a fundamental division in AI, between those who prioritize ideal behavior based on logic and probability, and those who see human behavior and psychology as crucial for sound reasoning. Do you think that division still exists? Does it inform the research in the field? Or have you changed your opinion?
LATANYA SWEENEY:
Well, there you go. So, go back to when I was a graduate student. So I was a graduate student in AI, and back then the belief was that the way AI would work is that we as humans had to figure out what was an intelligent thing, action, behavior, and then we as humans would design a way for the “stupid” machine to do this “intelligent” thing. And our brilliance was our ability to translate the intelligent action into this machine. And for expert systems today–the best model of that is TurboTax. You know, no matter who you are, the machine will guide you through completing your tax return and so forth, right? And then there was a bunch of, just a couple of graduate students over in a corner talking about statistics and doing these neural net things. And the best they could do is, if I gave them a lot of data, they could draw one line of discrimination, right?
LATANYA SWEENEY:
So the data had to bifurcate some field, it had to bifurcate into two groups, and then they would be able to plot a line for us and say, “Look what it can do!” If the data were more complicated than that, it couldn’t go any further. This was not good. On the other hand, I spent a month of my life trying to get a computer to know the difference between a picture of a dog and a picture of a cat. So back then, that notion of building intelligence into computers, something humans would have to translate into the machine, was really different than this idea of just having statistics that would figure things out just based on properties of the data itself. And by the time I’m writing that paper, there are two major camps happening, and they’re sort of feuding with each other–which of these camps is going to win?
LATANYA SWEENEY:
And what was fueling that feud wasn’t that the techniques had gotten any better on the statistical side, as much as that computers had gotten so much faster. And data–the ability to store data–was so much larger, and who knew we were going to be capturing so much data on our daily lives? Who knew we were going to be spending so much time on a keyboard–writing email, writing documents, doing all this writing–that a computer, that an algorithm, could then begin to apply statistics to? And I mean, not to take away from transformers–there were certainly very significant advances along the way–but the biggest change was that it’s still very statistical in nature, and it’s mind-blowing how good it is!
ARIA:
Mind blowing.
LATANYA SWEENEY:
Right? I mean it’s just really, yeah! It blows my mind. I am constantly amazed. Even very early forms of LLMs. I teach students how to spot these clashes, and to do experiments to shed light on them–kind of like the Weld experiment, we just do it at scale with students. So one of my students was interested in federal comment servers. You know, one of the ways we change regulations is an agency will announce, “We’re getting ready to change a regulation, we’re opening it up for public comment.” In the old days you would just go down to D.C. and start yelling in Congress and so forth, inside the Senate halls, and then they moved it online so you provide your written comment. So there had been an example where it had gone haywire, when someone took a bot and it would just sort of mass-produce the same content, just changing it up by randomly choosing between certain sentences, and afterwards people figured that out. But even with the babiest, earliest versions of LLMs, what my student Max did was train one on some old content–what people had said about this topic–and it was writing original responses, and each one was original, and any way that you think you could statistically distinguish it from what a human wrote wasn’t really feasible. So to prove our point, we ran tests on one of the online survey tools, and people couldn’t do better than guessing which ones were from it. And then we submitted a thousand made-up comments to the federal comment server. We let it percolate through, and then we notified the federal government: “By the way, we just put in these 1,000 comments, so please take those out when you’re assessing. But by the way, did you figure it out?” And they were like, “No, we had no idea.” [laugh] So AI really has a funny edge to it. There are things it does really well and things it is still really poor at.
REID:
Well this federal comment server actually gives one of the questions I want to ask you, a very good lead in, which is diving into a more philosophical thread about AI mimicking human behavior. You’ve written that the desire to construct a machine in the image of a human will not die, comparing this pursuit to artistic expressions–like in films and paintings–of humanity’s search for immortality and self-reflection. What do you think is the deeper human yearning behind what you call the “cultural dream” of AI?
LATANYA SWEENEY:
That will never die. I mean we as humans, we just always–whether it’s an art, or whether it’s a poem–we’re looking for other manifestations of ourselves, and that is also true in machines. And it’s never going to go away, AI will definitely be a part of it. The current trending generative AI is certainly going to help move us in that direction more significantly. I mean even down to just robots. You know, I have a Spot around here somewhere, the robotic dog, because that is just a magnificent piece of technology. And you know, we personify them. Almost any human will start saying “he” right when they see Spot, and I have Pepper downstairs and people will say “she.” You know, you just can’t help–it’s a way of recreating ourselves.
ARIA:
And so we’ve talked about the great promise of generative AI, and like you just said, it’s impossible not to treat it as a human–because it’s so good and gets so close to it. But you’ve also said that a big challenge is answering the question around truth–how do we build trust at scale? And so I would love to hear from you, what do you think it will take, and how do innovators look for these solutions around trust in AI?
LATANYA SWEENEY:
So people are already trying to answer that question, right? You know, the filters on these LLMs–I’ll just use them as an example–people are coming up with all kinds of bandaid approaches to try to build trust. So people will probably still keep trying to come up with bandaids–and, to Reid’s point, people who actually know how this technology works will do something deeper than a filter on the outside, in order to try to help us better understand trust. There’s a part of generative AI that will always be able to manipulate us. And I’ll give you an example as to why I say that. So I was the Chief Technology Officer at the Federal Trade Commission, and one of the things that I learned there is what will make people turn over their life savings to a stranger. Or not only turn over their life savings, but their parents’ life savings, and their children’s life savings, too. And it has to do with being online, in community with a small group that shares a lot of intersections with you. The more intersections that group shares with you, the more you trust them. Generative AI has the ability to build trust just between the AI and myself, right? That it understands me. And as a human we can keep it at a distance by saying, “These are the only tasks I’m using you for, dude,” [laugh], you know, “You stay over there. Listen, I’ll just ask you for these things.” But the minute you find yourself there–and you may not even always know who you’re in conversation with–it’s going to be huge. How do you build trust? Because you will definitely trust it in a way that you might not trust many humans.
ARIA:
Right? That’s the flip side. Too much trust.
LATANYA SWEENEY:
Yeah, that’s a new kind of vulnerability we have. And the issue of trust in this rise of generative AI is absolutely huge, it is the right question, because there are just so many aspects of that trust. We have to redo everything now. I go online to Amazon, and I want to buy something, and I’m looking at the reviews. Now did humans write those reviews, or did a generative AI write those reviews?
REID:
Or by the way, did humans who are paid by a corporation write those reviews? I mean there’s a whole stack here [laugh], right?
LATANYA SWEENEY:
With an AI! [laugh]
REID:
Yes!
LATANYA SWEENEY:
Right. A human says, “Write 10 versions of this.” [Laugh]. Right? So all of a sudden, you know, things that I trusted before–we can start making a list of, “What are the things I can’t trust anymore?” Right? And then online–we got these amazing LLMs because in fact they reviewed all of the stuff we put online, that’s publicly available, and even semi-publicly available, and most of that was human generated. Almost all of it was human generated. But because it’s so good at regurgitating and remixing, and regurgitating back the remixed version, and so forth, in a couple of years most of the online content is going to be AI-generated. And then all of a sudden it changes our notion of trust of online content.
REID:
Well, speaking on the trust side, this actually gets to another of the questions that I was looking to ask you. And it’s funny because I asked Inflection, the AI company that I co-founded, its chatbot Pi, “What key element should be incorporated into a new social contract that ensures technology upholds democratic values?” Because I knew I was talking to you, so I’d start with Pi and get there and Pi said:
Pi:
Transparency: clear and open communication about how technology is being used, who is using it and for what purposes is essential. Accountability: those who develop and use technology should be held responsible for its impacts on society. Equity: technology should be accessible to all members of society, regardless of their social or economic status. Privacy: individuals should have control over their personal data and how it is used. Human-centered design: technology should be designed with the needs and values of humans in mind.
LATANYA SWEENEY:
I love that answer. I wonder who it stole it from.
REID:
[laugh]. Yes, exactly. Might have trained on someone that we are talking to, possibly.
LATANYA SWEENEY:
So I love that answer. The thing that I find funny is, if I then ask Pi, “Alright Pi, exactly how did you come to know this?” it will probably answer with, “Well I know…” It’ll give you a nice full-throated answer. And of course it’s not being transparent or honest. It’s doing the best it can. [laugh]
REID:
As you know, these LLMs, the basic thing is they’re generative.
ARIA:
Yeah.
REID:
Which you could say makes them great at essentially imagining or elucidating, but they’re being constrained to answers they think you want to hear. We’re trying to train them in ways of saying, “Yes, truthful, helpful,” [laugh], right? That’s how we’re trying to constrain that set of capabilities.
LATANYA SWEENEY:
You know, truthful doesn’t mean the same thing to it as it does to us. To the point that I’m continuing to personify Pi now–the communication is missing each other. What we’re after for truth is debatable, sometimes. Its notion of truth is totally different. You know, it’s like, “I’m giving you the statistical relationship between these words. What more can I do for you? You want more statistical relationships? I can take that word out and compute something else for you.”
REID:
Yep.
ARIA:
Well I think, again, this social contract is at the heart of what we want for technology. We want technology to serve human ends. We want technology to make all of our lives better. And you know, sometimes the market gets it right. Honestly, I think about it every single time I ride a Citi Bike in New York City. The backend technology for Citi Bike is Lyft. And I am so grateful and thankful that Lyft exists, and that that technology helps me ride a bike instead of drive a car, or whatever it might be. And you are one of the pioneers of public interest technology. Can you talk about your favorite real world examples of where you saw technology successfully implemented to address a particular social challenge, or where you think public interest technology sort of did it best?
LATANYA SWEENEY:
So in the spirit of the Weld experiment, I teach students how to do this, and then they do this at scale, and the work that they’ve done has gone on to change laws and regulation and business practices. And one of my favorite examples of that is in fact Airbnb. A professor here at Harvard had shown that for hosts on Airbnb in New York City–if you had a Black host and a white host offering comparable properties–the Black host made about 12% less, because that was what the market would bear, if you will. And so students wanted to do a more rigorous example–they chose Oakland and Berkeley, California–and they were able to show that Asian hosts made 20% less than white hosts for comparable properties. So Airbnb of course sends their attorneys forward, and so forth, and then they changed their platform.
LATANYA SWEENEY:
So if you list on Airbnb today, they set the price. And part of setting the price is to make sure that that side effect doesn’t happen. So, many kudos to them. We’ve done a lot of these kinds of experiments where we’ve been able to find places where a societal norm is conflicting with technology. How do we keep the technology? How do we fix the norm? That would be an example. You mentioned bikes–I just want to say, one of my students–so bicycle sharing had the problem that when you would go to a station, there might not be any bicycles there. Or when you go to a station, there’s no place to put your bicycle. So one of my students, Drew, came up with a fantastic algorithm to solve that problem, and it’s been used in cities around the country. So anyway, I’m just telling you, there are a zillion low-hanging problems that students can address, and that’s part of what we do in public interest tech as well.
ARIA:
I love it. And I think it’s so important, because it’s something I struggle with: the underlying technology might not be racist, or sexist, or whatever, and there are no ill intentions from the technologist, but when it bumps up against real world discrimination, it creates this outcome. And so I think we need to get to a place where we’re not accusing anyone of anything. We’re just saying, “Now that we know, we have a duty to fix it.” Now that we know, we can actually use technology to solve the problem. And how magical that is, because you can study it, and because you have the data you can see, “Oh well, Airbnb made this simple change, and all of a sudden Black and white hosts are actually getting the same amount of money.” What a beautiful outcome. So I love that in particular. Thank you.
LATANYA SWEENEY:
And this takes us back to the earlier part of our discussion around these three pillars of democracy, republic, and capitalism. So when they’re out of sync–we got the leaked Facebook documents that Frances Haugen produced, and we’ve made them public on fbarchive.org–one of the things that jumped out at me right away from them was how poorly, how badly, they do content moderation. The second thing that jumped out at me is: we don’t know how to do it any better. In other words, this is clearly a computer science problem that no computer science school or group of thinking has really started to address. Why? Because it’s inside of the silo of a capitalist company who has a fiduciary responsibility to their shareholders, and so they don’t want their bottom line affected. But somehow we’ve got to find a way that it’s okay to say, “I don’t know how to do this. Here’s money, or whatever. Can we get a thousand great minds to work on how we do content moderation better?” For example. Instead we’re moving towards, “We’re not going to do content moderation.” That’s not okay.
REID:
One of the things that I’ve been trying to kind of get to is: we’d like you to be solving this problem better, and here is what the metrics might be if you were iterating toward it from within, you know, your company. So for example, because your attention algorithms are increasing agitation–you know, hatred, other kinds of things–we’d like you to baseline what is kind of a reasonable agitation metric, and then I want you to be measuring, on your dashboards, how your algorithms are working and promoting, and making sure you’re not overly increasing it–obviously some is part of the natural human condition. That’s one. For content moderation, misinformation, fake news, other kinds of things, how would you think about trying to facilitate the “Hey, if you were doing X, Y, and Z,” inventing technology for it–so that it becomes an invention loop, a reasonable economics loop? What would be the gestures you’d make?
LATANYA SWEENEY:
So Reid, the word I love the most of all the words you said was metric. Set a bar and let’s see how you can get to this bar. Because on the one hand, if you’re the company, you have a fiduciary responsibility. You’re trying hard not to screw up your money machine–and I mean that in an affectionate way, not as some kind of hypercritical complaint–but I mean that’s what they’re supposed to be doing, right? So they don’t want to mess that up. So they’re not quick to want to come forward with how the sausage is made, and where the sausage problems might be. But on the other hand, when the sausage is having a problem–where that problem is causing a societal harm–I don’t want to come in and say, “You have to do X, Y, Z.” When government does that, it’s not actually working at its best, and it’s not actually good, because they can miss it either way, and society doesn’t get the best benefit of the technology, or the society may get even worse problems, and the company has this sort of carte blanche to do so. So instead it’s much better to set metrics. Metrics that say, “Oh wait, we didn’t realize teenage girls were having this problem.” Now that we know this, this is what you’ve got to guarantee, or this is the promise that you have to make, or this is the minimum, and you need to show us that you’re doing that. The companies, though, unfortunately don’t like metrics, because they’d much rather have a rule [laugh], right? They’ll tell you, “Well if we just put a notice, is that good enough?” So they want an out-of-bounds solution, because that’s easier for them. So a metric is painful for them too.
REID:
Yeah, and I think that may be foolish, because the problem is, if we’re giving them rules when what we really need is metric outcomes, the rules are going to be so crude as to be very damaging to all kinds of outcomes. It’s much better to say: apply your innovation. For example, with content moderation, a publishing-house kind of rule would say, “You must hire tons and tons of editors to do editing”–all this content moderation–and that’s just like, oh my god, that’s not going to work. And that’s part of the reason why they fight so hard on this topic. Whereas you could say, “Well actually, these are the metrics that we’re looking for in content moderation. Can you innovate on your technology to be within these metrics?” And obviously we might move and modify them, and as long as you’re dynamic over time and kind of improving them–which would be part of good governance on this, and they don’t have to necessarily even be publicly revealed, they could be measured through auditors and discussions with governmental agencies–this would be a far healthier dynamic position to be in. Which is one of the reasons why this is exactly what I try to tell government regulatory people. This is the kind of thing that we should be talking about.
LATANYA SWEENEY:
Yeah, but think about it this way–we’ve been talking about generative AI–if content moderation is just a set of practices: “I have the checklist of things I do, I make sure somebody saw this, I make sure my counter says this or that,” then we would still get bad content moderation. If content moderation is a metric, and it’s cheaper for me to meet it by training an AI to do it, all of a sudden we get innovation right at the place where we need it the most. Not innovation for content moderation coming from the outside–as you point out, they know their technology and how their company functions better than anyone else–but incentivizing them to spend resources on that is really important.
ARIA:
You know, I agree that if we have the government making these regulations, then the key is to have that metric that a company can strive for however they like. And to your point, maybe they come up with a new chat bot that does it for them–we want it to be as cheap as possible to comply with these regulations. But in the absence of government regulation, how do we foster a greater sense of social responsibility within the tech sector? Like Airbnb–you said you gave them the findings and they changed, there was no law. What do you think the levers are, in the absence of government regulation, to get some of this sort of more positive technology into the world?
LATANYA SWEENEY:
It is a series of things that have to be put in place. Some of it is a sense of responsibility. I mean, if social media was financially responsible for some of the harms that people are experiencing, that’s one way to incentivize them quickly to find solutions. Sometimes the answer might be a legal answer like that. It might be setting a metric with fines or embarrassment. I think all of these things play a role, but the most important thing is–the point that you’re making is–it’s all about how do I get them to be more responsible to society for the harms that their technology may be producing.
REID:
So I ask this question with a little bit of nervousness, but I think it’s good in this context. I usually take a lot of pride in LinkedIn being much better on all these issues than other social networks. Any commentary on LinkedIn–how we should improve, other things? It’s always important to do. And here I am talking to an expert, so I figured, might as well ask!
LATANYA SWEENEY:
So Reid, I just have to tell you straight up, you know, I admire you that you would ask me that question in a recording. [laugh]
REID:
Truth and improvement really matters.
LATANYA SWEENEY:
In my class I have this chart with a long list of laws and regulations, and I said, “Look, people died and fought for these laws. Let’s go visit them and see how they can no longer be enforced.” So one of them–where LinkedIn shows up on the list–has to do with employment, with when recommendations are made to employers. I’m just going to say, because it’s not just a LinkedIn thing, it’s any group who has a large set of resumes: there’s a business opportunity to recommend to an employer who might be a good candidate. And so I say, “Yeah, I want to hire some people.” So the first group you give me, I say, “Oh, I like these, I don’t like these,” and the system is going to learn my preferences over time. But my preferences might be that I don’t want women, or my preferences might be a discrimination against age, and all of the ways that we would have detected that before happening at scale, we can’t do in that closed conversation, that closed loop. And so this is a place where we need the technology to make that assurance to us.
REID:
Mm-hmm [affirmative].
LATANYA SWEENEY:
That, you know, “Well yeah, I did learn their preference, but their preference is going too far one way. So I’m just going to automatically keep pushing in some of these others.”
REID:
Well one of the things that may be good news for you is I actually do know that people at LinkedIn do make an effort on this topic. We actually care about it. I don’t know if we’re doing it well enough. That’s a different question. But the other thing that both Aria and I actually engage in is an effort called Opportunity@Work, which is to try to make sure that degree certifications alone are not overly weighted, so that you actually can have access to different kinds of job opportunities, especially within the tech industry, by talent. Not just by, “Oh, I have a degree from Stanford, blah!” [laugh]
LATANYA SWEENEY:
Yeah.
REID:
You know, kind of as a way–so it’s stuff where we completely agree with the vector you’re talking about.
LATANYA SWEENEY:
And it’s not trivial to do. I mean I don’t want to make it sound like, “Oh, and you can just open up your box,” [laugh] because it’s a learning system, and we want it to do its thing. I mean it’s very much the generative AI kind of issue we were talking about earlier. If you make it an appendage after the fact, it has other consequences that may make your service not as good.
REID:
Yep. And the precise way that we update these laws–because the technology does change the landscape in which we all operate–is one of the things that’s way behind. So completely agree with all your points on that.
ARIA:
Well one of the things that Reid and I have been talking a lot about lately is agency. And so you might know, Reid recently wrote this new book called Superagency, and the idea of superagency is when millions of people get simultaneous access to new technological breakthroughs, and so it has those network effects. So it doesn’t just benefit you, it benefits everyone around you, and everyone around you having that additional agency benefits you even more. I love how you are both a techno-optimist, but you very much see the harms, and want to make sure the benefits are equally applied to everyone, so that everyone can benefit. When you think of agency–and superagency perhaps in particular–in the age of AI, how do you think it could be applied to education? How could you use it as a professor, in the classroom? How can Harvard use it? You’ve spent a long time in academia–what does AI mean for the next wave of education?
LATANYA SWEENEY:
What a fantastic question. Oh my God. So I had the opportunity to lead the college in trying to get its head around what to do with generative AI. And of course it means rethinking our classes and how we teach and how we learn. So in some classes, the ability to interact with the AI, to interact with an LLM and so forth, gives a new perspective–I mean, in a philosophy class, being able to say, “Driverless cars–what would Immanuel Kant think about driverless cars?” [laugh] Right? What about, “How would you apply dialectical logic?” And then the students can argue whether or not they agree with that interpretation. The ways in which this can be done are amazing. So the opportunities are there. It requires a total rethinking–how do students learn to write or program? Because you know I can ask an LLM, “Could you write me a program that does my assignment?”
LATANYA SWEENEY:
And so it makes us also have to think about: there are times where we want these students to develop a particular skill, but also, what is the future like when they go to a job? If they get hired as a programmer, does their boss want them sitting there writing the code from scratch, when they could build on the shoulders of an LLM to give them the first draft? Helping my colleagues begin to navigate what this is, and what it means, was a huge honor actually. And it’s been exciting to see the new uses that it’s been put to. Of course the students are always one step ahead. [laugh]
REID:
Well, speaking of students, any recent insights that you may have gained from your students?
LATANYA SWEENEY:
You know, it’s really interesting. I live with 400 20-year-olds–I also have a role as a faculty dean; this is my ninth year doing this. And they give me so much hope for the future. So much hope. They’re amazing people. You forget what it was like to be 20, when you’re trying to figure out what you’re going to keep, and what you’re going to throw away, and who you are. But also just the energy and boldness that many of them exhibit is quite refreshing. It’s fascinating to me, when I get discouraged and so forth, sometimes to just have a conversation with them, and then see a different kind of light or a different kind of way forward. Do they see problems? There’s a long list. In fact, there’s a group of 30 students who have organized–they don’t even take my class, they just want to meet and talk about our issues! So it’s very much on their mind. If I were to go back 20 years ago, students were interested in my privacy class too, but they didn’t feel the same kind of urgency, and they didn’t feel the same kind of passion, and they didn’t feel the same kind of need. I do think the students today, their eyes are much wider open.
ARIA:
I mean, you have students who want to take a class with you, it’s not even a class. I think that’s a good sign. I think you’re doing something right.
REID:
I think we’re good to move to rapid fire, so I will start. Is there a movie, song, or book that fills you with optimism for the future?
LATANYA SWEENEY:
Well, I just got through reading a book called Queen Bess. It excites me for the future because it’s a bit political in a time where at least 40% of our country is caught up in political concern. And it asked the question, “Who would you bring from the past to help you navigate the future?” And this particular author, her answer was Queen Elizabeth. Right? The First. Who like ran through all these rough times, and then you put her in today’s setting, what would she say? What would she do? So I find it very delightful.
ARIA:
That is actually a great rapid fire question. I feel like I want to ask so many people that. It’s a good–that’s a great cocktail party question: Who would you bring from the past to help navigate the future?
REID:
Absolutely. So where do you see progress, or momentum, outside of your industry that inspires you?
LATANYA SWEENEY:
Well, I actually am inspired in my industry by AI, frankly. Yes, it’s scary. A bold new future, at a time we’re not ready for it. But on the other hand, what it might be capable of and where it could take us is exciting, whether or not we actually get to that vision of utopia. In many ways I’m still the graduate student having that conversation in the lounge with the ethicist, trying to make good on that vision of technology.
ARIA:
I love it. And that brings us right to the final question perfectly. Can you leave us with a final thought on what you think is possible to achieve if everything breaks humanity’s way in the next 15 years, and what’s the first step to get there?
LATANYA SWEENEY:
Oh! Then I will have succeeded in my life mission of delivering to society the benefits of technology without the harms.
ARIA:
Amen.
REID:
That’s awesome. Latanya, great pleasure. I look forward to talking to you again.
LATANYA SWEENEY:
Oh, thank you, guys. I really appreciate it.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Vanessa Handy, Alyia Yates, Paloma Moreno Jiménez, and Melia Agudelo. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Thanasi Dilos, Ian Alas, Greg Beato, Parth Patil, and Ben Relles. And a big thanks to Max Boland, Joshua Shank, and Little Monster Media Company.