This transcript is generated with the help of AI and is lightly edited for clarity.
PAT YONGPRADIT:
And obviously we’re gonna be talking about AI and education. My favorite usage right now would probably be—actually, it wouldn’t be lesson planning. I think my favorite thing might be simulation. For example, I simulated this podcast. I just wanted to practice and see what it felt like. So the opportunity to iterate with a thing in a safe way for both students and teachers. I think that’s super useful.
REID:
Hi, I’m Reid Hoffman.
ARIA:
And I’m Aria Finger.
REID:
We want to know what happens if, in the future, everything breaks humanity’s way—what we can possibly get right if we leverage technology like AI and our collective effort effectively.
ARIA:
We’re speaking with technologists, ambitious builders, and deep thinkers across many fields—AI, geopolitics, media, healthcare, education, and more.
REID:
These conversations showcase another kind of guest: whether it’s Inflection’s Pi or OpenAI’s GPT-4 or other AI tools, in each episode we use AI to enhance and advance our discussion.
ARIA:
In each episode, we seek out the brightest version of the future and learn what it’ll take to get there.
REID:
This is Possible.
ARIA:
Hey everyone. So this is not our first time talking about education on the show. We’ve heard from Sal Khan of Khan Academy, Ethan Mollick from Wharton, and Ben Nelson of The Minerva Project. We continue to feature leading innovators and educators, because we believe that education is one of the most promising frontiers for AI to benefit people. Today’s guest, Pat Yongpradit, spent a decade teaching computer science in American classrooms before becoming the Chief Academic Officer at Code.org, a nonprofit promoting inclusive computer science education from kindergarten through high school. This episode also marks the first time we’ve recorded Possible in front of a live audience. What you’re about to hear—so please forgive the sound quality—was recorded at the annual ASU+GSV Summit in San Diego this past April, in front of an audience of ed tech and workforce innovators. And here’s Reid kicking it off.
REID:
Hello, I’m Reid Hoffman. Today we’ll get into the future of education, including what it takes to equip the next generation of students and teachers. We sometimes hear that AI is coming for coding, making it less essential. Today’s guest believes otherwise. Pat leads the steering committee at TeachAI, a global initiative empowering educators to teach with and about AI. So let’s dive in. Welcome, Pat.
PAT YONGPRADIT:
Thank you, Reid. Thank you, Aria.
REID:
I’ll start us off. So before you joined Code.org, you spent 13 years teaching middle and high school kids. You also made a point of giving your students assignments with social missions and real-world applications built in. And one example, which I refrained from asking about beforehand because I so much wanted to hear the answer: create video games that involve saving people rather than destroying them. What was one of your favorite final products that you saw come from that? And can you share a little bit more about what the students got out of that exercise?
PAT YONGPRADIT:
Okay, so I taught middle school, so shout-out to any of my middle school teachers out there. You know, you know, we know. And so I had my students create games for social good. And one of the student groups created a game called “How to Get a Date with an Environmentalist.” It was basically a choose-your-own-adventure, where you had to answer questions to impress this environmentalist in order to, in the end, get a date with them. Now, the interesting thing about that is it took at least two years of programming for them to have the skills to do something like that. And obviously we’re gonna be talking about AI and education, and it’s amazing how quickly you can get to that level of fun and excitement and changing the world now in this age.
ARIA:
So Pat, you know that I used to run DoSomething.org, which is all about getting high school students and college students involved in volunteerism. So you were at McGill, studying neurobiology. I don’t know if you had always wanted to be a teacher or what, but a volunteer opportunity changed it all. So tell us, how did you go from student to special education, science, and computer science teacher?
PAT YONGPRADIT:
So I actually went to a high school that specialized in science, math, and computer science. And computer science was, at that time, like a parlor trick to me. This was before the graphical internet and right when email came out. And it was always a parlor trick. We did early models and simulations as well. And I never thought of it as a career. So I entered a neurobio program at McGill University, and about halfway through, I volunteered in a special education program at a neighboring high school, got the opportunity to tutor a bunch of kids, and instantly realized that that was my calling. And then when I told my parents, they almost disowned me. They convinced me to finish my degree, then get a Master’s in education, and then start my teaching career.
ARIA:
Love it.
REID:
So, you know, part of the mission with Code.org is to enable, you know, students everywhere, basically of all ages, up to getting into college, to learn computer science. We obviously have AI coming with this kind of massive wave—where you and I have both heard from the same source that “the programming language of the future is English,” or other human languages. And obviously there’s a bunch of AI efforts, like Devin, trying to create, you know, kind of software engineers. What does this mean for Code.org, its mission, and what students should be learning? What’s the kind of view into the future here?
PAT YONGPRADIT:
Yeah, so Code.org’s mission is providing every student the opportunity to learn computer science, with a focus on groups currently underrepresented in computer science. And that mission actually has always had an even higher mission, which is providing opportunity, period, for kids to participate as active creators in this digital world. And so that doesn’t change because of AI. Now, what might change about what we do is what we emphasize and de-emphasize per the—well, let’s just take the coding process. As a lot of people here know, coding is just one aspect of computer science. And so, even though gen AI can cook up basic code pretty well, that doesn’t change the need for kids to still practice it, to have a hands-on experience with understanding what computing is. Now, it’s obviously a little different at the industry level, because folks have already learned and they’re able to use these tools in super productive ways. But kids still need to learn the basics in some way, shape, or form. And what might change is how they learn the basics, or what we emphasize and de-emphasize. For example, kids might not have to learn how to write code as much as read and evaluate, test, and debug code. And then even the bigger, more exciting things, like user experience and making sure that it actually solves a real-world problem. So I’m super excited about the future of CS education.
REID:
So as a follow-up: if you were going to assign an exercise making video games as collaborative games, together with generative AI, what kinds of mental tools—whether it’s architecture, or evaluation, or connectivity to real-world problems, whatever the thing is—what are the kinds of learning-concept tools that are gonna be important for kids for the next, you know, decades?
PAT YONGPRADIT:
Well, the computer science education world calls this computational thinking, and that consists of skills like abstraction, decomposition, and pattern recognition. And these things are just gonna be important—even more important—in an age of AI. And so what these tools allow you to do these days—like, let’s just say ChatGPT—is decrease the barrier between idea and implementation. And so it really democratizes the idea of creating a digital artifact for everyone. And I cannot wait until someone creates the mega IDE that allows you to—you know where I’m going with this—just speak into it, and out pops, like, a workable prototype of anything. I call this thing “The Dream Maker.” [laugh]. Yes.
ARIA:
Alright, so let’s get to that Dream Maker. So when you think about computer science and coding—even though not every kid is gonna be an engineer, not every kid is gonna be a coder—you still think it’s important to learn those philosophies and those thoughts. And I actually have to give a shout-out, because my senior-year AP Computer Science teacher is in the audience. And even though I didn’t become an engineer—yes, exactly. Incredible. Just such a dream—I had facility with that language. I knew I understood it. I knew I could do it. So now, in the age of AI, what are we talking about when we get to AI literacy and young people? How do we build it in, either to the curricula, to the school system, at the classroom level? What do young people need to know?
PAT YONGPRADIT:
Oh boy. What do young people need to know? [laugh].
PAT YONGPRADIT:
Well, I think the goal of AI literacy—whatever it is, because there are a lot of groups trying to figure this out right now. A colleague of mine, Jeremy Roschelle, leads a lot of the AI literacy work at Digital Promise, and they’re trying to figure out what AI literacy is right now as well. But I think the goal of it is to create critical consumers and responsible creators. So however we decide to define it—operationally, or just principles, or very specific skills, concepts, practices, statements of learning, benchmarks, whatever—the goal is critical consumers and responsible creators.
ARIA:
Cool.
REID:
So one of the things that I think most people don’t understand about the training of these AI models—which is one of the things we learned early days at OpenAI—is, we literally threw code into the training set just as, like, we were throwing in every piece of anything that looked like language. Then all of a sudden it started producing code, and we’re like, “oh, that’s cool, we should do more of that.” But what’s interesting is, in the training of these things, we’ve learned that you actually, in fact, include code in the training of any model—like Inflection’s Pi, we included code—because the code was teaching a version of reasoning. And so one of the things that I think is really interesting as you intersect these, you know, LLMs and generative AI, and Code.org, is that there’s a similar kind of pattern of, “look, yes, we’re teaching computer science so you can be digitally native and facile in a world where more and more of the world is digital,” but it’s also teaching patterns of thinking. And one of the things that I’m super curious to see is how these patterns of how we teach LLMs will also be how we teach humans, and what the feedback loop of that is. And so what are some of the things about these, you know, concepts in computer science—this structured reasoning, structured problem solving—what are some of the things that you’re seeing in AI that excite you as a lens to the future, in how this loop between humans and generative AI is actually leading both to be much better?
PAT YONGPRADIT:
It’s been an experience coming to ASU+GSV and the first-ever AIR Show, which is this K-12-educator-focused aspect, or prelude, to the main summit. And ASU+GSV offered it for free. And then there were a bunch of early AI ed-techs showcasing their work. And I was particularly excited about the ed-techs that had really understood teachers, could identify the key needs and pain points that they had, and had the ability to create very specific features to address those things. So for example, there’s this group called Merlyn Mind, and they created an AI tool to just voice-control all the tech in your room. And you would think, well, why didn’t they spend their time creating, like, some kind of tutoring chatbot? Well, that’s because humans are way better tutors than chatbots right now. And what teachers need is the ability to do their job with kids and not have to worry about all the tech in their room. So I was really impressed by that, and lots of other groups that are doing wonderful work.
ARIA:
So, I hope this is sort of a thing of the past, but we’ve seen so many headlines about, you know, ChatGPT comes out, school districts ban it. You know, this happens—universities, no more ChatGPT, no more GPT-4, whatever it might be. What do you think the guardrails should be for AI in the classroom? Like, should teachers say in some classes it’s off limits and in some it’s not? You can use it for homework, or you can’t? How do we get those guardrails for the classroom right?
PAT YONGPRADIT:
I wanna get a little controversial here.
ARIA:
Do it.
PAT YONGPRADIT:
So I stopped teaching in 2013. That was a time when cell phone usage was really reaching an inflection point, meaning I could tell in my day that I was telling kids to put phones away more and more and more. Nowadays, most schools have just given up and they allow kids to use phones all the time. And teachers are just tired of telling kids, “oh, gimme that phone,” and then there are parents complaining and things like that. So the controversial thing I wanna bring up is that we were actually wrong in education back in the day in allowing—or giving up, and not weighing in on—how phones should be used and how social media should be addressed in schools. In the same way, right now with AI, I appreciate the fact that many states are leaning into AI and not just letting it take over and create this Wild West of inequities.
PAT YONGPRADIT:
So right now, there are nine states that have provided AI guidance for their school districts—which ranges from guardrails to things like addressing the misuse of AI-detection tools. By the way, as a teacher, I would hate to create some type of police state in my room where I’m constantly checking whether kids are creating stuff with AI. Honestly, that’s not a fun activity. Even if these AI-detection tools worked, like, perfectly—no false positives, no bias or anything—I still wouldn’t wanna use that. So there are a lot of states that are weighing in on that and not just kind of leaving it up to teachers to figure it out on their own. And what that’s gonna do is help level the playing field for a lot of kids who normally would be left out of the conversation, because schools are leaning in to figuring this out with the rest of society.
ARIA:
I’ll just say plus a thousand, not controversial at all. I have three young kids. Like, all I want is for their middle school and high school to ban cell phones and not have them have a cell phone in the classroom. And I absolutely don’t want them banning generative AI. And so I think we don’t need to sort of overlearn that lesson. We got it wrong before. Let’s fix that now and, you know, let’s do it the right way.
REID:
So what would you say are some of the most interesting, kind of, things you’ve seen so far using generative AI to help amplify teachers, amplify students, whether it’s in a coding context or kind of more general? Because I think too often people say, “oh, I need to have it completely controlled, like my old process—it’s gonna disrupt the old process.” But hopefully—and I have a very high belief in this—you can create an amazing set of new processes using this. What are some of the things you’ve seen that have kind of surprised and delighted you?
PAT YONGPRADIT:
So, TeachAI is an initiative between ETS, Khan Academy, ISTE, the World Economic Forum, and Code.org as well. And you know, even amongst that small group, there are organizations that are doing pretty amazing things. You all interviewed Sal Khan of Khan Academy, so I don’t need to go into what Khanmigo is or anything like that. I love the fact that they’re leaning in and trying to figure out this very hard issue of trying to democratize education and create a world-class education for everyone. And, you know, tutoring chatbots right now are very, very difficult to do. I know that team very well, and I appreciate how thoughtfully they’re approaching it in terms of just, you know, leaning in on pedagogy, the Socratic method, not just giving answers, things like that. At Code.org, we just deployed this AI teaching assistant that helps novice CS teachers grade the bazillions of coding projects that they have to grade. So folks like my colleague Leigh Ann, who was Aria’s high school computer science teacher—her and my ilk—have spent bazillions of hours going through these projects trying to grade them. So on the Code.org platform, we created this TA, this teaching assistant, to help teachers do what they do. It doesn’t grade for them. It actually provides them a bunch of information just to give them kind of a head start—especially important for novice CS teachers.
ARIA:
So I was just talking to someone earlier in the week and they said—so correct me if I’m wrong on this—that teachers work about 54 hours a week, and only half of that time is in the classroom. Because, like you said, they’re grading papers, they’re creating lesson plans, they’re doing so much stuff that isn’t in the classroom. And, like, we all need to appreciate that being a teacher is so hard—it’s like one of the hardest jobs in the world. So beyond a teaching assistant, what advice would you give teachers? Where do you see generative AI actually helping teachers, as opposed to making their job more difficult?
PAT YONGPRADIT:
You know, in most cases I actually don’t see it making their jobs more difficult in terms of teacher support. Now, on the student learning side, obviously there’s the issue of, like, cheating and plagiarism. And that does make their world a little bit more difficult, but actually it makes it better as well. Let me explain. So yes, it is disruptive to not know whether a kid wrote an essay or whether, you know, some chatbot wrote an essay. But it gives teachers an opportunity to rethink: should I even be—how should I even be teaching essay writing these days? Is that durable? Why do I even do this? Like, what would be an alternative? And so just the opportunity to rethink why we do what we do is a positive thing. So it’s not necessarily making the world harder as much as making the world potentially better.
PAT YONGPRADIT:
In terms of aiding them, my favorite usage right now would probably be—actually, it wouldn’t be lesson planning. That’s another thing that you hear a lot about. The problem with lesson planning right now is that you crank out one single lesson plan that’s not coherent with everything else that you’re doing. And so it looks like it’s super worthwhile, but it doesn’t really solve, like, the big problems. I think my favorite thing might be simulation. For example, I simulated this podcast. I just wanted to practice and see what it felt like. So the opportunity to iterate with a thing in a safe way, for both students and teachers—I think that’s super useful.
REID:
One thing that I might suggest—and I’d be curious what your comment on the suggestion would be—is, you know, per teachers having a huge shortage of time relative to giving feedback, like on essays: it would be relatively straightforward to take each essay, run it through an AI tool, and say, “give commentary,” and then go through and go, “yes, I like this comment, delete this one, modify that one.” And that’s much quicker, and then you can get a much deeper level of commentary on the essay. Would that be a good use? Or would you elaborate or change that use in some way?
PAT YONGPRADIT:
I mean, Code.org’s new AI teaching assistant—that’s literally what it does. It highlights sections—so it loads in rubrics and then it highlights pieces of code and comments on them for the teacher, so that the teacher, you know, knows where to look and, again, it doesn’t do the grading for them. So I mean, yeah, I agree—whether it’s an essay or a program or whatever.
REID:
And when you saw Devin—which is a “replace,” or basically an attempt to create an AI software engineer that will actually be added into the team. I actually don’t think “replace” is what’s likely gonna play out here. I think it’s just gonna be, you know, new software engineers added to the team in whatever project way. What did Devin present to you as a lens into the future?
PAT YONGPRADIT:
You know, when I saw Devin—the demo on YouTube—the first reaction was, “oh, snap.” Or like, actually it was, “oh snap!” And then I thought, “it’s really real now.” You know, because you asked the question about what we think about our work at Code.org per gen AI and all that. I thought, “we’re, you know, maybe a couple years away from something like Devin.” If people don’t know what Devin looks like, just imagine the actions of a software engineer—well, this is oversimplifying it, but yeah—just all the different things that software engineers might do, happening, like, right in front of you. And I think software engineers are gonna turn into something like an orchestra conductor. And we all know that, you know, the way AI is moving, we’re gonna get into this world of AI agents very soon, agents that are gonna actually do real things for us, like book travel and things like that.
PAT YONGPRADIT:
In the same way, you know, software engineering can be kind of decomposed into different AI agents that do different aspects of a software engineer’s work. And then software engineers turn into these conductors that kind of guide these different AI agents to do different things. It’s gonna obviously increase productivity, but it’s also gonna change how software engineers think about their work and usher in the real opportunity for software engineers to really think about security, ethics, et cetera—yeah, you can snap for that: ethics, security—because it’s gonna give them the opportunity to think beyond just getting code to work and debugging it and testing it and things like that.
ARIA:
I’ll actually pivot and say: Reid, one of the things that I think is really interesting is that you made your career in Silicon Valley—you’ve built AI companies, you’ve built, you know, Web 2.0 companies—but you were not a coder when you were eight. Talk about what you think is important in terms of sort of multidisciplinary learning—whether it’s learning to code, but also the holistic thing, as Pat talks about: ethics, morality, et cetera.
REID:
So one of the things I like to do to wake people up—for example, frequently when giving talks at business schools—I’ll say my background in philosophy was much more important for entrepreneurship than a business degree. Yes, [laugh], and that gets like, “what?” [laugh], because you’re saying that, you know, usually at an expensive business school. And so the point is: teaching patterns of thinking—and by the way, humanist thinking is actually all the more important now that we’re kind of in the age of AI. I gave a speech last year in Bologna, kind of remembering the Renaissance, on this. And there are different disciplines across all these humanist, you know, fields which are highly useful—but philosophy in particular is: imagine possibility, stated as crisply as you can.
REID:
And then you parallel that to entrepreneurship, which is: okay, well, what might the world look like with this product or service? How would the human condition evolve, or how would it engage with this product and service? Those are statements of possibility. The path, the strategy by which you get there—game playing. Like, part of the reason I love what you were doing with the collaborative games, Pat, is that game playing is a way of formulating strategy and how to get to those new sets of possibility. And so that’s part of the reason why, kind of, call it multidisciplinary or non-disciplinary thinking is really important. Also, part of the thing that I love about what you’re doing with Code.org, and why it’s important, is: look, even if the mechanics of code generation change—like, we’re no longer teaching specific coding languages or other kinds of things as ways of doing it, because that’s the tactile part—the patterns of thinking are still what’s really important. And part of what’s interesting, of course, about that is: “okay, so if we learn that putting all of the structured code into the LLM’s training set is part of how they learn thinking, what are the other things that become part of this for learning thinking?” And that’s part of the reason, I think, where Code.org is not just gonna be teaching computer science, but teaching patterns of thinking.
PAT YONGPRADIT:
There you go. And that is actually kind of what we’re thinking about. So you mentioned your philosophy background and the importance of the humanities and humanities thinking. I think the humanities are gonna have a renaissance in an age of AI. And I think it’s gonna start to reverse the now decade-and-a-half-long trend of STEM and computer science dunking on the humanities, basically, like, over and over and over again. If you look at higher ed—I don’t know what to call it, enrollment or desire or applications for computer science programs—across, like, all top-tier universities, all of them have been inundated by this surge over the last five to 10 years. And that’s because everyone understands the money involved, the salaries involved in computer science, but also the innovation, the possibilities, et cetera.
PAT YONGPRADIT:
I think we’re gonna start to see—I wouldn’t call it reversing, but more people valuing the humanities, which I think is a great thing. So I am not concerned about that for Code.org because, like you said, there’s still a role in just creating patterns of thinking, whether they appear in the humanities or in computer science fields. I’m gonna deploy my single question now [laugh]. So I told Aria and Reid that I like to ask questions in podcasts I’m participating in, and I have a burning question for Aria about DoSomething. So I was very impressed by what DoSomething is and what you made it, and the different projects that you worked on. So Aria, if you were to pick, like, a DoSomething-style initiative right now for AI in education—so this is like a learning opportunity for me, Aria—what would you pick? What would that project be? What would the goal be? Who would it serve?
ARIA:
So because that question was so flattering, you get a second. So feel free to ask Reid a question next. One of the things that I feel like we get a lot in our world is a battle of the generations. So even though I was running an organization that was all about young people 25 and under, and how we can empower them to change the world, I still cared about what we affectionately call “old people.” My title was Chief Old Person, because I was the CEO, and I was, like, 32.
REID:
And I would’ve been Chief Ancient Person on that measure.
ARIA:
But that being said, you know, again, not just young people should learn. Not just young people should have wonder and think about changing the world. And so we had this one campaign affectionately called Grandparents Gone Wired, and it was all about young people teaching technology to their grandma, grandpa, great-grandparents, any sort of older adult in their life.
ARIA:
And, like, obviously the trick was that your grandparent is gonna teach you something and you’re gonna teach your grandparents something. And so I think that mix between the generations is, like, even more critical with generative AI, especially when maybe you have new people coming up who are more, you know, facile with the technology, and you have someone who’s 60, 70—they don’t wanna learn a new thing, they don’t wanna know what’s going on. But we can see amazing examples of that. And so I think if we can all be teachers, you know, we can all do Grandparents Gone Wired.
PAT YONGPRADIT:
That is super inspiring. And it reminds me of this SNL skit, this Alexa SNL skit.
ARIA:
Oh, it’s so good. So good.
PAT YONGPRADIT:
Oh yeah, where they’re calling it different names and asking it ridiculous things. Yes. And actually, you just brought up something. So it was wonderful that I got to ask that question, because, I have to be honest, I haven’t thought of seniors and the opportunities generative AI holds for them—in terms of engaging with technology in a way that doesn’t require a Gen Zer to kind of show them the ropes. And that’s really, really interesting. That’s super interesting. Thanks, Aria.
REID:
Because part of what we’re learning with this—part of always be learning—is how to formulate the questions, right? And how to do the questions. And so, for example, one of the things that I learned from a friend of mine is his daughter’s really into organic chemistry. She’s pasting technical papers into GPT-4 and saying, “I’m 15, explain this to me.” By the way, that works in any context; it isn’t just for youth. It could be, you know, “I am Chief Old Person, explain this to me.” It could be, “I’m a venture capitalist, explain this to me,” or, “summarize this from my kind of point of view.” And that’s part of the always be learning. And whether or not it’s, like, you look at something and—you know, like I say, yeah, this is being recorded, so I won’t out a friend of mine—but it was like, “oh yeah, that book was too long to read.
REID:
I just went to ChatGPT-4 and said, ‘gimme a summary.’” Right? But that’s fine. Whether it’s volume, technical specificity, mapping to your interest—that always be learning, AI gives us the tool to do it. And that’s part of the reason why it’s kind of human amplification as a way of doing this. And that’s part of the reason why, like, when you think about education, you think: how do I get amplified as a teacher? How do I amplify the teaching experience? How do I help my students be amplified in what they’re doing? How do I bring parents or, you know, grandparents into the arena? It’s: how can this be part of the solution in terms of what I’m doing? And part of that, I think, is to begin to learn—like, one of the things I think it’ll be interesting to see is learning how to ask good questions. Because asking the good question is part of how we make progress, whether it’s scientific progress or anything else. For example, if you go back to, you know, Ptolemy, you say, “well, we’re at the center of the universe, obviously, and everything’s…” No, no.
REID:
If we ask the question of, “no, no, we’re one of the things that’s moving around in the universe—then what does the universe look like?”—it’s figuring out how to ask those questions. And by the way, one of the things that current generative AI is actually pretty bad at is asking questions, right? It will certainly get better, but the dynamic of kind of where we are as adaptive human beings will be: will we always be learning how to bring more interesting questions into the mix? That’s one of the things I think is possible, and I’m hopeful for. And part of the question, of course, in teaching, in education, is: can we teach that asking of the question? Like, what is the next question that is so interesting?
PAT YONGPRADIT:
Yeah, I mean, we can. There’s a colleague of mine at King’s College London—his name is Oguz Acar—and he wrote an article in Harvard Business Review maybe a year ago, early on, and said, “Hey, this prompt engineering thing is great and all, but underlying all this is the skill of problem formulation.” And we can teach that skill, and we should teach that skill. I would say another skill that kids need to learn these days—which is not a new skill, by the way; it’s just a return and a re-emphasis—is just being skeptical and learning how to evaluate things. And again, this was the issue with, like, social media. Like, 10 years ago, schools decided—not all schools—but schools decided that it would be a family thing: let families figure it out and all that.
PAT YONGPRADIT:
And if I were to go back in time, I would’ve said, “Hey folks, we need to lean into this. We need to teach kids to learn how to evaluate content, be skeptical, have a healthy skepticism, and we should lean in and help kids learn how to communicate in these new environments.” Like, where you put a statement out and everyone in the world can see it. And then you’re having these discussions where everyone can see your private thoughts with someone else, or your argument. And like, how does that even work? I’m gonna deploy my second question now, Reid. [laugh]. Alright, starting with you, and then Aria as well, if she wants to answer this question. So this is the Possible podcast. It has this very, like, optimistic vibe, which I certainly appreciate.
PAT YONGPRADIT:
I think that there are a lot of risks per AI, period, and per AI and education. But if we just focus on the risks, or lead with the risks, we don’t help ourselves in the end. We have to be very cognizant. But, you know, I love how positive this podcast is. But… I would like to know what you are most worried about—not AI in general, but AI and education. And this is why, Reid: because I’m an educator, and people listening to this are probably gonna be educators, we’d love to get an outside opinion. We talk amongst ourselves all the time—our own little mini echo chamber of the risks. I’d like to get your perspective. I’d love to get Aria’s perspective on risk in education.
REID:
I think all three of us need to answer this question. So your question will boomerang back to you as well. For me, what I would say is, you know, part of the actual roots of intelligence is laziness, and it’ll become too easy to treat AI as the oracle—rather than, in fact, always having a, “hey, what could be better about this answer? Is the answer accurate? Is the answer true? Is the answer partially true?” And when you’ve got this thing that’s given you a hundred, you know, credibly strong answers, you then go, oh, the hundred-and-first, that’s a good answer too. And keeping that nimbleness—always asking, what could be better? Are there parts of this answer that are incorrect?—I think is very important, and it’ll be easy to slouch off of as the tools get better and better. And that would be a general thing for all use of the AI tool, but also specifically for students, and specifically for teachers as well, because it’ll become very easy to just go, “oh, that’s pretty good,” and just proxy to it. And you’re like, don’t do that. Now, you may say that’s a good enough answer—great—but always have that skeptical edge.
ARIA:
I’d say, to build off that—and actually, as we were talking about earlier, hopefully all of the problems that AI creates, you can use AI to be part of the solution, so I hope that is gonna happen in this case. But to build off that, I just think there are so many echo chambers now; we’re hearing our points of view be validated. And you can of course build an AI and, you know, post-train an LLM to make it, “oh, always give the opposite point of view, always make sure to…” And I fear that in a world of generative AI, it’ll be almost easier just to have your points of view validated. And for young people to learn and grow and all of those things, you know, they need to be kicked in the head and told that they’re wrong and that they need to think about the opposing point of view. So I think we can use AI to solve that, but it’s something that I think is critical.
PAT YONGPRADIT:
And I agree with many teachers. So there are a lot of surveys coming out these days about, well, AI, period, in different sectors. And teacher and administrator surveys usually rank these as the top two risks and concerns: cheating and plagiarism, and over-reliance—what you said, Reid—over-reliance and loss of critical thinking. And it usually flip-flops between those two depending on the survey. For me, I’m concerned with over-reliance and loss of critical thinking, because as adults we can use gen AI in very positive ways because we’ve already learned a lot—we’ve had a foundation already. Kids are learning things for the first time sometimes. And if some overzealous teacher decides to use gen AI in that process, we might be short-circuiting their learning and not really giving them the experience that they needed to actually create the foundation by which they get to the place that a lot of us adults are at. So that is my biggest risk. And I also agree with Aria that AI could be the solution to that as well, but we have to be thinking about that.
ARIA:
So we have one more quick question before we head to Rapid Fire. Can you go back and talk about Code.org’s secret sauce in 2024? Like, what are the unplugged activities? Why are they important? Like, tell us what’s to come.
PAT YONGPRADIT:
Oh, so, moving ahead. Yeah. So, you know, you mentioned this idea of unplugged activities. I mean, unplugged activities have been around for a very long time. For those who don’t know, unplugged activities are opportunities to learn about computer science, but without a computer. So, you know, fun, physical, kinesthetic activities that help you experience computing concepts, but again, not just by coding or things like that. Very fun, exciting—not just for kids, for adults as well. Moving forward, we have a project with the Computer Science Teachers Association. So TeachAI—we—are working with the Computer Science Teachers Association to kind of explore some of the early issues pertaining to the future of CS education in an age of AI. So basically, my answer to your question is: we’re trying to figure it out, and we’re working with other people to figure that out, and to answer questions like, how can AI be used to serve students with disabilities and create more inclusion in computer science? Things like that. Or, you know, an age-old goal of computing—everyone knows the diversity issues in computing—an age-old goal is to broaden participation in computing. And so we have a pretty broad question, right? Actually, it’s not even “how can AI be used?” It’s: how can teachers use AI to broaden participation in computing, or to turbocharge it?
REID:
So in our next conversation, I’ll ask you about what an unplugged activity to teach that will be, but for this one, we will go to rapid-fire. And: is there a movie, song, or book that fills you with optimism for the future?
PAT YONGPRADIT:
Shout out to my colleague Ethan Mollick, who is also—
ARIA:
Love him.
PAT YONGPRADIT:
A Possible podcast alum. Ethan Mollick and his wife Lilach Mollick are doing amazing things at Wharton. And he just wrote a book called Co-Intelligence. Both Ethan and Lilach are advisors to TeachAI. And it’s the people—so it’s a book, Co-Intelligence, but it’s the people behind the book. First off, he wrote it, and I listened to the audiobook—Aria and I were talking about the audiobook; it’s four and a half hours, y’all, it’s awesome. And he reads it in his own voice, so it’s wonderful. It’s like you’re hanging out with Ethan. Super conversational, super approachable. I suggest it, but it’s not the book that excites me. It’s the people behind the book—people like Ethan, Lilach, people like my colleague Jeremy Roschelle at Digital Promise, Leigh Ann, your former teacher who’s now leading CSforALL—all these people who are doing amazing things and working together to figure out this AI and education thing together. The worst thing that can happen right now—and this is why we created TeachAI—is that we split up into these camps and we don’t talk to one another, and people engage in these turf wars around who’s the leader in AI in education. And so far, I’m pleasantly surprised that folks are working together and we’re trying to drive towards a common-ish vision. And in ways where we still disagree, we’re disagreeing and talking about it and being open to, like, other perspectives.
ARIA:
I love to hear it. So, second rapid-fire: what is a question that you wish people asked you more often?
PAT YONGPRADIT:
Oh, well, speaking of turf wars and all that: I wish people would ask me, “Hey, Pat, did you really say that? And what did you mean by that? And can you explain it more?”
REID:
Where do you see progress or momentum outside of your industry that inspires you?
PAT YONGPRADIT:
So I cycle. You know, something weird happens when you hit middle age—I’m 45 years old. It’s like this disease where dudes just wanna, like, wear skin-tight clothing and race around on bikes with other middle-aged dudes. It’s crazy. I actually broke two collarbones, one after another, in the span of four months, pretending to be a speed racer. But I’m excited about the recent progress—all the different changes happening in the biking scene and all the different modalities of biking. Now there’s this thing called gravel biking, which is like road biking, but on safer roads, and it’s, like, transforming how people can enjoy themselves outdoors.
ARIA:
I didn’t think we were gonna go to, like, men in tight pants, but I love it. Pat, thank you. For our final question that we always ask: can you leave us with a final thought on what could happen in the next 15 years if everything breaks humanity’s way? And what’s the first step to get there?
PAT YONGPRADIT:
I’ll reiterate what I was talking about before, because it has to do with what TeachAI is all about. So we’ll talk about the first step first. I mean, the first step is that people who are leading different areas of education—so that could be a policy organization, it could be a major curriculum provider, it could be a major research organization like the Stanford Accelerator for Learning, which is a wonderful group doing wonderful things per research and education—the first step is to get these people together constantly and coordinate around who is doing what, and how we can work together, and what we are driving towards in the first place. And having super open conversations and honest conversations about what we’re worried about and what we’re hopeful about, rather than people doing different, separate things.
PAT YONGPRADIT:
I’m one of those crazy people who think, like, you can actually change the education system. You know, Code.org has done a lot—a lot of partners have done a lot—to change how computer science education is valued and viewed in global education. There are bazillions—well, not bazillions—I mean, dozens of countries who were not valuing computer science, who didn’t even have it on the menu, who now have it on the menu because of the advocacy of the community. So folks need to work together. That’s the first step. And then the vision is—you said 15 years, Aria? That’s a good, long time for education. So 15 years: that’s about the minimum, I’d say, for us to actually see some of the things that we’ve been trying to scale for the longest time—project-based learning, active learning, flipped classrooms, real-world learning, authentic projects. We see that at scale in 15 years because of the tools that are created to help teachers do those things.
REID:
Possible is produced by Wonder Media Network. It’s hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Shaun Young. Possible is produced by Katie Sanders, Edie Allard, Sara Schleede, Adrien Behn, and Paloma Moreno Jiménez. Jenny Kaplan is our executive producer and editor.
ARIA:
Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Ben Relles, and Parth Patil. And a big thanks to Deb Quazzo, Will Cullen, Colin Faul, Bella Willis, Braeden Rutherford, Whitney Kim, Samantha Urban Tarrant, Hadi Partovi, and Little Monster Media Company.