CSC News

March 04, 2024

Will Our Best Teachers Be Robots?

JoAnna Klein | Newswire | Executive Vice Chancellor and Provost News

 

On August 25, 2023, I interviewed James Lester and Noboru Matsuda, researchers in computer science and education at NC State. We talked about the state of AI and education today and where they see it heading in the future – near and far, for better or worse.

 

In the rosiest future, they envisioned a world where AI is a tool that enriches the human experience and works for it, not against it. They saw narrative-driven, immersive, interactive AI learning experiences that encouraged deeper connections. Perhaps the greatest imaginative leap they made was to a world in which every human has their own lifelong AI “pal.”

 

This interview, conducted for the Long View Project, has been edited for clarity.

 

What big question do you hope to answer in your research?

 

James Lester: Speaking for the AI Institute for Engaged Learning, if we were to force ourselves to pick a single question to ask, I think the most consequential one is: If AI technologies continue to move forward as quickly as they have been, which seems all but assured, how can we re-envision education to be more effective and more engaging, all the way from K-12, to higher education, to adult education and lifelong learning?

 

Noboru Matsuda: My biggest question, I think, is: Can we make education “better,” meaning education that works for everybody? We tend to do research just within our own narrow perspectives, but there are so many different people all over the world with different backgrounds, different needs, different environments. My question is: Is it possible to build technology that fits all these needs? Many people say one size won’t fit all. But maybe technology can be smart enough to fit all these needs. AI may have that potential because AI can, in theory, grow by itself, based on its experience.

 

I’m particularly interested in understanding how people fail to learn because that gives us some hint about what they need. Good students are okay, but those students who are not so good need technology. They need scaffolding.

 

How does your work in AI address teaching and assisting those who need that extra help?

 

Matsuda: One of my projects involves a machine learning agent that you can teach. It’s an interactive technology that students can literally instruct. Using this technology, we study how people learn by teaching.

 

Theory says: When you teach, you learn. I’m trying to push the envelope a little farther. Is it possible for middle school students to learn how to solve an equation by teaching a computer agent?

 

We did a lot of studies looking into the connection between learning by teaching and motivation. Make it fun: Would that help students learn? Cognitive theory says if you are engaged, if you’re motivated, then you learn more.

 

But what if the student cannot really teach this computer agent? What kind of help should we provide? If students cannot really teach this agent, what they need is to understand how to teach this agent instead of how to solve equations.

 

They start by teaching this agent totally incorrectly. They just make random guesses on day one. But on day two, day three, day four, they gradually learn to solve the equation correctly. I didn’t mention this, but when they teach, they also ask the agent to take a quiz. So say their agent fails the quiz. The system provides feedback on the quiz: Your agent solved these questions correctly, but your agent failed on these questions. By seeing that simple feedback, somehow, even when [the agent doesn’t explain] why it’s wrong and how it should be solved, the student eventually learns to solve an equation.
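To make the mechanism concrete, here is a minimal Python sketch of the learning-by-teaching loop Matsuda describes: the student demonstrates solutions, the agent imitates them, and the quiz returns only right-or-wrong feedback with no explanation. The class and function names are illustrative assumptions, not the interface of any actual system.

    # Illustrative sketch only: a teachable agent that imitates whatever the
    # student demonstrates, then takes a quiz reporting right/wrong per item.
    class TeachableAgent:
        def __init__(self):
            self.taught = {}                  # problem -> answer the student taught

        def teach(self, problem, answer):
            """The student demonstrates a solution; the agent simply imitates it."""
            self.taught[problem] = answer

        def solve(self, problem):
            return self.taught.get(problem)   # the agent knows only what it was taught

    def run_quiz(agent, quiz):
        """Item-level feedback only: 'correct' or 'incorrect', with no explanation."""
        return {p: ("correct" if agent.solve(p) == ans else "incorrect")
                for p, ans in quiz.items()}

    # Day one: the student teaches a wrong rule; the quiz feedback exposes it.
    agent = TeachableAgent()
    agent.teach("2x + 3 = 7", "x = 5")                    # incorrect teaching
    print(run_quiz(agent, {"2x + 3 = 7": "x = 2"}))       # {'2x + 3 = 7': 'incorrect'}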

 

Why use a computer rather than a human? What’s the advantage?

 

Matsuda: There is a lot of literature about humans teaching humans. But that kind of study takes a long time, and it costs a lot. And the students may teach their pal incorrectly, which has consequences for the human being taught. But if it’s a computer agent, it’s easy. It’s like a crash test dummy: You put the dummy in the car and then make it crash. We can do that with the agent, but we cannot do it with a human.

 

James, in 2000, you co-authored a highly influential paper about pedagogical agents. What are those? How are they being used in classrooms?

 

Lester: There’s a very long history of work on one-on-one human tutoring. It turns out, as a result of studies in many, many different subject matters, in many different student populations, many different learning contexts, that one-on-one human tutoring, where there’s a human tutor and a human student, significantly outperforms classroom learning. For example, controlling for subject matter, student population, time on task: If you have a teacher standing in front of a classroom and you compare that to an individual tutor helping an individual student, the second is far more effective than the first. Human tutoring is far more effective than classroom learning. This has been replicated hundreds and hundreds of times. The intuition behind pedagogical agents is: What if you could take a collection of capabilities of human tutors and somehow replicate them in a pedagogical agent, in an AI-driven agent that has human-like characteristics?

 


 

The curious thing is that there are lots of possibilities about why this one-on-one human tutoring is so effective. This is not a new sort of intuition. People of means, for 2,000 years, have been helping their children learn by hiring a tutor. This was the ancient Greek way. But what is it that makes it work? Is it because of the interactive problem-solving that the student engages in? Is it because of the natural language, the conversational give-and-take between the human tutor and the student? Is it because of what’s sometimes called the affective component? The emotional, motivational piece of learning is really important, particularly for younger learners. Is it some combination of these, or something else?

 

They all contribute. It’s clear. But the hypothesis is: If you could somehow take some subset of those capabilities, or maybe all of them, and introduce them into an AI agent that could serve as a one-on-one tutor, then you might be able to make some progress on this hard problem that Noboru described: How do you help people learn?

 

What’s happened with these agents over the past two decades since your paper?

 

Lester: There have been many, many advances in the field, many stemming from advances in AI. Natural language processing has advanced in what are sometimes called “intelligent virtual agents”: the ability for these agents to use gesture, posture, gaze and other nonverbal communication, along with natural language, to engage in and manage natural dialogue. Natural language understanding, speech synthesis, speech understanding, computer vision: There’s a wide set of human-equivalent capabilities that we can now include in a pedagogical agent, and they are becoming more powerful by the day.

 

So, does it work? Can you make a computer behave like a really good tutor?

 

Lester: It is really effective. In a collection of studies, people have investigated different characteristics of pedagogical agents for different learner populations and subject matters. The role, or value-add, of a pedagogical agent for one particular set of circumstances might be higher (or lower) than for another. It’s certainly not the case that pedagogical agents are a single solution to this really, really hard problem. But they’re certainly a very promising one that has demonstrated a lot of ability to capture some of this really interesting set of capabilities that very accomplished tutors have.

 

But there can be experts who understand a subject matter and are remarkably poor teachers. Everybody’s had interactions with somebody like this. That’s also true for pedagogical agents. One of the key capabilities of pedagogical agents, which is where they get their name, is pedagogy. They have to understand pedagogy. Concretely: When, during a tutorial dialogue interaction, should a pedagogical agent introduce an example? What’s the right moment for doing that? When should they provide corrective feedback that’s either quite direct, or maybe a little more indirect? When should they be encouraging?

 

This is something that really good tutors for younger students are just fantastic at. It’s very difficult to articulate the rules for when these kinds of capabilities should be brought to bear, but when you see a good tutor, a really world-class tutor, interacting with a student, it’s just like magic.

 


 

That magic is a model for how one type of AI might interact and engage with learners in ways that are surprisingly effective.

 

There are lots of different models for pedagogical agents, and they’re all surprisingly promising. Because AI is moving ahead so quickly, these are changing every day.

 

So there is an agent that students can teach and a tutor agent that can coach a student one-on-one. Are there other types of pedagogical agents?

 

Lester: There’s also embodied learning companions, or, virtual learning companions. In this model, a pedagogical agent acts in the role of a near-peer to a student.

 

Imagine an eighth grade student who’s having trouble with algebra, and imagine having a pedagogical agent that looks and speaks like a middle school student. It’s embodied as a middle school student, but it has strong pedagogy behind it. It performs a delicate balancing act between being a peer of the student on the one hand and an incredibly accomplished tutor on the other. The interactions are quite different. You want a friendly, learning-companion-esque kind of interaction, as opposed to an authority figure.

 

Currently, how is AI being used in classrooms?

 

Lester: It’s a pretty wide range. An AI in a classroom could very well have a model of a student. That means having a model of the student’s knowledge, current problem solving competencies and goals, understanding when students will likely apply particular problem solving strategies to solve a problem, having a model of the student’s affective state (whether they’re engaged or disengaged, for example), their metacognitive abilities, their ability, for example, to select different strategies. And it goes on. These student models can be quite simple, or they can be quite complex.

 

An AI in a classroom will use the knowledge in the student model to make two kinds of decisions: what subject matter a student should be working on at this particular juncture so that they can learn most effectively, and how to provide support in the middle of learning.
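As a rough illustration of those two decision types, here is a small Python sketch of a toy student model. The fields, thresholds and action names are assumptions made up for this example, not a description of any system Lester’s group has built.

    # Illustrative sketch only: a toy student model and the two decisions an
    # AI tutor makes with it (what to work on next, and how to support learning).
    from dataclasses import dataclass, field

    @dataclass
    class StudentModel:
        skill_mastery: dict = field(default_factory=dict)   # skill -> 0.0..1.0
        engaged: bool = True                                 # crude affective state

    def next_topic(model, curriculum):
        """Pick the least-mastered skill in the curriculum to work on next."""
        return min(curriculum, key=lambda s: model.skill_mastery.get(s, 0.0))

    def support_action(model, current_skill):
        """Decide how to help in the middle of learning."""
        if not model.engaged:
            return "encourage"                               # affective support first
        if model.skill_mastery.get(current_skill, 0.0) < 0.4:
            return "show_worked_example"
        return "give_hint"

    model = StudentModel(skill_mastery={"fractions": 0.8, "linear_equations": 0.2})
    print(next_topic(model, ["fractions", "linear_equations"]))   # linear_equations
    print(support_action(model, "linear_equations"))              # show_worked_example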

 

How does having the presence of an AI agent helping students affect the human teacher’s role in the classroom?

 

Lester: A very important thing to talk about here is that there is an incredibly important role for teachers in the classroom. I know many, many AI in education researchers. I don’t know a single one who’s interested in replacing teachers, because first, that would be an incredibly misinformed, misguided thing to try to do, and second, the capabilities of the teacher, not just cognitively, but also emotionally and socially, are so important in learning that people wouldn’t want to do that.

 


 

There’s a very active area of research in AI-driven teacher support, augmenting teacher activities in the classroom by helping to inform the teacher about what kinds of help an individual student needs. Or there’s a more interesting version of this question in collaborative learning: If there are groups in the classroom that are working together, which group is, at this very moment, most in need of this precious resource that’s very hard to replicate, which is the teacher’s attention. So the AI can sort of wave its hand and say, “hey, this group here could really use some help.”

 

In what types of schools are these AI technologies being used? Can you find them broadly across public or private or K-12 and college?

 

Matsuda: Part of the answer depends on publishers. Publishers form collaborations with AI tech companies to make a computer tutoring system “part of the textbook” and then sell it to the school. That may happen statewide or districtwide, or just at a single school. I’m not super familiar with how those publishers go to schools.

 

Another thing is: We researchers may sometimes make a connection, partnership or relationship with a school — sometimes just a teacher, sometimes a whole school or district — and then those schools have the technology.

 

I don’t know how popular these types of AI tech are in daily classrooms, but think about AI broadly – like, Google. It’s definitely an AI. That happens in every single classroom. The teachers use that for preparation also. Most of the AI we talk about today is, technically speaking, machine learning. This is a branch of AI powered by AI. This is about how machines can be smarter. But AI was around 50 years ago. At the time, AI was programmed by people. There was machine learning research back then, but there’s also the branch of AI that was not necessarily machine learning. It’s more like a clever program that people made. It was about letting the machine exhaustively test all the possible combinations and find one which is the best, which is very different from the AI nowadays.

 

And what about education about AI? Where is that today? Are students still mostly learning just programming?

 

Lester: The really interesting thing is that now, there’s a recognition that AI is a very, very, very important subject matter for students to learn. There’s an argument that it’s an imperative to have students learn about AI early on to be informed citizens. We’re seeing this unbelievable, rapid growth of the field of K-12 AI education. Students are having to grapple with AI in their real, everyday lives. Just as students must understand math, they’re going to need to really understand the principles of AI, the concepts of AI, how AI is used.

 


 

AI ethics is incredibly important for students to understand.

 

Now there are significant efforts underway by federal agencies to put funding in place to research how students should learn AI and how teachers should teach AI. This is true in the US, but it’s also true internationally. Most countries are now making efforts to begin planning AI curricula. So: What should a fourth grader know about natural language processing? What should a seventh grader know about natural language processing? Some of these kids are going to go on to become AI engineers and AI researchers. Not all of them. What do we want everyone to know? What do we want these “special track” kids to know, the ones who are going to pursue more advanced study in AI?

 

I’m involved in a project that’s supported by the National Science Foundation where we’re working on AI education for upper elementary students. We’re looking at how can we teach upper elementary kids about computer vision. How can we teach them about machine learning? How can we teach them about planning? These are fundamental concepts in AI, but what does it mean to do this for upper elementary school kids? And in particular, what does it mean to do this with an umbrella of ethics? All really, really important and very fast moving.

 

Just a few weeks ago, I heard a researcher in China mention that the Chinese government currently has a national plan that they’re rolling out for teaching all K-12 students about AI. There are significant efforts in Europe and in many countries to do this. Everybody recognizes this as a topic that’s going to profoundly affect society, and that means that it’s going to be very, very important to figure out how to do this well for K-12 students.

 

Today, there’s a whole economy built upon our private data. How do you think privacy is going to change as AI develops?

 

Lester: Privacy is, of course, incredibly important as an AI ethics problem. The field of AI ethics is incredibly active, all the way from the most technical questions about machine learning models to much higher-level questions. These are societal-level questions that every country has to grapple with, and the answers will differ, because different countries have different norms and different cultural approaches to what privacy means for a particular society.

 

There is this emerging field of AI education ethics. This is ethics for AI educational technologies, and it’s, in some ways, a special case of the broader AI ethics and privacy question. For AI and education, it would be almost impossible to overstate the importance of privacy.

 

You asked earlier, what’s the level of dispersion of these technologies? Are they uniformly distributed across all schools and all school districts? Are all students in one school district seeing the same technologies as students in another school district? Just as background, the US education system is quite different from almost every other country’s because we’re district-based. It’s an odd and peculiar model, but that’s the way it works here. That has implications for procurement. It means that, as Noboru was saying, publishers, which is where these technologies typically come through, have to sell into school districts. So one school district can be right next to another school district, and the two will have different technologies, different textbooks, and so forth.

 

Would that depend on how much money a district has?

 

Lester: It’s a function of many things, one of which is money, but another is: What kinds of privacy policies would a particular district like to have in place? Different AI technologies have different kinds of privacy supports built into them. You can imagine there are knobs that can be turned for increasing the privacy or reducing the privacy.

 

So it’s gathering data? Well, I guess if the AI is learning, then it’s gathering data as it’s being deployed in the schools?

 

Lester: Many AI technologies work this way. For example, take autonomous vehicles: the vehicle’s machine learning system learns to better navigate roads in a particular city, let’s say. The analogy is the data that could be collected about students’ interactions. In the US, we do not want AI systems to willy-nilly collect information about students. This is a great concern and a great policy issue that people are grappling with right now: What, as a country, do we want the policies to be regarding privacy in US public schools? It’s incredibly important. And aside from being an ethically important question, it’s also an economically important question, because ultimately these are commercial technologies that can be sold to individual districts. It’s going to be really important – we’re in the early days of all of this right now – to get some good, ethical ground rules put in place to protect students, but also their parents and the communities in which they live. Very, very important questions.

 

What does AI look like in your classroom of the future? Imagine a kindergartener, fifth grader, high school or college student, or even someone working at a trade or at home undergoing informal learning. In five or 10 years, how is their learning experience different from the one they’re experiencing today? You can think of this in different ways: How is it horrible? How is it great? Paint a picture for me, if you can.

 

Matsuda: Part of this is kind of my hope and my dream. In the future, I think students will feel that learning is fun, not painful. There are so many painful things nowadays. Learning is not everybody’s friend – for now. It may change in the future.

 


 

Or maybe we should change it in the future.

 

The saddest story for me is the student who really, really wants to learn something but can’t. There are students like this. They need scaffolding – I’m the same way. I have lots of things I want to learn now, but I really have trouble learning. I need help. In the future, this kind of thing gets better with technology. Maybe it’s because of the private, pedagogical agent beside you. Maybe it’s because of a better connection to other friends.

 

The physical connection?

 

Matsuda: Physical connection, digital connection. They’re mixed, kind of.

 

You mean less lonely?

 

Matsuda: Yeah. Nowadays, a video call is much better than just a voice call, for example. In the future, that may change.

 

Do we have holograms?

 

Matsuda: Maybe. Many, many people want to feel connected. So, a better connection. I’m not alone. The pandemic: Many, many people felt like they were alone. They wanted to get connected. That kind of connection. Social networks are one way to make connections better. But something must come next. Whatever it is may help students to learn. They don’t have to learn only from their teachers. They can learn from many things. They may start learning from media, microlearning, supplemental learning, like what people learn from YouTube. Or they will buy technology, like AI.

 

One thing that I think about is misinformation, especially with generative AI. How do you know what’s real? How are we going to make sure that people are learning valid truths in the future? Is anything being done to address that now with AI and education?

 

Lester: It’s even more broad than education. There are two different kinds of problems. There’s misinformation in generative models, and there’s disinformation in generative models. Misinformation is sometimes called the hallucination problem. This is where, quite literally, the generative model will make up a fact. It’s completely false. That’s just a byproduct of the way the underlying architecture works. That is an inherent problem of this particular version of the machine learning current day technologies. That’s a very serious problem. And an even more serious problem is disinformation, where, imagine generative models that have gone rogue, are trained to produce information that is not correct. You can imagine all of the phenomenally challenging propaganda applications of this. Those are really, really important problems.

 

One of the really interesting conversations that we’re having with educators all over the country is: What approach should they take to ChatGPT in their classrooms? What approach should a principal or a district take? We’re seeing a pretty great divergence of opinions here. There’s what you might call a very, very careful approach – not letting ChatGPT into your classroom – and there are reasons for that: among other things, the misinformation problem, the hallucinations. The other approach is to openly embrace it. This is not without its problems, because it’s going to produce sentences, produce essays, that are factually incorrect. However, this is exactly the same set of problems that we all have to cope with if we’re going to be using these kinds of models. Despite that, they’ve proven to be incredibly useful. So the argument for this more inclusive approach to generative models like ChatGPT in classrooms is that this is exactly the sort of tool that is proving to be incredibly effective in the workplace.

 

More broadly, adoption of large language models is massive and growing very quickly, because they’re effective. They’re great at analyzing documents, videos, and collections of images, documents and videos, and at synthesizing them. They’re being used to solve major problems in science, like the protein folding problem, and to (slowly but surely) drive autonomous vehicles. There are so many different possibilities here, and we’re not even touching on healthcare or agriculture. It’s going to make such a difference in so many segments of society, and that promise – the early, growing version of it – is in models like ChatGPT. So the argument for including it in classrooms is that, despite the issues these models have, students need to learn to use them.

 

A super interesting trend that we’re seeing is that districts that are more resourced tend to be more inclusive in their willingness to provide the technologies to their students. And in districts that tend to be less resourced, we’re starting to see more of a reluctance, which is also understandable, to let ChatGPT and its ilk loose in the classroom. It’s a real question. It’s a policy question, but it’s also a very practical and interesting, and in some ways very exciting, question that we’re having to sort through in real time. Right now, today, people are having to make determinations about this. It’s kind of an amazing time to be alive.

 

Based on your knowledge of the field, how do you see AI playing out in education a decade from now? What are your students, kids or grandkids learning a decade from now?

 

Matsuda: Life is going to be a little more complicated than now because AI is going to be everywhere at that time. The computer started as a “computer.” Now, it’s called “computing” because the hardware is secondary. It’s more like: What is it doing? So in the future, AI is everywhere. It might not look like a machine. Nowadays, most of the AI is on the internet, on your mobile device, but AI is going to be everywhere. I think this room is going to be full of AI, maybe.

 

How does that look?

 

Matsuda: Assistive technology, for example, can help you to be more productive. So the same thing will happen in the classroom, I think. So, here again, for students, learning is going to be a little more enjoyable.

 

Say I’m a student in the future, and I’m going into class. What’s so fun?

 

Matsuda: I won’t be surprised if in the future a piece of paper starts speaking to you.

 

Is it a screen? Or it’s actual paper?

 

Matsuda: Actual paper, with some computing things in there. If you draw something, if you write an equation on a piece of paper, the paper may say something to you.

 

Like you get quick feedback?

 

Matsuda: Maybe. And then you can easily connect to somebody in your classroom also that way.

 

Would we be passing notes again?

 

Matsuda: Yeah. Well, you can share a drawing. I’m just making this up.

 

Do you have any thoughts about it, James?

 

Lester:  We’ve spent much of our discussion talking about pedagogical agents, which are fascinating, and they will absolutely become much, much more powerful. As we said, the underlying AI technologies for natural language processing, computer vision – even if you just took natural language processing, computer vision, machine learning – projected forward 10 years, you would see curves that are exponentially increasing in accuracy and power and efficiency and ability to, as Noboru was saying, be ubiquitous. That seems certain. But then the interesting question is, what are the implications of that for pedagogical agents?

 

Well, it means they’re going to be increasingly able to communicate effectively with students. They will have increasingly powerful pedagogical models. (So you know the two things it decides that we talked about [deciding what subject matter to cover and when or how to provide support], imagine what it means if those are 100 or 1000 times stronger than they are today. It’s hard to imagine what the consequences of that are.)

 

But there’s also reinforcement learning capabilities, where you’re training these pedagogical agents to become better and better and better at providing learning experiences that are more effective for students, and, as Noboru was saying, more engaging. Imagine a very tight feedback cycle there, where you have these agents that are really able to, in some sense, optimize learning, not just the cognitive piece of it, but also the affective and motivational piece.

 

That’s just pedagogical agents. The NSF AI Institute for Engaged Learning focuses on a very different vision for the future. As we all know, there’s this very powerful role that story plays in humans. Humans are incredibly, easily immersed in a story. When you watch a movie, when you read a novel, when you read a piece that you write that is based on narrative, it has a way of captivating human readers or audiences. The same underlying cognitive capabilities that drive that, that make the experience of narrative so powerful, can also be used to make learning more powerful.

 

If story is important to humans, how might future AI use story for learning?

 

Imagine an AI that could create an interactive narrative experience. For example, the student is playing the role of a virtual scientist, and all of the characters they interact with in this virtual world are optimized to create the most engaging interactive story – around, let’s say, a science mystery – that students could experience. Then imagine that you’re taking all of that and doing it in a collaborative fashion, so you have not just a single student but groups of students who are immersed in this. Then suppose that all of this is being driven by what are sometimes called multimodal learning analytics: Multiple data streams are being used to train the models to be ever more engaging and ever more effective. I was using an example from science, but you could do this for math, or language arts, or social studies.

 

Then imagine what happens if you’re a journalist 10 years from now, and you are about to do a piece on, let’s say, AI. You will have an AI that can teach this to you. But you can also have, say, a three- or four-minute interactive narrative experience around AI, where it creates an interactive story for you, so that you can instantly grasp: What are the promises of AI, if that’s the subject matter, or what are the alternate futures? All these questions that we’ve been chatting about: Imagine if there can be interactive narrative experiences around them. And that’s true for training in every kind of possible job that you can imagine. This will really fundamentally change how people learn: taking advantage of this narrative apparatus that we have deeply built in, to engage us but also to help us learn, and doing that not just for foundational knowledge like we have in K-12 and college, but for the very practical needs we have as we develop new career paths all the way through our lives.

 

And just one last comment. There are all kinds of issues that have to be dealt with in AI. There are national security issues. There are the issues that we talked about with misinformation, and so forth. But there also is this incredible set of very bright futures where we could all thrive as learners and as teachers and as citizens. Our job is to make it possible that the technologies that are becoming ever more powerful every day are being used to promote these bright futures.

 

What needs to be in place to make it possible for everyone to live this future, and how long do you think it would take?

 

Matsuda: In some sense, the technology is already there. But in another sense, the technology needs to mature so that everybody can access it. How long until that happens? Maybe five years? I think the shift will be gradual. Also, the world is going to be changing. The complications in our life will be changing. So then, new technology may appear.

 

Do we have the computing power to handle processing all this data?

 

Matsuda: Yes and no. Yes, because even a mobile phone can carry this kind of super-advanced technology nowadays. No, because unfortunately, there are certain things only companies and industry people can do, not academics. A funny thing: A few weeks back, I was attending a talk by someone from Google, and he said, oh, this is an example of something that people [outside big companies] cannot do. It requires so many computations that only those big companies can afford it.

 

Lester: Right now, one of the most rapidly growing areas of AI is on the hardware side. Many people aren’t aware of this. It’s because the particular kinds of chips that are most useful for being able to efficiently support large scale machine learning are in very, very, very short supply. Right now, substantial venture backing is going into new companies to develop chips specifically designed to support this particular form of machine learning that’s proven to be so effective. And at this very moment in time, in fact, there is a shortage of computing power.

 

I would be surprised if, in 10 years, we have that problem. We’ll have other problems to contend with, but this seems like a mid-term sort of problem. There will be variations of that problem in the future, but it’s an interesting thing that we have these technologies that are so sophisticated, and yet the limiting factor is a chip.

 

AI in 10, 20, 30, 100 years – you pick the time. What is your biggest dream, your most hopeful situation?

 

Lester: I think it’s supporting thriving. What can we do to develop AI (and I mean “develop” in the broader, societal sense, not just the technical sense) that will promote thriving? That means having people, a society that is healthy, a society that is happy, a society that is leading meaningful lives, and in the most fundamental sense, the level of wellbeing is the highest that it could be: What does a set of AI technologies look like that can promote that? That poses a very, very interesting set of technical questions. I have no doubt that there will be phenomenal progress over, pick your timeframe, as short or as long as you like.

 

I think that the more interesting set of challenges is really the social challenges. When you have AI that’s going to become as powerful as we know it’s going to be: What does that mean for job disruption? What does it mean for other issues, like national security? A really beautiful vision of this doesn’t, I think, necessarily require some kind of utopian view, because we as a species have never had a technology that’s this powerful. So in the set of futures that are good and positive, and that really promote humanity to be the best that it can be, there is a real possibility to create a set of conditions, which will be different for different countries. Different countries have different things that they optimize for. But we will have the technology that makes that possible.

 

We’ve been talking mostly about education today. There is no doubt that in ten years we will have a set of AI learning technologies that can create incredibly effective, incredibly engaging learning experiences for learners of every age, for every subject matter, whether it’s in formal settings like classrooms, or informal settings, like museums, the possibility for creating a very, very bright future is fully before us. It’s our choice if we wish to do this.

 


 

Matsuda: AI is getting smarter and smarter. Soon we will have to think of AI just as a tool that makes our lives better, as opposed to making AI even smarter. Because I won’t be surprised if we can make a kind of duplication of human intelligence some time soon – a machine that has full human intelligence.

 

Like, how long?

 

Matsuda: I don’t know. Maybe in five years, 10 years. But I think at some point humans need to think of AI as just a tool. Currently the goal is to make a smarter machine, but in the future, I think the goal is going to turn to making AI just a tool to make our lives simpler, easier. Because technologically speaking, I think we can make a pretty, pretty smart machine that can actually compete with us, and that’s really scary to me.

 

If I could make a wish, I wish I could have a pal beside me.

 

From the point I’m born, this pal is going to be with me forever. That may happen in the future. So this pal learns – with you. Right? And it’s always beside me. And it’s always making suggestions. That may be an interesting twist in the future.

 

I thought about that a bit when James mentioned how a narrative experience could optimize learning so that you instantly understand something. A pal beside you that could optimize learning because it gets you would be kind of like a soulmate – you know, that one thing that just gets you. And that’s kind of creepy. I don’t even know what that means.

 

What has to be in place for your pal to become a reality? What challenges would we have to overcome?

 

Matsuda: I would think the challenge would be more of an ethical issue at the beginning: To what extent should this pal be allowed to do the things it can do? But yeah, again, I think it’s the balance between the smart machine versus just a tool. So you have your family, and each family has their own pal beside them. I don’t know how to say this in English, but it’s like – you know in Star Wars they have a little creature here [on the shoulder]? What’s that creature called? Gosh, I forget the name.

 

I don’t watch much Star Wars, but if you say it, I’ll look it up later.

 

Matsuda: Yeah. It’s like a little creature. Yoda. I think it’s called Yoda.

 

[It’s Grogu, “Baby Yoda,” which has various powers, like telekinesis and calming The Force.]

 

-klein-

