How educators can help future learners outwit the robots


Professor Rose Luckin's keynote speech at the Cambridge Summit of Education asked whether education is ready for AI, and suggested how educators can help future learners outwit the robots. The full transcript follows, with the main elements of the speech summarised at the end.

Transcript

ROSE LUCKIN: Thank you so much for inviting me. I love coming to Cambridge and I love coming to Cambridge Assessment and Cambridge University Press. I also have my own triangle, which I will be talking about later. Triangles are clearly very important to what we're talking about today. And you might think, why is she talking about artificial intelligence if she's dealing with the social dimension of the future of learning? Well, hopefully that will become clear as well, because as she said, social interaction is fundamental to the way that we learn. 

So is education ready for artificial intelligence? Hopefully, by the end of this talk, I'll persuade you that perhaps it's not quite, but that we can move in the direction where it can be. But let's see. 

First of all, let's think about what this thing called AI is. Don't worry, I'm not going to talk a lot about artificial intelligence, but I have a couple of slides to lay out the ground material for what it is I want to say, which is all about intelligence, and definitely encompassing the social aspects of intelligence. 

Let's take the very basic definition of artificial intelligence: Technology capable of actions and behaviours that require intelligence when done by humans. Very basic. There are many ways of looking at defining artificial intelligence, and lots of controversy between those working in this area about the right definition. But I think this is good for today. 

Why is artificial intelligence so much in the news at the moment and having a really big impact in many areas including education in many parts of the world? Well, it's really because of a change between artificial intelligence that cannot learn, and artificial intelligence that can learn, because as we know, learning is the holy grail. Learning is what we're all trying to achieve. Learning is what helps us to develop our intelligence. 

Back in 1997, when I was doing my PhD, there was this huge celebration when IBM built this machine called Deep Blue that beat Garry Kasparov, the world chess champion. And here is the sequence, for the chess players among you, where the fundamental moment in the AI beating Garry Kasparov happened. But this was AI that couldn't learn. And I remember so clearly the huge excitement in the artificial intelligence community: yes, we've cracked chess; intelligent people play chess; we've cracked intelligence. Not so much. 

Suddenly, people thought, actually, being able to see is much, much harder to do than being able to play chess. And we've spent decades developing AI systems that can do visual processing accurately. That, as far as I'm concerned, is an important thing to remember when it comes to intelligence, artificial and human: the things we take for granted are fundamental to our human intelligence, and they're often the things that we overlook. But as we move forward in an AI-enhanced world, they're perhaps the things we need to give much bigger credit to, because we don't necessarily want to focus our education systems on the things that we can automate through artificial intelligence.

So, another game, this time in 2016, and it's Go. We've got Lee Sedol up there, at the moment he realises he's going to lose - he's going to lose to a computer playing Go. This is AI that learns, and this is really what's brought about the huge change. Now, I would also flag at this point - I'm not a Go player, by the way - but people who do play Go tell me it's not just about winning the game. It's also about the social interactions that form around the Go playing. It's about the relationships you build with other people as you play. I suspect AlphaGo didn't do terribly well at those. So it depends what we mean by winning the game. 

But take this ability to build AI systems that can learn and combine it with masses of data. We talked about big data earlier this morning in Stephen Toope's introduction; data is being collected all the time. Sometimes we're conscious of it, sometimes we're not. We have masses of data, and we now have masses of sophisticated AI algorithms that can learn from that data. Plus, we have affordable computing power and memory. 

When I was a lowly undergraduate, if you wanted to run any program of any size, you had to run it overnight. I've got more computing power on my phone than we had then. And even when I was a PhD student, it was the same thing. We have amazing amounts of computing power and memory, and it's really these three things coming together - data, algorithms that can learn, and affordable computing - that create the perfect storm we're now in, where AI is having a huge impact on our lives, increasingly including education. 

But there's a really important thing to remember. Data is like oil - we have phrases coming out from the media all the time about data being the new oil. Well, yes, it is certainly making a few people a heck of a lot of money, that's for sure. It's certainly like oil in that respect, but it's also like oil in that it's crude. And until it's processed, it's of little value. 

Unless we collect the right data and we design the algorithms to process that data in a way that tells us something we want to know, it's not valuable. So it's all in the way it's processed, and that data has to be cleaned before it can be processed by an algorithm. That's an important part of the process. 
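To make that concrete, here is a minimal sketch, in Python, of the kind of cleaning that typically has to happen before any algorithm sees the data. The file name, column names and thresholds are invented for illustration; this is not any particular system's pipeline.

    # Hypothetical example: tidy raw learner logs before any analysis.
    import pandas as pd

    # Load raw interaction logs (invented file and schema).
    raw = pd.read_csv("learner_logs.csv")

    # Cleaning: the unglamorous step that makes the data usable.
    clean = (
        raw
        .drop_duplicates()                       # remove repeated log entries
        .dropna(subset=["learner_id", "score"])  # discard incomplete records
        .assign(score=lambda d: d["score"].clip(0, 100))  # cap out-of-range scores
    )

    # Only now is the data worth processing, e.g. summarising progress per learner.
    progress = clean.groupby("learner_id")["score"].mean()
    print(progress.head())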

So, to sum up this part of the talk: artificial intelligence is intelligent. No question about it. But in a very particular sort of way. There's lots of talk about artificial general intelligence, but we are nowhere near it at the moment, because artificial general intelligence would be the kind of intelligence that a human brain has, where it's not just subject specific. So AlphaGo can't diagnose cancer, and AI systems that might be able to diagnose a particular sort of cancer can't play Go. We've got very specific, smart AI systems. 

It's important to remember that human intelligence and artificial intelligence are not the same thing. Yes, of course, we've built our AI systems in our own image. How else would we do it? But we've built our AI systems in a particular part of our own image. We've built them to do the things that we regard as intelligence. Hence, why was chess such an early target? "Oh, that's an intelligent behaviour. Let's look at that". Now we're starting to realise that some of the things we take for granted are intelligent behaviours and actually they're much harder to automate. 

So what does this all mean for education? The social bit will come in increasingly as I talk about this. Well, I think it's useful to think about how AI impacts on education in three buckets, if you like. But they're not separate. They're interconnected. 

We can think about how AI can be used to support teaching and learning, and increasingly it can be. 

We can think about how we need to educate people about AI because it's changing our lives - I don't mean that everybody needs to learn to code, by the way. I just mean everybody needs to understand the principles of what AI is, what it can do, what it can't do in order to benefit from it and to keep themselves safe. 

The third one, the grey one at the end, which is the hardest one of all, is trying to work out how we need to change the education system across cradle-to-grave, across the lifelong learning spectrum. It's fundamentally about recognising that we need to rethink the way we talk about intelligence - about human intelligence - because we have this thing called AI, and thinking about how that impacts on our education systems and on society. And as I say, these buckets are not mutually exclusive. They are highly interconnected. 

If we use AI effectively in education, we will help people understand AI more. And then as more people understand it, they will see why things need to change. 

First of all, very briefly, the reality of AI in education today. In some countries it's being used enormously, China being one example, and in others it isn't. But the kinds of things that are available now, for you to go out and buy if you wanted to, are systems like this one, made by a company called Alelo in the US. Very, very nice language learning support. Not just language learning, but they specialise in it, trying to help people learn a language not just in terms of syntax and semantics, but also the embodiment of that language within culture. 

The second slide is of a workplace training system. All of these pieces of software built by this company use artificial intelligence to individualise the way that they teach somebody a language. And they're available online; you can pick them up wherever you are in the world. You have a conversation, often with an avatar - an onscreen character speaking and listening - and as you converse, you improve. 

Then there's a range of different platforms and systems that are very, very good at tutoring particular areas of the curriculum, particularly STEM and language learning. They individualise instruction according to the student's needs, and they can give very useful feedback to educators, to parents and to students themselves about how they're progressing. Then there's a slightly different raft of systems where it's not about individualising the instruction to the child or to the adult; it's about recommending resources to be used - a more sophisticated version of Amazon. If you've read this, you might like that. If you bought this, you might like that. Much more sophisticated, but looking at a raft of different resources and trying to pick the ones that are best for your class if you're a teacher, or for yourself as a learner. 
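As a rough illustration of the "if you used this, you might like that" mechanism behind such recommenders, here is a toy sketch using item-to-item cosine similarity over a made-up usage matrix. Real systems are far more sophisticated, but the core idea is similar; all names and numbers here are invented.

    import numpy as np

    resources = ["fractions video", "algebra quiz", "geometry game", "reading pack"]

    # Rows are learners, columns are resources; 1 means the learner used it.
    usage = np.array([
        [1, 1, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1],
        [1, 0, 0, 1],
    ])

    # Cosine similarity between resource columns.
    norms = np.linalg.norm(usage, axis=0)
    similarity = (usage.T @ usage) / np.outer(norms, norms)

    # Recommend the resource most similar to one a learner already used.
    used = 0                 # index of "fractions video"
    scores = similarity[used].copy()
    scores[used] = -1.0      # never recommend the item itself
    print("If you used", resources[used],
          "you might like", resources[int(scores.argmax())])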

Then there are some really nice uses of AI that bring social learning into this. This is a company called Chatterbox, which I particularly like, that match-makes refugee expert language speakers with students who want to learn that language. It tries to match-make in a way where there's more than the language learning in common, and it also provides nicely tailored instructional resources. So you get the human support from your refugee expert speaker, and the tech gives you some resources to support you as well. There are also a whole raft of companies looking at ways of processing speech and giving advice about pronunciation, to try and improve people's conversational skills in a language. 

Those are a couple of language learning examples. Then, thinking very much about what Usha was saying about babies - you may or may not approve of this one, but this is the kind of thing - we don't just think of AI systems being there to support adults or school-age children. This is a system for babies. This little monitor sits in the room, monitors particularly the language interactions that happen within the room, and gives parents feedback about how they can improve the way that they're talking or singing or playing with their child. So not just for adults, that's for sure. It's an increasingly popular approach, to have that kind of monitoring going on and giving feedback. And finally, to get away from thinking it's just about learning a particular subject or a particular skill: 

We can also look at systems like this one, from a company called MyCognition, which is just starting to embed AI in its system. What it does is try to support what they call cognitive fitness. So it's looking at executive function, memory and speed of processing; it does an online assessment to start off with, and then you play this little game called AquaSnap, which is individualised to try and focus on the areas of your cognitive fitness that need particular attention. I really love it. I have it, I use it, I bought it for my family last Christmas, because I think it's a really nice little app that you can use for five minutes a day, and then you're set. I think it's a nice example of starting to use AI. So these are things that are readily available off the shelf, and there are hundreds and thousands more. But the potential is where it gets really interesting. 

As I said, data is the new oil. This gentleman here is Lewis Johnson, who is behind the company Alelo, and he describes data as a game changer for his company. As we know, data is all around us. We can collect it through technology, through our physical interactions, through directly inputting into computers, but also through observation and through our use of social media. There are so many ways in which we can collect data that can be useful for building AI systems to support teaching and learning - or, more importantly, to support our understanding of ourselves or of our students. As I've already said, data needs to be cleaned to extract its value. And it also needs to be processed. 

This is where what Usha was talking about comes in: the amazing increase in understanding of language learning, and of students who have problems with it, gained from psychology and neuroscience. We need that kind of understanding to inform the algorithms that process the data that are going to help us understand learning. This is where the real power comes from. I call these the learning sciences: cognitive psychology, neuroscience, education, sociology, all of these coming together to help us improve the algorithms. 

If we get that right, it stops being about individual pieces of technology and it starts being about this: an intelligence infrastructure. An intelligence infrastructure is that combination of data with learning-sciences-informed algorithms processing that data to unpack the learning process. That intelligence infrastructure can surely empower our interactions with technology in the workplace when we're training, maybe through robots, through our phones, through augmented and virtual reality. But really importantly, it empowers the social interaction that's so important for learning, and it can empower our own understanding of ourselves. So when we're learning alone, we understand ourselves and our learning processes so much more, because we have this intelligence infrastructure. 

And it's not just for the able-bodied. We can start enhancing: intelligent exoskeletons; glasses to help the blind see, enhanced with artificial intelligence; technologies embedded in the skin; and of course tapping into what we're learning from neuroscience and adding that into the mix as we try to understand learning as it happens, for our students and for ourselves. That potential is where we really get the gold. 

We absolutely need people to understand about AI, but that doesn't mean everybody understanding how to build it. It just means everybody understanding how to use it and, really importantly, understanding the ethics. And when it comes to ethics, this is something I believe passionately we have to focus on now. Every time we're thinking about data, every time we think about processing it, every time we're thinking about building any kind of AI or using it, we must be thinking about ethics: in terms of the data, in terms of the way that it's processed, and also in terms of what happens once it's processed. 

What happens? What does the learner receive? What happens if we have deepfake videos out there where it looks like it's Saul, or it's me, or it's Tim, or it's Usha talking to people, but it's not? It looks like us, but it's somebody else, and it's saying stuff we wouldn't say. We really have to start worrying about a lot of this stuff. 

That's why we launched the Institute for Ethical AI in Education, because there's lots of work being done on data and ethics, and on AI and ethics, but nobody was holding the education piece. And education has to be the most important thing: it's something we want people to interact with throughout their lives. Other areas that are important, of course, include things like medicine and health. But we don't actually want people to interact with the medical system all their lives; we probably want them to interact with it as little as possible, because they're fit and they're well. Whereas with education and learning, we want people to be doing it all the time. And when it comes to that ethical piece, education is crucial, because regulation will never be enough. It can't keep up with the technology or with people who want to do harm, so education is crucial. But this is the hardest bit, and this is where the social stuff really comes in. 

We're now in the early stages of the fourth industrial revolution. There's no question about that; it's happening all around us to a lesser or greater extent. And there are lots of reports - this is an example from last year - about whether robots will do our jobs. None of them agree on precisely which jobs, precisely how many, precisely where. But there is a general consensus that sectors like transportation and storage are the most vulnerable to being automated. And hey, look: education, right at the bottom of the graph. Yes, we're not about to automate education. Thank goodness for that, because we don't want to: it's fundamentally social. But that doesn't mean it won't change enormously. I think it will be disrupted enormously. It's just that we won't replace educators with technology - or we won't if we get it right. 

One of the other things where there's a consensus is that if you have a higher level of education, you are less vulnerable when it comes to the fourth industrial revolution (we could call it the fourth education revolution; Anthony Seldon does). So yes, there are some gender differences and there are some age differences, but this is a biggie: better educated means more resilient, better equipped to deal with the changes that are coming. Because the problem is we don't actually know what the changes are. We can predict some things quite accurately in the near term, but when you think about the bigger picture - lots of things happening in the world, not just automation through AI, not just AI-enhanced technologies, but the whole geopolitical situation - there's lots of uncertainty. 

It's a bit like driving a car in fog along a road you don't know. You've got thick fog. You don't know where you are. How useful is a map? Not very. And all of these reports that I alluded to earlier are very much trying to create that map. I think that's the wrong approach. I talk about this a lot more in this book, which I'll just flash up there, because in five minutes I can't give you a huge amount of detail. But if you want to know more, the book is there.

So what's the human equivalent of that car that I need in the fog, driving along a road I've never driven before? Thick fog. I want a car that's reliable: it's got good brakes, the steering is reliable, good visibility of what might be ahead - so not too many pieces of solid metal in the way. But I also need a driver who's not intoxicated, who's not sleep deprived, who knows how to drive, who can hear well and see well. I need all of those things. But what's the equivalent in terms of our learning, our intelligence, that we need as we approach this fourth industrial revolution? 

I think it's a combination of factors. And actually this builds on what Usha was saying earlier as well. Yes, of course, we need interdisciplinary academic intelligence, not gonna move away from that. But it does need to be more interdisciplinary because many of the problems we face in the world today need that kind of interdisciplinary approach. 

We certainly need social intelligence. I look at the place we are currently in the UK - sorry, it's a bit of a political point, but not particularly party political - and I think if only our politicians were more socially intelligent, more able to do collaborative problem solving, more contextually intelligent, maybe we wouldn't be in such a pickle. But there you go. 

Then there are these things called meta-knowing, meta-cognitive, meta-subjective and meta-contextual intelligence - and I'll come to perceived self-efficacy - the meta-intelligences. That's where I think the most attention needs to be paid. AI cannot do this. AI does not understand itself. We can. We don't always understand ourselves very accurately, but we can learn, and that's really important. So if you add social intelligence and these meta-intelligences, that's where AI really, really struggles. 

Meta-knowing is about understanding what knowledge is. I worry a lot that people don't understand what knowledge is. They don't understand where it comes from, they don't understand what good evidence is, and they don't understand how to make good judgements about what should and should not be believed. And I say that as somebody who teaches in a top university. I've noticed, as my students - all of whom are post-grads these days - come through, that they seem to have less grasp of the fact that knowledge isn't something to be handed out. It's something you construct. It's relative. There are very few things that are certain. I think we need to focus on our relationship to knowledge, fundamentally, because it's so important. Precious, as the Cambridge University motto goes.

Meta-cognitive intelligence is understanding our own cognition: understanding when we're distracted, understanding when we're focusing, understanding accurately what we do know and what we don't know. 

Meta-subjective intelligence is the emotional one; it relates to social intelligence. But it's not just our emotional intelligence, it's our understanding of our emotional intelligence: the way it's developing, and its relationship to other people and their development of emotional intelligence. 

Meta-contextual intelligence is, again, something I think we underestimate enormously. And it's so hard for AI. I've never given a talk in this room before, but I kind of know how it works. You're sitting there; you're the audience; I'm up here. If I suddenly did three backflips down the length of the room and went and sat on that gentleman's lap, I think you'd all go "whoa". I'd be surprised too, because I can't do backflips. But the point is that this environment, which is only part of the context, is one where I kind of understand how things work, even though I've not been here before. That's really hard for AI. We just do it. We just breeze through in any circumstances, and we shouldn't underestimate that. And if we get all of this right - and they're not individual intelligences, they're just different elements of our complex human intelligence - we get to accurate perceived self-efficacy: understanding what goals we should be setting ourselves, how likely we are to be able to achieve them, how motivated we are, whether we're focusing, who can help us, how we can be really effective in the world. That's what matters. 

So, seven elements: not all separate, all highly interconnected. Five of them are about meta-intelligence, about understanding ourselves or about understanding knowledge - what it is, where it comes from. And social intelligence is fundamental to all of this. What's wonderful about it, and I think it's a brilliant finesse, is that AI kind of got us to this point, but AI is also part of the solution, because it can help us through that intelligence infrastructure. If we build it right, if we get the right data, we clean it and we apply what we know about human learning to the algorithms that process it, then we really can fly. And I'm going to give you a tiny weeny example of what I mean. Please don't run away with the idea that I think we can solve it all. 

Simply put, this is what we've been doing in the lab. It's looking at collaborative problem solving, which is highly social: students interacting. We collect lots of data as they do it: we mark their hands to track the movements, we do eye tracking, we look at what they're building, and they're using some bits of technology, so we collect data that way too. But what we also do is think about what we know from the learning sciences about what signifiers we might identify from this data that can tell us something about how the collaborative problem solving process is going for this group. Not because we think any school or college can have this kind of desk set-up (especially my desk), but because as we analyse this data, we learn about the kinds of things that could be part of a classroom environment. Not to replace a teacher - far from it - but to give that teacher more information about when she needs to go and help a particular group, and when a group's doing well. And then to give that group information to reflect on for themselves, about how their process happened and what went well and what didn't. 

One tiny weeny signifier that we can pick up - one of many that would be triangulated in order to reach some kind of understanding about how collaborative problem solving is going - is to do with hand movements and eye gaze direction. We know from the learning sciences that when a group of students' hands are either placed on the same object or involved with each other, and when their gaze is either directed at the same thing or at each other, that's a signifier that effective collaborative problem solving could be going on. Just one signifier, not the answer in itself. 
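As a hypothetical sketch of how such a signifier might be computed from logged data - the gaze targets, hand positions and equal weighting below are all invented for illustration - one time window of it could look like this:

    import numpy as np

    # Invented per-timestep gaze targets (object IDs) for two students.
    gaze_a = np.array([1, 1, 2, 2, 2, 3, 3, 1])
    gaze_b = np.array([1, 2, 2, 2, 3, 3, 3, 1])

    # Invented per-timestep 2D hand positions for the two students.
    rng = np.random.default_rng(0)
    hands_a = rng.normal(size=(8, 2))
    hands_b = hands_a + rng.normal(scale=0.3, size=(8, 2))

    def synchrony(gaze_a, gaze_b, hands_a, hands_b, hand_radius=1.0):
        """Fraction of timesteps with shared gaze, and with hands close together."""
        shared_gaze = np.mean(gaze_a == gaze_b)
        hands_close = np.mean(np.linalg.norm(hands_a - hands_b, axis=1) < hand_radius)
        # One crude combined score; in practice this would be triangulated
        # with many other signals before saying anything about collaboration.
        return 0.5 * shared_gaze + 0.5 * hands_close

    print(f"synchrony score: {synchrony(gaze_a, gaze_b, hands_a, hands_b):.2f}")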

We did a series of studies where we collected lots of data, including eye gaze and hand movement, and we collected video, which we then gave to an expert from a totally different university and asked her to identify where groups were collaborating effectively as they were solving a problem and where they were not. Then we mapped our data from the hand movements and eye gaze onto her analysis. Where it says high collaborative problem solving or low collaborative problem solving, that's the assessment made by the independent human expert looking at the groups, and time is going along the horizontal axis. You can see that when you've got low collaborative problem solving, you've also got low synchrony: when the lines are together, there's lots of synchrony in the hand movements and in the eye gaze. So this is just one tiny example of the kinds of things we can do with data and AI algorithms informed by the learning sciences to try and unpack what's going on - in this instance, when collaborative problem solving is in play - to help us better support our students and to help students understand themselves more. 
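As a hypothetical sketch of that comparison, with invented numbers standing in for the real data: windowed synchrony scores can be set against the expert's high/low coding of the same session.

    import numpy as np

    # Invented synchrony score per time window (e.g. from the signifier above).
    synchrony = np.array([0.8, 0.7, 0.2, 0.1, 0.6, 0.9, 0.3, 0.2])

    # Invented independent expert coding of the same windows:
    # 1 = high collaborative problem solving, 0 = low.
    expert = np.array([1, 1, 0, 0, 1, 1, 0, 0])

    print("mean synchrony in high-CPS windows:", synchrony[expert == 1].mean())
    print("mean synchrony in low-CPS windows: ", synchrony[expert == 0].mean())
    print("correlation:", round(float(np.corrcoef(synchrony, expert)[0, 1]), 2))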

If you can imagine this triangulated with masses and masses of other signifiers, you can see how that intelligence infrastructure can be very powerful, and how we can start to find ways of assessing, evaluating and shining a light on these kinds of intelligences: social intelligence, meta-subjective intelligence, as well as academic intelligence. 

I just wanted to finish, of course, with the triangle, because I have my own triangle, and it's the triangle that's at the heart of one of the ways in which I think we can bring this to life. I was very struck by the Vice-Chancellor's opening talk about collaboration, and ours is a triangle of collaboration in the same way that Cambridge University, Cambridge Assessment and Cambridge University Press form a triangle of collaboration. But this is a triangle of collaboration that I think can help us develop much better AI for use in education - to support teaching and learning, but also to address those other two buckets: understanding AI and thinking about how education changes.

The triangle sits behind a program called Educate that we run at the UCL Knowledge Lab, which is all about bringing together edtech developers - many of whom are using AI now - with the people they're developing for: the teachers and learners. And then we sit there to try and broker the relationships, but fundamentally to try and get at the gold of this golden triangle, and that's the evidence: to improve the evidence that we can give to teachers, learners and parents about whether a piece of technology works or not. But it's more important than that, which we'll come to in just a second.

Through this program, we've worked with some 250 companies in the last two years to try and get the conversation going around this triangle, and to get the focus on: as an edtech developer, are you actually addressing a real learning need or not? And if you are, do we have evidence that it's working? Companies we've worked with include, for example, Busuu and - guess what - Chatterbox, as well as OyaLabs and MyCognition, who I spoke about earlier. These are some of the companies we work with from across the piece, many of whom are now using AI.

And just to finish, I want to put that golden triangle at the centre of what I think needs to happen if we're going to address those three buckets around AI in education. We need to help AI developers understand teaching and learning - most of them know absolutely nothing about teaching and learning, in the same way that most teachers and learners don't know much about AI. If we can help them to work together using this kind of model, then we end up with much better AI technology being used in teaching and learning, and a much better understanding of AI.

We help our educators to understand it better through working with our developers, and most importantly, we help our developers to understand teaching and learning through working with educators. 

I think if we can get that right, then we can start to address those three buckets. How do we use AI? How do we help people understand AI and what are its implications for the future of education?


One only has to look to the phenomenal amount of investment in artificial intelligence (AI) and education in countries like China and Singapore to see that machine learning is here, is developing rapidly and is already changing the use of technology in education.

Modern machine learning is certainly smart, but it cannot learn everything. We as educators therefore need to ensure that our human learners develop a rich repertoire of intelligent behaviours and advanced thinking.

Artificial intelligence: Technology capable of actions and behaviours that require intelligence when done by humans.

Why is artificial intelligence in the news so much, and why is it having a really big impact in many areas including education? Because of the development of artificial intelligence that can learn.

If you take this ability to build AI systems that can learn and you combine it with big data and modern computing power - that creates the ‘perfect storm’ that we're now in where AI is having a huge impact on our lives and increasingly education.

Human intelligence

An important thing to remember when it comes to artificial and human intelligence is that the things we take for granted as humans are fundamental to our human intelligence, and they're actually much harder to automate.

In an AI enhanced world, they're perhaps the things we need to pay more attention to, because human intelligence and artificial intelligence are not the same thing. We don't necessarily want to focus in our education systems on the things that we can automate through artificial intelligence.

There's lots of talk about ‘artificial general intelligence’ - an imagined machine that can function in the world as well as a human brain - but we are nowhere near that, because a human brain is not subject specific. For example, AlphaGo can't diagnose cancer, and AI systems that might be able to diagnose cancer can't play Go. We've got very specific, smart AI systems.

Implications of AI for education

What does this all mean for education? I think it's useful to think about three routes for AI to impact education: 

  1. Using AI in education to tackle some of the big educational challenges
  2. Educating people about AI so that they can use it safely and effectively
  3. Changing education so that we focus on human intelligence and prepare people for an AI world

These routes are not mutually exclusive. They are highly interconnected: if we use AI effectively in education, we will help people understand AI more. As more people understand it, they will see why things need to change. 

Using AI in education

Individualised learning

One example of AI in education is in those systems that specialise in language learning, trying to help people learn a language not just in terms of syntax and semantics, but also language within culture. Companies such as Alelo use artificial intelligence to individualise the way that they teach somebody a language, and can provide useful individual feedback to educators, parents, and to students themselves.

Social learning

There are some nice uses of AI involving social learning. Chatterbox matches refugee language experts with students who want to learn that language, and it tries to make matches where the pair have more than the language learning in common. It also provides nicely tailored instructional resources.

The potential

There are many ways to collect data that can be useful for building AI systems to support teaching and learning. To process that data, we need understanding from what I call the learning sciences: cognitive psychology, neuroscience, education, sociology, and language learning.

If we get that right, it stops being about individual pieces of technology and it starts being about an intelligence infrastructure that empowers the social interaction important for learning and our own understanding of ourselves.

Educating people about AI

We absolutely need people to understand AI, but that doesn’t mean understanding how to build it. It means that everybody understands how to use it and, importantly, understands the ethics. This is something I believe passionately that we must focus on now. Every time we think about processing data, every time we're thinking about using any kind of AI, we must be thinking about ethics in terms of the way that data is processed. What happens to it? What does the learner receive? What happens if there are ‘deep fake’ videos out there that look like me, for example, but are not me?

That's why we launched the Institute for Ethical AI and Education, because although there’s a lot of work being done with data, AI, and ethics, nobody was holding the education piece. And education must be the most important thing. It's something we want people to interact with throughout their lives.

Education is crucial because regulation will never be enough. We can't keep up with technology and with people who want to do harm. But this is the hardest bit, and this is where the social stuff really comes in.

Changing education for an AI world

The fourth industrial revolution

We're now in the early stages of the fourth industrial revolution. There's no question about that. It's happening all around us. There are lots of predictions about robots taking over human jobs, and there is a consensus that areas like transportation and storage are most vulnerable, but we're not about to automate education. Thank goodness for that, because education is fundamentally social. But that doesn't mean it won't be disrupted enormously. It's just that we won't replace educators with technology, at least if we get it right.

Another consensus is that if you have a higher level of education, you are least vulnerable when it comes to the fourth industrial revolution: you’re better educated, more resilient, better equipped to deal with changes. The problem is we don't know what the changes will be; we can predict some things quite accurately in the near term, but there's a lot of uncertainty as well. It's a bit like driving a car in fog along a road you don't know. How useful is a map? Not very.

Types of intelligence needed

What's the human equivalent of that car that I need in the fog? I think it's a combination of different elements of our complex human intelligence:

  • Interdisciplinary academic intelligence, because many of the problems we face in the world today need an interdisciplinary approach
  • Social intelligence
  • Meta-intelligence:
    • meta-knowing
    • meta-cognitive
    • meta-subjective
    • meta-contextual
    • perceived self-efficacy

Meta-intelligence is where I think the most attention needs to be paid. AI does not have it; AI does not understand itself. Humans can. We don't always understand ourselves very accurately, but we can learn. AI really struggles with social intelligence and meta-intelligences.

Meta-knowing intelligence:

Understanding the nature of knowledge. Many people don't understand what knowledge is, where it comes from, or what makes good evidence. Knowledge is something you construct. It's relative. There are very few things that are certain. It’s important to focus on our relationship to knowledge.

Metacognitive intelligence:

Understanding our own cognition: when we're distracted, when we're focusing, what do we know accurately and what don’t we know?

Meta-subjective intelligence:

This relates to social and emotional intelligence: understanding our own emotional intelligence - the way it's developing - and its relationship to other people and their development of emotional intelligence.

Meta-contextual intelligence:

AI finds it hard to understand context or environment. Humans can make sense of new circumstances and experiences relatively easily, but AI cannot.

Perceived self-efficacy:

Understanding what goals we should be setting ourselves, how likely we are to be able to achieve them, how motivated we are, who can help us, and how we can be effective in the world.

If we build it right, if we get the right data, and we apply what we know about human learning to the algorithms that process it, then we really can fly.

UCL Knowledge Lab: Collaborative problem solving

As an example, at the UCL Knowledge Lab we’ve been looking at collaborative problem solving.

We know from the learning sciences that when groups of students' hands are either placed on the same object or involved with each other, and when their gaze is either directed at the same thing or at each other, that's a signifier that effective collaborative problem solving may be going on.

We did a series of studies where we collected data including eye gaze and hand movement, and we recorded video which we then gave to an independent human expert and asked her to identify where groups were collaborating effectively. We mapped our data from the hand movements and eye gaze onto her analysis to see where they matched, to further identify what effective collaborative problem solving looks like.
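The summary doesn't say how the match with the expert's judgements was scored. As one hedged illustration - the function name, the threshold, and the window-level framing are all assumptions, not the study's actual method - you could measure agreement between a thresholded synchrony score and the expert's high/low labels like this:

```python
import numpy as np

def agreement_with_expert(synchrony_scores, expert_labels, threshold=0.5):
    """Fraction of time windows where a thresholded synchrony score agrees
    with an expert's judgement (1 = high collaboration, 0 = low)."""
    predicted = (np.asarray(synchrony_scores) >= threshold).astype(int)
    return float(np.mean(predicted == np.asarray(expert_labels)))

# Toy example: five windows scored by the signifier and labelled by the expert.
print(agreement_with_expert([0.8, 0.2, 0.9, 0.1, 0.7], [1, 0, 1, 0, 1]))  # 1.0
```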

As we analyse this data, we learn about the kinds of things that could be part of a classroom environment to give teachers information about when they should help a particular group, or when a group's doing well - and then give the groups information to reflect on the process themselves. If you can imagine this triangulated with masses of other signifiers, you can see how that intelligence infrastructure can be very powerful.

Educating developers and teachers

At UCL Knowledge Lab we have a program called Educate, which is all about bringing together EdTech developers, many of whom are using AI, with the people they're developing for: teachers and learners.

We've worked with some 250 companies in the last two years to try and get the conversation going: As an EdTech developer, are you addressing a real learning need or not? And if you are, do we have evidence that it's working?

We need to help our educators better understand AI through working with developers, and we need to help developers understand teaching and learning.

I think if we can get that right, then we can start to follow those three routes of AI into education.


Rose Luckin is Professor of Learner Centred Design at UCL, and the founder and Director of EDUCATE, the leading research and business training programme for EdTech startups in London.

Her research blends artificial intelligence (AI) with theories from the learning sciences, with a particular interest in using AI to open the ‘black box’ of learning to show teachers and students the detail of their progress intellectually, emotionally and socially.
