Robotics and AI are expected to help with many of the global challenges that we are facing: poverty, international conflict, health problems. And this means that they will leave the factory floors and come into the environments where we spend most of our time: our homes, our streets, our schools, and our hospitals.
Here’s an example of a typical environment. And these environments are social. We are social animals by nature; we interact socially, and we obey social rules all the time. These rules tell us what is “normal” and how to behave. So if we take this technology and put it in these environments, it will stand out if it doesn’t comply, if it doesn’t have social intelligence.
And when we are not socially intelligent, we are seen as rude or threatening, and people really don’t like us. The same goes for technology when we put it in these environments.
Here’s an example of how we made computers more socially intelligent, because how can we make computers empathize? It’s a feeding robot that helps paralyzed people eat. We actually made quite a simple module that listens to the conversation at the table and then offers the food at natural intervals during the dinner, which allows a person to fully participate in the ritual that dining together is.
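The idea of such a timing module can be sketched very simply. This is not the actual system: the interval format, function names, and the two-second pause threshold are all my illustrative assumptions.

```python
# Hypothetical sketch of the feeding robot's timing module.
# Input: a list of (start, end) times in seconds when speech was detected
# at the table. A silent gap longer than `min_pause` between two speech
# intervals is treated as a natural moment to offer a bite.

def find_offer_times(speech_intervals, min_pause=2.0):
    """Return the midpoints of silent gaps long enough to offer food."""
    offers = []
    intervals = sorted(speech_intervals)
    for (_, end_a), (start_b, _) in zip(intervals, intervals[1:]):
        gap = start_b - end_a
        if gap >= min_pause:
            offers.append(end_a + gap / 2)  # offer food mid-pause
    return offers

# Speech from 0-5s and 9-12s leaves a 4-second pause; offer at 7s.
print(find_offer_times([(0.0, 5.0), (9.0, 12.0)]))  # [7.0]
```

The point is the same as in the talk: the robot does not need to understand the conversation, only to notice when the conversation pauses.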
So we can do this for simple things. But what about more complex social situations? Here you can see speed-dating. We had a lot of people speed-date, and then, knowing the outcomes, we could make the computer work out which parameters predicted a match. In this case it was the variation in the distance between the two daters. I impart this wisdom to you. But it shows that we can deal with quite complex social situations.
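That single predictor, variation in interpersonal distance, amounts to one feature plus a decision rule. A minimal sketch follows; the data format, the threshold value, and the assumption that *higher* variation predicts a match are mine, not claims about the actual study.

```python
from statistics import pstdev

def distance_variation(distances):
    """Standard deviation of the distance (metres) between two daters,
    sampled over the course of the date."""
    return pstdev(distances)

def predict_match(distances, threshold=0.1):
    """Hypothetical rule: a pair whose interpersonal distance varies a lot
    (leaning in, drawing back) is predicted to be a match."""
    return distance_variation(distances) > threshold

# A pair whose distance fluctuates vs. a pair that stays rigidly apart.
lively = [0.8, 0.6, 0.9, 0.5, 0.7]
rigid = [0.8, 0.8, 0.8, 0.8, 0.8]
print(predict_match(lively), predict_match(rigid))  # True False
```

In practice one would learn the threshold (or a full classifier) from the labeled outcomes, but the structure is the same: a social signal reduced to a measurable feature.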
And the core idea behind making machines socially intelligent is that sentient beings need social referencing to learn. We do this much more than chimpanzees, for instance. We hold our babies, we look at them, we put them in a chair, we talk to them, we wave at them, we show them objects. And babies emulate our behavior and they learn. And it’s thought that this social referencing leads to empathy, theory of mind, and even enculturation.
Enculturation allows us, much more effectively than chimpanzees for instance, to teach children how to use the thinking tools that we all have in our brains. Just to compare: it takes a chimpanzee child about seven years to learn to hit a nut with a rock.
So, social referencing is so great that robots should do it, too. But first there are some technical challenges that we need to solve, for instance low energy consumption. Beyond these technical challenges, what these robots really need to do is understand the social environment that they are in.
So this robot, the SPENCER robot at Schiphol airport, needs to go around a little family and not drive through them. And here you see the FROG robot. It’s an outdoor guide robot, and this robot has to respond naturally to people. It has to behave naturally. And you see a blind person trying to understand FROG and FROG trying to understand the blind person. The model that FROG has of its users must also deal with impairments.
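One common way to get “go around the family, not through them” is to treat a detected group of people as a single obstacle rather than as individuals. The sketch below is a simplification under assumed inputs (2D positions and waypoints); real systems like SPENCER use richer group detection and cost maps.

```python
from math import hypot

def group_keep_out(positions, margin=0.5):
    """Wrap a detected group of people in one circular keep-out zone:
    the group centroid, with a radius reaching the farthest member
    plus a safety margin (metres)."""
    cx = sum(x for x, _ in positions) / len(positions)
    cy = sum(y for _, y in positions) / len(positions)
    radius = max(hypot(x - cx, y - cy) for x, y in positions) + margin
    return (cx, cy), radius

def path_is_socially_ok(waypoints, positions, margin=0.5):
    """Reject any planned path whose waypoints cut through the group."""
    (cx, cy), r = group_keep_out(positions, margin)
    return all(hypot(x - cx, y - cy) > r for x, y in waypoints)

# A family standing together; a path through them fails, a detour passes.
family = [(2.0, 2.0), (2.5, 2.0), (2.25, 2.5)]
print(path_is_socially_ok([(0, 0), (2.2, 2.2), (4, 4)], family))  # False
print(path_is_socially_ok([(0, 0), (0, 4), (4, 4)], family))      # True
```

The design choice matters socially: planning around each person separately can send the robot *between* family members, which is exactly the behavior the talk says must be avoided.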
And what happens when we put these behaviors into robots is that they are easier to understand, because they are much more familiar. But it also means that they are becoming very rich social characters. And it may even mean that we empathize with them, or maybe even develop a social bond with such robots.
This is the EASEL robot, and you can see a boy really engaging with it. Beyond the technical challenges I told you about, like getting these robots to run for long periods of time, a very large challenge is our own fear. Because we are not quite sure what happens to people when they start relating to robots, and whether it changes how they relate to people.
So it’s extremely important that we design and develop these robots to fit the role they have to fulfill. Here you see a TERESA telepresence robot that elderly people can use to participate at a distance. They log into the room from a remote location, and over time the artificial intelligence takes over their positioning behavior so they can focus on socializing.
These robots need to seamlessly integrate into our environments, but they are hugely integrated machines themselves. They need machine learning, computer vision, planning, reasoning, hardware, manipulation, navigation. So you can’t just build a robot with one discipline; you need a lot of disciplines to come together to realize an optimal solution.
There are many applications that we can think of for social robots. You can think of an exoskeleton that knows where your friends are. You can think of drones that know the difference between civilians and soldiers. A search and rescue robot that knows when someone is in pain. A car that knows that you’re distracted. A house that knows that someone doesn’t belong there.
So while we are all thinking about what robots can do, we should make robots think about us and what we do. I’ve shown you that machines can respond to and deal with our social behaviors, and I’m very interested to know what social robotics could mean in your field of interest or passion. Thanks.
Further Reference
Vanessa Evers’ home page, and her faculty profile at the University of Twente