Illah Nourbakhsh: I want to start with two risks that robotics challenges humanity with: empowerment and dignity.
Movies let us imagine the future. This is Fail Safe, 1964. You should all watch it. In this movie, we see two great nations disempowered by autonomous robotic technology. The bad news is that reality has caught up with that movie.
BumBot here, made by this gentleman at home, is a robot with high-pressure water cannons that scares the homeless off the sidewalk in front of his restaurant. The robotics movement, just like the Internet, can amplify the best and the worst in us humans. The difference is that this is in the physical world.
And when you think about what we could be doing that’s empowering instead of disempowering, we’d have to start with the children. Instead of having our children become consumers of robotic technology, consumers of products, we’d have to train them to be producers, to realize that they can use robotic technologies to build something with their intuition, their creativity, and their sense of purpose, something that has meaning to them. Then we’d have a technologically fluent society.
But that’s not what we’re doing. Schools around the world don’t teach technological fluency. Instead we use robotic products as robotics advances beyond our imagination. In fact, it advances to the point where robots become difficult to distinguish from humans. As my colleague Tony pointed out, they get better and better at doing everything we can do. And I have a specific example for you from my friend in Japan, [Hiroshi] Ishiguro, who is actually designing robots to look more and more like us. My next picture is Ishiguro with his robot, a geminoid.
When you have robots that tend to look like us, and over time act like us, and perceive like us, and expect us to interact with them the same way, we lose our identity as people, because we’re confusing the identity of the machine with the identity of humanity. How will we behave when faced with machines that look like us? How will we discriminate between them and ourselves, and how will that disempower us? It doesn’t have to be that way. Robotic technologies can celebrate our identity as humans.
In the Hear Me project at Carnegie Mellon, we help children tell the story of the challenges they face at school. They create media, then they build robots that tell their stories, and we spread those robots through restaurants and cafes around Pittsburgh, so that the adults in those cafes and restaurants who make decisions about those children hear the children’s stories through robots. Those are robots for empowerment. Those are robots that celebrate the identity and the challenges that we face as humans. But that’s not frequently the direction we go in.
Now let me turn to the side of dignity. And this is a really important point about humanity: how we treat autonomous machines matters. There’s a famous experiment where you bring in participants who play with the robot on the right using a flashlight. Then the researcher says, “The robot’s not performing well. Now kill it,” and gives you a hammer. And they video record this. They watch how many times you smash it. This is playing with fire. We’re creating robots in which people perceive agency. Then we’re asking them to treat the robot inhumanely, on purpose. It doesn’t have to be that way.
Keepon, developed in Japan and at Carnegie Mellon, teaches autistic children to dance. Because it’s not human-looking, it’s easier for autistic children to deal with. That’s the direction we ought to be going, but how many products do that? Very few indeed.
One final challenge for you, and an example of what we could be doing. Air pollution, as you all know, is a massive problem globally. It kills more people than breast cancer, AIDS, prostate cancer, and car crashes put together. So particulate matter matters. But it’s invisible. And it’s not just an urban problem; we know it exists in rural areas. I myself have gone to Uganda, to Bukoba and [Simbi?], and measured air pollution in single-room houses like that at particulate levels a thousand times above safe limits.
So we know this is a problem, and heartbreakingly, that’s the house where somebody lives, sleeps, eats, and cooks. So children are, day and night, subjected to exactly the things that cause cardiovascular disease and pulmonary disease, and that have even now been linked to autism and ADHD.
So we know that is a major problem. Yet we know how to make robots that can sense the particulate matter; we’ve demonstrated this at CMU. They can sense the particulates, display the readings to the homeowner, run an electric fan charged by solar power to exhaust the smoke when necessary, and even provide light in an environment, the indoor Ugandan kitchen, that has forever been dark.
That’s the kind of robot technology we know how to create. It creates dignity for humanity, and it empowers that person to understand their house as a system where they can manage the air pollution. But that’s not the direction we tend to go in robotics. It doesn’t have the profit margin for our business society.
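To make the sense-display-exhaust idea concrete, here is a minimal sketch of the kind of monitoring loop such a robot could run. It is purely illustrative and not the CMU system: the sensor, fan, and display functions are stand-ins for real hardware drivers, and the 35 µg/m³ cutoff is an assumed threshold, since actual guidelines vary.

    # Hypothetical sketch: sense particulates, show the reading to the homeowner,
    # and switch a solar-charged exhaust fan when levels are unsafe.
    import random
    import time

    PM25_UNSAFE_UG_M3 = 35.0  # assumed cutoff in micrograms per cubic meter


    def read_pm25() -> float:
        """Stand-in for a real particulate-matter sensor driver."""
        return random.uniform(5.0, 500.0)  # simulated reading for this sketch


    def set_fan(on: bool) -> None:
        """Stand-in for switching the solar-charged exhaust fan."""
        print("fan", "ON" if on else "OFF")


    def show_reading(pm25: float) -> None:
        """Stand-in for the homeowner-facing display (which can double as a light)."""
        print(f"PM2.5: {pm25:.0f} µg/m³")


    def monitor(cycles: int = 5, interval_s: float = 1.0) -> None:
        """Simple loop: read, display, and exhaust smoke only when needed."""
        for _ in range(cycles):
            pm25 = read_pm25()
            show_reading(pm25)
            set_fan(pm25 > PM25_UNSAFE_UG_M3)
            time.sleep(interval_s)


    if __name__ == "__main__":
        monitor()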
So I want to end by pointing out that if we started with the youngsters and taught them robotic technology as producers, as creators of artifacts, and we’ve demonstrated this in Pittsburgh, you can go far in changing the power dynamics. But I’m going to say the risks outweigh the opportunities until we decide that robots are not products, but raw material for people who are technologically fluent to create a new society.
Further Reference
Illah Nourbakhsh at the Carnegie Mellon University Robotics Institute.
Robot Futures, Illah’s book exploring speculative robot interaction scenarios, and associated blog.