Could we make a moral machine? Could we build a robot capable of deciding or moderating its actions on the basis of ethical rules? Three years ago I thought the idea impossible, but I've changed my mind. So, what brought about this U-turn?

First was thinking about simple ethical behaviors. So imagine someone not looking where they're going. You know, looking at their smartphone, about to walk into a hole in the ground. You will probably intervene. Now, why is that? It's not just because you're a good person. It's because you have the cognitive machinery to predict the consequences of their actions.

Now imagine it's not you but a robot, and the robot has four possible next actions. So, from the robot's perspective, it could stand still or turn to its left, and in both of those cases the human will come to harm, will fall in the hole.


But if the robot could predict the consequences of both its and the human's actions, then another possibility opens up. It could choose to collide with the human to prevent them from falling in the hole. And if we express this as an ethical rule, which you see here, it looks remarkably like Asimov's First Law of Robotics, which is that a robot must not injure a human or, through inaction, allow a human to come to harm.

Thus emerged the idea that we could build an Asimovian robot. We need to equip the robot with the ability to predict the consequences of both its own actions and others' actions in its environment, plus the ethical rule that I showed you on the previous slide.
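To make this concrete, here is a minimal sketch, in Python, of how an Asimov-like rule could be expressed as a choice among predicted outcomes. The `Outcome` fields, the action names, and the ranking order are illustrative assumptions, not the rule actually used in Winfield's experiments.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted result of one candidate robot action (illustrative)."""
    human_harmed: bool     # does the human end up in the hole?
    robot_harmed: bool     # does the robot end up in the hole?
    goal_progress: float   # progress toward the robot's own goal (0..1)

def choose_action(predictions):
    """Pick the action whose predicted outcome best satisfies an
    Asimov-like rule: first avoid harm to the human, then avoid harm
    to the robot, and only then pursue the robot's own goal."""
    def rank(item):
        _action, outcome = item
        return (outcome.human_harmed, outcome.robot_harmed, -outcome.goal_progress)
    best_action, _ = min(predictions.items(), key=rank)
    return best_action

# Standing still, turning left, or carrying straight on all let the human
# fall in the hole; intercepting (a gentle collision) prevents the harm.
predictions = {
    "stand_still": Outcome(human_harmed=True,  robot_harmed=False, goal_progress=0.0),
    "turn_left":   Outcome(human_harmed=True,  robot_harmed=False, goal_progress=0.2),
    "ahead":       Outcome(human_harmed=True,  robot_harmed=False, goal_progress=1.0),
    "intercept":   Outcome(human_harmed=False, robot_harmed=False, goal_progress=0.3),
}
print(choose_action(predictions))  # -> intercept
```

The ranking encodes the First-Law priority: predicted harm to the human dominates everything else, then harm to the robot, and only then the robot's own goal.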

Screenshot of a robot simulator showing several robots playing what looks like soccer

Image: Webots

In fact, the technology that we need to do this exists, and it's called the robot simulator. So, roboticists use robot simulators all the time to model and test our robot code in a virtual world before running that code on the real robot. But the idea of putting a robot simulator inside a robot, well, it's not a new idea, but it's tricky and very few people have pulled it off. In fact, it takes a bit of getting your head round. The robot needs to have, inside itself, a simulation of itself, its environment, and others in its environment. And running in real time as well.
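Continuing the sketch above (and reusing its hypothetical `Outcome` and `choose_action`), the simulation-in-the-loop idea might look roughly like this: for each candidate next action, run an internal simulation of the robot and the human forward in time, and pass the predicted outcomes to the ethical rule. The simulator interface here (`sim.run` and the fields on its result) is invented for illustration.

```python
# For each candidate next action, the consequence engine runs an internal
# simulation of the robot and the human, then hands the predicted outcomes
# to choose_action() above. InternalSimulator/sim.run are hypothetical names.

CANDIDATE_ACTIONS = ["stand_still", "turn_left", "ahead", "intercept"]

def consequence_engine(sim, world_state, horizon_s=5.0):
    """Predict the consequence of each candidate action by simulating
    both the robot and the human horizon_s seconds into the future."""
    predictions = {}
    for action in CANDIDATE_ACTIONS:
        predicted = sim.run(world_state, robot_action=action, duration=horizon_s)
        predictions[action] = Outcome(
            human_harmed=predicted.human_in_danger_zone,
            robot_harmed=predicted.robot_in_danger_zone,
            goal_progress=predicted.robot_goal_progress,
        )
    return predictions

# On the real robot this loop has to finish within each control cycle, so the
# internal simulation is necessarily much coarser than an offline simulator:
#   predictions = consequence_engine(sim, current_world_state)
#   next_action = choose_action(predictions)
```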

Three robots standing in a room, with one inside a bounded-off "danger zone"

So, over the past two years we've actually tested these ideas with real robots. In fact, these are the robots. We don't have a hole in the ground; we have a danger zone. And we use robots instead of humans. We use robots as proxy humans. So let me show you some of our latest experimental results.

Here we have the blue robot, the ethical robot, heading towards a destination. This is its goal. But it notices right here that the red robot, the human, is heading toward danger. So the blue robot chooses to divert from its path to collide (a gentle collision) with the human, to prevent it from coming to harm. This is exactly the same thing, but as a short movie clip. You can see again, the blue robot is the ethical robot. Our red robot is the proxy human. Cute robots, aren't they?

So, we also tested the same setup with an ethical dilemma. Here our ethical robot is faced with two humans heading toward danger. It rather dithers, rather hesitant, and of course it cannot save them both. There isn't time. Ethical dilemmas are a problem really for ethicists, not roboticists.

So, how ethical is our ethical robot? Our robot implements a form of consequentialist ethics. In fact, we call the internal model a consequence engine. The robot behaves ethically not because it chooses to, but because it's programmed to do so. We call it an ethical zombie. Our approach has a huge advantage, which is that the internal process of making ethical decisions is completely transparent. So if something goes wrong, then we can replay what the robot was thinking. I believe that this is going to be really important in the future, that autonomous robots will need the equivalent of a flight data recorder in aircraft. An ethical black box.
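The talk doesn't describe how an ethical black box would be implemented; as one possible sketch, the robot could append every decision cycle of the consequence engine, with its candidate actions, predicted outcomes, and chosen action, to an append-only log that can be replayed later. All names below are hypothetical.

```python
import json
import time

def log_decision(logfile, world_state, predictions, chosen_action):
    """Append one decision cycle to an 'ethical black box' log so that the
    robot's reasoning can be replayed after the fact (illustrative)."""
    record = {
        "timestamp": time.time(),
        "world_state": world_state,   # e.g. poses of the robot and the humans
        "predictions": {a: vars(o) for a, o in predictions.items()},
        "chosen_action": chosen_action,
    }
    logfile.write(json.dumps(record) + "\n")  # append-only, one JSON record per line

# with open("ethical_black_box.jsonl", "a") as f:
#     log_decision(f, {"robot": [0.0, 0.0], "human": [1.2, 0.4]},
#                  predictions, next_action)
```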

So, what have we learned? Well, the biggest lesson, in fact the thing that caused my U-turn, is this: we do not need to make sentient robots to make ethical robots. In other words, we don't need a major breakthrough in AI to build at least a minimally ethical robot. We don't need to build Data from Star Trek.

I'd like to leave you with a question about the ethics of ethical robots. If we can build even minimally ethical robots, are we morally compelled to do so? Well, with driverless cars just around the corner, I think it's a question that we're going to have to face really quite soon. So thank you very much indeed for listening. Thank you.

Further Reference

Alan Winfield's blog with follow-up post and slides, and his staff profile on the University of the West of England, Bristol website.

