The original video for this presentation can be found at The Conference’s site.

Photo of a small robot seated by the side of the road

Can anyone tell me what this is? HitchBOT! Yes. It’s hitchBOT. What does hitchBOT do? He hitchhikes. HitchBOT asks people to put it in their car and take it somewhere, and as many of you have heard, this robot made it all the way across Canada and through some parts of Europe, relying purely on the kindness of strangers, and then two weeks ago hitchBOT was vandalized. It was trying to cross the United States and someone broke it beyond repair.

And honestly I was a little bit surprised that it took this long for something bad to happen to hitchBOT. But I was even more surprised by the amount of attention that this case got. I mean, it made international headlines and there was an outpouring of sympathy and support from thousands and thousands of people for hitchBOT. They took to Twitter and expressed how sad they were. “HitchBOT was attacked.” “HitchBOT is dead because humans are awful.” “Cecil the lion and hitchBOT the robot in the same week.”

Now, you could say of course people are upset. This was an act of vandalism, and we condemn that, when people have no respect for other people’s property and they render it valueless. No matter what it is, if it’s a car, we don’t like that behavior. But in this case it’s also interesting that people are apologizing directly to hitchBOT. “HitchBOT, I’m so sorry.” And I’m sure that Dana Mitchell and the hundreds of other people like her know that they’re just talking to a robot that does not understand them, and not only because it’s broken.

So why are all of these informed adults sympathizing directly with hitchBOT? The answer is anthropomorphism. This is our tendency to project lifelike qualities onto other entities and to emotionally relate to them as a result. And this is why I’m super interested in it in the context of robotics. And I think it’s not only interesting but also a really timely subject. It’s timely because robots aren’t anything new. We’ve had robots for years. But the robots have been present kind of behind the scenes in manufacturing and factory contexts, and what’s happening now is that they’re entering into all of these new areas of our lives. Hospitals and transportation systems and the military. And they’re coming into our workplaces and our households.

So what’s really new about robots is that they’re going to be everywhere. And it’s also nothing new that we can emotionally relate to objects. People have always had the tendency to fall in love with cars and gadgets and stuffed animals. But the new thing about robots is that this effect tends to be more intense. Those of us who work in human-robot interaction think that this is because of the interplay of three factors.

The first factor is physicality. People are also able to fall in love with virtual objects. This is the Companion Cube from the video game Portal, which a lot of people are very fond of. But studies are showing that we’re very physical creatures, and we respond very differently to something that’s in our physical space versus something that’s virtual on a screen.

The second factor is movement. Robots move, and when anything moves in our physical space in a way that we can’t quite anticipate, we’re biologically hardwired to automatically project intent onto that movement. You see this even with really simple examples like the Roomba vacuum cleaner. It just moves around randomly on your floor to clean it, and it doesn’t know the difference between you and a chair. But just the fact that it’s moving around causes people to name the Roomba, and to feel bad for the Roomba when it gets stuck under the couch. So it starts there, and then there are much more extreme examples, like the countless stories of soldiers in the United States military who become emotionally attached to the robots that they’re working with.

They’ll name them, and they’ll give them medals, and when they’re broken and need to be repaired the soldiers want the exact same one back, not a different one. And if they can’t repair them they’ll hold funerals for the robots, with gun salutes. There are even stories, in Peter Singer’s book Wired for War, of soldiers actually risking their lives to save the robots that they’re working with. And what’s really interesting here is that these robots aren’t designed to elicit this response at all. They’re just meant to be tools.

So that brings us to the third factor, which is a whole new category of robots that are specifically designed to make us respond to them in this way. They have faces and eyes, and they’re cute, and they mimic all of these sounds and movements that we automatically and subconsciously associate with states of mind. So what we’re seeing is that the effect becomes really strong with these robots, and studies are showing that people respond to these cues even if we’re perfectly aware that this is just a machine.

If you work in social robotics this is awesome, because it means you can create so much engagement with this technology. And we’re already seeing this put to really great uses in health and education, for example. There’s the Nao next-generation robot that can work with autistic children and effectively bridge the communication between parent and child. We have some cute robots at the MIT Media Lab that teach children reading skills and storytelling and coding skills. And they’re really engaging, because who wouldn’t want to learn languages from a fluffy dragon instead of an adult?

And it’s not just for kids. We have robots motivating adults to do things. There’s a weight-loss coach robot that is more effective than conventional methods because people are engaging with what they perceive to be a social actor. And we have the Paro seal that’s used in elderly care and with dementia patients. It’s kind of brilliant because it gives people the sense of nurturing something instead of just being the ones who are cared for all of the time. It’s even been used as an alternative to medication for calming distressed patients.

So that’s kind of cool, and we’re also starting to see that we’ll be able to use robots in place of animal therapy in a lot of contexts where we can’t use animal therapy, which is quite a few contexts. And the reason it works is that people will treat certain robots more like an animal than a machine or a device. There’s actually a really great example of this with very simple technology, a recent example from Japan.

A Japanese monk kneeling before several shelves filled with Aibo robot dogs

Sony used to make this robot dog called the Aibo, and they stopped selling it a while ago. But they just recently pulled the tech support [video skips ~10secs] funerals. So that’s kind of adorable and heartwarming, or maybe a little creepy, depending on how you feel about it.

And actually I do want to talk about the dark side of this for a little as well. Or rather, the issues that I and others think need to be addressed moving forward. Just to give you an overview of some of the concerns: a lot of these robots are being used with the elderly and with children, as I mentioned, and there are some questions of human autonomy and human dignity if you’re deceiving people into treating something like it’s alive when really it isn’t. And then there are some questions of supplementing versus replacing human care. Honestly, a lot of the robots that I see being developed are definitely there to supplement human care and not replace it. But we don’t know how this technology’s going to be used down the road, and it’s worth keeping in mind that if we start replacing human care, we don’t know what aspects are going to get lost in that process.

Another big issue is privacy and data security, because these robots will be entering into more intimate areas like our households, and they’ll be collecting personal data in order to function better as social robots, and I do not see the companies working on this technology really caring enough about privacy and data security at the moment.

Another issue is emotional manipulation. If people are emotionally engaging with robots, is it okay for my companion robot to have in-app purchases? Or is it okay for my grandfather’s robot pet to suddenly need a mandatory software upgrade that costs $10,000? This is something that we could decide to let the market regulate, or it might be something that we need consumer protection laws for. It’s a little early for that now, but I think this is going to become an issue within the next few years, the next decade or so.

And then we have contexts where we don’t want people to anthropomorphize robots, and we don’t really know how to prevent that. In the military examples, again, it can be anything from inefficient to dangerous for people to get emotionally attached to the tools that they’re working with, and we currently don’t really know how to stop that from happening.

So these are some of the issues that I think we should be thinking about as this moves forward. And I do think we should be addressing them within the recognition that this is incredibly useful technology. I don’t want to throw the baby out with the bathwater. I think we can talk about privacy and consumer protection and all of the ethical issues without dismissing the potential of the technology.

And in the meantime I also think it’s just a really fascinating area to study, because when we look at this anthropomorphization of robots, we’re actually learning more about human psychology in the process.

I’d like to show a video that some of you may have seen. It got a little bit of attention last February. This is a company called Boston Dynamics. They’re owned by Google, and they make these military robots that are very animal-like, or human-like, and this is a video where they introduce their newest robot, which is named Spot. It looks a little bit like a dog, and what’s going to happen in the video is they’re going to kick it, and then once more a little harder. [Video plays through to ~0:35]

It was interesting when this video came out. Obviously they’re kicking the robot to demonstrate how stable it is. But it does skid around in a very dog-like way, and so a lot of people expressed very negative emotions about this video. They took to Twitter and online comments to say that this was disturbing. And it got to the point where PETA, the animal rights organization, was getting so many phone calls that they had to issue a press statement, and they said basically, “We’re not going to lose any sleep over this cuz it’s not a real dog.” But they did say it makes sense that people find the idea of this violence inappropriate.

Now, violence and empathy in robotics is my main interest, and I’ve been working on this for quite some time. A few years ago my friend Hannes Gassert and I did a workshop at the Lift conference in Geneva. We took these Pleo dinosaur robots. This is a $500 toy. It’s really cute. They respond when you touch them, and if you hold them up by the tail, they cry and get really upset, and you have to put them down and calm them. So we gave these robots to groups of people and had them name them and play with them. I think we had five of them. And then we asked them to torture and kill them.

Photos of a robot dinosaur leashed to a chair, and another of a group of people crouched over a different robot dinosaur with a hatchet on the ground next to it

We thought it would be interesting or funny. It actually turned out to be more dramatic than we expected. People really refused to even strike the Pleos, and we had to kind of play mind games with them and force them to get a little more brutal. In the end only one of the Pleos died. Four of them are still living happily somewhere. But I came away from this workshop feeling that this was super interesting, but also that I didn’t learn enough from it, because it wasn’t a controlled setting and it wasn’t an experiment. You can’t tell whether people are hesitating because the robot costs $500 or out of empathy, and there are a lot of social dynamics at play. So I went back to the lab, and since then I’ve been working with cheaper robots, because you can’t use $500 robots if they’re going to get smashed.

I’ve been working with a research partner named Palash Nandy. We’ve been using these HEXBUGs. They’re just a toy that you can buy, and they move around like little insects. So we have people come into the lab and smash them with mallets. We’ve been interested in a few different factors, but one of the things that we first tested was how people respond if you personify the robot. If you say, “This is Frank. Frank’s favorite color is red, and he likes to play,” will people then hesitate more to smash Frank?

But the other thing we were interested in was the relationship between people’s natural tendency for empathy and how they would respond to the robot. So we did psychological empathy testing with people, and we found that people with low empathic concern for others didn’t care about Frank. They would just smash Frank. And people with high empathic concern responded really strongly to the personification and the storytelling. And it’s kind of cool, because what we’ve come up with is a version of the Voight-Kampff test from Blade Runner. I don’t know if you guys know this concept. How many of you have seen Blade Runner or read the book? [Most of visible audience raises hands] Wow. That is a lot of people.

Okay, for those of you who have not, you should. It’s a total classic, and I won’t spoil anything. But as many of you know, it takes place in a world where robots and humans look exactly the same, so they develop this test to distinguish between them, where they use storytelling and measure people’s empathic responses to tell whether you’re a human or a robot. So what we’ve done is like a mashup of that, where we can tell whether you’re an empathic human or not depending on how you respond to storytelling around robots.

So that’s kind of cool. But it’s actually not the question that I’m most interested in answering. The question that I’m most interested in is not can we measure people’s empathy with robots, but can we change people’s empathy with robots? So for example, if you’re the guy whose job it is to kick the dog-like robot all day, could that possibly desensitize you to kicking an actual dog? And then there’s the more positive flipside, which is: might we be able to use robots to encourage people to be more empathic? Could we work with children and prisoners, or anyone really, and kind of encourage empathy in people with robots?

So those are the questions I’m interested in. I’m generally interested in all of the questions that I’ve raised here today, and I think they encompass the core of what I view as robot ethics. (And I should say I love robots, probably more than anyone else.) But I don’t think that robot ethics is actually about robots. I think that it’s about humans. I think that it’s about our relationship with robots, but mainly robot ethics is about our relationship with each other.

Thank you.

Further Reference

Kate’s home page.

Description at The Conference’s site of the session this talk was part of, and Kate’s speaker bio.

Kate previously spoke to CBC’s Spark program about her research, including more detail on the Pleo experiment. There’s also an extended interview.