Illah Nourbakhsh: I want to start with two risks that robotics challenges humanity with: empowerment and dignity.

A man seated at an office desk, reaching to push a button on a machine on the desktop

Movies let us imagine the future. This is Fail Safe, 1964. You should all watch it. In this movie, we see two great nations disempowered by autonomous robotic technology. The bad news is that reality has caught up with that movie.

BumBot here, made by this gentleman at home, a robot with high-pressure water cannons that scares the homeless off the sidewalk in front of his restaurant. The robotics movement, just like the Internet, can amplify the best and the worst in us humans. The difference is this is in the physical world.

And when you think about what we could be doing that's empowering instead of disempowering, we'd have to start with the children. What we should be doing now, instead of having our children become consumers of robotics technology, consumers of products, is training them to be producers: to realize that they can use robotic technologies to build something with their intuition, their creativity, and their sense of purpose, something that has meaning to them. Then we'd have a technologically fluent society.

But that's not what we're doing. Schools around the world don't teach technology fluency. Instead we use robotic products as robotics advances beyond our imagination. In fact, it advances to the point where the robots become difficult to distinguish from the humans. As my colleague Tony pointed out, they get better and better at doing everything we can do. And I have a specific example for you from my friend in Japan, [Hiroshi] Ishiguro, who is actually designing robots to look more and more like us. My next picture is Ishiguro with his robot, a geminoid.

Hiroshi Ishiguro, shown from the waist up leaning slightly forward with his hand in front of him, mirrored by a robot modeled on his features

When you have robots that tend to look like us, and over time act like us, and perceive like us, and expect us to interact with them the same way, we lose our identity as people, because we're confusing the identity of the machine with the identity of humanity. How will we behave when faced with machines that look like us? How will we discriminate against them, and how will that disempower us? It doesn't have to be that way. Robotic technologies can celebrate our identity as humans.

In the Hear Me project at Carnegie Mellon, we help children tell the story of the challenges they face at school. They create media, then they build robots that tell their stories, and we spread them in restaurants and cafes around Pittsburgh, so that the adults in those cafes and restaurants who make decisions about those children hear the stories of those children through robots. Those are robots for empowerment. Those are robots that celebrate the identity and challenges that we face as humans. But that's not frequently the direction we go in.

A hammer, flashlight, and small robotic toy laid out on a tabletop.

Now let me turn to the side of dignity. And this is a really important point about humanity. How we treat autonomous machines matters. There's a famous experiment where you bring in participants who play with the robot on the right using a flashlight. Then the researcher says, "The robot's not performing well. Now kill it," and gives you a hammer. And they video record this. They watch how many times you smash it. This is playing with fire. We're creating robots in which people perceive agency. Then we're asking them to treat the robot inhumanely, on purpose. It doesn't have to be that way.

A small robot, roughly resembling two yellow foam balls stacked atop each other, with eyes and a nose.

Keepon, developed in Japan and at Carnegie Mellon, teaches autistic children to dance. Because it's not human-looking, it's easier for autistic children to deal with. That's the direction we ought to be going, but how many products do that? Very few indeed.

One final challenge for you is an example of what we could be doing. Air pollution, as you all know, is a massive problem globally. It kills more people than breast cancer, AIDS, prostate cancer, and car crashes put together. So particulate matter matters. But it's invisible. And it's not just an urban problem. We know this problem exists in rural areas. I myself have gone to Uganda, Bukoba and [Simbi?], and measured air pollution in single-room houses like that at a thousand times unsafe levels of particulates.

So we know this is a problem, and heartbreakingly, that's the house where somebody lives, sleeps, eats, and cooks. So children are, day and night, subjected to exactly the things that cause cardiovascular disease and pulmonary disease, and that have even now been linked to autism and ADHD.

So we know that is a major problem. Yet we know how to make robots that can sense particulate matter. We've demonstrated this at CMU again. They can sense the particulate matter. They can display it to the homeowner. They can run an electric fan charged by solar power and exhaust the smoke when necessary. And they can even provide light in an environment, the indoor Ugandan kitchen, which has forever been dark.
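The sense-display-vent behavior described here amounts to a simple threshold controller. As a minimal sketch only: the function names, the battery flag, and the use of the WHO 24-hour PM2.5 guideline as the threshold are illustrative assumptions, not the actual CMU design.

```python
# Illustrative sketch of a sense-display-vent loop for an indoor
# air-quality robot. Names and thresholds are assumptions for
# illustration, not the CMU implementation described in the talk.

WHO_PM25_LIMIT = 15.0  # WHO (2021) 24-hour PM2.5 guideline, micrograms/m^3

def control_step(pm25_ug_m3, battery_charged=True):
    """Decide fan and display state from one particulate reading."""
    unsafe = pm25_ug_m3 > WHO_PM25_LIMIT
    # Run the solar-charged exhaust fan only when the air is unsafe
    # and the battery has charge.
    fan_on = battery_charged and unsafe
    status = "unsafe" if unsafe else "safe"
    return {"fan": fan_on, "display": f"PM2.5 {pm25_ug_m3:.0f} ({status})"}

# A reading a thousand times the guideline, like the Ugandan
# single-room houses mentioned in the talk:
print(control_step(15000.0))
```

The point of the sketch is how little machinery is needed: one sensor reading, one comparison, and the robot both informs the homeowner and acts on their behalf.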

That's the kind of robot technology we know how to create. That creates dignity for humanity, and it empowers that person to understand the system of their house as a place where they can manage the air pollution. But that's not the direction we tend to go in robotics. It doesn't have the profit margin for our business society.

So I want to end by pointing out that if we started with the youngsters and taught them robotic technology as producers, as creators of artifacts, and we've demonstrated this in Pittsburgh, you can go far in changing the power dynamics. But I'm going to say the risks outweigh the opportunities until we decide that robots are not products, but raw material for people who are technologically fluent to create a new society.

Further Reference

Illah Nourbakhsh at the Carnegie Mellon University Robotics Institute.

Robot Futures, Illah's book exploring speculative robot interaction scenarios, and associated blog.
