Molly Wright Steenson: Hi everybody. It is great to be here. I got to speak at the very first Interaction in 2008. And after running the awards for two years with Thomas Kueber and Rodrigo Vera, it is so awesome to be back here.

I wanted to start out by saying that I'm not an ethicist. But last year I had a really pretty amazing opportunity fall into my hands when I was awarded a named professorship in ethics and computational technologies. And what this gives me an opportunity to do is focus a little bit on the question of ethics for the next few years. And one way that I also tend to study things, I think I should point out, is through pop culture. So let's see where this is going.

Who here knows what the trolley problem is? Can I see a show of hands? Okay. It sounds like some people know it, some people don't. If you do know about it, maybe you've encountered it in…say I don't know, an undergrad philosophy class. It goes like this.

There is a trolley, and it's hurtling down the track. And there are five people tied to the track in front of the trolley. And you are the person who can access a switch and throw the trolley off that track and get it away from the five people. But in so doing, you kill one person.

What's the right thing to do? What do you think? Who would throw the switch? Okay. Who would not throw the switch? Who would say this is intractable and like, we can't figure this out? Uh huh. Right. Okay.

Well then there is another version of it. So what if there's a bridge and you are on top of the bridge, and you know that you can stop that out-of-control trolley from hitting those people if you throw something heavy off the bridge.

And next to you is a large person.

Okay, how many of you would throw the person off the bridge? A couple of you. And like 99% of you wouldn't.

And so the reason that we talk about the trolley problem is because of two philosophers, Philippa Foot and Judith Jarvis Thomson. Philippa Foot came up with the trolley problem in 1967, and Judith Jarvis Thomson introduced some of the variations, like the “fat man” variation as she calls it, in 1985. And the reason for these thought experiments is that they're an opportunity to look at the how and the why of some of these ethical and moral dilemmas.

You might also know the trolley problem from the show The Good Place—are there any Good Place watchers out here? [some cheering from audience] Alright. Ted Danson, everyone's favorite god, or demon depending on your point of view, and Chidi, very confused—he's the guy in the middle looking absolutely shocked as he's driving the trolley. He's an ethics professor who's trying to make all these humans better humans.

Now, there are lots of memes about the trolley problem. These are trolley problem memes from a Facebook group that landed there after 4Chan. I'm really happy about my trolley problem guy mug that I finally hunted down and got.

Illustration of two men talking, one a trolley salesman, captioned '*slaps roof of trolley* This bad boy can fit so much ethics in it.'

And this is probably my very favorite trolley problem meme. It's a cliché, right. I mean, the thing about the trolley problem is that we use it a lot when we talk about the tradeoffs of artificial intelligence. Or what we knew would be inevitable when a self-driving Uber hit and killed a pedestrian, right. These questions of is it right to kill one or the other, how, and why? What are the moral considerations.

Now, the Markkula Center for Applied Ethics at Santa Clara University has some really useful thinking and curricula around ethics. And you'll see their URL pop up at the bottom of a couple of slides. I really strongly recommend that you check out some of the things that they've put into use.

But one of the things they point out is that what ethics is not is easier to talk about than what ethics actually is. And some of the things that they say about what ethics is not include feelings. Those aren't ethics. And religion isn't ethics. Also law. That's not ethics. Science isn't ethics.

And instead they say, in terms of what ethics is, that you could kind of simplify it down to five approaches: utilitarian, rights, fairness and justice, common good, and virtue.

So utilitarian, hey! it's trolley problem guy again. And in this case, the utilitarian version is a question here of what is going to cause the greatest balance of good over harm. And in classic utilitarianism, it's not only permissible but it is the best option to throw that switch and kill the one person versus the five. And the question here of course that we end up asking is can the dignity of an individual be violated in order to save many others. Do you sacrifice one for the many, if it's for a much greater good? That's classic utilitarianism.

The rights approach, that’s Judith Jarvis Thompson’s ques­tion. Is it infring­ing upon the rights of oth­er peo­ple? And in this case it is a pret­ty big infringe­ment on the right of the large per­son stand­ing next to you on the bridge if you throw that per­son over.

There’s the ques­tion of fair­ness and jus­tice. Aristotle talks about equals among equals and priv­i­leg­ing equal­i­ty. So that’s what we’re talk­ing about with the fair­ness and the jus­tice approach to ethics.

There’s a com­mon good approach. Somewhere here in the audi­ence is the Dimeji Onafuwa, who was my PhD stu­dent at Carnegie Mellon. He talks a lot about com­mon­ing. What is com­mon to us all? The social rela­tion­ships that bind us. If we val­ue those, then that’s a com­mon good approach to ethics.

And then finally, virtue ethics. Or as this particular meme says, as you're experiencing the trolley problem you ask yourself, “What would the chill guy at the bar do, from the Interaction party?” And in this sense, this allows us to live up to our highest character potential. That's operating from a virtue ethics perspective. Five perspectives.

Now, I don't know if you guys noticed this… I think I did, given the amount of energy that we're looking at here, but 2018 and '19 seem to be the years of ethics. And I just did a cursory look on LexisNexis to see, if we looked at AI and ethics together, how many articles have been published, and to see how that changed over the years. And it's almost 6,000 articles since 2017 about AI and ethics, up from 311 in 2011. So, it's booming.

So why now? Why now? Okay, we know why now, because some really bad things have happened that need some ethical consideration. But part of me wonders a little bit if we're in a moment like 1999, when clients heard the word “usability” and then they'd say things like, “We need some usability with this web site.”

It didn't need usability; what they needed was good design. They needed good function. They needed to be understood. And I kind of wonder here if we need some ethics with this artificial intelligence.

So, I want to point out that Kathy Baxter, who's the Architect of Ethical AI Practice at Salesforce and someone who's very passionate about ethics, has collected at least fifty different ethics frameworks, toolkits, principles, codes of conduct, checklists, oaths, and manifestos. And she actually points out—she's heard from someone else—that the number may be like two hundred now.

And here are—you know, you can see some of them here. There's the Ethical OS that the Institute for the Future puts together. You can see that a company like Salesforce has supported ethics in a number of ways, not least because of their recent hire of Paula Goldman as Chief Ethical and Humane Use Officer. “Of Oaths and Checklists” is an editorial by O'Reilly. There are the Asilomar AI Principles that were put together by a community. And the Framework for Trustworthy AI in the EU.

I want to point out that there are physical ethics frameworks right here in the building. This is Microsoft's area. I like thinking of it as a very nice ethics hut. And if you stop by I think they're going to have six different opportunities to use their game, this card deck called “Judgment Call.” So do stop by and check out what they're doing over there. It's an interesting way, I think, to have a conversation around a literal physical framework of ethics.

I wanted to put up the faces of some of these people who are both inspiring me and helping me with some of this work. Kathy Baxter is on one side of the screen, and Josh Lefevre and Louise Larson are Masters students at Carnegie Mellon. And they have been working through a number of these different ethics frameworks to figure out what they do and what they don't do. Who they serve. Who they don't serve. Who their audience is. Whether they even define ethics. Ethical frameworks are a good thing. But how do we know if they're good? Or if they're even ethics? And again, some of these—in fact the majority of them—don't include a definition of ethics. We don't know what they're starting from.

And I want to point out that these tools are useful. They provoke thought. They build community; this is a community, here, that's come together today to listen a little bit and to talk about these questions.

These tools offer guidelines. And they help us figure out things like fairness. How to account for privacy. How to keep people safe. How to provide protection against unintended bias, or discrimination.

Last night, this was what Ruth Kikin-Gil said when we were having a beer. She said, “As a designer, I feel like my job is covering technology's ass.” And I think that's because design is where the rubber meets the road. So how can design help? And I'm going to provide a couple of high-level things and show a little bit more detail, and then kind of come back around to some of these other questions that we've looked at as we close.

We know that design is good at framing problems. And that designers are good at investigating the context of a problem by using human-centric approaches and understanding the needs of multiple stakeholders. But I think there are other things that we can do. We can be directly involved with data, and I really appreciate Holger's talk because I think he showed a number of the ways that this is very much the case. What and how data is to be collected is after all a design question.

I want to introduce you to the work of Mimi Onuoha—if you're not familiar with her she is…she's terrific. She's a designer as well as a critic and an artist, and she does a number of things like study how data collection works. And in the project The Library of Missing Datasets she came up with a list of sets of data that simply hadn't been collected. And if you flip through these file cabinets you'll discover there's nothing in the tabs and drawers.

She's keeping a GitHub repository where people can actually submit data that hasn't been collected so that it might be. And indeed one of the datasets is no longer missing: civilians killed in encounters with police and law enforcement agencies. Someone collected that dataset. When you collect data, you can have action.

She tells a story about working with a group of Asian actors on Broadway who pointed out that they weren't getting cast in very many shows. And so she did some data collection with them, and this is what she found.

The King and I is the show that everybody is cast in and there's not much going on anywhere else. But by visualizing this and collecting it and telling the story, change could be made. And I think again, Holger really nicely said this in his talk.

And of course… I almost think this is a corollary; it's not four, but it's maybe 3a: how data is visualized is definitely a design question.

But also I think there is a notion of design for interpretation. And this is a question of something out there. People talk about the black box, where we should be able to see inside and see what the algorithm is doing. Or a robot should be able to be stopped at any time and announce what it's doing. And we know that it isn't that simple. Because people who do this research don't know themselves what is happening, what the algorithms are doing.

And so this is a question of design for interpretation, versus design for explainability, or design for transparency. And let me explain. The Department of Defense is running an initiative—DARPA is running an initiative—called Explainable AI. And in this situation they talk about entering a new age of AI applications. And that in between, we don't really understand what's happening and so we need to find ways to explain it.

I gotta tell you, this image doesn't give me hope. Then again I mean, we all know how the military is with PowerPoint, and it's a very very strange thing.

But instead if we're designing for interpretation rather than transparency I think we begin to get somewhere, and this is something very important that we do as designers. My friends Mike Ananny and Kate Crawford wrote a piece a couple years ago talking about the ten things that being transparent can actually do. Transparency can be harmful. I mean, if anyone ever tells you everything they really think about you…it might not be good. It can intentionally confuse things. And it can have technical and temporal limitations, to name a few.

So instead they say what we need to do is design AI in a way that makes us interpret, that gives us the tools to understand what's going on and why. And this question of interpretability I think is important when you start looking at the complex issues that are out there in the world. There are examples like the Allegheny Family Screening Tool in my Pittsburgh, Pennsylvania. This is a tool that, when a call is made to Health and Human Services about child safety, assesses whether there's a risk of that child being removed from their family in the next two years. And on one hand, it's probably really good to have an algorithm that does that crunching for you and that figures it out. But we know that the way the data is collected may be biased. And we know that it may be biased against underrepresented minorities. And who is going to be more likely to be in the Allegheny Family Screening Tool? It's going to be those very underrepresented minorities.

So to this end I actually think it's pretty impressive that Carnegie Mellon, the Department of Health and Human Services, and a couple of institutions in New Zealand are working on auditing these algorithms in really interesting ways, and working with the communities to understand, and to kind of come to a conclusion about, what we do with life and death questions where algorithms could help and they could also really hurt.

And we see other things. We're at Amazon right now; Amazon had an AI recruiting tool that was really really biased against women, and so the company dropped it. Or you might see the book Uberland; this little screenshot of “When Your Boss Is an Algorithm” is a snippet from that book, and it's again looking at the stakes of something like Uber and what it does to the people who drive for it. And I'm happy to see that the Defense Innovation Board is going to explore the ethics of AI in war. Because that's really pretty important.

So, we know there are at least fifty if not 200 different ethics frameworks. And what we don't know is: are they proven? Do they work? How would we know? Here are a couple of examples for you. I kid you not, from our friends at the Department of Defense, this is the logo for Project Maven:

Okay. Just, just look at it again. Um…I'm not—I'm really not sure what's going on there. But if you recall, you may remember that Google employees fought back against Google working with the Department of Defense on Project Maven. Project Maven is computer vision analysis for drone targeting and could be used to kill people.

So, 4,000 or so employees at Google signed a letter, and they said, “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

As a result of that, Google published its AI principles back in June on the Google blog. And they're pretty simple, and they don't go very far. Which is a pretty standard critique of them. But there were principles in place that said, “Okay, we won't do Project Maven but we may still be doing defense work as long as it isn't going to be potentially killing people on a battlefield.” I want to point out also that six months later, just recently in December, they published an update saying that they've been working with the Markkula Center for Applied Ethics, and they've also brought in a group of internal and external experts to help advise them on what to do.

Other companies have made other decisions about working with the Department of Defense. Both Microsoft and Amazon doubled down on their commitments to defense work. I point this out not to say that one is the right answer versus the other. We don't have a field of AI without the Department of Defense and without what it did to support the field in its earliest years, back in the '50s and '60s. We don't have that, and we don't have the Internet. So everything that everyone in this room is doing for a living is somewhere tied back to something that is defense-funded. And I think it's difficult to keep these two things in mind, but I think it's important.

It's an 'A for effort' philosophy, in which companies that prioritize ethics can sometimes escape punishment when their ethics programs fail. — Hannah Clark

I also have a concern about ethics being used as window-dressing or as a mask for the misdeeds of companies. And this is a quote from something that I also initially became aware of through Kathy Baxter: a 2006 article in Fortune about what happened with ethics officers after 1991. When federal sentencing guidelines changed, the Chief Ethics Officer all of a sudden became a position that a lot of different companies had. And the reason they had it was because it made it less likely that they were going to be terribly punished for white collar crime. So ethics can be a hedge against litigation, and against regulation. And I know that that question of regulation is a major one for companies like Amazon, and Facebook, and Google.

Sometimes ethics is about checking boxes. I was talking to a professor who is at NYU, and he's just written a book—I wish I had the title off the top of my head—that will come out this fall, about African Americans and the Internet and technology. And he said that someone asked him, “Is there a checklist that I can follow? To make sure it's okay?”

He's like, “No, there's not a checklist; it's called very hard work.” And I'm concerned that some of these checklists are just kind of a case of: if we have some ethics on this web site, if we have some ethics on this AI, everything's gonna be fine. Something called ethics-washing or ethics-shopping, as Ben Wagner, who's a professor in Vienna, calls it. Do we just find the right framework and put it on our product, or put it on our practices, and feel like we did it? Is it sort of like sustainability and the green movement in 2004, 2005?

And a really big question I have is: if 2018 and 2019 is the year of ethics, what happens in 2021 when ethics is no longer a hot button topic? Did we solve it? Is it all better? I think it's more complicated than that.

https://www.youtube.com/watch?v=-N_RZJUAQY4

14 million views on YouTube, my friends. You don't hear it because you're laughing too hard, but he goes, “Uh oh!” And indeed, that's why I do want us to question what we really mean when we say “ethics.” Thank you.

Further Reference

Presentation listing page, including slide deck

