Kevin Bankston: Next up I’d like to reintroduce Chris Noessel, who in addition to his day job as a lead designer with IBM’s Watson team somehow finds the time to write books like Designing Agentive Technology: AI That Works for People and, a personal favorite of mine, Make It So: Interaction Design Lessons from Science Fiction, which led to him establishing the scifiinterfaces.com blog where he’s been doing some amazing work surveying the stories we are and aren’t telling each other about AI, which he’s going to talk about right now in our third and final solo talk.


Chris Noessel: So, hi. Thank you for that introduction; that saves me a little bit of time in what I’m about to do. I am an author. I’m a designer of non-Watson AI at IBM in my day job. And I am here to talk to you about a study that I’ve done for scifiinterfaces.com.

Let me begin with a hypothetical. Let’s say we were to go out and take a poll of the vox populi, the voice in the street, and ask them “What role would you say that AI should play in medical diagnosis?” Then we think about what their answers would be if we showed them this:

Baymax from the Big Hero 6 movie. Then think about how their answers would change if we then showed them this:

Which is the holographic doctor that we just mentioned from the Voyager series of Star Trek.

And then how of course would their answers change if we reminded them of Ash from Alien, who was ostensibly a doctor on that ship. Right?

These examples serve to illustrate that how people think about AI depends largely on how they know AI. And to the point, how most people know AI is through science fiction, which sort of raises the question, yeah? What stories are we telling ourselves about AI in science fiction?

So, I first came upon this question during an AI retreat in Norway, an unconference that was sort of sprung on us. They said, “Okay, what do you want to do here?” And I had just completed an analysis of the Forbidden Planet movie in the context of the Fermi Paradox, which required me to do a really broad-scope analysis, unlike the normal ones that I do on the blog. So I simply asked that one question.

But to answer that question takes a lot. I thought of course that I could do it in, like, a two-hour conference setting, but no, it took me several months after I got home. Because what I needed to do was look at all of the science fiction movies and television shows, and that’s quite a lot. I don’t think I’ve captured them all, and of course I am bound by English-speaking works for the most part, and certainly bound by movies and television. But I wound up with 147 titles in total, and actually all the data is live in a Google sheet that you can access if you like.

But I took a look at each one of those titles and tried to interpolate what the takeaway is. I said okay: if you were to watch this story, leave the cinema or get up off your couch, and be asked the question “So what should we do about AI?”, what would your answer be?

That led to a series of takeaways. And those takeaways run quite the gamut, everything from “Evil will use AI for evil” to “AI will seek to subjugate us,” which is the perennial Terminator example but of course also the Sentinels in The Matrix. In the diagram that I’m slowly building behind you, the bigger text represents the things that were seen more commonly throughout the survey of movies.

It also included things like “AI will be useful servants.” I mentioned that sort of happy era of sci-fi AI. Robby is part of that, and much more recently Baymax.

And it includes things like AI that is just straight-up evil. Like, you turn on the machine and it’s trying to kill you. There are comic examples like the Robot Devil from Futurama, but also bad and disturbing movies like Demon Seed, with the Proteus IV AI.

So once I did that, I had thirty-five takeaways that all connected back to the 147 properties I had gathered together. And if you head to the website you can actually see it. It’s hard to see in this projection, but there are lines that show which movies and TV shows connect to which takeaways. So if you’re enough of a nerd like me, you can actually study it and ask, “Where does RoboCop fit in all this?”
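
To make the shape of that data concrete, here is a minimal sketch in Python of how such a title-to-takeaway mapping could be represented; the titles and takeaways below are hypothetical stand-ins for the 147 properties and thirty-five takeaways in the actual Google sheet.

```python
# A minimal sketch of the survey's shape, with hypothetical entries:
# each title maps to the takeaways it expresses.
from collections import Counter

title_takeaways = {
    "The Terminator": ["AI will seek to subjugate us"],
    "The Matrix": ["AI will seek to subjugate us"],
    "Big Hero 6": ["AI will be useful servants"],
    "Forbidden Planet": ["AI will be useful servants"],
    "Demon Seed": ["AI is just straight-up evil"],
}

# Count how many titles express each takeaway; in the published
# diagram, more common takeaways are rendered in bigger text.
takeaway_counts = Counter(
    takeaway
    for takeaways in title_takeaways.values()
    for takeaway in takeaways
)

# Answer "Where does RoboCop fit in all this?" for any one title.
def takeaways_for(title):
    return title_takeaways.get(title, [])

print(takeaway_counts.most_common())
print(takeaways_for("RoboCop"))  # -> [] in this tiny sample
```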

So that was my analysis of okay, what stories are we currently telling. It’s a bottom-up analysis. It’s a folksonomy. But it gave me a basis.

Now, to answer the other side of that question: how do we know what stories we should tell about AI? That’s a tough one. It’s a big value judgment. I’m certainly not going to make it, so I let some other people make it. And those particular people were the people who had produced think tank pieces, thought pieces, or written books on the larger subject of AI. I thought I would have a lot more than I did; I wound up with only fourteen manifestos. But they include everything from the AAAI Presidential Panel on Long-Term AI Futures to the Future of Life Institute, the MIRI mission statement, OpenAI, and Nick Bostrom’s book.

But I read these fourteen manifestos one at a time. And instead of takeaways, which is what we got from the shows, on this side I was able to ask: okay, what do they directly recommend we do about AI?

That also gave me another list. That list includes things like “Artificial General Intelligence’s goals must be aligned with ours.” Or “AI must be valid: it must not do what we don’t want.” That’s a nuanced thought. But similar to the takeaways, in this diagram you’ll see that the texts that are larger were more represented in the manifestos. It included things like “We should ensure equitable benefits, especially against ultra-capitalist AI.” And this really tiny one, “We must set up a watch for malicious AI,” all the way down to the bottom: “We must fund AI research.” “We must manage labor markets upended by AI.”

And I won’t go through all of these; I don’t have time. But in total there were fifty-four imperatives that I could sort of pull out from a comparative study of those manifestos.

And so, we have on the left a set of takeaways from science fiction, and on the right a set of imperatives from the manifestos. And really it’s just a matter of running a diff, if you know that computer terminology. But it’s to be able to say okay, what over here maps to what over there, and then what’s left over?
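
The mechanics of that diff are simple set operations; here is a minimal sketch in Python, where the lists and the hand-made mapping between them are hypothetical (the real correspondences were a judgment call made across all thirty-five takeaways and fifty-four imperatives).

```python
# Hypothetical fragments of the two lists.
takeaways = {
    "Evil will use AI for evil",
    "AI will seek to subjugate us",
    "AI is just straight-up evil",
}
imperatives = {
    "Evil people will use AI for evil",
    "AGI's goals must be aligned with ours",
    "We must fund AI research",
}

# Hand-curated correspondences between the two vocabularies.
mapping = {
    "Evil will use AI for evil": "Evil people will use AI for evil",
    "AI will seek to subjugate us": "AGI's goals must be aligned with ours",
}

# Stories that map across: keep telling these.
shared_stories = set(mapping.keys())

# Takeaways with no counterpart in the manifestos: "pure fiction."
pure_fiction = takeaways - shared_stories

# Imperatives that no story covers: the untold AI.
untold = imperatives - set(mapping.values())

print(pure_fiction)  # {'AI is just straight-up evil'}
print(untold)        # {'We must fund AI research'}
```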

Again, this is a lot of data, and I did produce a single graphic that you can see at that URL. I’ll show it several times in case you want to write it down.

So a hundred-plus years of sci-fi shows suggest this, and AI manifestos suggest this. And then I ran the diff; there are some lines there that are hard to see in this projection. The main thing that we find is that of course there are some things that map from the left to the right. And those are stories that we are telling, that we should keep on telling.

And those are not the interesting ones. The interesting ones are the ones that don’t connect across. So this is the list of those takeaways from science fiction that don’t appear in the manifestos. These we can think of as things that are just pure fiction, things we need to stop telling ourselves, if we trust the scientists as the guideposts for our narrative. They include things like AI is evil out of the gate. Now of course, there’s an imperative way up there that says evil people will use AI for evil, and that’s still in. But this one right here, nobody believes that AI is just…an evil material that we should never touch.

Interestingly, those manifestos are not interested in the citizenship of AI, partially because that’s entailed in general AI, and the manifestos are much more concerned about the near-term here and now. That includes things like oh, they’ll be regular citizens versus they’ll be special citizens. And even this notion that AI will want to become human. Sorry, Data. Sorry, Star Trek.

So there is a list of pure-fiction takeaways that we should stop telling ourselves. But that was not the point of the study. The point of the study that I wanted to do was on the other side, and that’s the list of things that the manifestos tell us we ought to be talking about in science fiction…but we’re not.

They include everything like “AI reasoning must be explainable and understandable.” I’d completed this right around the time of the GDPR, so I’m really happy that that’s out there. But it includes things like “We should enable human-like learning capabilities.” At a very foundational level it’s got to be reliable, because if it’s not and we depend upon it, what happens? It includes things like “We must create effective public policy,” which includes effective liability, humanitarian, and criminal justice laws. It includes things like finding new metrics for measuring the effects of AI and its capabilities.

And again, I’m not going to go into those individual things. They’re fascinating, and you can head to the blog posts to read them all. And there’s lots of analysis that I did all over this thing, like that set of takeaways. If you want to know what country produces the sci-fi that is closest to the science, it turns out that it’s Britain. The country that’s most obsessed with sci-fi is, surprisingly, Australia. And of course the most prolific for AI shows is the United States, even though we’re far behind India in our actual production of movies in total.

I even did sort of a… Oh, this is a diagram of the valence of sci-fi over time. If you’re interested: it’s slowly improving, but it hasn’t reached positive yet. And then I even did an analysis of the takeaways that we have in science fiction based on their Tomatometer readings from Rotten Tomatoes. So if you’re making a sci-fi movie, you can actually see which takeaways you can bet on and which ones you should probably avoid, just for the ratings. But this is all stuff that’s covered in the longer series of blog posts.
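
That Tomatometer cut could be computed along these lines; a minimal sketch in Python, with hypothetical titles and scores standing in for the blog’s actual numbers.

```python
# Average the Rotten Tomatoes score of every title that expresses
# a takeaway, then rank the takeaways. Data here is hypothetical.
from collections import defaultdict
from statistics import mean

tomatometer = {"Movie A": 91, "Movie B": 60, "Movie C": 84}
title_takeaways = {
    "Movie A": ["AI will be useful servants"],
    "Movie B": ["AI is just straight-up evil"],
    "Movie C": ["AI will seek to subjugate us", "Evil will use AI for evil"],
}

scores = defaultdict(list)
for title, takeaways in title_takeaways.items():
    for takeaway in takeaways:
        scores[takeaway].append(tomatometer[title])

# Which takeaways can a filmmaker bet on, ratings-wise?
ranked = sorted(((mean(s), t) for t, s in scores.items()), reverse=True)
for avg, takeaway in ranked:
    print(f"{avg:5.1f}  {takeaway}")
```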

I also include an analysis of which shows stick to the science the best, in order to reward them and raise more attention. Damien mentioned Person of Interest, and that’s number one in this analysis. But it includes things like Colossus: The Forbin Project, the first Alien, and Psycho-Pass: The Movie, which is the only anime that made this particular list. And even Prometheus. I don’t like the movie, but the AI in it is pretty tight.

I also included a series of prompts. Which is to say, okay, if I were to give a writer’s prompt about some of these ideas, can I spark some stories? This is an example: What if Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human whose job was to babysit it? Twist: Watson discovers that Holmes created the AI Moriarty for job security.

So, I tried to put these prompts out there to see if anyone would take the bait. So far no one has, but I’m doing my part.

And then, since no one else had taken the bait, I have begun to write on some of those things myself, and tried my hand at a near-term narrow AI problem with the self-publication of this last year.

Okay. So, that’s a lot to take in, and I understand that. It covers like 17,000 words or something on the blog. So what I wanted to do to summarize all this is what I did on the poster that I created, which is to read off the five categories of findings. These are nuanced, so I’m going to read them.

The first category of stories we should be telling ourselves is that we should build the right AI. Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and just make things worse. As we work towards general AI we have to ensure that it’s verified, valid, secure, and controllable. And we must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and therefore out of our control. Sadly, sci-fi misses about two-thirds of this in the stories that it tells, and that’s largely, I think, because they’re not telling stories about how we make AI good AI.

The next category is we should build the AI right. So this is really talking about the process: what do we do as we’re constructing the thing? We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout, to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the right AI. Or more to the point, it’ll result in an AI that’s wrong on some critical point. Sci-fi misses most of this: nearly 75% of these imperatives from the manifestos just aren’t present in sci-fi.

The third out of five is that it’s our job to manage the risks and the effects of AI. There weren’t a ton of takeaways related to this, so it’s a very crude sort of metric. But we pursue AI because it carries so much promise to solve so many problems, at a scale that humans have never been able to manage ourselves. But AIs carry with them risks that scale as the thing becomes more powerful. So we need ways to clearly understand, test, and articulate those risks so that we can be proactive about avoiding them.

The fourth out of five is that we have to monitor AI. AI that is deterministic isn’t really worth the name of AI. But building non-deterministic AI means that it’s also somewhat unpredictable: we don’t know what it’s going to do to us, and it can allow bad-faith providers to encode their own interests in its effects. So to watch out for that, and to know if well-intended AI is effective or going off the rails, we have to establish metrics for its capabilities, its performance, and its rationale…and then build the monitors that monitor those things. We only get about half of this right.

And the last supercategory in the report card of science fiction is that we should encourage accurate cultural narratives. And it’s very low contrast, but we just don’t talk about this. We don’t talk about telling stories about AI in sci-fi very much, if at all. Certainly not in the survey at all, right. But if we mismanage that narrative, we stand to negatively impact public perception and certainly legislators (to the point of this event), and even encourage Luddite mobs, which nobody needs.

Okay. So, that’s the total report card, the short-form takeaway from sci-fi as compared to the AI manifestos. And the total grade, if you will, is only about 36.7%. Sci-fi is not doing great. But that’s okay, right? We should have tools such as this analysis in order to poke at the makers of sci-fi, and even to encourage other creators to create new and better and more well-aligned AI. And that’s part of why I’ve done the project, and part of why I’m trying to popularize it. If you want to learn more about it, I’m repeating that URL here for you.
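
One plausible way to arrive at such a grade is the fraction of manifesto imperatives that at least one sci-fi takeaway maps to; a minimal sketch with hypothetical counts, since the talk does not spell out the actual weighting behind the 36.7% figure.

```python
# Hypothetical coverage grade: the share of manifesto imperatives
# that any sci-fi takeaway maps to. The real 36.7% comes from the
# author's own scoring across the five categories.
covered = 20           # imperatives reached by at least one takeaway
total_imperatives = 54  # imperatives pulled from the manifestos

grade = covered / total_imperatives
print(f"Sci-fi's report card: {grade:.1%}")  # -> 37.0% with these numbers
```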

If you’re really curious about this kind of work: I wrapped up Untold AI last year on the blog, and I’m dedicating the entire year of 2019 to analyzing AI in sci-fi. Right now I’m in the middle of the process of analyzing gender and its correlations across things like embodiment, subservience, and germaneness. And you can see that Gendered AI series on the Sci-fi Interfaces blog.

And that’s it. I am done with one minute to spare, so I have an extra minute if there’s any time for questions. Thank you.
