Kevin Bankston: And now I'm gonna take the opportunity to introduce our next solo speaker, Dr. Kanta Dihal, who is the post-doctoral research associate on the AI Narratives project, one of the project leads on the Global AI Narratives project, and the project development lead on the Decolonizing AI project, all of which are at the Leverhulme Centre for the Future of Intelligence at Cambridge University. Across those projects, she explores how fictional and non-fictional stories shape the development and public understanding of AI, including exploring the range of hopes and fears that are reflected in our stories about AI. Which is what she's going to talk about today. So Kanta, if you could please come up?


Kanta Dihal: Thank you Kevin. So this is work that I've been doing since around 2017 with my coauthor Stephen Cave. We came up with the idea to write a short paper categorizing…trying to make some sense of those many narratives that we have around artificial intelligence, and see if we could divide them up into different hopes and different fears. And two years later, we've looked at 360 books, films, TV series, and other narratives, in English, from the 20th and 21st centuries. So that's both a caveat but also an explanation of the scope of our research. We found that many works to look at, just within these parameters.

And that work was inspired by the fact that, as you just heard on the panel, the prospect of sharing our lives with intelligent machines somehow provokes people to imaginative extremes. So thinking about them seems to make people either wildly optimistic or melodramatically pessimistic. And the optimists believe that AI will solve many if not all of our society's problems. So you might have heard of the London-based AI company DeepMind, who created AlphaGo, the Go-playing AI system. And they've sometimes used the slogan “solve intelligence, and then use that to solve everything else.”

But then there's the pessimists, who fear that AI in its many forms will inevitably bring about humanity's downfall. And that's a theme that's very frequently picked up by the media. So with our center we were involved with a report on AI by the UK's Parliament. And when that report was published, a UK tabloid called The Sun covered it with the headline “Lies of the Machines: Boffins (that's a British word used to describe academics) urged to prevent fibbing robots from staging Terminator-style apocalypse.”

And those stories matter. Those stories influence the development of the technology itself. They influence public fears and expectations. And they can influence policymakers. So our research aims to explain the structure of our stories around AI and why they have such a grip on our imagination. And we hope that that's the first step towards having more diverse and constructive responses to the prospect of intelligent machines.

So, why is it that the idea of intelligent machines, and especially human-like artificial intelligence, fascinates people so much? And for so long. I mean, there have been narratives around intelligent machines for 3,000 years. And serious attempts to explain that fascination started just about a century ago with Sigmund Freud, who was one of the first thinkers, actually, to ask why we find such machines so fascinating. And he focused on how we find them uncanny. That's that creepy feeling of seeing an imagined fear become real. And he suggested that one reason why we find especially human-like AI and androids so unsettling is that they leave us uncertain about just what we are looking at: in terms of reality being not what it seems, in terms of not being sure whether something is dead or alive, but also in terms of somehow being deceived.

So he discusses a short story from 1816 by E.T.A. Hoffmann called “The Sandman,” in which the protagonist Nathaniel, the man in yellow, is bewitched by the beauty of a woman called Olympia, the one in the very extravagant dress. And after much wooing, it turns out that Olympia is an automaton. Which makes you wonder how closely this guy was actually looking at her from so up close. And when Nathaniel discovers that, it drives him to madness.

And that kind of unease still feeds into contemporary ideas about what artificial intelligence could look like. Like the 1975 film The Stepford Wives, which was remade as a comedy. For those who don't know it, in that film the menfolk of a small US town replace their much-too-human women with what they consider to be perfect android wives. So they look identical to their original wives, but then presumably they bake better muffins.

And more recently, historians who have been exploring the fascination with human-like machines have focused on what they call the “liminal quality” of those machines: their boundary-challenging and boundary-transgressing quality. So, we tend to divide up the world very neatly into living things, plants and animals, and non-living things, hammers and nails. But AI and robots seem to be somewhere in between those two. Because like non-living things, they are built by humans from inanimate components: metal and plastic. But like living things, they, especially those intelligent androids that we imagine, can speak, think, sometimes walk, and so on.

And that category-defying element of AI is, I think, an important part of why we find them fascinating. If you've seen any of the famous YouTube videos of Boston Dynamics and their four-legged robots, you'll understand that they are captivatingly, and slightly disturbingly, both like and unlike living creatures. But we also think that there's more to be said about why we find the idea of such machines so provocative.

Photo of Robby the Robot from Forbidden Planet, with a quote from Morbius: “Don't attribute feelings to him, gentlemen. Robby is simply a tool. Tremendously strong of course; he could quite easily topple this house off its foundations.”

And the starting point for a lot of this is that AI is a tool. It's a piece of technology designed to help us achieve our goals. But it's also supposed to be an intelligent tool. So a tool with attributes that we would normally associate with humans. A tool that's autonomous. That can think. That has goals. Perhaps even what you'd call a mind.

And that makes it very different to ordinary tools. And that's what has such huge implications. Because those attributes are what promise to make AI the ultimate tool. The ultimate technology. It's in a sense not just a tool but, in all the many ways in which it can be deployed, the master tool. Hence the DeepMind slogan: “solve intelligence, then use that to solve everything else.” Whereas the thinking power of humans is limited by our cranial capacity, AI does seem potentially limitless, and promises to work out solutions to all our problems. So it represents the apotheosis of the technological dream. The dream that we've had for technology ever since someone clever rubbed some sticks together: that we can use tools to create a better world, a paradise on Earth. So that's the source of our extravagant hopes.

But at the same time, the idea of creating tools with minds of their own creates, to our minds, inherent instabilities. Because a tool with goals could have goals that are misaligned with ours. A smart machine could outsmart us. A machine with autonomy could choose to disobey. And that instability is the source of extravagant fear.

So we argue in our research that those hopes and fears go together. We've analyzed works that include both fiction and what we call speculative non-fiction, that is, non-fiction that explores the future. And on the basis of that, we identified four dichotomies that structure our affective responses to intelligent machines, so that each consists of a hope and a corresponding fear.


Eternal life possible through AI: Rich and famous will SWITCH BODIES like Altered Carbon: A LEADING artificial intelligence expert has revealed how robotics will allow some humans to live forever —Daily Star, 3 February 2018

So first let's look at the hopes in detail. The first one concerns life. The pursuit of health and longevity is humans' most basic drive. I mean, it is the precondition for almost anything else that you might want to do. So consequently, humans have always used technology to try to extend our lives. And it's no surprise that for AI, the hope is that it will do that in the way of giving us better diagnoses, personalized medicine, and so on. And the most ardent advocates of AI's potential in this field suggest that it would make us somehow entirely immune to aging and disease, and allow us to become what's sometimes called “medically immortal.” But that's not real immortality, because it still relies on having this human body, which is in all kinds of ways messy and unreliable, and can be hit by cars. So many advocates go even further, suggesting that we could actually transcend the body altogether and upload our minds into cyberspace.

Robot Servants Less than 10 Years Away, Daily Telegraph Australia, 23 October 2017

Now the second hope concerns time. So assuming we manage to stay alive for as long as we wish, then we hope to be able to use all that time as we wish. So that's the dream of AI freeing us from the burden of work that we don't want to do. So, no more mind-numbing days filling in Excel spreadsheets behind your desk, because the AI will do all that. And we'll live in smart homes that will do all the laundry folding for us, and we'll be lords and ladies of those AI manors. And AI offers us such a life of luxury and ease, and could potentially do so without the very complex social and psychological pressures of having human servants, that is, humans that you use to do the dirty work for you.

The third hope concerns desire. Once we have time, we want to fill it with all the things that bring us pleasure. So just as AI promises to automate work, it promises to automate and uncomplicate the fulfillment of every desire. It could be the perfect friend, always there, always ready to listen, never demanding anything in return. And in imaginings of AI there are loads of examples, from Isaac Asimov's very first robot story “Robbie,” about a robot nanny, to the operating system Samantha in the film Her. And of course many hope that intelligent androids will be the perfect lovers, as we saw for instance in Westworld, until that went wrong. [audience laughs]

And finally the fourth hope concerns power. So once humans have created that paradise in which we have life and time and all our desires are fulfilled, we'd want to protect it. And I might add that humans have a habit not just of fighting to protect their favored way of life but also of forcing it on others. And in an AI context, the Culture novels of Iain M. Banks present such a view. Stories of what we'd now call intelligent autonomous weapons are ancient. They go back at least to ancient Greece, to the bronze giant Talos, who defended the island of Crete from pirates and invaders by throwing boulders at them. And you have stories of bronze knights guarding secret passageways all the way through the Middle Ages. And then in modern times, of course, much of the funding for AI research has come directly from the military. So as the master technology, AI is also potentially the ultimate weapon.

So those are the four utopian visions that those hopes reflect, but they are inherently unstable. The conditions for each hope to be fulfilled bring with them the potential for that utopia to collapse into a dystopia. And one factor in particular is key to that balance between hope and fear and where it tips over. And that is control. So the extent to which humans are in control of the AI, rather than the AI being in control of the humans, determines whether we consider a future prospect utopian or dystopian.

So on the dystopian side, on the subject of life: while people hope to achieve immortality, its flipside is losing our humanity in the process. Because what are we willing to sacrifice in order to live forever? Our memories, as the panel just discussed? Our emotions, our physical form, our individuality and embodiment? So this is a Ship of Theseus question. If you replace all your bits with metal prostheses, is the resulting immortal being still you? And how much humanity will be left when you turn yourself into pure data and upload yourself to one of those server farms in Arizona?

And our hopes for having more time can turn into fears of obsolescence when we lose control of the amount of leisure time that we have. So at the same time as we dream of being free from work, there's this terrifying idea of being put out of work, because of course work doesn't only provide an income but also a role in society, status, standing, a feeling of accomplishment, pride, and purpose. So, a UK paper had the headline last year “Robots are the ultimate job stealers. Blame them, not immigrants.” I'm not sure if that's so helpful. [audience laughs]

Of course, as technology advances, most opponents of this idea say that eventually new jobs will be created because of AI. But of course, it's quite understandable that people would worry: if AIs continue to be developed that get better at more and more things, what will be left for us to do?

And our hopes with regard to desire can tip into the fear that we might, on the one hand, bring something unnatural or monstrous into our homes. So that's the uncanny valley, the effect that you get nowadays when you see those robots that are supposed to look like humans but don't quite. But there are also fears regarding AI being better than humans. So, if we have all our desires fulfilled by AIs, then that means we become redundant to each other. We might not only become obsolete in the workplace but even in our own homes and in our own relationships.

And finally, we can easily imagine how the hope of acquiring dominance turns into its flipside, the fear of being dominated. So first there's the fear of losing control of AI as a tool: the sorcerer's apprentice scenario, or the Roomba going wild and hoovering up your hamster. But on another level there is the fear that AIs will acquire minds of their own, so that they turn from tools into agents. And that robot rebellion theme is really persistent, and it reveals that there's a paradox at the heart of our relationship with intelligent machines. We want clever tools that can do everything we can and more, including being the perfect soldier. And for those tools to fulfill our hopes, we give them attributes like intellect and autonomy. And of course it's not hard to see the tension in the idea of creating beings that are superhuman in capacity and subhuman in status. So fears of Skynet show a recognition of the deep paradox in creating powerful independent minds enslaved to us, which is why so many narratives of robot rebellion so closely parallel narratives of slave rebellion. But that's another piece of research I'm working on.

So, those are our eight hopes and fears. And last year we decided to look at the role that these eight narratives play in the life of the average British person. So we surveyed over 1,000 people, and the findings of that survey show that the UK population has a markedly negative view of AI. Levels of concern were on average significantly higher than levels of excitement across these narratives. And unfortunately, concern was higher than excitement even for several of the hopeful narratives.

And we also had an open question, “How would you explain AI to a friend?” And in response, nearly 10% of people spontaneously offered negative sentiments instead of explaining what AI is. So we titled the paper “Scary Robots,” because that's literally what someone replied: “How would you explain AI to a friend?” “Scary robots.”

So dystopian visions seem to be so entrenched that large numbers of people are inclined to see the downsides of AI even when presented with wholly utopian visions. So negotiating the deployment of AI, and informing people of what it can and cannot do, will have to contend with those entrenched fears that underlie even what look like quite positive stories. Thank you.
