Kevin Bankston: Next up I'd like to introduce Madeline Ashby, who like Kanta is on the advisory board of our AI Policy Futures project, but unlike Kanta couldn't make it today. Madeline, in addition to being a professional futurist who consults with companies and government agencies, is also an accomplished science fiction writer, best known for her most recent novel Company Town and for her Machine Dynasty series. Her work on the third and final book of that series is what kept her from being here in person today, so she has instead recorded a five-ish minute message that'll serve as a lead-in to our next panel. So if we could cue up Madeline's video, that would be great.

Madeline Ashby: Hi, my name is Madeline Ashby and I welcome you to this event, and I apologize that I cannot be there with you. Now the reason that I cannot be there with you is that I'm a science fiction writer as well as being a futurist. And I am wrapping up the edits on the final book in a trilogy about artificial intelligence, the subject of our conversation. And it's requiring a lot of my attention, in part because I'm realizing that it's sort of the last chance I have to work with these characters and make a final statement about, you know, what I was trying to do when I decided that I wanted to write about artificial intelligences, and the evolution of consciousness, and what it would be to be a different kind of consciousness. I think that writing about artificial intelligence is basically a phenomenological question. It is about what it is to be a bat. It is about, you know, taking on this otherness.

And I think that one of the challenges when we talk about how we're going to write about AI is that so much has already been written. Both at the level of myth…you know, when we talk about stories like the golem, stories like Pinocchio, things like that, those are also artificial intelligence stories. But also there's this whole gamut of pop culture stories, right. And as I'm sort of finishing this trilogy out, I realize now that I'm thinking about all of those other renditions of this story, all the other versions of this story, and how I can possibly set my characters apart.

And one of the ways that I've tried to do that is to make sure that my characters make reference to, in their dialogue, other depictions of robots. So in the Machine Dynasty, which is my trilogy about robots who eat each other and, you know, evil grandmothers and so on and so forth, these robots make reference to the fact that the word “robot” comes from the old Slavonic word for slave. They are aware of the fact that in popular culture already, they have been depicted as godless killing machines, or sexbots, or skinjobs, or what have you. And they're aware of it; they've seen depictions of themselves.

And I think that, you know, if you believe that eventually artificial intelligence will achieve sort of an anthropomorphized consciousness or human-like consciousness, or even just a mammalian consciousness, a mammal-like consciousness…you know, you're talking about something that might later read what you wrote about it. The same way as when you blog about your kids. There's every possibility that they're going to find out what you wrote. And I think that's one of the most interesting challenges about this, is that you know, you have to be kind of careful about what it is that you're going to say. What expectations are you creating? What are you…telling this thing to be? What are you telling it to become? And can you tell it to become something better than you? You know, can it be better than what you are? Is it a true evolution? Can it go beyond you?

And I think that's sort of one of the most interesting challenges as we frame debates about artificial intelligence, debates about what intelligence is, what consciousness is. You know, why is it that we think that our version of intelligence is “the best”? Why is it that human intelligence gets this primacy? Why is it considered the best? Isn't that sort of an anthropocentric, narcissistic attitude for us to take? Aren't we discounting other models of intelligence? Whale intelligence, dolphin intelligence, the intelligence of bees. Raven intelligence. All of those other models exist on this planet, you know. They are aliens among us. And those are natural intelligences. There's nothing artificial about them. And yet they are just as foreign to us as some of the fictional things that we are probably talking about today.

So, because I can't be there with you, I guess what I would ask you to keep in mind is eventually, you might have to explain what you wrote. You might have to explain the story you told. You might have to explain why you represented an entire type of intelligence in a certain way. When we talk about representation in fiction we're often talking about really loaded categories, like really sensitive topics, really…you know, we're talking about the subaltern. We're talking about marginalized people. We're talking about bringing representations forward of people who have been characterized as villainous. As evil. As depraved. As perverse. As all of the things that you know…all of the qualities that sort of get penalized later on. Or that are considered bad.

And so I guess, you know, when we talk about how we represent artificial intelligence, think about the lessons that it might be learning from you. Is it seeing itself? Is it seeing itself represented? You know, is it seeing the potential for good? Is it seeing the potential for growth? You know, we ask that question about ourselves: how are we representing ourselves? How are we representing different groups of ourselves? How are we representing the multiplicity of humanity? And we should possibly start considering how we represent the multiplicity of intelligence as well.

And so I guess that's sort of what I would say. Hopefully I would be more articulate if I were actually there. But edits are pretty killer, so. Good luck, guys.

Kevin Bankston: Thank you, Madeline. And good luck with your edits. Now I'd like to welcome to the stage the panelists for our second panel, on AI in sci-fi. So come on up, folks. The moderator, Andrew Hudson, is a science fiction writer himself and a graduate student at ASU, where he studies how speculative futures can better help us imagine how to live through climate change, and where he has also been leading the research for our AI Policy Futures project, so thank you for that. So Andrew, take it away, and let's go till 3:20 instead of 3:15.

Andrew Hudson: Yeah, sure. Thanks everyone, and thanks to the rest of my panel. We're the fiction panel, following up on the fact panel. And I thought the fact panel did a really good job laying out some frustrations that I think are very reasonable to have with the science fiction literature that has used this term “AI.” And so I'll just say on behalf of sci-fi writers, “Our bad.” But what I hope we can do in this panel is have a slightly more literary discussion, to try to answer: well, why were those the stories that we were telling, and what has been the point of telling those stories, even though they don't now necessarily always align with the policy problems that we're having. What was the use of them? So I'll let the rest of my panelists introduce themselves. But I was hoping we could start, as we go through, with responding to Madeline's provocation and saying: what kinds of blog posts are we leaving about our children, human or non, and the type of society that they're going to be creating? Hello, Chris.

Chris Noessel: Sure. I have an opportunity to introduce myself as a solo speaker next, so I'll be very brief. I'm here for being the author of SciFiInterfaces.com, a nerdy blog. But I actually think that Madeline's injunction about thinking about your progeny as your audience is…I kinda don't want to think about that. Partially because it will help both my biological progeny and my ideological progeny understand where they came from better. Because I don't want to put a veneer on that and lie, or change what I would say.

Lee Konstantinou: I'm Lee Konstantinou. I'm a professor in the English department at the University of Maryland, College Park, so I'm a local. I teach and write scholarship on science fiction, and I'm also a writer of science fiction. I've written a novel, I've written a bunch of short stories. I'm thinking a lot about AI in different projects that I'm working on. And I don't know if we're going to introduce ourselves first and then answer the question, but to Madeline's provocation, you know, the thing that came to mind is that the person who writes the blog post about their child is really, in a way, not writing about their children at all. They're often writing about themselves, writing about their own hopes and aspirations.

And one thing I would say about a lot of our science fictional narratives that feature AI is that they're often not really about AI in any kind of technical sense. They're not engaging in the project of forecasting. They're not trying to give us a technical blueprint for the future. And so to our AI progeny who will watch this video I say, “It wasn't about you at all, it was all about us,” you know.

Kanta Dihal: Well, I'm Kanta Dihal. I've just been introduced by Kevin so I'll just go straight to the question. So, well, I don't have children of my own, but I do have the strong belief that yes, you might want to keep them in mind when you publicly write about them. Because I recall a friend showing me a blog post that a pregnant family member had made, and her musings on how she hoped that this child was going to turn out healthy, because otherwise she didn't want it.

Which brings me to the idea of, and I guess it must be mentioned, or maybe I'll doom you all by saying this, but Roko's Basilisk is the sort of thought experiment/terrify-your-children-before-going-to-bed story that says: if you know that a superintelligence is going to exist in the future, then you have to bear in mind that it is going to know everything you do in your life, so you'd better dedicate your life to making sure that this superintelligence is going to be built, and not hinder it, because otherwise that superintelligence will make a copy of your brain and torture it into eternity in cyberspace.

Yeah. So, now you'll all have to go out and do that and write nice blogs about the AI.

Damien Williams: I'm Damien Williams. I am a PhD researcher at Virginia Tech University. My work is in science, technology, and society. I'm researching the ways that bias and values get embedded into technological and non-technological systems, specifically looking at artificial intelligence, machine learning, and human biotechnological interventions such as prostheses, implants, and other what people might think of as cyborg implements. And when I use the word “bias” there, which is kind of what my question was earlier, I mean it both in terms of perspectives, but also in terms of models, but also in terms of the things that undergird what eventually become prejudices. Bias in that sense, and understanding it as another way of thinking about what it is that we model for and try to predict based off of.

My original Master's degree is in a combination of philosophy and religious studies. And so this conversation about what it is that we leave for our children, and what the Basilisk might do, and what the mind is, and what it is that consciousness might be and be modeled as in these stories…all of those things are pretty pertinent.

For Madeline's provocation, I think that we do have to kind of think about our children, our progeny. But I don't think that necessarily requires that we change what we say. But it means giving context to what we say and why we say it. My own parents, I want them to be honest with me about what they feel. And I don't necessarily always have direct access to exactly why they feel what they feel when they feel it. We're talking about a thing, in Madeline's provocation, that will have the access to look at the context of literally everything, all the time, forever. So in that sense, if we're talking about a progeny which will be able to reach back and see why we thought what we thought when we thought it, I think we should be careful not just about what we say, but be careful to be willing to think about why it is we feel what we feel. And not just toss ideas out there without that context.

And I think that's about communication. I think that's about not just, you know, hedging our bets so that the Basilisk doesn't kill us, or torture a copy of our brains forever and ever. I think it's about being willing to be open and communicative with another mind that, while it might be drastically different from ours, is still made from us. And I think that's just parenting, in any capacity.

Hudson: Well, hopefully context is kind of what we can give some of today, around some of the stories that have shaped a lot of the mythos that we've built up. So I want to go back to Kanta's hopes and fears dichotomies, which I think are really fascinating, and maybe ask the panel: are these a reflection of the way sci-fi has played into narratives that we already had in our society about the majority versus the marginalized, or versus minorities, or versus the outsiders? And how have maybe some of the core AI stories forwarded those narratives, or produced counternarratives? Maybe Lee, do you want to start us off?

Konstantinou: Yeah. So I said in my previous answer that science fiction narratives about AI are often allegorical in their scope. And one of the main, or great, allegorical subjects of science fiction about AI is the question of power and authority, and domination, right, which [to Chowdhury] your talk I think outlined so beautifully.

And so I think what we find in our science fiction narratives about AI is, like, every possible combination of forms of domination. You get AIs that kill all humans. You get humans who in one way or the other are dominating or torturing AIs. You can think of a narrative like Westworld or Ex Machina, where the AIs could arguably be said to have good reason for rebelling against their human masters. You get works of science fiction like Dune or Battlestar Galactica, where there is a prior AI revolution or AI uprising that leads to the elimination or extermination of AI. And I think you get all of these variants, and they're often not very nuanced. You know, they pick a side, they pick a trajectory, and I think the most interesting science fiction is finding a kind of more nuanced or pluralistic vision of what AI might be, that's breaking out of these tropes.

So, a recent book by the novelist Annalee Newitz, her book Autonomous, is I think actually one of the best visions of a world in which AI come in all shapes and sizes. They're embodied in a variety of ways. They have political opinions. They're…kind of wrong, misguided, foolish, courageous. And they're not quite human at the same time. And so I think a promising science fiction is the science fiction that is moving in that more complex direction. For me, for my taste. I don't know if that answers your question, but.

Hudson: Chris, I know your taste runs a little more poppy, but what do you see in this type of predominant narrative versus, like, counternarrative?

Noessel: I do study big-budget films and television shows, mostly. And the creators of those stories are always hedging their bets? because they want to make as much money as possible? with their stories. And that means that they can only go so far outside of a paradigm before they begin to lose that. Primer is a great film about time travel, but it is not accessible to the majority of pop sci-fi viewers. And that dichotomy of yes, I can get Thor; he's a dude with a hammer, versus I can't quite understand the anglerfish metaphor of Under the Skin, means that the things I study tend to be on this safer side of sci-fi. And what I see across the narratives that I analyze is that they work on a principle of what you know, plus one.

So, to abuse a phrase—and unfortunately I can't remember the fellow who coined it—but like, “what if phones, but too much.” Or—

Williams: Daniel Ortberg.

Noessel: Thank you, what is his name. 

Williams: Daniel Ortberg.

Noessel: Thank you. 

Williams: The Toast.

Noessel: They can't really go to the extents of, you know—you can't waste twenty minutes of an audience's time with a giant background in order to explain why this moment that you're about to see in the cinema is relevant. They have to play fast and quick, and that keeps the stories in cinema and television fairly…mmm…less risky.

Hudson: Kanta, so is this, I think, a fair way to spin out from your dichotomies?

Dihal: Yes, definitely. And then when you're looking at the relationship between these kinds of narratives and the older narrative traditions that they fit in, again it's almost as if AI is the sort of hyperbolic version of technology making everything possible. But it's very similar to narratives of flying. I mean, flying was a dream, a technological dream, for thousands of years, until it actually happened, and it took a form very much unlike what had been imagined in all those narratives. There was no wing flapping, and there were no steam engines up in the air. But we could fly. And we can fly now, and nowadays it's just really everyday business. So in the same sense, these stories about AI are in all kinds of ways anticipating how we will relate ourselves to intelligent machines.

And so, on representation and counternarratives, I think one thing that many stories of AI make clear is that they presume, or at first sight they seem to be about, humans versus non-humans. So humankind as this one globule in which all of us here and everyone out there is included, versus the rest. And the same with narratives of aliens.

Well, what these narratives actually reveal is that humanity is something that is granted as a matter of degree. Some people are considered more human than others. And when you get an intelligent machine, it slots into that hierarchy and shakes up that hierarchy. And intelligence is actually a way in which that hierarchy has been maintained, what with things like—here in the US context—the SAT being developed by a eugenicist in order to keep people of color out of the universities.

So intelligence as this benchmark for how human something or someone is gets really problematic when you bring in an artificial intelligence that might be more intelligent. Because that one might start poking all the way up at the top saying, “Scuse me. I'm at the top now, according to your benchmarks.” And that's where people like Elon Musk start worrying.

Hudson: Yeah. I really like the flying question. And one question that I've heard that I find really provocative is: does a submarine swim? And the question of whether a machine thinks may actually be as arbitrary as why we say a plane flies but we don't really like saying that a submarine swims. It's just sort of a gimmick of language.

But yeah, to your other point, it seems like AI stands in for the other in lots of allegorical stories. And so maybe Damien, could you give us some examples of this, if you have any? And is it helpful to have these types of stories, now that we're talking about the ways that real-life AI systems other human beings?

Williams: To answer your second question first, yes.

To answer your first question next—I mean, the examples that we have go down through history. As we've kind of talked about a number of times…Madeline Ashby brought this up in her recorded talk: the word for robot comes from the Slavonic word for slave. But that's from a piece called R.U.R. (Rossum's Universal Robots), and that's about an oppressed working class who were enslaved and made into a group of workers. They were made to be these workers. But there are also instances where we talk about the idea of… I mean, we can even look back to when robots were being promised to everybody in IBM ad copy, and this idea that everybody would have a robot slave of their own. Like, that was literally ad copy that was in magazines. Like, “The days of slavery will come back. Don't worry, we don't mean humans.”

So it's always been this undercurrent: the notion of the oppressed, the marginalized, the uprising, and kind of overcoming, and the tension between, on the one hand, we think that's right and we think it's justified; and on the other hand, we're scared of it. Because it'll be uprising against us. We have that in Westworld, from the original. We have that in all of Asimov's stories. We have that in basically anything with a machine intelligence that somehow turns its own creation into a fact of making humans, its creators, obsolescent. That kind of process of obsolescence becomes the stand-in for: oh no, have we become the ones that got overthrown? Who ever expected this could happen to us?

And I think that it's important that we still think about this—not necessarily in the same dynamics of those kinds of slave narratives of oppression, but in terms of marginalized peoples, and thinking about the ways that we look at the… Robots are often stand-ins for… Even when they're not representing overthrowers, they're often stand-ins for people with non-standard or neurodiverse positionalities in the world. For autistic people. For people with ADHD. For people who think and see and experience the world differently. And often, even in just our linguistic conceits, there's a line drawn between neurodiverse populations and roboticness, or machine-like qualities.

And so that's why I think the answer to your second question is yes. It has to be investigated. We still have to think about these things, because even as we are creating systems which other people—which take in data points, or are constructed at the very outset in such a way that they will marginalize or further repress—they are still going to be used as touchpoints and metaphors for talking about the very people and the very populations whom they are oppressing. And we have to take the time to render out in stories a model for thinking differently about that. For specifically interrogating that question. For saying: isn't an oppressed person right to overthrow their oppressor? Isn't someone who sees the world differently right to question the metrics by which they're being judged? That's one way to read Blade Runner, by the way. There's a burgeoning host of autists—people with autism—who are looking at Blade Runner and going: maybe the problem isn't that these stand-ins for autistic people don't feel. Or don't feel the right way. Maybe they feel too much. Maybe the way that they feel is present, but different enough that the humans, in their capacity, don't understand what it is that they're feeling, and they are reinterpreting the narrative in that way.

And so thinking about how we take those narratives of oppression and specifically ask: well, what if the people who are being modeled or mirrored here are the ones who get to tell the story? What story would they tell about this instead? That question becomes deeply, deeply important, specifically because if it's not interrogated, it will be used to further marginalize them. To further disenfranchise them from the tools that are being used to operate and control their lives.

Dihal: That is a great reading of Blade Runner that I wasn't familiar with yet, because the reading of Blade Runner that is most often advanced, and that is being used for lots of different narratives about artificial intelligence, is the slave narrative. So the AI stands in for the oppressed racial other.

The same again with aliens. I'm thinking for instance of the film District 9, which shows racial segregation except it's humans versus the aliens. And in both these cases, Blade Runner and District 9, you can see that by means of having the AI and the alien as the racial other, you presume that all the humans are white. You need no racial diversity among your humans because you have a racial other. And you can see that in Blade Runner! These are fugitive slaves; all the androids are white. Nearly all the humans are, too. And as far as I remember that's not any better in the new Blade Runner. And in District 9, despite the fact that it's set in South Africa, again very few non-white human protagonists.

Williams: The number of black South Africans who appear in District 9 is, I want to say, somewhere on the order of twelve total? And they are basically a faceless gang.

Dihal: Yeah, and aren't they supposed to be racial stereotypes of Nigerians?

Williams: Mm hm.

Konstantinou: Yeah. It was very, yeah, controversial.

Williams: Yeah. And so yeah, that's taking the time to, again, specifically dig down on those facts and say: we have told this othering story for so long, and it has made its way into the process of what it is that we build these things to do, if not to be. What if we did this otherwise? Oughtn't we do this otherwise? And taking the time to do so.

Hudson: I think there's lots of ways in which that pattern also shows up in other genres, right. There's so many ways in which AI stories, to my mind, replicate horror tropes, right. Like, the androids are zombies. The sort of disembodied Siri voices are ghosts, right? So we're in a well-trod kind of literary tradition here, one way or another.

Anyone else on this question? 

So, one thing that is unique about this AI discourse that we're having is that it goes back a long way? In some ways much further—like, we've been talking about the what-ifs of AI way before we started having organizations that put AI in their sort of hype notes, right? And now we're here, but there's been a whole evolution of this conversation along the way. And so Chris, maybe…I know you have some data on how the way we've talked about AI has evolved over the last century.

Noessel: Yeah. So in the analysis that I'm going to share in sort of the solo talk, one of the things I took a look at was the valence and the prevalence of which narratives have been told when, from the beginning of cinema to now. And there are four main eras, if you will. And this data isn't in the solo talk, so I'm happy to explicate it.

We're going to bypass Le Voyage dans la Lune, partially because it was a piece of vaudeville that was put to film, and regard Metropolis as the first serious piece of science fiction. And Fritz Lang's masterpiece was the sort of beginning of this very dark, dystopian era, where especially European filmmakers were using technology to illustrate the evils of the Industrial Revolution. And so the very beginning of AI in sci-fi was just…it's terrible, it's dark, it's going to require us to feed our children to the machines.

Then, starting with Robby the Robot in Forbidden Planet, there was an era of positivity and almost sort of American advertising for how awesome AI will be. It'll be like: look! They won't even be able to disobey you without short-circuiting. Won't that be marvelous!

And that period lasted probably up until the 80s. And films such as RoboCop began to ask questions about, well, maybe it's not as pretty as it—because by then, of course, America had become sort of the cinematic juggernaut of the world—began to admit that maybe it's not going to be all Robbys in the world. And so it was a period of investigating the complications. And in fact that was the emergence of “evil AI,” rather than sort of a systemic machine like we saw in Metropolis. So we see things like the horrible Proteus IV in Demon Seed, which just comes right out of the gate as evil. It's also a period of unquestioning genesis narratives, like champagne on a keyboard brings a computer to life, or a lightning bolt strikes a plane and suddenly it wants to rebel.

And that continued up until the aughts, the two thousand aughts, and that final period is where people are beginning to deal with the realities and the nuances of AI, and even get into that sort of otherness—what does it mean to be other. And that's sort of where we are. What's most interesting about these trends is that they don't quite follow the science—the peaks and valleys of AI hype, and the AI winters. There's not a tight correlation, which I would have expected.

So, those are the sort of four big eras. There are lots of other analyses, but…I ought not go into them.

Hudson: Yeah, I'm curious if anyone else has thoughts on what are, like, some of the highlights of those moments? And maybe some works that you didn't mention that define some of those types of waves.

Konstantinou: I mean, so one interesting way to track these trends might be to look at the way, like, a franchise like Star Trek treats AI. And so, like, the latest season of Star Trek: Discovery has an evil AI from…I think it's from the future, as its main enemy.

Noessel: Spoiler alert.

Konstantinou: I'm sorry. Yeah. Well. Yeah, I'm gonna ruin it all. Yeah.

Williams: It’s been over three weeks. 

Konstantinou: But it's—I didn't really spoil anything. But it's kind of an unusual… And it's tied up with kind of the origin of the utopian society that is the Federation. And this is a show that's ostensibly set in the past of the franchise, but it's a much more morally ambivalent, darker vision of AI, of the use of these systems, compared, say, to the holographic doctor from Star Trek: Voyager, who's there to help, and many plotlines are dedicated to exploring his emerging humanity. Or Data from Star Trek: The Next Generation. And so it does seem like a franchise like Star Trek would be an interesting way to think about public sentiments about AI and how they're changing.

Dihal: And related to that, you could see the same happening in Star Wars—

Williams: Mm hm.

Dihal: —where initially you have the sort of unquestioned…the AIs are comic relief, moving towards, well, the most recent Star Wars films, where it's much more ambivalent. So in Solo, there is an AI who stands up for robot rights and who goes to robot fighting pits and tells the robots who fight in these pits that they don't have to have such a life. That they have free will. And she claims that she has a romantic relationship with her human copilot. Now that's quite a different way of looking at it than sort of R2-D2 beeping around a bit.

Hudson: And R2-D2, in one of the classic scenes in Star Wars, being told, like, “We don't serve your kind here,” right. Like, the droid's gotta leave. Yeah, that's a pretty big jump.

So I guess— Did you have any other highlights that you wanted to add, Damien?

Williams: I was thinking about actually kind of a tandem across these, RoboCop, and thinking about RoboCop 1987 versus RoboCop 2014, and the different portrayals of what that kind of police state, drone warfare, robotics, and AI narratives, what those look like. Like you can see a lot of similarity between the two, obviously, because it’s just a strict remake. But there’s also nuance in what the characters, and interior to the narrative, consider to be the problem of what has happened here. Versus you know, in RoboCop 1987 it was the “Oh no, Murphy’s not Murphy anymore.” Or is he? And while that plays in somewhat in the remake, that changes to be more about not just is he still himself—like they very clearly show that his automated systems can be turned on and he can be made into literally a piloted drone, in human, bipedal form.

And so that shift about automated war-fighting, and the militarization of police, and the automation of the militarization of police, becomes much more the current fear in 2014, versus this notion of how do we stop crime in Detroit, and oh no, is that person still really a person, in 1987.

Noessel: There’s also a shifting role of the state versus corporatism—

Williams: Yes, yes.

Noessel: —across the two films.

Williams: Yes.

Noessel: I hesitate to mention RoboCop 2… [Konstantinou laughs]

Williams: Yeah. As long as you don’t go to RoboCop 3 I think we’re fine.

Noessel: Yeah…uh…

Dihal: Is this where I can bring in that I’ve always maintained that Inspector Gadget is a parody of RoboCop?

Williams: Yes.

Konstantinou: I think that’s right, yeah.

Williams: Fantastic.

Hudson: Yeah. The shifting role of the state kinda like, puts me in mind of…I recently read a pretty old sci-fi story, I think it was by Asimov, called “Franchise,” which was I think from…it’s from the 50s but is of course set in 2008. And in it the supercomputer Multivac figures out who is like the exact one person you need to poll to figure out how to pick the president and how to decide all the elections. And I was thinking about that in comparison with a more recent incarnation of the all-seeing supercomputer from Person of Interest, the…

Noessel: The Machine.

Hudson: The Machine. And how that’s very much like a sort of surveillance state. And what The Machine does, and its evil counterpart, is not based on like, who gets to vote and figuring out— Which I think was a much contested question in the 50s. The Machine is like, who gets eliminated by the anti-terror kill squad, right. And so that shift I think probably we can track to our own political discourse.

So I want to touch on kind of one more thing and then we’ll take some questions. To come back one last time to the hopes and fears: I know, Kanta, you are now doing some research that explores a much broader swath of AI narratives that maybe we haven’t even discussed here today. So are those hopes and fears, do you feel like those are inherently Western hopes and fears, and that other cultures and other societies, and even other genres, might have a different take on AI?

Dihal: Yeah, so this is the project that I lead, called Global AI Narratives. Rather than us doing all the research and basically us trying to look at everything that’s done across the world, we’re building a network of scholars who in their own regions are experts on this, and bringing those together so that we can compare and get answers to questions like this.

So, so far we have done so in Singapore and in Japan. And at the Japan workshop there were indeed some fascinating revelations, especially about what does the media image of an AI look like. So where we would here have the Terminator, or even, as I showed in my PowerPoint presentation, two Terminators and a nuclear explosion because it can’t be dramatic enough, in Japan the most common go-to image is a pudgy blue cartoon cat called Doraemon. Anyone familiar with Doraemon?

So Doraemon was a really long-running TV series and manga series. And this was something that people grew up with, especially the generation…well, basically age 30 and above in Japan. And that’s why that narrative is so much more influential. And yeah, it’s a cutesy cat, and also it’s— Well, it is an android, a robot, but shaped like a cat. It comes from the future, and it tries to solve problems that the human protagonist runs into by means of grabbing futuristic tools from its pouch. And every time there’s a new tool that’s supposed to be able to fix all the problems, and then it doesn’t. So that’s a completely different narrative of the robot buddy. And a hopeful one.

Noessel: Interesting. Genevieve Bell—formerly of Intel, and now she’s back home in Australia—and I had a quick chat before we spoke at a conference. And she noted that the existential threats, or let me say status threats, that AI and robots pose are not a problem, as her research has shown, in Japan or in Shinto societies.

Williams: Right.

Noessel: Because they already have this notion that everything has a spirit. So the fact that spirit is embodied in technology is not really an issue? So it’s really…like, that’s a new concept for us, that our washing machine might have a hope or a fear of its own, but not necessarily for Shinto practitioners.

Hudson: Exactly. Wasn’t that…I think visualized very elegantly in one of the shorts in the…was it Love, Death & Robots…

Noessel: Oh yeah. 

Hudson: …show, where you had a spir­it fox being hunt­ed in this sort of Medieval Japanese soci­ety, and over time as soci­ety evolves gets turned into a steam­punk cyborg basi­cal­ly. I had­n’t… Yeah, I total­ly agree that sort of the cul­tur­al tone and what was pos­si­ble I think was very dif­fer­ent in that. 

Dihal: There are also dystopian narratives that can differ quite dramatically across cultures. So for instance, if you’re talking about the apocalypse in sort of mainstream science fiction, it is something in the future. It’s something that can be averted. It’s something that— An apocalypse story is a warning.

Now, for many soci­eties across the world the apoc­a­lypse has already happened.

Williams: Correct.

Dihal: I mean, if you look at Native Americans, the apocalypse has already happened. So the stories that you get about the future are very different, informed by such a past.

Williams: I would like to open it up to the people.

Hudson: Okay! Well I think we’ll take some questions now. So we have mics going around. Great. Back there on the right.

Audience 1: We already have a powered persuasion architecture. It seems that there are algorithms that know us better than we know ourselves, and they have gone as far as not only to get us to spend money but to sway elections, maybe instigate ethnic cleansing. What are the…what were the science fiction warnings that we missed for this? I don’t remember… You know, these are modern myths, and myths told us to know thyself. But it doesn’t seem we’ve been getting that recently. Or did I miss that?

Noessel: There’s a positive version of that; I can think of [crosstalk] an example.

Williams: The Culture.

Noessel: Which is that the long arc of the I, Robot series told of a bigger and bigger AI that was influencing society. But at the end of the short stories it had actually faded so far into the background, and humans had just become prosperous. And they didn’t even make the connection. That’s not quite the warning that you’re looking for, but I know that was Asimov’s ultimate arc for I, Robot. Other examples of broadly dystopian AIs… Where was the warning?

Dihal: Perhaps not so much the stories of intelligent machines, although— Okay, so there is Colossus by D. F. Jones. It was a novel and it was turned into a film in the mid 50s…?

Noessel: I don’t remember.

Dihal: Which was basically about the US builds a computer that can control…

Noessel: Oh, Colossus: The Forbin Project.

Williams: Yes.

Dihal: Yes. 

Noessel: Uh, 72?

Williams: Oh, 70s.

Dihal: 70s? Oh, it’s later than I thought. So, the US has a defense supercomputer, and then it turns out that the Soviets also have a defense supercomputer. And they decide that they know what’s best for humanity based on the cultural and political system in which they have been produced. And then it starts saying okay, surrender all your power to us, humans.

Humans say no. 

Colossus says well, you have given me access to all your nukes. Colossus throws nukes. Humans have to obey.

Williams: More… Sorry, just to kinda jump in. A more recent example is actually Person of Interest. The latter arc of Person of Interest, and again, spoilers, but the show’s been over for four years now. And it’s all on Netflix so you have no excuse.

The kind of culminating arc is about two competing AI supercomputers who are warring against each other to nudge people in a certain direction. The name of the supercomputer in like, the US military arc of it, the thing is called Northern Lights. And that was made about two years before Edward Snowden’s PRISM leaks? They got to a point where they actually had to say, “Okay, we need to completely reframe the story that we are telling. Because the things that we have been talking about, as soon as we write them…they happen. So we need to think differently about what it is we’re considering science fiction about AI to be.” And that was entirely about, you know, a very large system of algorithmic nudging and influence moving people through what they thought was just the water of their lives.

Noessel: I think what’s delightful—just one last bit about the question—is that the bigger the technologies, the less we’ve thought about them. Lots of older films that are having to be remade are having to account for the fact of cell phones. Which of course, instant access to any person on the planet was not a narrative possibility when stories were told in the 50s. So when they remake The Blob, they have to think about it. Or Battlestar Galactica has to think, “Well, why would we disable the networks? I know, so the Cylons don’t have access to it.” So I think Facebook’s actually one of those technologies that is so pervasive, and the sousveillance that’s involved, and the persuasion that it has, took everyone by surprise. It’s a great question.

Hudson: It could also be that we told the stories but we told them about television instead of Facebook, right. And I think we learned good lessons about TV being this problematic medium that maybe we forgot when we switched mediums.

Noessel: I’m also wondering if…Santa Claus was…

Hudson: Yeah. Yeah.

Noessel: No, seriously.

Audience 2: I’ve heard AI described as a dual-use technology. But I’ve also heard you say during your presentation that A, people tend to go to extremes when they’re describing it. And also, you mentioned that 70% of British citizens have a very dystopian view of the technology. So I guess the question I would ask is, assuming that a narrative about AI is rooted in a time and place, what are we missing today? What are our blind spots?

Noessel: I’m about to present that.

I mean, there are other answers to that, but I literally did a longitudinal stu— Longitudinal, is that the word for— A study of science fiction tropes to find out the stories we’re not telling. So if you can wait…

Kevin Bankston: That’s actually a great—

Noessel: Okay, okay. 

Hudson: Let’s call it.

Noessel: There might be one more question, ’cause that clock says we have one more minute. Can we? Can we, Kevin? Yeah.

I mean, I’m eager to talk. But.

Mike Nelson: Hi, I’m Mike Nelson with Georgetown University, and I teach in the Communications, Culture, and Technology program. This has been fascinating. You’ve talked a lot about the robot overlord scenario, where we give them all the power and they use it. You’ve talked a lot about the robot underclass that rises up. But there’s another scenario that appears less frequently, and that is the robot underlords, who kind of take over the basic grunt work of civilization and slowly work up the stack until they’re sort of taking care of all our needs. And then civilization dies of boredom and complacency. Wall‑E is the best one. Kurt Vonnegut’s Player Piano. Kurt Vonnegut and the Tralfamadorians. Do you have other examples, and how likely is it, do you think, that we’ll just be so lazy we’ll cease to challenge ourselves?

Williams: Bradbury.

Dihal: Yeah, that’s the obsolescence narrative that I mentioned. There was a screenshot from Wall‑E illustrating it. It’s a fairly common one. Usually it has to do with work, but you also have the sort of social obsolescence. So well, everybody immediately jumped on Wall‑E. You said Bradbury?

Williams: Yeah. There’s a Ray Bradbury story. I cannot remember the name of it off the top of my head, but it’s entirely about a future, probably Martian, civilization in which the people are…gone, because the automated systems of their house and the automated systems of their world took care of everything to the point where they had no reason to do anything.

As for your secondary question, I honestly don’t think that’s very likely. I think we get bored easily. And we innovate on that boredom. Real well. And we find new ways to amuse ourselves, even if they’re just remixes of old ways.

Nelson: And now there’s robot entertainment.

Williams: We do. 

Dihal: And also, the more tasks that we managed to give to machines, and robots, and computers, the busier we seem to get?

Williams: Yeah. 

Noessel: Star Trek, the original series: “Spock’s Brain” is another example.

Hudson: Well, we’re going to have to stop there. Thank you to our panelists very much.
