Kevin Bankston: Next up I'd like to introduce Madeline Ashby, who like Kanta is on the advisory board of our AI Policy Futures project, but unlike Kanta couldn't make it today. Madeline, in addition to being a professional futurist who consults with companies and government agencies, is also an accomplished science fiction writer, best known for her most recent novel Company Town and for her Machine Dynasty series. Her work on the third and final book of that series is what kept her from being here in person today, so she has instead recorded a five-ish minute message that'll serve as a lead-in to our next panel. So if we could queue up Madeline's video, that would be great.

Madeline Ashby: Hi, my name is Madeline Ashby and I welcome you to this event, and I apologize that I cannot be there with you. Now the reason that I cannot be there with you is that I'm a science fiction writer as well as being a futurist. And I am wrapping up the edits on the final book in a trilogy about artificial intelligence, the subject of our conversation. And it's requiring a lot of my attention in part because I'm realizing that it's sort of the last chance I have to work with these characters and make a final statement about you know, what I was trying to do when I decided that I wanted to write about artificial intelligences, and the evolution of consciousness, and what it would be to be a different kind of consciousness. I think that writing about artificial intelligence is basically a phenomenological question. It is about what it is to be a bat. It is about you know, taking on this otherness.

And I think that one of the challenges when we talk about how we're going to write about AI is that so much has already been written. Both at the level of myth…you know. When we talk about stories like the golem, stories like Pinocchio, things like that, those are also artificial intelligence stories. But also there's this whole gamut of pop culture stories, right. And as I'm sort of finishing this trilogy out, I realize now that I'm sort of thinking about all of those other sort of renditions of this story, or all the other versions of this story and how I can possibly set my characters apart.

And one of the ways that I've tried to do that is to make sure that my characters make reference to, in their dialogue, other depictions of robots. So in the Machine Dynasty, which is my trilogy about robots who eat each other and you know, evil grandmothers and so on and so forth, these robots make reference to the fact that the word "robot" comes from the old Slavonic word for slave. They are aware of the fact that in popular culture already, they have been depicted as godless killing machines, or sexbots, or skinjobs, or what have you. And they're aware of it; they've seen depictions of themselves.

And I think that you know, if you believe that eventually artificial intelligence will achieve sort of an anthropomorphized consciousness or human-like consciousness, or even just a mammalian consciousness, a mammal-like consciousness…you know, you're talking about something that might later read what you wrote about it. The same way as when you blog about your kids. There's every possibility that they're going to find out what you wrote. And I think that's one of the most interesting challenges about this, is that you know, you have to be kind of careful about what it is that you're going to say. What expectations are you creating? What are you…telling this thing to be? What are you telling it to become? And can you tell it to become something better than you? You know, can it be better than what you are? Is it a true evolution? Can it go beyond you?

And I think that's sort of one of the most interesting challenges as we frame sort of debates about artificial intelligence, debates about what intelligence is, what consciousness is. You know, why is it that we think that our version of intelligence is "the best." Why is it that human intelligence gets sort of this primacy? Why is it considered the best? Isn't that sort of an anthropocentric, narcissistic attitude for us to take? Aren't we discounting other models of intelligence? Whale intelligence, dolphin intelligence, the intelligence of bees. Raven intelligence. All of those other models exist on this planet, you know. They are aliens among us. And those are natural intelligences. There's nothing artificial about them. And yet they are just as foreign to us as some of the fictional things that we are probably talking about today.

So, because I can't be there with you, I guess what I would ask you to keep in mind is eventually, you might have to explain what you wrote. You might have to explain the story you told. You might have to explain why you represented an entire type of intelligence in a certain way. When we talk about representation in fiction we're often talking about really loaded categories, like really sensitive topics, really…you know, we're talking about the subaltern. We're talking about marginalized people. We're talking about bringing representations forward of people who have been characterized as villainous. As evil. As depraved. As perverse. As all of the things that you know… all of the qualities that sort of get penalized later on. Or that are considered bad.

And so I guess you know, when we talk about how we represent artificial intelligence, think about the lessons that it might be learning from you. Is it seeing itself? Is it seeing itself represented? Is it seeing itself? You know, is it seeing the potential for good? Is it seeing the potential for growth? You know, we ask that question about ourselves: how are we representing ourselves? How are we representing different groups of ourselves? How are we representing the multiplicity of humanity? And we should possibly start considering how we represent the multiplicity of intelligence as well.

And so I guess that's sort of what I would say. Hopefully I would be more articulate if I were actually there. But edits are pretty killer, so. Good luck guys.


Kevin Bankston: Thank you Madeline. And good luck with your edits. Now I'd like to welcome to the stage the panelists for our second panel on AI in sci-fi. So come on up, folks. The moderator, Andrew Hudson, is a science fiction writer himself and a graduate student at ASU, where he studies how speculative futures can better help us imagine how to live through climate change, and where he has also been leading the research for our AI Policy Futures project, so thank you for that. So Andrew, take it away, and let's go till 3:20 instead of 3:15.

Andrew Hudson: Yeah, sure. Thanks everyone, and thanks to the rest of my panel. We're the fiction panel, following up on the fact panel. And I thought the fact panel did a really good job laying out some frustrations that I think are very reasonable to have with the science fiction literature that has used this term "AI." And so I'll just say on behalf of sci-fi writers, "Our bad." But I think what I hope we can do in this panel is have a slightly more literary discussion to try to answer, well, why were those the stories that we were telling, and like, what has been the point of telling those stories even though they don't now necessarily always align with the policy problems that we're having. But what was the use of them. So I'll let the rest of my panelists introduce themselves. But I was hoping we could start, as we go through, with responding to Madeline's provocation and say like, what kinds of blog posts are we leaving about our children, human or non, and the type of society that they're going to be creating? Hello Chris.

Chris Noessel: Sure. I have an opportunity to introduce myself as a solo speaker next, so I'll be very brief. I'm here for being the author of SciFiInterfaces.com, a nerdy blog. But I actually think that Madeline's injunction about thinking about your progeny as your audience is…I kinda don't want to think about that. Partially because it will help both my biological progeny and my ideological progeny understand where they came from better. Because I don't want to put a veneer on that and lie, or change what I would say.

Lee Konstantinou: I’m Lee Konstantinou. I’m a pro­fes­sor in the English depart­ment at the University of Maryland, College Park, so I’m a local. I teach and write schol­ar­ship on sci­ence fic­tion, and I’m also a writer of sci­ence fic­tion. I’ve writ­ten a nov­el, I’ve writ­ten a bunch of short sto­ries. I’m think­ing a lot about AI in dif­fer­ent projects that I’m work­ing on. And I don’t know if we’re going to intro­duce our­selves first and then answer the ques­tion, but to Madeline’s provo­ca­tion you know, the thing that came to mind is that the per­son who writes the blog post about their child is real­ly in a way not writ­ing about their chil­dren at all. They’re often writ­ing about them­selves, writ­ing about their own hopes and aspi­ra­tions.

And one thing I would say about a lot of our science fictional narratives that feature AI is that they're often not really about AI in any kind of technical sense. They're not engaging in the project of forecasting. They're not trying to give us a technical blueprint for the future. And so to our AI progeny who will watch this video I say, "It wasn't about you at all, it was all about us," you know.

Kanta Dihal: Well I’m Kanta Dihal. I’ve just been intro­duced by Kevin so I’ll just go straight to the ques­tion. So well, I don’t have chil­dren of my own but I do have the strong belief that yes, you might want to keep them in mind when you pub­licly write about them. Because I recall a friend show­ing me a blog post that a preg­nant fam­i­ly mem­ber had made, and her mus­ings on how she hoped that this child was going to turn out healthy because oth­er­wise she did­n’t want it.

Which brings me to the idea of, and I guess it must be mentioned, or maybe I'll doom you all by saying this, but Roko's Basilisk is the sort of thought experiment/terrify-your-children-before-going-to-bed story which says that if you know that a superintelligence is going to exist in the future, then you have to bear in mind that it is going to know everything you do in your life, so you had better dedicate your life to making sure that this superintelligence is going to be built, and not hinder it, because otherwise that superintelligence will make a copy of your brain and torture it into eternity in cyberspace.

Yeah. So, now you’ll all have to go out and do that and write nice blogs about the AI.

Damien Williams: I’m Damien Williams. I am a PhD researcher at Virginia Tech University. My work is in sci­ence, tech­nol­o­gy, and soci­ety. I’m research­ing the ways that bias and val­ues get embed­ded into tech­no­log­i­cal and non-technological sys­tems, specif­i­cal­ly look­ing at arti­fi­cial intel­li­gence, machine learn­ing, human biotech­no­log­i­cal inter­ven­tions such as pros­the­ses, implants, oth­er what peo­ple might think of as cyborg imple­ments. And when I use the word bias” there, which is kind of what my ques­tion was ear­li­er, I mean both in terms of per­spec­tives but also in terms of mod­els but also in terms of the things that under­gird what even­tu­al­ly become prej­u­dices. Bias in that, and under­stand­ing it as anoth­er way of think­ing about what it is that we mod­el for and try to pre­dict based off of.

My original Master's degree is in a combination of philosophy and religious studies. And so this conversation about what it is that we leave for our children, and what the Basilisk might do, and what the mind is, and what it is that consciousness might be and be modeled as in these stories…all of those things are pretty pertinent.

For Madeline’s provo­ca­tion, I think that we do have to kind of think about our chil­dren, our prog­e­ny. But I don’t think that nec­es­sar­i­ly requires that we change what we say. But it means giv­ing con­text to what we say and why we say it. My own par­ents, I want them to be hon­est with me about what they feel. And I don’t nec­es­sar­i­ly always have direct access to exact­ly why they feel when they feel it. We’re talk­ing about a thing in Madeline’s provo­ca­tion that will have the access to look at the con­text of lit­er­al­ly every­thing, all the time, for­ev­er. So in that sense if we’re talk­ing about a prog­e­ny which will be able to reach back and see why we thought what we thought when we thought it, I think we should be care­ful not just what we say, but be care­ful to be will­ing to think about why it is we feel what we feel. And not just toss ideas out there with­out that con­text.

And I think that’s about com­mu­ni­ca­tion. I think that’s about not just you know, hedg­ing our bets so that the Basilisk does­n’t kill us, or tor­ture a copy of our brains for­ev­er and ever. I think it’s about being will­ing to be open and com­mu­nica­tive with anoth­er mind that while it might be dras­ti­cal­ly dif­fer­ent from ours is still made from us. And I think that’s just par­ent­ing in any capac­i­ty.

Hudson: Well, hopefully context is kind of what we can give some of today, around some of the stories that have shaped a lot of the mythos that we've built up. So I want to go back to Kanta's hopes-and-fears dichotomies, which I think are really fascinating, and maybe ask the panel: are these a reflection of the way sci-fi has played into narratives that we already had in our society about the majority versus the marginalized, or versus minorities, or versus the outsiders? And how have maybe some of the core AI stories forwarded those narratives, or produced counternarratives? Maybe Lee, do you want to start us off?

Konstantinou: Yeah. So I said in my previous answer that science fiction narratives about AI are often allegorical in their scope. And one of the main or great kind of allegorical subjects of science fiction about AI is the question of power and authority, and domination, right, which [to Chowdhury] your talk I think outlined so beautifully.

And so I think what we sort of find in our science fiction narratives about AI are like, every possible combination of forms of domination. You get AIs that kill all humans. You get humans who in one way or the other are dominating or torturing AIs. You can think of a narrative like Westworld or Ex Machina, where the AIs could arguably be said to have good reason for rebelling against their human masters. You get works of science fiction like Dune or Battlestar Galactica where there is a prior AI revolution or AI uprising that leads to the elimination or extermination of AI. And I think you get all of these variants, and they're often not very nuanced. You know, they pick a side, they pick a trajectory, and I think the most interesting science fiction is finding a kind of more nuanced or kind of pluralistic vision of what AI might be, that's breaking out of these tropes.

So a recent book by the novelist Annalee Newitz, her book Autonomous, is I think actually one of the best visions of a world in which AI come in all shapes and sizes. They're embodied in a variety of ways. They have political opinions. They're…kind of wrong, misguided, foolish, courageous. And they're not quite human at the same time. And so I think a promising science fiction is sort of science fiction that is moving in that more complex direction. For me, for my taste. I don't know if that answers your question, but.

Hudson: Chris, I know your taste aims a little more poppy, but what do you see in this type of predominant narrative versus like, counternarrative?

Noessel: I do study big-budget films and television shows, mostly. And those, the creators of those stories, are always hedging their bets? because they want to make as much money as possible? with their stories. And that means that they can only go so far outside of a paradigm before they begin to lose that. Primer is a great film about time travel, but it is not accessible to the majority of pop sci-fi viewers. And that dichotomy of "yes, I can get Thor; he's a dude with a hammer" or "I can't quite understand the anglerfish metaphor of Under the Skin" means that the things I study tend to be on this safer side of sci-fi. And what I see across the narratives that I analyze is they work on a principle of what you know, plus one.

So, to abuse a phrase—and unfortunately I can't remember the fellow who coined it—but like, "what if phones, but too much." Or—

Williams: Daniel Ortberg.

Noessel: Thank you, what is his name.

Williams: Daniel Ortberg.

Noessel: Thank you.

Williams: The Toast.

Noessel: They can’t real­ly go to the extents of you know, you can’t waste twen­ty min­utes of an audi­ence’s time with a giant back­ground in order to explain why this moment that you’re about to see in the cin­e­ma is rel­e­vant. They have to play fast and quick, and that keeps the sto­ries in cin­e­ma and tele­vi­sion fairly…mmm…less risky.

Hudson: Kanta, so is this, I think, a fair way to spin out from your dichotomies?

Dihal: Yes, definitely. And then when you're looking at the relationship between these kinds of narratives and the older narrative traditions that they fit in, again it's almost as if AI is the sort of hyperbolic version of technology making everything possible. But it's very similar to narratives of flying. I mean, flying was a dream, a technological dream, for thousands of years, until it actually happened, and it took a form very much unlike what had been imagined in all those narratives. There was no wing flapping, and there were no steam engines up in the air. But we could fly. And we can fly now, and nowadays it's just really everyday business. So in the same sense, these stories about AI are in all kinds of ways anticipating ways of relating ourselves to intelligent machines.

And so, on representation and counternarratives, I think one thing that many stories of AI make clear is that they presume, or at first sight they seem, to be about humans versus non-humans. So humankind as this one globule in which all of us here and everyone out there is included, versus the rest. And the same with narratives of aliens.

Well, what these narratives actually reveal is that humanity is something that is granted as a matter of degree. Some people are considered more human than others. And when you get an intelligent machine, it slots into that hierarchy and shakes up that hierarchy. And intelligence is actually a way in which that hierarchy has been maintained, what with things like—here in the US context, the SAT being developed by a eugenicist in order to keep people of color out of the universities.

So intelligence as this benchmark for how human something or someone is gets really problematic when you bring in an artificial intelligence that might be more intelligent. Because that one might start poking all the way up at the top saying, "'Scuse me. I'm at the top now, according to your benchmarks." And that's where people like Elon Musk start worrying.

Hudson: Yeah. I really like the flying question. And one question that I've heard that I find really provocative is, does a submarine swim? And the question of whether a machine thinks may actually be as arbitrary as why we say that a plane flies but we don't really like saying that a submarine swims. It's just sort of a gimmick of language.

But yeah, to your other point, it seems like AI stands in for the other in lots of allegorical stories. And so maybe Damien, could you give us some examples of this, if you have some? And is it helpful to have these types of stories, now that we're talking about the ways that real-life AI systems other other human beings?

Williams: To answer your second question first, yes.

To answer your first question next: I mean, the examples that we have go down through history. I mean, we have, as we've kind of talked about a number of times… Madeline Ashby brought this up in her recorded talk, the word for robot comes from the Slavonic word for slave. But that's from a piece called R.U.R. (Rossum's Universal Robots), and that's about an oppressed working class that were enslaved and made into a group of workers. They were made to be these workers. But there's also instances where we talk about the idea of… I mean, we can even look back to when robots were being promised to everybody in IBM ad copy, and this idea that everybody would have a robot slave of their own. Like, that was literally ad copy that was in magazines. Like, "the days of slavery will come back. Don't worry, we don't mean humans."

So, it’s always been this under­cur­rent, the notion of the oppressed, the mar­gin­al­ized, the upris­ing, and kind of over­com­ing, and the ten­sion between on the one hand we think that’s right and we think it’s jus­ti­fied; and on the oth­er hand, we’re scared of it. Because it’ll be upris­ing against us. We have that in Westworld from the orig­i­nal. We have that in all of Asimov’s sto­ries. We have that in basi­cal­ly any­thing with a machine intel­li­gence that some­how turns its own cre­ation into a fact of mak­ing humans and its cre­ators obso­les­cent. That kind of process of obso­les­cence becomes the stand-in for oh no, have we become the ones that got over­thrown? Whoever expect­ed this could hap­pen to us?

And I think that it’s impor­tant that we still think about…not nec­es­sar­i­ly in the same dynam­ics of those kinds of slave nar­ra­tives of oppres­sion but in terms of mar­gin­al­ized peo­ples and think­ing about the ways that we look at the… Robots are often stand-ins for… Even when they’re not rep­re­sent­ing over­throw­ers. They’re often stand-ins for peo­ple with non-standard or neu­ro­di­verse posi­tion­al­i­ties in the world. For autis­tic peo­ple. For peo­ple with ADHD. For peo­ple who think and see and expe­ri­ence the world dif­fer­ent­ly. And there’s often in even just our lin­guis­tic con­ceits, there’s a line drawn between neu­ro­di­verse pop­u­la­tions and robot­ic­ness, or machine-like qual­i­ties.

And so that’s why I think the answer to your sec­ond ques­tion is yes. It has to be inves­ti­gat­ed. We still have to think about these things, because even as we are cre­at­ing sys­tems which oth­er peo­ple… Which take in data points or are con­struct­ed at the very out­set in such a way that they will mar­gin­al­ize or fur­ther repress, they are still going to be used as touch­points and metaphors for talk­ing about the very peo­ple and the very pop­u­la­tions whom they are oppress­ing. And we have to take the time to ren­der out in sto­ries a mod­el for think­ing dif­fer­ent­ly about that. For specif­i­cal­ly inter­ro­gat­ing that ques­tion. For say­ing, isn’t oppressed per­son right to over­throw their oppres­sor? Isn’t some­one who sees the world dif­fer­ent­ly right to ques­tion the met­rics by which they’re being judged? That’s one way to read Blade Runner, by the way. There’s a bur­geon­ing host of autists—people with autism—who are look­ing at Blade Runner and going maybe the prob­lem isn’t that these stand-ins for autis­tic peo­ple don’t feel. Or don’t feel the right way. Maybe they feel too much. Maybe the way that they feel is present but dif­fer­ent enough that the humans in their capac­i­ty don’t under­stand what it is that they’re feel­ing, and are rein­ter­pret­ing that nar­ra­tive in that way.

And so, thinking about how we take those narratives of oppression and specifically ask: well, what if the people who are being modeled or mirrored here are the ones who get to tell the story? What story would they tell about this instead? That question becomes deeply, deeply important, specifically because if it's not interrogated it will be used to further marginalize them. To further disenfranchise them from the tools that are being used to operate and control their lives.

Dihal: That is a great reading of Blade Runner that I wasn't familiar with yet, because the reading of Blade Runner that is most often advanced, and that is being used for lots of different narratives about artificial intelligence, is the slave narrative. So the AI stands in for the oppressed racial other.

The same with, again, aliens. I'm thinking for instance of the film District 9, which shows racial segregation except it's humans versus the aliens. And in both these cases, Blade Runner and District 9, you can see that by means of having the AI and the alien as the racial other, you presume that all the humans are white. You need no racial diversity among your humans because you have a racial other. And you can see that in Blade Runner! These are fugitive slaves; all the androids are white, and nearly all the humans are too. And as far as I remember that's not any better in the new Blade Runner. And in District 9, for the fact that it's set in South Africa, again very few non-white human protagonists.

Williams: The number of black South Africans who appear in District 9 is, I want to say, something around the order of twelve total? And they are basically a faceless gang.

Dihal: Yeah and aren’t they sup­posed to be racial stereo­types of Nigerians?

Williams: Mm hm.

Konstantinou: Yeah. It was very, yeah, controversial.

Williams: Yeah. And so yeah, that's taking the time to, again, just specifically dig down on those facts and say: we have told this othering story for so long, and it has made its way into the process of what it is that we build these things to do, if not to be. What if we did this otherwise? Oughtn't we do this otherwise? And taking the time to do so.

Hudson: I think there’s lots of ways in which that pat­tern also shows up in oth­er gen­res, right. There’s so many ways in which AI sto­ries to my mind repli­cate hor­ror tropes, right. Like the androids are zom­bies. The sort of dis­em­bod­ied Siri voic­es are ghosts, right? So we’re in a well-trod kind of lit­er­ary tra­di­tion here one way or anoth­er.

Anyone else on this question?

So, one thing that is unique about this AI discourse that we're having is that it goes back a long way. In some ways much further than…like, we've been talking about the what-ifs of AI way before we started having organizations that put AI in their sort of hype notes, right? And now we're here, but there's been a whole evolution of this conversation along the way. And so Chris, maybe…I know you have some data on how the way we've talked about AI has evolved over the last century.

Noessel: Yeah. So in the analysis that I'm going to share…in sort of the solo talk, one of the things I took a look at was the valence and the prevalence of which narratives have been told when, from the beginning of cinema to now. And there are four main eras, if you will. And this data isn't in the solo talk, so I'm happy to explicate it.

We’re going to bypass Le voy­age dans la lune par­tial­ly because it was a piece of vaude­ville that was put to film, and regard Metropolis as the first seri­ous piece of sci­ence fic­tion. And Fritz Lang’s mas­ter­piece was the sort of begin­ning of this very dark, dystopi­an era, where espe­cial­ly European film­mak­ers were using tech­nol­o­gy to illus­trate the evils of the Industrial Revolution. And so the very begin­ning of AI in sci-fi was just…it’s ter­ri­ble, it’s dark, it’s going to require us to feed our chil­dren to the machines.

Then, starting with Robby the Robot in Forbidden Planet, there was an era of positivity, and almost sort of like American advertising for how awesome AI will be. It'll be like, look! They won't even be able to disobey you without short-circuiting. Won't that be marvelous!

And that period lasted probably up until the 80s. And films such as RoboCop began to ask questions about, well, maybe it's not as pretty as it—because by then of course America had become sort of the cinematic juggernaut of the world—began to admit that maybe it's not going to be all Robbys in the world. And so it was a period of investigating the complications. And in fact that was the emergence of "evil AI," rather than sort of a systemic machine like we saw in Metropolis. So we see things like the horrible Proteus IV in Demon Seed, that just comes right out the gate as evil. It's also a period of unquestioning genesis narratives, like champagne on a keyboard brings a computer to life, or a lightning bolt strikes a plane and suddenly it wants to rebel.

And that continued up until the aughts, the two thousand aughts. And that final period is where people are beginning to deal with the realities and the nuances of AI, and even get into that sort of otherness—what does it mean to be other. And that's sort of where we are. What's most interesting about these trends is they don't quite follow the science—the peaks and valleys of AI hype, and the AI winters. There's not a tight correlation, which I would have expected.

So, those are the sort of four big eras. There are lots of other analyses, but I ought not to go into them.

Hudson: Yeah, I’m curi­ous if any­one else has thoughts on what are like, some of the high­lights of those moments? And maybe some works that you did­n’t men­tion that define some of those types of waves.

Konstantinou: I mean, so one interesting way to track these trends might be to look at the way like, a franchise like Star Trek treats AI. And so like, the latest season of Star Trek: Discovery has an evil AI from…I think it's from the future, as its main enemy.

Noessel: Spoiler alert.

Konstantinou: I’m sor­ry. Yeah. Well. Yeah, I’m gonna ruin it all. Yeah.

Williams: It’s been over three weeks.

Konstantinou: But it’s—I did­n’t real­ly spoil any­thing. But it’s kind of an unusu­al… And it’s tied up with kind of the ori­gin of the utopi­an soci­ety that is the Federation. And this is a show that’s osten­si­bly set in the past of the fran­chise, but it’s a much more moral­ly ambiva­lent, dark­er vision of AI, of the use of these sys­tems, com­pared say to like if you remem­ber the holo­graph­ic doc­tor from Star Trek: Voyager who’s there to help and many plot­lines are ded­i­cat­ed to explor­ing his emerg­ing human­i­ty. Or Data from Star Trek: The Next Generation. And so it does seem that like, a fran­chise like Star Trek would be an inter­est­ing way to think about pub­lic sen­ti­ments about AI and how they’re chang­ing.

Dihal: And related to that, you could see the same happening in Star Wars—

Williams: Mm hm.

Dihal: —where initially you have the sort of unquestioned…the AIs are comic relief, moving towards, well, the most recent Star Wars films, where it's much more ambivalent. So in Solo, there is an AI who stands up for robot rights, and who goes to robot fighting pits and tells the robots who fight in these pits that they don't have to have such a life. That they have free will. And she claims that she has a romantic relationship with her human copilot. Now, that's quite a different way of looking at it than sort of R2-D2 beeping around a bit.

Hudson: And R2-D2, in one of the classic scenes in the first Star Wars, being told like, we don't serve your kind here, right. Like, the droid's gotta leave. Yeah, that's a pretty big jump.

So I guess— Did you have any other highlights that you wanted to add, Damien?

Williams: I was thinking about actually kind of a tandem across these: RoboCop, and thinking about RoboCop 1987 versus RoboCop 2014, and the different portrayals of what that kind of police state, drone warfare, robotics, and AI narratives, what those look like. Like, you can see a lot of similarity between the two, obviously, because it's just a strict remake. But there's also nuance in what the characters, and interior to the narrative, consider to be the problem of what has happened here. Versus you know, in RoboCop 1987 it was the, "Oh no, Murphy's not Murphy anymore." Or is he? And while that plays in somewhat in the remake, it changes to be more about not just is he still himself, but what he's been turned into—like, they very clearly show that his automated systems can be turned on and he can be made into literally a piloted drone, in human, bipedal form.

And so that shift about automated war-fighting, and the militarization of police, and the automation of the militarization of police, becomes much more the current fear in 2014, versus this notion of how do we stop crime in Detroit and oh no, is that person still really a person, in 1987.

Noessel: There’s also a shifting role of the state versus corporatism—

Williams: Yes, yes.

Noessel: —across the two films.

Williams: Yes.

Noessel: I hesitate to mention RoboCop 2… [Konstantinou laughs]

Williams: Yeah. As long as you don’t go to RoboCop 3 I think we’re fine.

Noessel: Yeah…uh…

Dihal: Is this where I can bring in that I’ve always maintained that Inspector Gadget is a parody of RoboCop?

Williams: Yes.

Konstantinou: I think that’s right, yeah.

Williams: Fantastic.

Hudson: Yeah. The shifting role of the state kinda like, puts me in mind of…I recently read a pretty old sci-fi story, I think it was by Asimov, called “Franchise,” which was I think from…it’s from the 50s but is of course set in 2008. And in it the supercomputer Multivac figures out who is like the exact one person you need to poll to figure out how to pick the president and how to decide all the elections. And I was thinking about that in comparison with a more recent incarnation of the all-seeing supercomputer from Person of Interest, the…

Noessel: The Machine.

Hudson: The Machine. And how that’s very much like a sort of surveillance state. And what The Machine does, and its evil counterpart, is not based on like who gets to vote and figuring out— Which I think was a much contested question in the 50s. The Machine is like, who gets eliminated by the anti-terror kill squad, right. And so that shift I think probably we can track to our own political discourse.

So I want to touch on kind of one more thing and then we’ll take some questions. To come back one last time to the hopes and fears, I know Kanta you are now doing some research that explores a much broader swath of AI narratives that maybe we haven’t even discussed here today. So are those hopes and fears, do you feel like those are inherently Western hopes and fears, and that other cultures and other societies, and even other genres might have a different take on AI?

Dihal: Yeah, so this is the project that I lead, called Global AI Narratives. Rather than us doing all the research and basically trying to look at everything that’s done across the world, we’re building a network of scholars who in their own regions are experts on this, and bringing those together so that we can compare and get answers to questions like this.

So far we have done so in Singapore and in Japan. And at the Japan workshop there were indeed some fascinating revelations, especially about what does the media image of an AI look like. So where we would here have the Terminator, or even, as I showed in my PowerPoint presentation, two Terminators and a nuclear explosion because it can’t be dramatic enough, in Japan the most common go-to image is a pudgy blue cartoon cat called Doraemon. Anyone familiar with Doraemon?

So Doraemon was a really long-running TV series and manga series. And this was something that people grew up with, especially the generation…well, basically age 30 and above in Japan. And that’s why that narrative is so much more influential. And yeah, it’s a cutesy cat, and also it’s— Well, it is an android, a robot, but shaped like a cat. It comes from the future and it tries to solve problems that the human protagonist runs into by means of grabbing futuristic tools from its pouch. And every time there’s a new tool that’s supposed to be able to fix all the problems, and then it doesn’t. So that’s a completely different narrative of the robot buddy. And a hopeful one.

Noessel: Interesting. Genevieve Bell—formerly of Intel and now she’s back home in Australia—and I had a quick chat before we spoke at a conference. And she noted that the existential threats, or let me say status threats, that AI and robots pose are not a problem, as her research has shown, in Japan or Shinto societies.

Williams: Right.

Noessel: Because they already have this notion that everything has a spirit. So the fact that spirit is embodied in technology is not really an issue? So it’s really…like, that’s a new concept for us, that our washing machine might have a hope or a fear of its own, but not necessarily for Shinto practitioners.

Hudson: Exactly. Wasn’t that…I think visualized very elegantly in one of the shorts in the…was it Love, Death & Robots…

Noessel: Oh yeah.

Hudson: …show, where you had a spirit fox being hunted in this sort of medieval Japanese society, and over time, as society evolves, it gets turned into a steampunk cyborg, basically. I hadn’t… Yeah, I totally agree that sort of the cultural tone and what was possible I think was very different in that.

Dihal: There are also dystopian narratives that can differ quite dramatically across cultures. So for instance, if you’re talking about the apocalypse in sort of mainstream science fiction, it is something in the future. It’s something that can be averted. It’s something that— An apocalypse story is a warning.

Now, for many societies across the world the apocalypse has already happened.

Williams: Correct.

Dihal: I mean, if you look at Native Americans, the apocalypse has already happened. So the stories that you get about the future are very different, informed by such a past.

Williams: I would like to open it up to the people.


Hudson: Okay! Well I think we'll take some questions now. So we have mics going around. Great. Back there on the right.

Audience 1: We already have a powered persuasion architecture. It seems that there are algorithms that know us better than we know ourselves, and that have gone so far as to not only get us to spend money but sway elections, maybe instigate ethnic cleansing. What are the…what were the science fiction warnings that we missed for this? I don't remember… You know, these are modern myths, and myths told us to know thyself. But it doesn't seem we've been getting that recently. Or did I miss that?

Noessel: There's a positive version of that; I can think of [crosstalk] an example.

Williams: The Culture.

Noessel: Which is that the long arc of the I, Robot series told of a bigger and bigger AI that was influencing society. But at the end of the short stories it had actually faded so far into the background, and humans had just become prosperous. And they didn't even make the connection. That's not quite the warning that you're looking for, but I know that was Asimov's ultimate arc for I, Robot. Other examples of broadly dystopian AIs… Where was the warning?

Dihal: Perhaps not so much the stories of intelligent machines, although— Okay, so there is Colossus by D.F. Jones. It was a novel and it was turned into a film in the mid 50s…?

Noessel: I don't remember.

Dihal: Which was basically about the US builds a computer that can control…

Noessel: Oh, Colossus: The Forbin Project.

Williams: Yes.

Dihal: Yes.

Noessel: Uh, '72?

Williams: Oh, 70s.

Dihal: 70s? Oh, it's later than I thought. So, the US has a defense supercomputer, and then it turns out that the Soviets also have a defense supercomputer. And they decide that they know what's best for humanity based on the cultural and political system in which they have been produced. And then it starts saying okay, surrender all your power to us, humans.

Humans say no.

Colossus says well, you have given me access to all your nukes. Colossus throws nukes. Humans have to obey.

Williams: More… Sorry, just to kinda jump in. A more recent example is actually Person of Interest. The latter arc of Person of Interest, and again, spoilers but the show's been over for four years now. And it's all on Netflix so you have no excuse.

The kind of culminating arc is about two competing AI supercomputers who are warring against each other to nudge people in a certain direction. The name of the supercomputer in like the US military arc of it, the thing is called Northern Lights. And that was made about two years before Edward Snowden's PRISM leaks? They got to a point where they actually had to say, "Okay, we need to completely reframe the story that we are telling. Because the things that we have been talking about, as soon as we write them…they happen. So we need to think differently about what it is we're considering science fiction about AI to be." And that was entirely about you know, a very large system of algorithmic nudging and influence moving people through what they thought was just the water of their lives.

Noessel: I think what's delightful—just one last bit about the question—is that the bigger the technologies, the less we've thought about them. Lots of older films that are being remade are having to account for the fact of cell phones. Which of course, instant access to any person on the planet was not a narrative possibility when stories were told in the 50s. So when they remake The Blob, they have to think about it. Or Battlestar Galactica has to think, "Well, why would we disable the networks? Oh I know, so the Cylons can't have access to them." So I think Facebook's actually one of those technologies that is so pervasive, and the sousveillance that's involved, and the persuasion that it has, took everyone by surprise. It's a great question.

Hudson: It could also be that we told the stories but we told them about television instead of Facebook, right. And I think we learned good lessons about TV being this problematic medium that maybe we forgot when we switched mediums.

Noessel: I'm also wondering if…Santa Claus was…

Hudson: Yeah. Yeah.

Noessel: No seriously.

Audience 2: I've heard of AI described as a dual-use technology. But I've also heard you say during your presentation that A, people tend to go to extremes when they're describing it. And also you mentioned that 70% of British citizens have a very dystopian view of the technology. So I guess the question I would ask is, assuming that a narrative about AI is rooted in a time and place, what are we missing today? What are our blind spots?

Noessel: I'm about to present that.

I mean, there are other answers to that, but I literally did a longitudinal stu— Longitudinal, is that the word for— A study of science fiction tropes to find out the stories we're not telling. So if you can wait…

Kevin Bankston: That's actually a great—

Noessel: Okay, okay.

Hudson: Let's call it.

Noessel: There might be one more question, 'cause that clock says we have one more minute. Can we? Can we, Kevin? Yeah.

I mean, I'm eager to talk. But.

Mike Nelson: Hi, I'm Mike Nelson with Georgetown University, and I teach in the Communications, Culture, and Technology program. This has been fascinating. You've talked a lot about the robot overlord scenario, where we give them all the power and they use it. You've talked a lot about the robot underclass that rises up. But there's another scenario that appears less frequently, and that is the robot underlords, who kind of take over the basic grunt work of civilization and slowly work up the stack until they're sort of taking care of all our needs. And then civilization dies of boredom and complicity. Wall-E is the best one. Kurt Vonnegut's Player Piano, and the Tralfamadorians. Do you have other examples, and how likely is it do you think that we'll just be so lazy we'll cease to challenge ourselves?

Williams: Bradbury.

Dihal: Yeah, that's the obsolescence narrative that I mentioned. There was a screenshot from Wall-E illustrating it. It's a fairly common one. Usually it has to do with work, but you also have the sort of social obsolescence. So well, everybody immediately jumped on Wall-E. You said Bradbury?

Williams: Yeah. There's a Ray Bradbury story. I cannot remember the name of it off the top of my head but it's entirely about a future, probably Martian, civilization in which the people are…gone, because the automated systems of their house and the automated systems of their world took care of everything to the point where they had no reason to do anything.

As for your secondary question, I honestly don't think that's very likely. I think we get bored easy. And we innovate on that boredom. Real well. And we find new ways to amuse ourselves even if they're just remixes of old ways.

Nelson: And now there's robot entertainment.

Williams: We do.

Dihal: And also the more tasks that we managed to give to machines, and robots, and computers, the busier we seem to get?

Williams: Yeah.

Noessel: Star Trek the original series, "Spock's Brain" is another example.

Hudson: Well, we're going to have to stop there. Thank you to our panelists very much.
