Kevin Bankston: Next up I’d like to introduce Madeline Ashby, who like Kanta is on the advisory board of our AI Policy Futures project, but unlike Kanta couldn’t make it today. Madeline, in addition to being a professional futurist who consults with companies and government agencies, is also an accomplished science fiction writer, best known for her most recent novel Company Town and for her Machine Dynasty series. Her work on the third and final book of that series is what kept her from being here in person today, so she has instead recorded a five-ish minute message that’ll serve as a lead-in to our next panel. So if we could cue up Madeline’s video, that would be great.
Madeline Ashby: Hi, my name is Madeline Ashby and I welcome you to this event, and I apologize that I cannot be there with you. Now the reason that I cannot be there with you is that I’m a science fiction writer as well as being a futurist. And I am wrapping up the edits on the final book in a trilogy about artificial intelligence, the subject of our conversation. And it’s requiring a lot of my attention in part because I’m realizing that it’s sort of the last chance I have to work with these characters and make a final statement about you know, what I was trying to do when I decided that I wanted to write about artificial intelligences, and the evolution of consciousness, and what it would be to be a different kind of consciousness. I think that writing about artificial intelligence is basically a phenomenological question. It is about what it is to be a bat. It is about you know, taking on this otherness.
And I think that one of the challenges when we talk about how we’re going to write about AI is that so much has already been written. Both at the level of myth…you know. When we talk about stories like the golem, stories like Pinocchio, things like that, those are also artificial intelligence stories. But also there’s this whole gamut of pop culture stories, right. And as I’m sort of finishing this trilogy out, I realize now that I’m sort of thinking about all of those other sort of renditions of this story, or all the other versions of this story and how I can possibly set my characters apart.
And one of the ways that I’ve tried to do that is to make sure that my characters make reference to, in their dialogue, other depictions of robots. So in the Machine Dynasty, which is my trilogy about robots who eat each other and you know, evil grandmothers and so on and so forth, these robots make reference to the fact that the word “robot” comes from the old Slavonic word for slave. They are aware of the fact that in popular culture already, they have been depicted as godless killing machines, or sexbots, or skinjobs, or what have you. And they’re aware of it; they’ve seen depictions of themselves.
And I think that you know, if you believe that eventually artificial intelligence will achieve sort of an anthropomorphized consciousness or human-like consciousness, or even just a mammalian consciousness, a mammal-like consciousness…you know, you’re talking about something that might later read what you wrote about it. The same way as when you blog about your kids. There’s every possibility that they’re going to find out what you wrote. And I think that’s one of the most interesting challenges about this, is that you know, you have to be kind of careful about what it is that you’re going to say. What expectations are you creating? What are you…telling this thing to be? What are you telling it to become? And can you tell it to become something better than you? You know, can it be better than what you are? Is it a true evolution? Can it go beyond you?
And I think that’s sort of one of the most interesting challenges as we frame sort of debates about artificial intelligence, debates about what intelligence is, what consciousness is. You know, why is it that we think that our version of intelligence is “the best”? Why is it that human intelligence gets sort of this primacy? Why is it considered the best? Isn’t that sort of an anthropocentric, narcissistic attitude for us to take? Aren’t we discounting other models of intelligence? Whale intelligence, dolphin intelligence, the intelligence of bees. Raven intelligence. All of those other models exist on this planet, you know. They are aliens among us. And those are natural intelligences. There’s nothing artificial about them. And yet they are just as foreign to us as some of the fictional things that we are probably talking about today.
So, because I can’t be there with you, I guess what I would ask you to keep in mind is eventually, you might have to explain what you wrote. You might have to explain the story you told. You might have to explain why you represented an entire type of intelligence in a certain way. When we talk about representation in fiction we’re often talking about really loaded categories, like really sensitive topics, really…you know, we’re talking about the subaltern. We’re talking about marginalized people. We’re talking about bringing representations forward of people who have been characterized as villainous. As evil. As depraved. As perverse. As all of the things that you know… all of the qualities that sort of get penalized later on. Or that are considered bad.
And so I guess you know, when we talk about how we represent artificial intelligence, think about the lessons that it might be learning from you. Is it seeing itself? Is it seeing itself represented? You know, is it seeing the potential for good? Is it seeing the potential for growth? You know, we ask that question about ourselves: how are we representing ourselves? How are we representing different groups of ourselves? How are we representing the multiplicity of humanity? And we should possibly start considering how we represent the multiplicity of intelligence as well.
And so I guess that’s sort of what I would say. Hopefully I would be more articulate if I were actually there. But edits are pretty killer, so. Good luck guys.
Kevin Bankston: Thank you Madeline. And good luck with your edits. Now I’d like to welcome to the stage the panelists for our second panel on AI in sci-fi. So come on up, folks. The moderator Andrew Hudson is a science fiction writer himself and a graduate student at ASU, where he studies how speculative futures can better help us imagine how to live through climate change, and where he has also been leading the research for our AI Policy Futures project, so thank you for that. So Andrew take it away and let’s go till 3:20 instead of 3:15.
Andrew Hudson: Yeah, sure. Thanks everyone, and thanks to the rest of my panel. We’re the fiction panel, following up on the fact panel. And I thought the fact panel did a really good job laying out some frustrations that I think are very reasonable to have with the science fiction literature that has used this term “AI.” And so I’ll just say on behalf of sci-fi writers, “Our bad.” But I think what I hope we can do in this panel is have a slightly more literary discussion to try to answer well, why were those the stories that we were telling, and like, what has been the point of telling those stories even though they don’t now necessarily always align with the policy problems that we’re having. But what was the use of them. So I’ll let the rest of my panelists introduce themselves. But I was hoping we could start as we go through with responding to Madeline’s provocation and say like, what kinds of blog posts are we leaving about our children, human or non, and the type of society that they’re going to be creating? Hello Chris.
Chris Noessel: Sure. I have an opportunity to introduce myself as a solo speaker next, so I’ll be very brief. I’m here for being the author of SciFiInterfaces.com, a nerdy blog. But I actually think that Madeline’s injunction about thinking about your progeny as your audience is…I kinda don’t want to think about that. Partially because it will help both my biological progeny and ideological progeny understand where they came from better. Because I don’t want to put a veneer on that and lie, or change what I would say.
Lee Konstantinou: I’m Lee Konstantinou. I’m a professor in the English department at the University of Maryland, College Park, so I’m a local. I teach and write scholarship on science fiction, and I’m also a writer of science fiction. I’ve written a novel, I’ve written a bunch of short stories. I’m thinking a lot about AI in different projects that I’m working on. And I don’t know if we’re going to introduce ourselves first and then answer the question, but to Madeline’s provocation you know, the thing that came to mind is that the person who writes the blog post about their child is really in a way not writing about their children at all. They’re often writing about themselves, writing about their own hopes and aspirations.
And one thing I would say about a lot of our science fictional narratives that feature AI is that they’re often not really about AI in any kind of technical sense. They’re not engaging in the project of forecasting. They’re not trying to give us a technical blueprint for the future. And so to our AI progeny who will watch this video I say, “It wasn’t about you at all, it was all about us,” you know.
Kanta Dihal: Well I’m Kanta Dihal. I’ve just been introduced by Kevin so I’ll just go straight to the question. So well, I don’t have children of my own but I do have the strong belief that yes, you might want to keep them in mind when you publicly write about them. Because I recall a friend showing me a blog post that a pregnant family member had made, and her musings on how she hoped that this child was going to turn out healthy because otherwise she didn’t want it.
Which brings me to the idea of, and I guess it must be mentioned or maybe I’ll doom you all by saying this, but Roko’s Basilisk is the sort of thought experiment/terrify-your-children-before-going-to-bed story that says if you know that a superintelligence is going to exist in the future, then you have to bear in mind that it is going to know everything you do in your life. So you better dedicate your life to making sure that this superintelligence is going to be built, and not hinder it, because otherwise that superintelligence will make a copy of your brain and torture it for eternity in cyberspace.
Yeah. So, now you’ll all have to go out and do that and write nice blogs about the AI.
Damien Williams: I’m Damien Williams. I am a PhD researcher at Virginia Tech. My work is in science, technology, and society. I’m researching the ways that bias and values get embedded into technological and non-technological systems, specifically looking at artificial intelligence, machine learning, and human biotechnological interventions such as prostheses, implants, and other what people might think of as cyborg implements. And when I use the word “bias” there, which is kind of what my question was earlier, I mean it both in terms of perspectives, but also in terms of models, but also in terms of the things that undergird what eventually become prejudices. Bias in that sense, and understanding it as another way of thinking about what it is that we model for and try to predict based off of.
My original master’s degree is in a combination of philosophy and religious studies. And so this conversation about what it is that we leave for our children, and what the Basilisk might do, and what the mind is, and what it is that consciousness might be and be modeled as in these stories…all of those things are pretty pertinent.
For Madeline’s provocation, I think that we do have to kind of think about our children, our progeny. But I don’t think that necessarily requires that we change what we say. But it means giving context to what we say and why we say it. My own parents, I want them to be honest with me about what they feel. And I don’t necessarily always have direct access to exactly why they feel what they feel when they feel it. We’re talking about a thing in Madeline’s provocation that will have the access to look at the context of literally everything, all the time, forever. So in that sense if we’re talking about a progeny which will be able to reach back and see why we thought what we thought when we thought it, I think we should be careful not just what we say, but be careful to be willing to think about why it is we feel what we feel. And not just toss ideas out there without that context.
And I think that’s about communication. I think that’s about not just you know, hedging our bets so that the Basilisk doesn’t kill us, or torture a copy of our brains forever and ever. I think it’s about being willing to be open and communicative with another mind that while it might be drastically different from ours is still made from us. And I think that’s just parenting in any capacity.
Hudson: Well hopefully context is kind of what we can give some [of] today around some of the stories that have shaped a lot of the mythos that we’ve built up. So I want to go back to Kanta’s hopes and fears dichotomies, which I think are really fascinating, and maybe ask the panel are these a reflection of the way sci-fi has played into narratives that we already had in our society about the majority versus the marginalized or versus minorities or versus the outsiders, and how have maybe some of the core AI stories forwarded those narratives, or produced counter narratives? Maybe Lee, do you want to start us off?
Konstantinou: Yeah. So I said in my previous answer that science fiction narratives about AI are often allegorical in their scope. And one of the main or great kind of allegorical subjects of science fiction about AI is the question of power and authority, and domination, right, which [to Chowdhury] your talk I think outlined so beautifully.
And so I think what we sort of find in our science fiction narratives about AI are like, every possible combination of forms of domination. You get AIs that kill all humans. You get humans who in one way or the other are dominating or torturing AIs. You can think of a narrative like Westworld or Ex Machina where the AIs could arguably be said to have good reason for rebelling against their human masters. You get works of science fiction like Dune or Battlestar Galactica where there is a prior AI revolution or AI uprising that leads to the elimination or extermination of AI. And I think you get all of these variants, and they’re often not very nuanced. You know, they pick a side, they pick a trajectory, and I think the most interesting science fiction is finding a kind of more nuanced or kind of pluralistic vision of what AI might be, that’s breaking out of these tropes.
So a recent book by the novelist Annalee Newitz, her book Autonomous, is actually I think one of the best visions of a world in which AI come in all shapes and sizes. They’re embodied in a variety of ways. They have political opinions. They’re…kind of wrong, misguided, foolish, courageous. And they’re not quite human at the same time. And so I think a promising science fiction is sort of science fiction that is moving in that more complex direction. For me, for my taste. I don’t know if that answers your question but.
Hudson: Chris, I know your taste aims a little more poppy, but what do you see in this type of predominant narrative versus like, counternarrative?
Noessel: I do study big-budget films and television shows, mostly. And the creators of those stories are always hedging their bets? because they want to make as much money as possible? with their stories. And that means that they can only go so far outside of a paradigm before they begin to lose that. Primer is a great film about time travel, but it is not accessible to the majority of pop sci-fi viewers. And that dichotomy of yes, I can get Thor; he’s a dude with a hammer, or I can’t quite understand the anglerfish metaphor of Under the Skin, means that the things I study tend to be on this safer side of sci-fi. And what I see across the narratives that I analyze is they work on a principle of what you know, plus one.
So, to abuse a phrase—and unfortunately I can’t remember the fellow who coined it but like, what if phones, but too much. Or—
Williams: Daniel Ortberg.
Noessel: Thank you, what is his name.
Williams: Daniel Ortberg.
Noessel: Thank you.
Williams: The Toast.
Noessel: They can’t really go to the extent of…you know, you can’t waste twenty minutes of an audience’s time with a giant background in order to explain why this moment that you’re about to see in the cinema is relevant. They have to play fast and quick, and that keeps the stories in cinema and television fairly…mmm…less risky.
Hudson: Kanta, so is this I think, a fair way to spin out from your dichotomies?
Dihal: Yes, definitely, and then when you’re looking at the relationship between these kinds of narratives and the older narrative traditions that they fit in, again it’s almost as if AI is the sort of hyperbolic version of technology making everything possible. But it’s very similar to narratives of flying. I mean flying was a dream, a technological dream, for thousands of years, until it actually happened and it took a form very much unlike what had been imagined in all those narratives. There was no wing flapping, and there were no steam engines up in the air. But, we could fly. And we can fly now, and nowadays it’s just really everyday business. So in the same sense, these stories about AI are in all kinds of ways anticipating how we will relate ourselves to intelligent machines.
And so on representation and counternarratives, I think one thing that many stories of AI make clear is that they presume, or at first sight they seem to be about, humans versus non-humans. So humankind as this one globule in which all of us here and everyone out there is included, versus the rest. And the same with narratives of aliens.
Well what these narratives actually reveal is that humanity is something that is granted as a matter of degree. Some people are considered more human than others. And when you get an intelligent machine, it slots into that hierarchy and shakes up that hierarchy. And intelligence is actually a way in which that hierarchy has been maintained, what with things like—here in the US context, the SAT being developed by a eugenicist in order to keep people of color out of the universities.
So intelligence as this benchmark for how human some…something or someone is gets really problematic when you bring in an artificial intelligence that might be more intelligent. Because that one might start poking all the way up at the top saying, “Scuse me. I’m at the top now, according to your benchmarks.” And that’s where people like Elon Musk start worrying.
Hudson: Yeah. I really like the flying question. And one question that I’ve heard that I find really provocative is, does a submarine swim? And the question of whether a machine thinks may actually be as arbitrary as why we say a plane flies but don’t really like saying that a submarine swims. It’s just sort of a gimmick of language.
But yeah, to your other point, it seems like AI stands in for the other in lots of allegorical stories. And so maybe Damien, maybe could you give us some examples of this if you have? And is it helpful to have these types of stories, now that we’re talking about the ways that real-life AI systems “other” human beings?
Williams: To answer your second question first, yes.
To answer your first question next, I mean the examples that we have go down through history. I mean, we have, as we’ve kind of talked about a number of times…Madeline Ashby brought this up in her recorded talk, the word for robot comes from the Slavonic word for slave. But that’s from a piece called R.U.R. (Rossum’s Universal Robots), and that’s about an oppressed working class that were enslaved and made into a group of workers. They were made to be these workers. But there’s also instances where we talk about the idea of… I mean, we can even look back to when robots were being promised to everybody in IBM ad copy, and this idea that everybody would have a robot slave of their own. Like that was literally ad copy that was in magazines. Like the days of slavery will come back. “Don’t worry, we don’t mean humans.”
So, it’s always been this undercurrent, the notion of the oppressed, the marginalized, the uprising, and kind of overcoming, and the tension between, on the one hand, we think that’s right and we think it’s justified; and on the other hand, we’re scared of it. Because it’ll be uprising against us. We have that in Westworld, from the original. We have that in all of Asimov’s stories. We have that in basically anything with a machine intelligence whose creation somehow ends up making humans, its creators, obsolescent. That kind of process of obsolescence becomes the stand-in for oh no, have we become the ones that got overthrown? Who ever expected this could happen to us?
And I think that it’s important that we still think about…not necessarily in the same dynamics of those kinds of slave narratives of oppression, but in terms of marginalized peoples and thinking about the ways that we look at the… Robots are often stand-ins for… Even when they’re not representing overthrowers, they’re often stand-ins for people with non-standard or neurodiverse positionalities in the world. For autistic people. For people with ADHD. For people who think and see and experience the world differently. And often, even in just our linguistic conceits, there’s a line drawn between neurodiverse populations and roboticness, or machine-like qualities.
And so that’s why I think the answer to your second question is yes. It has to be investigated. We still have to think about these things, because even as we are creating systems which other people… Which take in data points or are constructed at the very outset in such a way that they will marginalize or further repress, they are still going to be used as touchpoints and metaphors for talking about the very people and the very populations whom they are oppressing. And we have to take the time to render out in stories a model for thinking differently about that. For specifically interrogating that question. For saying, isn’t oppressed person right to overthrow their oppressor? Isn’t someone who sees the world differently right to question the metrics by which they’re being judged? That’s one way to read Blade Runner, by the way. There’s a burgeoning host of autists—people with autism—who are looking at Blade Runner and going maybe the problem isn’t that these stand-ins for autistic people don’t feel. Or don’t feel the right way. Maybe they feel too much. Maybe the way that they feel is present but different enough that the humans in their capacity don’t understand what it is that they’re feeling, and are reinterpreting that narrative in that way.
And so thinking about how we take those narratives of oppression and specifically ask, well what if the people who are being modeled or mirrored here are the ones who get to tell the story? What story would they tell about this instead? That question becomes deeply deeply important, specifically because if it’s not interrogated it will be used to further marginalize them. To further disenfranchise them from the tools that are being used to operate and control their lives.
Dihal: That is a great reading of Blade Runner that I wasn’t familiar with yet, because the reading of Blade Runner that is most often advanced, and that is being used for lots of different narratives about artificial intelligence, is the slave narrative. So the AI stands in for the oppressed racial other.
The same with, again, aliens. I’m thinking for instance of the film District 9, which shows racial segregation except it’s humans versus the aliens. And in both these cases, Blade Runner and District 9, you can see that by means of having the AI and the alien as the racial other, you presume that all the humans are white. You need no racial diversity among your humans because you have a racial other. And you can see that in Blade Runner! These are fugitive slaves; all the androids are white, as are nearly all the humans. And as far as I remember that’s not any better in the new Blade Runner. And in District 9, for all that it’s set in South Africa, again very few non-white human protagonists.
Williams: The number of black South Africans who appear in District 9 is, I want to say, something on the order of twelve total? And they are basically a faceless gang.
Dihal: Yeah and aren’t they supposed to be racial stereotypes of Nigerians?
Williams: Mm hm.
Konstantinou: Yeah. It was very, yeah, controversial.
Williams: Yeah. And so yeah, that’s taking the time to again just specifically dig down on those facts and say, we have told this othering story for so long, and it has made its way into the process of what it is that we build these things to do if not to be. What if we did this otherwise? Oughtn’t we do this otherwise? And taking the time to do so.
Hudson: I think there’s lots of ways in which that pattern also shows up in other genres, right. There’s so many ways in which AI stories to my mind replicate horror tropes, right. Like the androids are zombies. The sort of disembodied Siri voices are ghosts, right? So we’re in a well-trod kind of literary tradition here one way or another.
Anyone else on this question?
So, one thing that is unique about this AI discourse that we’re having is that it goes back a long way? In some ways much further than…like, we’ve been talking about the what-ifs of AI way before we started having organizations that put AI in their sort of hype notes, right? And now we’re here, but there’s been a whole evolution of this conversation along the way. And so Chris maybe…I know you have some data on how the way we’ve talked about AI has evolved over the last century.
Noessel: Yeah. So in the analysis that I’m going to share in sort of the solo talk, one of the things I took a look at was the valence and the prevalence of which narratives have been told when, from the beginning of cinema to now. And there are four main eras, if you will. And this data isn’t in the solo talk so I’m happy to explicate it.
We’re going to bypass Le Voyage dans la Lune partially because it was a piece of vaudeville that was put to film, and regard Metropolis as the first serious piece of science fiction. And Fritz Lang’s masterpiece was the sort of beginning of this very dark, dystopian era, where especially European filmmakers were using technology to illustrate the evils of the Industrial Revolution. And so the very beginning of AI in sci-fi was just…it’s terrible, it’s dark, it’s going to require us to feed our children to the machines.
Then starting with Robby the Robot in Forbidden Planet, there was an era of positivity and almost sort of like American advertising for how awesome AI will be. It’ll be like, look! They won’t even be able to disobey you without short-circuiting. Won’t that be marvelous!
And that period lasted probably up until the 80s. And films such as RoboCop began to ask questions about well, maybe it’s not as pretty as that—because by then of course America had become sort of the cinematic juggernaut of the world—and began to admit that maybe it’s not going to be all Robbys in the world. And so it was a period of investigating the complications. And in fact that was the emergence of “evil AI” rather than sort of a systemic machine like we saw in Metropolis. So we see things like the horrible Proteus IV in Demon Seed that just comes right out the gate as evil. It’s also a period of unquestioning genesis narratives, like champagne on a keyboard brings a computer to life, or a lightning bolt strikes a plane and suddenly it wants to rebel.
And that continued up until the aughts, the two-thousand aughts. And that final period is where people are beginning to deal with the realities and the nuances of AI, and even get into that sort of otherness—what does it mean to be other. And that’s sort of where we are. What’s most interesting about these trends is they don’t quite follow the science—the peaks and valleys of AI hype, and the AI winters. There’s not a tight correlation, which I would have expected.
So, those are the sort of four big eras. There are lots of other analyses but…I ought not go into them.
Hudson: Yeah, I’m curious if anyone else has thoughts on what are like, some of the highlights of those moments? And maybe some works that you didn’t mention that define some of those types of waves.
Konstantinou: I mean, so one interesting way to track these trends might be to look at the way like, a franchise like Star Trek treats AI. And so like the latest season of Star Trek: Discovery has an evil AI from…I think it’s from the future, as its main enemy.
Noessel: Spoiler alert.
Konstantinou: I’m sorry. Yeah. Well. Yeah, I’m gonna ruin it all. Yeah.
Williams: It’s been over three weeks.
Konstantinou: But it’s—I didn’t really spoil anything. But it’s kind of an unusual… And it’s tied up with kind of the origin of the utopian society that is the Federation. And this is a show that’s ostensibly set in the past of the franchise, but it’s a much more morally ambivalent, darker vision of AI, of the use of these systems, compared say to like if you remember the holographic doctor from Star Trek: Voyager who’s there to help and many plotlines are dedicated to exploring his emerging humanity. Or Data from Star Trek: The Next Generation. And so it does seem that like, a franchise like Star Trek would be an interesting way to think about public sentiments about AI and how they’re changing.
Dihal: And related to that, you could see the same happening in Star Wars—
Williams: Mm hm.
Dihal: —where initially you have the sort of unquestioned…the AIs are comic relief, moving towards, well, the most recent Star Wars films where it’s much more ambivalent. So in Solo, there is an AI who stands up for robot rights, and who goes to robot fighting pits and tells the robots who fight in these pits that they don’t have to have such a life. That they have free will. And she claims that she has a romantic relationship with her human copilot. Now that’s quite a different way of looking at it than sort of R2-D2 beeping around a bit.
Hudson: And R2-D2, in one of the classic scenes in the first Star Wars, being told like, we don’t serve your kind here, right. Like the droid’s gotta leave. Yeah, that’s a pretty big jump.
So I guess— Did you have any other highlights that you wanted to add, Damien?
Williams: I was thinking about actually kind of a tandem across these, RoboCop, and thinking about RoboCop 1987 versus RoboCop 2014, and the different portrayals of what that kind of police state, drone warfare, robotics, and AI narratives look like. You can see a lot of similarity between the two, obviously, because it’s just a strict remake. But there’s also nuance in what the characters, interior to the narrative, consider to be the problem of what has happened here. Versus you know, in RoboCop 1987 it was the, “Oh no, Murphy’s not Murphy anymore.” Or is he? And while that plays in somewhat in the remake, that changes to be more about not just is he still himself, but what he’s been turned into—like they very clearly show that his automated systems can be turned on and he can be made into literally a piloted drone, in human, bipedal form.
And so that shift about automated war-fighting, and the militarization of police, and the automation of the militarization of police, becomes much more the current fear in 2014, versus this notion of how do we stop crime in Detroit and oh no, is that person still really a person, in 1987.
Noessel: There’s also a shifting role of the state versus corporatism—
Williams: Yes, yes.
Noessel: —across the two films.
Williams: Yes.
Noessel: I hesitate to mention RoboCop 2… [Konstantinou laughs]
Williams: Yeah. As long as you don’t go to RoboCop 3 I think we’re fine.
Noessel: Yeah…uh…
Dihal: Is this where I can bring in that I’ve always maintained that Inspector Gadget is a parody of RoboCop?
Williams: Yes.
Konstantinou: I think that’s right, yeah.
Williams: Fantastic.
Hudson: Yeah. The shifting role of the state kinda like, puts me in mind of…I recently read a pretty old sci-fi story, I think it was by Asimov, called “Franchise” which was I think from…it’s from the 50s but is of course set in 2008. And in it the supercomputer Multivac figures out who is like the exact one person you need to poll to figure out how to pick the president and how to decide all the elections. And I was thinking about that in comparison with a more recent incarnation of the all-seeing supercomputer from Person of Interest, the…
Noessel: The Machine.
Hudson: The Machine. And how that’s very much like a sort of surveillance state. And what The Machine does, and its evil counterpart, is not based on like who gets to vote and figuring out— Which I think was a much contested question in the 50s. The Machine is like, who gets eliminated by the anti-terror kill squad, right. And so that shift I think probably we can track to our own political discourse.
So I want to touch on kind of one more thing and then we’ll take some questions. To come back one last time to the hopes and fears, I know Kanta you are now doing some research that explores a much broader swath of AI narratives that maybe we haven’t even discussed here today. So are those hopes and fears, do you feel like those are inherently Western hopes and fears, and that other cultures and other societies, and even other genres might have a different take on AI?
Dihal: Yeah, so this is the project that I lead, called Global AI Narratives. Rather than us doing all the research and basically us trying to look at everything that’s done across the world, we’re building a network of scholars who in their own regions are experts on this, and bringing those together so that we can compare and get answers to questions like this.
So far we have done so in Singapore and in Japan. And at the Japan workshop, there were indeed some fascinating revelations, especially about what does the media image of an AI look like. So where we would here have the Terminator or even, as I showed in my PowerPoint presentation, two Terminators and a nuclear explosion because it can’t be dramatic enough, in Japan the most common go-to image is a pudgy blue cartoon cat called Doraemon. Anyone familiar with Doraemon?
So Doraemon was a really long-running TV series and manga series. And this was something that people grew up with, and especially the generation…well basically age 30 and above in Japan. And that’s why that narrative is so much more influential. And yeah it’s a cutesy cat, and also it’s— Well it is an android, a robot, but shaped like a cat. It comes from the future and it tries to solve problems that the human protagonist runs into by means of grabbing futuristic tools from its pouch. And every time there’s a new tool that’s supposed to be able to fix all the problems and then it doesn’t. So that’s a completely different narrative of the robot buddy. And a hopeful one.
Noessel: Interesting. Genevieve Bell—formerly of Intel and now she’s back home in Australia—and I had a quick chat before we spoke at a conference. And she noted that the existential threats, or let me say status threats, that AI and robots pose are not a problem, as her research has shown, in Japan or Shinto societies.
Williams: Right.
Noessel: Because they already have this notion that everything has a spirit. So the fact that spirit is embodied in technology is not really an issue? So it’s really…like, that’s a new concept for us, that our washing machine might have a hope or a fear of its own but not necessarily for Shinto practitioners.
Hudson: Exactly. Wasn’t that…I think visualized very elegantly in one of the shorts in the…was it Love, Death & Robots…
Noessel: Oh yeah.
Hudson: …show, where you had a spirit fox being hunted in this sort of Medieval Japanese society, and over time as society evolves gets turned into a steampunk cyborg basically. I hadn’t… Yeah, I totally agree that sort of the cultural tone and what was possible I think was very different in that.
Dihal: There are also dystopian narratives that can differ quite dramatically across cultures. So for instance, if you’re talking about the apocalypse in sort of mainstream science fiction, it is something in the future. It’s something that can be averted. It’s something that— An apocalypse story is a warning.
Now, for many societies across the world the apocalypse has already happened.
Williams: Correct.
Dihal: I mean, if you look at Native Americans, the apocalypse has already happened. So the stories that you get about the future are very different, informed by such a past.
Williams: I would like to open it up to the people.
Hudson: Okay! Well I think we’ll take some questions now. So we have mics going around. Great. Back there on the right.
Audience 1: We already have an AI-powered persuasion architecture. It seems that there are algorithms that know us better than we know ourselves, and they have gone as far as not only to get us to spend money but to sway elections, maybe instigate ethnic cleansing. What are the…what were the science fiction warnings that we missed for this? I don’t remember… You know, these are modern myths, and myths told us to know thyself. But it doesn’t seem we’ve been getting that recently. Or did I miss that?
Noessel: There’s a positive version of that; I can think of [crosstalk] an example.
Williams: The Culture.
Noessel: Which is that the long arc of the I, Robot series told of a bigger and bigger AI that was influencing society. But at the end of the short stories it was actually—it had faded so far into the background and humans had just become prosperous. And they didn’t even make the connection. That’s not quite the warning that you’re looking for, but I know that was Asimov’s ultimate arc for I, Robot. Other examples of broadly dystopian AIs… Where was the warning?
Dihal: Perhaps not so much the stories of intelligent machines, although— Okay, so there is Colossus by D.F. Jones. It was a novel and it was turned into a film in the mid 50s…?
Noessel: I don’t remember.
Dihal: Which was basically about the US builds a computer that can control…
Noessel: Oh, Colossus: The Forbin Project.
Williams: Yes.
Dihal: Yes.
Noessel: Uh, ’72?
Williams: Oh, 70s.
Dihal: 70s? Oh, it’s later than I thought. So, the US has a defense supercomputer, and then it turns out that the Soviets also have a defense supercomputer. And they decide that they know what’s best for humanity based on the cultural and political system in which they have been produced. And then it starts saying okay, surrender all your power to us, humans.
Humans say no.
Colossus says well, you have given me access to all your nukes. Colossus throws nukes. Humans have to obey.
Williams: More… Sorry, just to kinda jump in. A more recent example is actually Person of Interest. The latter arc of Person of Interest, and again, spoilers but the show’s been over for four years now. And it’s all on Netflix so you have no excuse.
The kind of culminating arc is about two competing AI supercomputers who are warring against each other to nudge people in a certain direction. In, like, the US military arc of it, the supercomputer is called Northern Lights. And that was made about two years before Edward Snowden’s PRISM leaks? They got to a point where they actually had to say, “Okay, we need to completely reframe the story that we are telling. Because the things that we have been talking about, as soon as we write them…they happen. So we need to think differently about what it is we’re considering science fiction about AI to be.” And that was entirely about you know, a very large system of algorithmic nudging and influence moving people through what they thought was just the water of their lives.
Noessel: I think what’s delightful—just one last bit about the question—is that the bigger the technologies, I think the less we’ve thought about them. Lots of older films that are having to be remade are having to account for the fact of cell phones. Which of course, instant access to any person on the planet was not a narrative possibility when stories were told in the 50s. So when they remake The Blob, they have to think about it. Or Battlestar Galactica has to think, “Well, why would we disable the networks? I know, the Cylons have access to them.” So I think Facebook’s actually one of those technologies that is so pervasive, and the sousveillance that’s involved, and the persuasion that it has, took everyone by surprise. It’s a great question.
Hudson: It could also be that we told the stories but we told them about television instead of Facebook, right. And I think we learned good lessons about TV being this problematic medium that maybe we forgot when we switched mediums.
Noessel: I’m also wondering if…Santa Claus was…
Hudson: Yeah. Yeah.
Noessel: No seriously.
Audience 2: I’ve heard of AI described as a dual-use technology. But I’ve also heard you say during your presentation that A, people tend to go to extremes when they’re describing it. And also you mentioned that 70% of British citizens have a very dystopian view of the technology. So I guess the question I would ask is, assuming that a narrative about AI is rooted in a time and place, what are we missing today? What are our blindspots?
Noessel: I’m about to present that.
I mean, there are other answers to that, but I literally did a longitudinal stu— Longitudinal, is that the word for— A study of science fiction tropes to find out the stories we’re not telling. So if you can wait…
Kevin Bankston: That’s actually a great—
Noessel: Okay, okay.
Hudson: Let’s call it.
Noessel: There might be one more question, ’cause that clock says we have one more minute. Can we? Can we, Kevin? Yeah.
I mean, I’m eager to talk. But.
Mike Nelson: Hi, I’m Mike Nelson with Georgetown University; I teach in the Communications, Culture, and Technology program. This has been fascinating. You’ve talked a lot about the robot overlord scenario, where we give them all the power and they use it. You’ve talked a lot about the robot underclass that rises up. But there’s another scenario that appears less frequently, and that is the robot underlords, who kind of take over the basic grunt work of civilization and slowly work up the stack until they’re sort of taking care of all our needs. And then civilization dies of boredom and complacency. Wall‑E is the best one. Kurt Vonnegut’s Player Piano; Vonnegut and the Tralfamadorians. Do you have other examples, and how likely is it do you think that we’ll just be so lazy we’ll cease to challenge ourselves?
Williams: Bradbury.
Dihal: Yeah, that’s the obsolescence narrative that I mentioned. There was a screenshot from Wall‑E illustrating it. It’s a fairly common one. Usually it has to do with work, but you also have the sort of social obsolescence. So well, everybody immediately jumped on Wall‑E. You said Bradbury?
Williams: Yeah. There’s a Ray Bradbury story. I cannot remember the name of it off the top of my head but it’s entirely about a future, probably Martian, civilization in which the people are…gone, because the automated systems of their house and the automated systems of their world took care of everything to the point where they had no reason to do anything.
As for your secondary question, I honestly don’t think that’s very likely. I think we get bored easy. And we innovate on that boredom. Real well. And we find new ways to amuse ourselves even if they’re just remixes of old ways.
Nelson: And now there’s robot entertainment.
Williams: We do.
Dihal: And also, the more tasks that we manage to give to machines, and robots, and computers, the busier we seem to get?
Williams: Yeah.
Noessel: The Star Trek original series episode “Spock’s Brain” is another example.
Hudson: Well we’re going to have to stop there. Thank you to our panelists very much.