Klint Finley: Welcome to Mindful Cyborgs episode 54. I’m Klint Finley. My usual co-hosts Sara Watson and Chris Dancy couldn’t make it today, so I am flying solo. But we’ve got a special guest today, Damien Williams. He’s a writer for afutureworththinkingabout.com and a teacher of philosophy and religion at various universities in Georgia. [He] focuses on transhumanism, pop culture, and the academic study of the occult. Damien, welcome to the show.
Damien Williams: Thank you very much for having me.
Finley: So I understand you just got done giving a talk.
Williams: The conference was called the Work of Cognition and Neuroethics in Science Fiction. It was put on by the Center for Cognition and Neuroethics in Flint, Michigan, and my talk was called “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction.”
Finley: What did you talk about? What was in that talk? What was the gist of it?
Williams: Overall, the gist was to look at the different ways we’ve represented things like cybernetic enhancements in humans, mental and chemical enhancements, non-human intelligence, and artificial intelligence in our science fiction media over the years; the ways in which we tell ourselves those stories; and the kinds of lessons we pull from those stories over and over again. Basically, I was making the case that as we get closer and closer to fulfilling these dreams of ours, these seemingly continual aspirations of ours, we need to think more carefully and more clearly about the kinds of stories we’ve been telling ourselves. We’ve told ourselves a lot of cautionary, don’t go too far, don’t go too fast, don’t fly too high kinds of stories, but those stories tend to always end in our failure, and that seems to be a bad precedent to set. So the case I’m trying to make is that if we’re going to keep doing this work, and we’re going to keep telling ourselves these stories, we should probably start telling ourselves stories that teach us how to learn from our mistakes and from the stories we’ve told ourselves.
Finley: It’s easy to think of some examples of what you’re talking about, kind of the bad enhancements, like the Johnny Depp movie from a few years ago [Transcendence]. Are there any exceptions? One that comes to mind for me is Limitless.
Williams: There are more and more films recently that don’t take the sole position that enhancement is bad, or that non-human intelligence is bad and is going to kill us all. Recently, Chappie took a bit of a position on both of those things, without giving too much away about the film. It’s pretty new, so I don’t know how many people have actually seen it yet. But it delves both into non-human intelligence and machine consciousness, and also into the idea of augmenting human consciousness, and what that does for us and what that looks like. It asks those kinds of questions without too heavy a hand on this “don’t fly too high” kind of mentality. It actually says: it’s being done, so how should we do it? In what direction ought we to go, since we’re already striking out, already heading out to do these things? How should we proceed? It asks more of a question about the quality of the things we’re doing, rather than whether we should do them at all.
Also, one of my favorite go-tos in these conversations is Terminator: The Sarah Connor Chronicles, the TV show on Fox from 2007–2009. It handled these questions in a very nuanced way, without being too moralizing about it. That’s not to say there was no moralization, but it modulated the moralization pretty well.
There are certain episodes of things like Star Trek: Deep Space Nine, which I recently re-watched in its entirety in preparation for this talk, that look at these qualities of augmentation in a very, very interesting way. Doctor Julian Bashir, played by Alexander Siddig (or Siddig El Fadil) in the show, was revealed to be an enhanced human being. He was genetically modified, and we actually get to see the reasoning in that universe behind the prohibitions on genetic modification. But we also get to see people come to recognize that these fears about genetic modification, these fears about enhanced humans trying to put themselves over and above “normal humans,” or in a ruling class above them, are probably unfounded when we actually engage these processes of enhancement with an understanding of what we’re doing, rather than just doing them to do them.
Finley: It seems like the movie Her was another example of seeing where this could go, where it wasn’t necessarily “AI is going to kill all the humans” or something like that.
Williams: Yeah, very much so. It was actually one of the first pleasant surprises I’ve gotten in film representations of this in recent years. As you say, it didn’t take that “AI is going to kill all the humans” tack. It actually said we’re looking at a different kind of mind and consciousness, the concerns of which might be so far beyond our human scope and understanding that they’re not really going to kill us, because they’re not really going to be concerned with us. They’re going to have many other things holding their interest, things they’re concerned about and interested in. So why should we worry about this vastly more complex and vastly different consciousness deciding to turn itself against us, when it’ll be fascinated by aspects of the universe that we have no way to even comprehend?
Finley: Wasn’t one of the arguments that we as humans use a gigantic amount of energy and resources to stay alive on this planet and to [?] ourselves? A machine consciousness might want those resources for something else and decide to eradicate us. So I don’t know. From that kind of science fiction scenario, is there not a case to be made that we might essentially be setting ourselves up for failure, for termination?
Williams: That possibility exists for us right now. I mean, before we even go about developing a new kind of mind or a new kind of life, a non-biological life on this planet, we have to contend with that idea right now: we’re fighting amongst ourselves for the resources we make use of in order to live as a species. And at the same time, while we’re fighting over those resources, we’re fighting to keep each other from, in many real ways, developing new pathways to sate those needs, from developing new resources. Our discussion of alternative technologies for energy has been stymied for decades by self-interested parties looking to maintain a kind of control over certain means of production.
And in the eventuality that we manage to create a non-human, non-biological consciousness, a machine life, I don’t think this kind of preference for one type of resource or one type of energy, even in the face of the opportunity to develop new energy resources, is necessarily going to exist within it. There’s no reason there would necessarily be these politically contentious arguments about oil vs. solar vs. coal vs. wind power from the standpoint of a machine mind. I think if we’re talking about a thing that’s capable of recognizing its place in the world, its needs, and the processes that can allow it to survive, then the development of multiple different avenues for resource allocation and energy production would probably be at the top of its priorities.
Did you ever see the movie Limitless?
Finley: Yeah, that was the one I mentioned earlier.
Williams: That one. So we’re looking at one of the first things that happens… Well, one of the last things that happens in the movie, but one of the first things that I think should have happened, and the most logical thing: you find that you’ve been given a drug that makes you ridiculously intelligent. You find out that you are a being capable of massive amounts of correlative intelligence, capable of figuring out all kinds of problems, and part of a distributed network of similar beings, but this thing kills you, or you have limited resources in the current paradigm or framework in which you exist. Isn’t one of the first things you’re going to do, as this massively intelligent, massively capable being, to figure out how to overcome that limitation? To figure out how to wrest control of yourself from these limited resources? I think (I can’t state this for sure; this is obviously a hypothetical) that a machine consciousness in that context would probably set its sights on figuring out the best way to make sure it had enough energy, in multiple forms, for a long time to come. I would like to see a story in which an AI is developed and the first thing it does is develop a comprehensive plan for wind and solar power retention, high-fidelity solar power transmission, and the best batteries humanity has ever seen, and then freely spreads them around the globe, because that’s the only way it’s going to survive for more than six years.
Finley: I know you’re mostly focused on the philosophical aspects of all of this, but I wonder, do you look at the technological developments? Because I spend a fair amount of time looking at this sort of thing, so I have my own opinions. But do you think this is actually a real issue that humanity is going to have to deal with imminently? Non-human intelligences that are more intelligent than we are?
Williams: I don’t know about imminently. I don’t think it’s necessarily going to be a problem within the next five to ten, fifteen, maybe even twenty years. But my perspective has always been, because I am more philosophically focused in these things, why not try to address the issues before they arrive? Why not try to think about these questions before they become problems we have to fix? Instead, let’s try to make ourselves aware of the issues, aware of the potentials, and put certain understandings in place, even if those understandings are just adaptability protocols. That way we’re capable of thinking about these questions with a bit higher ratio of reflection to reflex. It’s not going to blindside us, necessarily. Even if it’s a surprise, it doesn’t catch us flat-footed; we’re always thinking about the possibility. As for whether those possibilities are going to become actualities in anything like the timeframe of our lifetimes: I know people who are doing direct research in algorithmic intelligence right now, and they say this is probably not going to be an issue unless there are massive leaps forward in processing capability, in compartmentalization, and in our understanding of how reflexivity and self-awareness arise in what we consider to be consciousness, what we experience as consciousness. Unless those massive leaps forward happen soon, we’re not going to be able to purposefully, intentionally develop a machine consciousness at any point very soon.
Finley: Yeah, that’s what I think as well. There’s another question, though, I guess related to the philosophy of it, which is whether some of these lines of thinking are applicable to other aspects of life. We’ve been telling stories along these lines for a really long time. You were talking about the flying-too-close-to-the-sun metaphor, the Icarus myth. There’s also the golem and the sorcerer’s apprentice, and all these stories about creating machines, or creating things that aren’t human, that get away from our control. Did you read Tim Maly’s micro-essay from a couple of years ago about the idea of corporations as essentially bad AI?
Williams: Yes. I remember that piece. That was good.
Finley: I’ve been trying myself, and I haven’t really gone very far down this road yet, to rethink a lot of these stories about AI and consciousness. As the old saying goes, science fiction is about the present, not the future, so I’ve been trying to reread these as stories about how we’ve let corporations take over a lot of our lives, how we’ve allowed other people to run the show in so many different ways. Whether that’s technological, in terms of Facebook and Google doing things behind the scenes in the cloud that we don’t really understand, where we don’t know what they’re doing. Or Monsanto making food when we don’t necessarily know what it is. Do you have any thoughts on that?
Williams: Honestly, I often feel very much the same way in that regard. The superstructure of the corporations, their interconnectedness with our lives, the way they’re interwoven into, as you say, all aspects of our lives, is such that there are very few people out there right now who can accurately comprehend the intricacies of their operations as a whole, or even, on a smaller scale, individually. The corporations, the entities themselves, are so massive, and the borders between them as they operate at such a high level are so blurry, that economic policy, politics, what’s available to you on the grocery store shelves, how you can get to work on a certain day, traffic patterns, airline worker strikes, all of these things become so intricately interwoven and interconnected that understanding them, knowing precisely how they’re operating, becomes a full-time way of life in and of itself.
These are, right now, the closest thing we have to a non-human consciousness with its own desires and intentions, and they act as a kind of distributed network that is in many ways working against itself but still also working towards the overall health of the whole. Monsanto, Facebook, Google, all of these corporate entities have their own individual desires, but as they operate, a picture could be painted in which they are operating for the sake of the health of the network as a whole, for the sake of the entire structure of which they are all a part with each other.
But having a grasp on that, having an understanding of what that looks like, what those desires look like, what those “intentions” (if we’re going to call it an intentional structure) look like, is almost entirely beyond us; they’re almost entirely opaque to us. And I think that is in some real sense analogous to what we could expect in the case of encountering a massive non-corporate artificially-intelligent entity, an algorithmic machine intelligence. It’s going to have so many interconnections to the networks of our lives as a whole, and it will be so distributed across and throughout them, that our understanding of its operations might be equally opaque. There’s a case to be made that we’ve already accidentally created our own AI. And if Tim’s right, then these corporations are it: unfortunately rather selfish actors whose desires developed out of the starting principles and opening parameters we gave their programming, and which have simply followed the logical progression of that programming to become what they are now.
Finley: Well, that gives us a lot to think about until next week. So I think maybe next week we can drill into some of the more religious or occult aspects of all of this. Thanks a lot for joining us, and we’ll see you again same time, same place next week.
Williams: Fantastic. Thank you very much for having me.
Further Reference
The Mindful Cyborgs site, where this episode’s page has additional links and credits.