Klint Finley: Welcome to Mindful Cyborgs episode 54. I’m Klint Finley. My usual co-hosts Sara Watson and Chris Dancy couldn’t make it today, so I am flying solo. But we’ve got a special guest today, Damien Williams. He’s a writer for afutureworththinkingabout.com and a teacher of philosophy and religion at various universities in Georgia. [He] focuses on transhumanism, pop culture, and the academic study of the occult. Damien, welcome to the show.

Damien Williams: Thank you very much for having me.

Finley: So I understand you just got done giving a talk.

Williams: The conference was called the Work of Cognition and Neuroethics in Science Fiction. It was put on by the Center for Cognition and Neuroethics in Flint, Michigan, and my talk was called “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction.”

Finley: What did you talk about? What was in that talk? What was the gist of it?

Williams: Overall, the gist was looking at different ways we have represented things like cybernetic enhancements in humans, mental, chemical enhancements, non-human intelligence, artificial intelligence, in our science fiction media over the years, and the ways in which we tell ourselves those stories and the kinds of lessons that we pull from those stories over and over again, and basically making the case that as we get closer and closer to fulfilling these dreams of ours, these seeming continual aspirations of ours, we need to think more carefully and more clearly about the kinds of stories we’ve been telling ourselves. We’ve just told ourselves a lot of cautionary, don’t go too far, don’t go too fast, don’t fly too high kind of stories, but those stories tend to always end in failure, and that seems to be kind of a bad precedent to set. So the case that I’m trying to make is if we’re going to keep doing this work and we’re going to keep telling ourselves these stories, we should probably start telling ourselves stories that teach us how to learn from our mistakes and how to learn from the stories we’ve told ourselves.

Finley: It’s easy to think of some examples of what you’re talking about, kind of the bad enhancements, like the Johnny Depp movie from a few years ago. Are there any exceptions? One that comes to mind for me is Limitless.

Williams: There are more and more, recently, films that don’t take the sole position that enhancement’s bad or that non-human intelligence is bad and is going to kill us all. Things like, recently Chappie took a little bit of a position on both of those things, without giving too much away about the film. It’s pretty new, so I don’t know how many people have actually seen it yet. But it does delve both into non-human intelligence and machine consciousness, and also the idea of augmenting human consciousness, and what that does for us and what that looks like. It asks those kinds of questions without too much of a heavy hand on this “don’t fly too high” kind of mentality. It actually says it’s being done, so how should we do it? In what direction ought we go, since we’re already striking out, we’re already heading out to do these things? How should we proceed? And asks more of a question about the quality of the things that we’re doing, rather than whether we should do them at all.

Also, one of my favorite go-tos in these conversations is Terminator: The Sarah Connor Chronicles, the TV show on Fox from 2007 to 2009. It handled these questions in a very nuanced way, without being too moralizing about it. That’s not to say there was no moralization, but it modulated the moralization pretty well.

There are certain episodes of things like Star Trek: Deep Space Nine, which I recently re-watched the whole of in preparation for this talk. It actually looks at these qualities of augmentation in a very, very interesting way. Doctor Julian Bashir, played by Alexander Siddig (or Siddig El Fadil) in the show, was revealed to be an enhanced human being. He was genetically modified, and we actually get to see the reasoning in that universe behind the prohibitions for genetic modification. But we also get to see people come to recognize that these fears about genetic modification, these fears about enhanced humans trying to put themselves as over and above or in a ruling class above “normal humans” are probably unfounded when we actually engage these processes of enhancement with an understanding of what we’re doing rather than just doing them to do them.

Finley: It seems like the movie Her was also another example of kind of seeing where this could go, where it wasn’t necessarily “AI was going to kill all the humans” or something like that.

Williams: Yeah, very, very much so. It actually was one of the first pleasant surprises I’ve gotten in film representations of this in recent years. As you say, it didn’t take that “AI’s going to kill all the humans” tack. It actually said we’re looking at a different kind of mind and consciousness, the concerns of which might be so far beyond our human scope and understanding that they’re not really going to kill us because they’re not really going to be concerned with us. They’re going to have many other things that are holding their interest that they’re concerned about, that they’re interested in. So why should we worry about this vastly more complex and vastly different consciousness deciding to turn itself against us, when it’ll be fascinated by various aspects of the universe that we have no way to even comprehend?

Finley: Wasn’t one of the arguments that we as humans use a gigantic amount of energy and resources to stay alive on this planet and to [?] ourselves? A machine consciousness might want those resources for something else and decide to eradicate us. So I don’t know. From that kind of science fiction scenario, is there not a case to be made that we might be essentially setting ourselves up for failure, for termination?

Williams: That possibility exists for us right now. I mean, before we even go about developing a new kind of mind or a new kind of life, a non-biological life on this planet, we have to contend with that idea right now: the resources that we are making use of in order to live as a species, we’re fighting amongst ourselves for them. And at the same time, while we’re fighting over those resources, we’re fighting to keep each other from, in many real ways, developing new forms, new pathways to sate those needs, to develop new resources. So our discussion of alternative technologies for energy has been stymied for decades by self-interested parties looking to maintain a kind of control over certain means of production.

And I don’t know that that possibility, the eventuality, that we manage to create a non-human, non-biological consciousness, a machine life, the idea of this kind of preference for one type of resource or one type of energy, even in the face of the opportunity for developing new energy resources, I don’t think that’s going to necessarily exist within it. I don’t think there’s going to be the kind of… There’s no reason that there would necessarily be these politically contentious arguments about oil vs. solar vs. coal vs. wind power, from the standpoint of a machine mind. I think if we’re talking about a thing that’s capable of recognizing its place in the world, its needs, and the processes that can allow it to survive, I think the development of multiple different avenues for resource allocation and energy production would probably be at the top of its priorities.

Did you ever see the movie Limitless?

Finley: Yeah, that was the one I mentioned earlier.

Williams: That one. So we’re looking at one of the first things that happens… Well, one of the last things that happens in the movie, but one of the first things that I think should have happened, and the most logical thing, is you find that you’re given a drug that makes you ridiculously intelligent. You find out that you are a being that’s capable of massive amounts of correlative intelligence and capable of figuring out all kinds of problems, and you’re part of a distributed network of similar beings, but this thing kills you, or you have limited resources in the current paradigm or current framework in which you exist. Isn’t one of the first things you’re going to do as this massively intelligent, massively capable being to figure out how to overcome that limitation? To figure out how to wrest control of yourself from these limited resources? I think that that would probably (I can’t state this for sure; this is obviously a hypothetical), but I think that a machine consciousness in that context would probably set its sights towards figuring out the best way to make sure that it had enough energy, in multiple forms, for a long time to come. I would like to see a story in which an AI is developed and the first thing it does is develop a comprehensive plan for wind and solar power retention, solar power transmission at high fidelity, and the best kind of batteries humanity has ever seen, and just freely spreads them around the globe because it’s the only way it’s going to survive for more than six years.

Finley: I know you’re mostly kind of focused on the philosophical aspects of all of this, but I wonder, do you look at the technological developments? Because I spend a fair amount of time looking at this sort of thing, so I have my own opinions. But do you think that this is actually a real issue that humanity is going to have to deal with imminently? Non-human intelligences that are more intelligent than we are?

Williams: I don’t know about imminently. I don’t think it’s going to be necessarily a problem within the next five to ten, fifteen, maybe even twenty years. But my perspective on it has always been, because I am more philosophically focused in these things, why not try to address the issues before they arrive? Why not try to think about these questions before they become problems that we have to fix? Rather instead try to make ourselves aware of the issues, aware of the potentials, and put certain understandings in place, even if those understandings are just adaptability protocols. We are capable of thinking about these questions with a bit higher ratio of reflex. It’s not going to blindside us, necessarily. Even if it’s a surprise, it doesn’t catch us flat-footed. We’re always thinking about the possibility. As for whether those possibilities are going to become actualities in anything like a timeframe that is our lifetime, I know people who are doing direct research in algorithmic intelligence right now, and they say this is probably not going to be an issue unless there’s a massive leap forward in processing capability, compartmentalization, and understanding of how reflexivity and self-awareness arise in what we consider to be consciousness, what we experience as consciousness. Unless those things happen soon, those massive leaps forward, we’re not going to be able to purposefully develop, intentionally develop, a machine consciousness at any point very, very soon.

Finley: Yeah, that’s what I think as well. There’s another question, though, I guess related to the philosophy of it, which is whether some of these lines of thinking are applicable to other aspects of life. We’ve been telling stories along these lines for a really long time. You were talking about the flying-too-close-to-the-sun metaphor; that’s the Icarus myth. There’s also the Golem and the sorcerer’s apprentice, and all these ideas of kind of creating machines or creating things that aren’t human that kind of get away from our control. Did you read Tim Maly’s micro-essay from a couple of years ago about the idea of corporations as essentially bad AI?

Williams: Yes. I remember that piece. That was good.

Finley: I’ve been trying myself, and I haven’t really gone very far down this road yet, to think about a lot of these stories about AI and consciousness and trying to rethink those, as the old saying goes that science fiction is about the present and not the future, and trying to rethink those as being stories about how we’ve kind of let corporations take over a lot of our lives, and we’ve allowed other people to kind of run the show in so many different ways. Whether that’s kind of technologically, in terms of Facebook and Google doing things behind the scenes in the cloud that we don’t really understand, we don’t know what they’re doing. Or if it’s Monsanto making food that we don’t necessarily know what it is. Do you have any thoughts on that?

Williams: Honestly, I often feel very much the same way in that regard, that the superstructure of the corporations and their interconnectedness into our lives, the interwoven nature of them into, as you say, all the aspects of our lives, is such that there are very few people out there right now who can accurately comprehend the intricacies of their operations as a whole or even, on a smaller scale, individually. Because the corporations and the entities themselves are so massive, and the borders between them as they operate at such a high level are so blurry, that knowing how they are specifically interacting with aspects of economic policy, politics, what’s available to you on their grocery store shelves, how you can get to work on a certain day, traffic patterns, airline worker strikes, all of these things become so intricately interwoven and interconnected that understanding them and knowing precisely how they’re operating becomes a full-time way of life in and of itself.

Recognizing that these are themselves, right now, the closest thing we have to a non-human consciousness with its own desires and intentions, and that they act as this kind of almost distributed network that is in many ways working against itself but still also working towards the overall health of the whole. Every aspect, like Monsanto, Facebook, Google, and all of these corporate entities, they all have their own individual desires, but as they operate, a picture could be painted to say that they are operating for the sake of the health of the network as a whole. That they’re operating for the sake of the entire structure of which they are all a part with each other.

But having a grasp on that, having an understanding of what that looks like, what those desires look like, what those “intentions” (if we’re going to call it an intentional structure) look like, they’re almost entirely opaque to us. And I think that that is in some real sense analogous to what we could expect in the case of encountering a massively non-corporate, artificially intelligent entity, an algorithmic machine intelligence. It’s going to have so many interconnections to the networks of our lives as a whole, and it will be so distributed across them and throughout them, that our understanding of its operations might be equally as opaque. And there’s a case to be made that we’ve already accidentally created our own AI. And if Tim’s right, then they are these corporations. They are these unfortunately kind of selfish actors whose desires have been developed out of the kind of starting principles and the opening parameters that we’ve given their programming, and have simply followed the logical progression of that programming to becoming what they are now.

Finley: Well, that gives us a lot to think about until next week. So I think maybe next week we can drill into some of the more religious or occult aspects of all of this. Thanks a lot for joining us, and we’ll see you again same time, same place next week.

Williams: Fantastic. Thank you very much for having me.

Further Reference

The Mindful Cyborgs site, where this episode’s page has additional links and credits.