Luke Robert Mason: You're listening to the Futures Podcast with me, Luke Robert Mason.

On this episode, I speak to academic and law lecturer, John Danaher.

"When I said that humans are obsolescing, that doesn't mean that they're going to become extinct or irrelevant to the future. It just means that their activities will be less significant."
John Danaher, excerpt from interview.

John shared his insights into the possibility of a post-work economy, the impacts of increasing automation, and how our future might be determined by either becoming a cyborg or retreating into the virtual.

This episode was recorded on location in London, England, before John's book launch event at London Futurists.

Mason: You open the book with this phrase—this statement, really: "Human obsolescence is imminent." What did you mean by that?

John Danaher: Yeah, I've been taking a bit of flak for using that phrase to open the book, because it sounds so pessimistic and ominous. I have to confess it was a little bit of rhetorical hyperbole. I mean that humans are becoming less useful in making changes in the world, or to particular domains of activity. I try to trace out this trend towards human obsolescence across different domains over human history. Agriculture is an obvious example: once upon a time, the majority of people used to work in agriculture. We now see a significant decline in the number of people working in agriculture-related industries: less than 5% in most European countries, down from over 50% as little as 100 years ago.

I also look at the decline of human activity in manufacturing, in medicine and the professions and law, and in scientific inquiry. I look at some new studies that have been done on robotic scientists that can create their own experiments and test their own hypotheses. And I look at politics, bureaucratic management and policing. So, I look at the trend towards automation across all these domains of activity, which I think supports the claim that there is this growing obsolescence of humans.

There is one qualification to that, though, which is that when I say that humans are obsolescing, that doesn't mean that they're going to go extinct or become irrelevant to the future. It just means that their activities will be less significant.

Mason: I mean, you actually go one step further, and you say that this could be an opportunity for optimism.

Danaher: So this is it. This is the rhetorical strategy, in a sense. You're setting it up with this seemingly ominous claim that we're obsolescing, and this is something that a lot of people will be worried about. They'll view it in a pessimistic light, but I try to argue that it is actually an opportunity for optimism, partly because it allows us to transcend our existing way of life—in particular, to escape from the drudgery of paid employment and to pursue a post-work utopia.

Mason: I think that is really important, because you're not talking about the obsolescence of humanity from Planet Earth. You're talking about the obsolescence of humanity within the workplace, in the workforce.

Danaher: Yes, exactly. Yeah.

Mason: So can we talk a little bit about this idea of a post-work future? In the book, you set up the notion that a post-work future will be a good thing. Do you think it really will be a utopian outcome to have a post-work future? Or do you think it could lead to boredom and chaos?

Danaher: If it's the case that humans are no longer going to be useful in the workplace, or are going to become less and less useful over time, such that more and more people will not pursue paid employment in their lives—this leads to two deprivations. One is a deprivation of income, which of course is essential nowadays, because people need an income in order to survive, to pay for the goods and services that enable them to flourish. But it could also lead to a deprivation of meaning, because we live in societies where work is valorised, in the sense that it's valued. The work ethic is seen as a positive thing. It's how people make a contribution to their societies. It's how people often define themselves. If we take that away from people, they're going to have this crisis of meaning. So, how can we fill that gap in order to address that crisis of meaning? The real goal of the book was to examine that potential crisis of meaning, and whether there is actually an opportunity for optimism embedded in it.

Mason: I mean, there are some people who love their jobs. But in the book, you say you should really hate your job.

Danaher: What I argue in the book is not so much that everybody will necessarily hate their jobs, because hatred is a feeling you have towards the work that you do, and it's quite possible that many people have positive feelings towards the work that they do. What I argue instead is that work is structurally bad—that we've fallen into a pattern of employment that is bad for many workers and getting worse, partly as a result of technology. And so we should be concerned about the future of work, and we should look at possible ways to transform, and possibly even transcend, the need for work.

Mason: So what we're talking about is really work that's done for money. Exchanging time for money: that's your definition of work.

Danaher: Yeah, you know, you have to be careful when you talk about the post-work future and the concept of work. The first thing you learn when you talk about the notion of a post-work future is that people have different definitions in mind of what work is. Some people have very expansive definitions of work. They think work is any physical or mental activity performed by humans. So if you talk about a post-work future—if that's the definition you have in mind—then it probably makes no sense, because humans are always going to perform some kinds of physical or mental activities. We're always going to work in that broad and expansive sense. I try to adopt a narrower interpretation of work as paid employment. So that means that work, for me, is not any particular kind of activity. It is, rather, a condition under which activities are performed, namely a condition of economic reward of some sort. The economic reward does not necessarily have to be immediately realised. Sometimes there are unpaid forms of work that are done in the hope of receiving future rewards. There are lots of young students who take unpaid internships, for example, in the hope that they will secure paid employment.

Mason: The reason we're able to talk about a post-work future is this possibility of automation. I mean, that's at the core of the book—the fear that we're going to become obsolete because of automation. Automation is going to be the thing that takes our jobs. I just wonder why the automation of work is both possible and desirable.

Danaher: Lots of things have been written in the past decade or so about technological unemployment and the future of work, and there have been many interesting arguments and claims about the percentage of jobs that are computerisable or automatable. I try to engage with those kinds of studies and look at whether it's really possible to automate work. I think there are a couple of points to bear in mind when you're trying to evaluate that claim. One is that I think a lot of people approach this with the wrong set of concepts. They think about the displacement of workers or jobs, when they really should be thinking about the displacement of work-related tasks.

The kinds of automating technologies that we're developing at the moment are—to use a somewhat technical term—narrow forms of artificial intelligence. They can be good at performing certain kinds of functions or tasks in the workplace. They're not general forms of intelligence; they can't choose to perform all the different tasks in the workplace. So what happens when you introduce automating technology into the workplace is that you replace humans in the performance of certain tasks. That doesn't necessarily mean that you eliminate jobs or eliminate workers, because oftentimes workers can move into complementary tasks.

One of the examples I have of this in the book is to do with legal workplaces, let's say. Within a given law firm there are lots of tasks that a lawyer or a team of lawyers will perform in order to provide a valuable service to their clients. They'll engage in document review, reviewing contracts or other complex legal documents. They will engage in legal research, looking at cases and statutes to see how the law can be used to the benefit of their client. They will entertain and schmooze with their clients to make them feel good about the service that they're offering. They will present and argue in court on behalf of their clients. So there are all these different tasks that are performed within that workplace. Automating technologies at the moment can do some of those tasks. We've got pretty good technologies now for document review, and emerging technologies that enable some kinds of basic legal research and prediction of the outcome of cases for law firms. At the moment, we don't have robots that are very good at schmoozing with clients and entertaining them. So if you introduce automating technologies into a legal workplace, you might find that human workers are displaced from the tasks of document review and certain kinds of legal research. They move into the more customer-relations side of it, and maybe also then argue cases in court, in order to persuade a judge or a jury.

Automation changes the dynamic of the workplace. That might mean that some workers are eliminated, because their jobs are purely defined in terms of the tasks that machines are good at. But other workers aren't necessarily eliminated, because they have these other things that they can perform that complement what machines do.

Mason: I think that's the piece that we so easily forget. In actual fact, this automation of the workplace could lead to a complementary relationship between AI and the human. In fact, when IBM Watson in the US is used to review medical papers, they talk about it as a collaboration between the doctor and IBM Watson. IBM Watson never diagnoses a patient. It makes suggestions to a doctor—a human doctor—who then goes and diagnoses the patient based on the information that IBM Watson has ingested and tried to understand. I think if we start seeing the future of work as a collaboration, then maybe there's something more exciting about how we engage with this automation. Rather than see it as a threat, maybe we could see it as a potential collaborator.

Danaher: In most of these debates about technological unemployment, we focus on the displacement potential of automation—how it displaces workers. But there's also this related phenomenon of how automation can complement what human workers do, and how we can collaborate with machines. That's kind of the hope amongst the mainstream economic views: that really, technology won't result in this massive decline in jobs. It'll just involve a structural reorientation of the workplace, so that we collaborate with machines. We do what we're good at and the machines do what they're good at. This is the main objection to the claim that we'll have widespread technological unemployment.

When I say that there's a possibility of a post-work future, I don't think that means that no one will work in the future. I just mean that a growing percentage of the adult human population will not work for a living. One of the ways in which I illustrate this is in terms of something called the labour force participation rate. The labour force participation rate is the proportion of adults of working age who are either in work or actively seeking work. In most Western European countries, that figure is somewhere between 60 and 70%. So it's already the case that about 30 to 40% of the adult population don't work for a living, or don't even want to work for a living. So when I talk about a post-work future, I'm talking about a future in which that number of non-working adults continues to grow. What does it mean to reach a true post-work future? I don't know if there's an exact boundary line. But certainly if it's more than 50% of the population that is not working, I think you've radically changed the kind of world that we live in.

In terms of this complementarity effect of automation, I'm a little bit sceptical about the potential for this to be a recipe for lots of jobs in the future. When we think about the complementarity effect, the assumption here is that machines will replace humans in some kinds of tasks, but this will open up a space of complementary tasks for human workers. But there's a challenge here, which is: can you actually get the people who are displaced into these complementary tasks? It may turn out to be difficult to do that. They may need to be educated and retrained. There are certain workers who may be at a stage of their lives where it's just not really feasible or possible to educate and retrain them. It may also be the case that there's just not a huge amount of political will to do this, or political support for it, or educational support for it. So can we actually adapt to this new reality where we have to train different kinds of skills? That's a serious challenge.

There's also another challenge here, which is that the assumption is that we'll be able to train humans into these new tasks at a rate that is faster than the rate at which technology is improving at those tasks. This is something that I think people get wrong when they think about automation and narrow AI. They assume that AI is only good at particular tasks. But of course, we're developing multiple different streams of AI that are good at different tasks. So it could be the case that we can train machines to perform these complementary tasks faster than we can train humans. To give a practical illustration of this: it takes about 20 to 30 years to train a human into the workplace.

Mason: And to that point, you say in the book that this can actually lead to this thing called the cycle of immiseration—this cycle whereby they can never catch up.

Danaher: Yeah. So it's this idea that automation can be particularly challenging for young people, because they need to train themselves to have the skills that are valued in the economy. That means they have to get an education that will give them those skills. But how can they get that education, when education is increasingly expensive? Oftentimes, the way in which students pay for their education is by working part time, but an awful lot of the jobs that they work in part time are the jobs that are most at threat of automation. So how are they going to be able to pay for the education that lets them escape from this threat of automation? This is the potential cycle of immiseration that they can never get out of—the rut that they're in.

Mason: There's a lot of scepticism around this idea of technological unemployment. In the book, you use the Luddite fallacy to explain some of that scepticism. I mean, what is the Luddite fallacy?

Danaher: So the Luddites were famously these protesters in the early part of the Industrial Revolution. They were followers of Ned Ludd, who some people claim is a fictional character—there's an interesting history to that. They smashed these machines because they saw them as a threat to their employment. But looking back on their activities from the vantage point of 150 years later, it seems that they were wrong to do so, in the sense that the kinds of automation that existed in the early phases of the Industrial Revolution didn't lead to widespread technological unemployment. In fact, there are probably more people in work in the world today than there ever have been before.

So it seems like a fallacy to assume that automation will permanently displace jobs. That really then leads into this argument about the complementarity effect. There isn't a fixed number of jobs out there to go around. We're always creating new jobs in light of the new kind of socio-technical reality that we've created.

Mason: Even if some of those jobs, as David Graeber says, are "bullshit jobs."

Danaher: Right, yeah. Even though they're meaningless or pointless administrative jobs, they're still jobs that are paid.

Mason: As you've just outlined, the automation of work is—to a degree—both possible and desirable. But you're clear to state in the book that the automation of life, however, is not as desirable. Could you explain the difference between the two and why that's so important?

Danaher: If we look into how automated technologies affect life more generally, not just working life, I think there are reasons for pessimism. One of the ways in which I illustrate this in the book is with the example of the Pixar movie, WALL-E. Very roughly, WALL-E depicts this kind of dystopian future for humanity where the Earth has become environmentally despoiled. Humans have had to evacuate the planet onto these spaceships that are bringing them to some other place where they can live, and there are lots of robots in this future, lots of automating technologies. The humans on these interstellar spaceships are really fat, obese, slug-like beings. They float around in these electronic chairs. They're fed a diet of fast food and light entertainment, and there are all these robots around them scurrying about, doing all the work that needs to be done to fly these ships. This has been referred to by some technology critics as the "sofalarity". We all just end up on our sofas, being fed entertainment and food and everything we need by automating technology. So we don't really do anything; we just sit back and enjoy the ride. Even though this is an extreme and satirical depiction of the automated future, it does, I think, contain a kernel of truth and something that we should be concerned about.

An awful lot of how we derive meaning and value from our lives depends on our agency—the fact that we, through our activities, make some kind of difference to the world. We do things that are objectively valuable to the societies in which we live, and maybe valuable in some other grander, cosmic sense of objective value, and we are subjectively engaged and satisfied by the actions that we perform. The problem with automated technologies is that they cut or sever the link between human action and what happens in the world. Because what you're doing when you rely upon an automated technology is outsourcing either physical or cognitive activity to a machine, so that you're no longer the agent that's making the difference to the world. I think this is a serious threat to human meaning and flourishing, and something that we should be concerned about.

Mason: In the book, you set up these two possible scenarios, these two possible utopias: the cyborg utopia, and the virtual utopia. First, I want to talk about this idea of the cyborg utopia. I mean, how would we build a cyborg utopia?

Danaher: People might be familiar with the story of the origin of the term: it's a neologism for "cybernetic organism". The idea has taken hold in the biological sciences and social sciences as a notion of something that humans can aspire to—that they can become more machine-like. What does that mean in practice? There are two different understandings of what a cyborg is, particularly in philosophy. One understanding is that a cyborg is a literal fusion between human biology and a machine: you're integrating machine-like mechanisms into biological mechanisms so that they form one hybrid system. An example of this would be something like a brain-computer interface, where you're incorporating electrical circuits or chips into neural circuits in order to perform some function from the combination of the two things.

For people who listen to this podcast: you interviewed one of the leading pioneers in cyborg technology early on—Kevin Warwick, right? He's done all these interesting pioneering studies on brain-computer interfaces and how you can implant chips in one person's brain and send a signal to a robotic arm. That's a kind of illustration of this form of literal fusion between human biology and technology.

There's another understanding of what a cyborg is, though, that's quite popular in certain sectors of the philosophical community, mainly associated with a figure called Andy Clark, who says that we're all natural-born cyborgs. That we are, by our very natures, a technological species. One of the defining traits of humanity is that we've always lived in a technological ecology. We don't live in the natural world—we live in a world that's constructed by our technologies. We have these relationships of dependency with technology, and also interdependency. We use hammers and axes and so forth to do things in the world, and we've been increasing the technologisation of our everyday lives over the past several thousand years. So we're more integrated with, and more dependent on, technology. We're becoming more cyborg-like over time.

For Clark, the relationship that you have with your smartphone—let's say when you're using Google Maps to walk around a city—is a very interdependent relationship with the technology. You have a little avatar that you follow on screen, and your movements affect the image that you see on the screen. That kind of dependency relationship is an illustration of this other path to cyborg status. It doesn't mean that you literally fuse your biological circuits with the machine circuits, but you have this kind of symbiotic relationship with the technology, and that means you are a cyborg. The differences between these two kinds of cyborg are differences of degree, as opposed to differences of fundamental type, I think. The more interdependency you have with an artefact, the more cyborg-like you become.

Mason: It's surprising to me that you start with a cyborg artist, Neil Harbisson, a colourblind artist who has, for want of a better description, an antenna surgically implanted into the back of his skull that allows him to hear colour. Although it's not quite hearing; it's slightly more nuanced than that. It's a form of bone conduction, which vibrates his skull and gives him a sense of sound. What's interesting about Neil Harbisson is that he's a colourblind artist who's now able to hear this colour, and he now dreams in these sonochromatic dreams. He no longer sees this antenna as a device; he sees it as an organ, as part of his body. In my own interactions with Neil: if you go up to him and you watch people try to touch the antenna, it's as if I came up to you, John, and tried to touch your nose. He has the same sort of revulsion to it. It feels to him very much like this organ has become—as Andy Clark would say—profoundly embodied. I just wonder why you started with that example of an artist exploring the cyborg-isation of his body, because what he's doing seems to me the furthest thing from something that's practical for use in the workforce.

Danaher: I think he's a good example, partly because he's somebody who self-identifies as a cyborg. I use a quote from an interview with him where he says, "I don't use technology, I am technology"—that's the phrase that he uses. He has set up something like the Cyborg Foundation, which campaigns for the rights of cyborgs and, more recently, something like the Transpecies Society, where he's arguing for a post-human identity as a concept.

What I find interesting about what Neil is doing is that he is using technology to transcend the limitations of the human biological form. To me, what he's doing is creating a new kind of sensory engagement with the world, which I find interesting. He's experimenting with the limits of the human form. To me, this is a utopian project, because one of the things I argue in the book is that we shouldn't have a blueprint conception of what a utopia is—something like Plato's Republic, or Thomas More's Utopia, where there's a very rigid formula for what the ideal society should look like. I think we should have a more horizonal understanding of what a utopia is. A utopian society is one that's dynamic in the right ways. It's not something that's driven by interpersonal conflict and violence; that's the wrong kind of dynamism to want in a society. So it's stable in that respect, it's peaceful. But there's an open future for people, and we're expanding into new horizons. What I think Neil is doing is expanding into a new horizon of possible human existence, and that's what I find stimulating and exciting about what he's doing.

Mason: It seems to me they're trying to explore a spectrum of human possibilities, and the cyborg is no longer what Kevin Warwick or even Tim Cannon from Grindhouse Wetware would describe. It's no longer about upgrading, or making the human better or stronger or faster or smarter. For Neil, or Moon Ribas, it's really about exploring a multitude of differentiated sensory modalities, allowing themselves to be more similar to animals than to machines.

Danaher: It's not necessarily that they're trying to compete with machines in terms of cognitive ability. What they are doing is exploring different kinds of morphology, different kinds of phenomenology, and different ways of experiencing and engaging with the world. There are two different visions of what transhumanism is, let's say. There is the kind of "humanity on steroids" view, which is that we're upgrading our existing abilities. You just want more intelligence, more strength, more happiness—that kind of thing. Maybe the David Pearce understanding of transhumanism: the three supers of super intelligence, super happiness and super longevity; super long lives. What Neil and Moon are doing is something different, which is trying to explore the adjacent possible, I guess—the other forms of human existence that might be possible out there.

Mason: So the question then becomes: are we creating that form of cyborg utopia to have something to do in a post-work society? Because it's not really going to help us compete with machines. Versus what Tim Cannon is arguing for, which is to enhance humanity to a level at which it can be competitive with machine-like processes. If we're going to be competing in the workplace against automation, robots and AI, and we're able to upgrade our brains and retain all of the fuzziness that makes humans special—but also do all the things that machines can do—then that makes us a much more useful worker.

Danaher: There are these different ways of pursuing the cyborg project. One is transcending what is possible for humans, and exploring new forms of sensory and embodied engagement with the world. I outline that as one of the main arguments in favour of the cyborg utopia. But the counterpoint to that, and one of the detractions from it, I think, is the other version of it, which is upgraded humans. I think what's gonna happen if we do that is that it's just going to double down on the worst features of the economy that we have at the moment. So, you know, instead of just competing on education for employability, you're also going to be competing on having the right kinds of cyborg implants. Some people might think this holds a degree of hope for the future of work, because it might increase the power of labour relative to capital: cyborg workers have more bargaining power than ordinary human workers. But I'm sceptical of that, because it depends on how cyborg implants get distributed amongst the workforce, you know. Is this something that's only going to be available to an elite few?

Also, if you think about the kinds of things that a cyborg worker could do better than a machine, based on what we see at the moment, it's probably going to be something like a warehouse worker or physical worker with an exoskeleton that just enables them to perform dexterous physical tasks with greater speed and efficiency, that kind of thing. At the moment, those kinds of work are often the least valued and least pleasant forms of work in human society. So if that's the way that cyborg implants are going to go, it doesn't seem to be a recipe for flourishing or utopia.

Mason: It does seem that the thing that's on the near horizon is the sort of cyborg upgrade that's similar to non-neural prosthetics: the exoskeletons that allow humans to lift heavier objects. But it also feels like there is going to be a race around the human brain, around brain-computer interfaces. It feels like Bryan Johnson and his company Kernel, in competition with Elon Musk's Neuralink, might be the battle we see over the future of work. I just wanted your opinion on those sorts of cybernetic enhancements, the ones that look like they're going to be on the market potentially very soon—if the ways in which they're advocating for these sorts of technologies hold true.

Danaher: If these implants are created partly with the aim of upgrading humans in such a way that they're competitive with machines, I think we're going to double down on the worst features of the employment market. So this isn't a recipe for a post-work utopia, in my sense. The other thing, I suppose, is just a degree of scepticism about the claims that are made on behalf of these kinds of technologies, particularly in the short term. There are a lot of criticisms of the kinds of things that Elon Musk is coming up with—whether they really will be this kind of transcendent implant. What I see at the moment are interesting experiments and proofs of concept. But I don't really see anything that is genuinely transformative. I'm definitely open to being surprised in this field.

Part of my scepticism here stems from older research interests that I've had in the human enhancement debate around pharmacological enhancements. Philosophers spent a lot of time debating those things, and lots of interesting work was done on it. But let's be honest: in reality, we haven't really had any genuine pharmacological enhancements, just pretty minor improvements. We might be going down the same route when it comes to these kinds of cyborg enhancements. That's another reason as well why I think the alternative pathway to the cyborg future—which is not one of upgrading humanity, but one of moving into this adjacent possible—is the more interesting pathway.

Mason: There is something interesting that the potential of the cyborg utopia leads to. Whether it's longevity and the collective afterlife, or even cyborgs in space, which, oddly enough, features both as an advantage of the cyborg utopia and a disadvantage of it. It was one of the most interesting possibilities in that chapter, and I just wonder if you could explain a little bit more about the possibility of cyborgs in space.

Danaher: Yeah, it does have that kind of cheesy 1980s science fiction title, or something even more dated. So within that chapter on the cyborg utopia, one of the arguments in favour of cyborgism is space exploration and travel. This is kind of the original rationale for the cyborg: the original coining of the term was in the context of helping us to explore space. But why would exploring space be a utopian project? Well, part of it goes back to this notion of expanding the horizons of human possibility. People like Neil Harbisson are expanding the horizons of possible human embodied existence. That's one horizon that we can explore. But there are also genuine geographical horizons that we can explore. The sad reality is that we've explored most of the horizons here on Earth, and the horizons that are left to us are in space. So, space provides this almost infinite landscape that we can expand out into, and explore new possible forms of human existence in that infinite landscape. That's interesting, I think. To me, it's part of this need for dynamism and openness in the future.

There's also an argument that I'm quite influenced by, from a guy called Ian Crawford, who is one of the leading proponents of human space exploration. He outlines this intellectual argument for space travel: to the extent that we think that new knowledge and new intellectual challenges are part of what gives meaning to our lives, exploring space is going to be a recipe for that kind of intellectual excitement and engagement, both in terms of scientific exploration of space—scientific experimentation, scientific examination of interstellar environments and other planets—and also new forms of aesthetic expression.

One of the points that Crawford makes is that, to some extent anyway, our aesthetic expression depends on the kinds of experiences that we have. As we expand out to explore new environments, we're going to have new kinds of aesthetic experiences and new forms of aesthetic expression. It's a recipe for enhanced cosmic artwork, for example. Also, we'll have to explore new forms of political and social arrangement. How will we deal with multi-generational starships? How will we manage colonies on multiple planets? What kind of political organisation, what kind of ethical rules, do we need for that? So there's something interesting here. There are jobs for political and ethical philosophers in this world. It's an intellectually stimulating project.

There's also another point here, which is that it may in some sense be existentially necessary for us to explore space. It certainly seems to be true in the long run that we'll need to get off the planet if we want to survive. But maybe even in the short run, it's something that we need to do to actually continue human existence…and continued human existence is a necessary condition for continued human flourishing.

The counterpoint to that is that there could be a lot of risks embedded in it. The philosopher Phil Torres has written this interesting paper about the existential risks of space colonisation. One of the points he makes is that as we expand out onto different planets, it's possible that humans will speciate, because they'll be facing different kinds of selective pressures in different environments. So they'll form different groups with different needs and different ideologies. That's going to be a recipe for potential conflicts between the different groups on different planets. How do we manage conflict here on Earth? Well, going back to the work of the British political philosopher Thomas Hobbes, we need some kind of Leviathan, some kind of political institutional structure that keeps the peace between people. Torres' point is that it's gonna be very, very difficult to have a cosmic, solar-system-wide or intergalactic Leviathan. So what's gonna happen, then, is that there's a danger that these different colonies, with different interests and needs, perceive each other as threats to their continued survival and flourishing. So they engage in these pre-emptive strikes to wipe out the threat. There's no cosmic Leviathan to keep the peace, and so we're gonna have this massive intergalactic war. This leads Torres to conclude that we should delay space colonisation and exploration as much as possible.

Some of what Torres says I think is fanciful and speculative. I think there are reasons to believe that, actually, living on different planets might reduce the kinds of conflicts between different groups. I use this kind of glib phrase in the book from Robert Frost: "Good fences make for good neighbours, and what could be a better fence than a couple of light years of cold dark space." But there are also going to be problems on individual colonies in space: because they face such extreme conditions of existence, conditions that aren't necessarily hospitable to creatures like us, they could create the conditions for very authoritarian forms of government. The astrobiologist Charles Cockell has written some very interesting papers on this phenomenon, about tyranny in space colonies being a serious problem. Those are some reasons to be cautious about the project of space colonisation being something that's truly utopian.

Mason: I almost wonder if the work that Neil Harbisson is doing with trans-species identity, and the new political ways in which we'll have to organise society here on Earth as we create a differentiated form of humanity based on all of our different cybernetic additions and enhancements, is what will prepare us for dealing with the politics of sub-speciation.

Danaher: Yeah, no, I think that's a weakness in the Torres argument. The assumption that he's making is that staying on the planet is better than going off-planet. But actually, there are lots of existential risks that we face when we're on the planet, and we could face very similar kinds of political strife. So we're gonna have to confront those kinds of problems anyway, probably, even if we stay put on Earth.

Mason: What you were just saying is that the reason cyborgism isn't really the utopia we're looking for is because it feels like these developments are so far away. But the utopia that could be just around the corner is the virtual utopia. Just help me to understand what you mean when you talk about this virtual utopia.

Danaher: This is the trickiest part of the book, by far. It's also the bit that I think has confused most people. One thing I'll just say at the outset is that I think the concept of a virtual form of existence is inherently problematic and nebulous. I don't think there's ever such a thing as a completely virtual way of life. But there is a way of life, I think, that has elements to it that qualify as virtual. Now, as to how I understand the concept of a virtual way of life: I'm better at defining what it's not than necessarily defining what it is. The form of virtual utopia that I don't agree with is what I call the stereotypical view of what a virtual utopia is, which is the computer simulation view. On this view, a virtual form of existence is one where you immerse yourself in a computer simulated environment. Something like…let's say the Holodeck from Star Trek, or Neal Stephenson's metaverse from his popular novel from the early 90s, Snow Crash—which was actually quite influential for people creating virtual reality technologies. That form of existence? That's certainly virtual in some senses, because some of the things that happen within a computer simulated environment, or some of the objects and people you encounter, aren't quite real.

One of the illustrations I have of this in the book is: imagine you're in a computer simulated environment where there's an apple on a table, let's say. Clearly, the apple isn't a real apple. It's a visual representation of an apple. It doesn't have the physical properties that a real apple has to have. It doesn't have the right mix of proteins and sugars and all that. It exists as a simulation of a real-world apple, and that's what makes it virtual in that world. But it's also true to say that lots of things that happen in a computer simulation will be real. You can have real conversations with other people through avatars in a virtual environment. We do this all the time already. We live an increasing amount of our lives in digital spaces, but I don't think anyone would say that the kinds of interactions that we have in those spaces are not real. In fact, they're very real and very consequential. The emotional experiences that you can have in a computer simulated world can be real. You can be really afraid and be really happy. You can be really traumatised by things that happen to you. People can "assault" you in a virtual environment—not in the sense that they physically harm you, but they can psychologically harm you. In law, we recognise psychological harm as a form of assault.

I think the stereotypical view of virtual reality is flawed, because it doesn't make these distinctions between the things that are real within a computer simulated environment and the things that are not.

Mason: You make it very clear that you're not talking about virtual reality as we know it currently—the headsets and the Oculus Rift. You're talking about this notion of the virtual whereby we're comfortable with certain things which are not physically real, and yet still actual. So, for example, fictional characters. You use the example of Sherlock Holmes in the book.

Danaher: Yeah. So the Sherlock Holmes example is: how does he exist? Well, he clearly doesn't exist as a real physical person, but he does exist as a real fictional character. You can make claims about Sherlock Holmes that are true and false. Sherlock Holmes lived at 221B Baker Street. That's a true claim about the fictional character Sherlock Holmes. You know, you can describe actions that took place in the novels. So, he has a real form of existence—he just doesn't exist as a real physical person. Different kinds of things in the world have different existence conditions attached to them. Things like apples and chairs have to have a real physical existence in order to count as an instance of an apple or a chair. But there are other things that don't actually have to have a physical existence to count as a real thing, right? That's actually one of the points I make about Sherlock Holmes. You could have detectives that exist in purely computer simulated form, because what a detective is, is really just a functional thing. It solves crimes.

There are already people trying to create AI that can help in solving crimes. Are those AIs not real? Are they virtual, simply because they exist inside a computer? No, because they are functional objects. What they need in order to really exist is to perform the right function. Again, this gets back to the point about things that exist in computer simulated environments: some of them are not real, some of them are purely virtual, but some of them are actually real, because they perform the functions that those things are supposed to perform.

Mason: They're real insofar as they can have an effect on us, and on our emotions and our experience.

Danaher: Yeah, so that's another kind of reality. They do make a difference to the world in some way.

Mason: You use the idea from Yuval Noah Harari, the author of Sapiens, that certain things we perceive in real life, in everyday reality, have some element of simulation to them. These fictions, these meta-fictions that we create, become real to us—whether it's religion or capitalism. These aren't natural things. They're artificial things that we've given a degree of agency. Therefore those fictions, again, become a real part of everyday lived reality.

Danaher: The Harari view is kind of the counterpoint to the stereotypical view. The stereotypical view of virtual reality is that we have this computer simulated thing. The Harari view—I refer to it in the book as the counterintuitive view—is that actually, large chunks of our lives are really all virtual. That's his main claim, right. You know, there are two ways of making that claim. Harari makes it one way, but I'm going to make an adjacent claim that I think supports the same point, which is that actually a huge amount of our lives is lived in artificially constructed environments as it is. Right now, as we're speaking, we're having this conversation in a room that shields us from the external environment, with artificial lighting, artificial heating, and so forth. Humans have long been creating these artificial environments in which we can live out our lives, in which we are shielded from a lot of the consequences, a lot of the negative features, of the real world.

You could argue that the long-term trend for civilisation is towards an increasingly virtual form of life, living inside increasingly artificial environments. So this is kind of a parallel to Andy Clark's point about us being natural-born cyborgs. What I'm suggesting here is that we're kind of natural-born virtual beings as well. Harari's point is slightly different, which is that, in addition to the artificiality of the environments that we live in, a lot of the meaning and value that we attach to the activities we perform in these environments is a projection of our imagination. He uses this example of religion. He uses this illustration: if you look around Jerusalem, lots of people attach religious significance and meaning to artefacts in that physical environment, but that significance isn't actually intrinsic or inherent in the objects. If you investigated them scientifically, you wouldn't find their holiness, so to speak. It's something that we project onto the environment through our minds. This is a more general point that has been made by others, in more or less radical forms.

I use a quote from Terence McKenna in the book—one of the most extreme illustrations—which is that reality is a collective hallucination. But, you know, philosophers as respectable as Immanuel Kant have essentially argued that a large part of what we experience in the world is something that we project onto that world. We're running a kind of virtual reality simulator in our minds that we use to interpret our experiences. Harari goes a step further. When people worry about what the future holds—whether we're all going to live inside virtual reality machines and play computer games all the time—he makes the claim that actually we're already doing that, and he goes so far as to suggest that religion is itself a virtual reality game. He also uses consumerist capitalism as an illustration of this. Religion is a virtual reality game where you score points by performing the right behaviours, and you level up at the end by going to paradise. This is literally the claim he makes, right?

Yeah, as provocative as Harari is, I think he's right to say that a large part of what we currently do, and the way we currently live, is virtually simulated in our minds. But I think he goes a step too far, because if you asked religious believers whether what they're doing is a virtual reality, they would say to us, "Absolutely not. I really believe that these things are holy, and what I'm doing really matters. I don't think that what I'm doing is inconsequential or trivial. It's not a game to me." So what I argue for instead is that we embrace this Harari-like counterintuitive view of what virtual reality is, but we step back a little bit from his extreme interpretation that everything is a kind of virtual reality game. We argue that there are only certain kinds of things that are virtual reality games, and they are the things that we ourselves know to be games—where we know that there is a kind of arbitrary set of rules that we've applied to the way in which we engage in and perform activities.

All games, to me, are a form of virtual reality. Take the example of chess: there's nothing in the laws of physics that dictates that you have to move pieces around the chessboard in a particular way. You don't. We have constructed a set of rules that we apply to how we engage with the chessboard, and they constrain how we behave in that environment. We know that they are arbitrary rules. Nevertheless, people play these games, and there are good ways of playing them. There are ways of playing them skillfully and well, and people derive great meaning and satisfaction from playing these games. Some people dedicate their entire lives to doing so, right? But they know that they are games. Just to finish the point about the virtual utopia chapter: we can use that as a model for a virtual utopia, where everything we do is, in a sense, a game.

Mason: When you set up the propositions at the beginning of the book, you're talking about this virtual utopia. I wondered, "Is John suggesting that we will escape into virtual reality?", but no—what you're suggesting is something much more nuanced. You set up the qualities that a virtual utopia should have, which are very similar to the rules of a game. I just wonder if you could share some of those qualities, and why you think they're so important for creating this virtual utopia.

Danaher: My understanding of the virtual utopia is technologically agnostic, in that I think you can realise a virtual form of existence in many different kinds of environments. You can do it in a computer simulated environment—I don't deny that. I'm open to that possibility, and I use examples of it in the book. But you can also realise it in the real world: games are a way of doing this. So, you know, I rely in the book on a theory from a philosopher called Bernard Suits about what a game is. Suits wrote this very odd book back in the 70s. It's a dialogue about what a game is and what a utopia is. What he argues is that a game is something that has three properties. It has a prelusory goal. It has a lusory attitude and a set of constitutive rules. A prelusory goal is a state of affairs that can be identified before you know what the game is, and that constitutes success in the game. In a sense, he argues, it's kind of the scoring of points in a game.

To use the illustration I have in the book—the game of golf. The prelusory goal in golf is to get your ball into a hole, and that's the end state that you want to reach. The constitutive rules are the way in which you have to go about achieving the prelusory goal. What the constitutive rules do is set up arbitrary obstacles to achieving the goal in the most efficient possible way. So the most efficient way to get a ball into a hole is just to pick it up, walk down the fairway and drop it in the hole. But that, of course, is not how you're supposed to play golf. There are limitations on what you can do: you have to use a club to hit the ball to get it in the hole. There are all sorts of other rules about when you're not allowed to ground your club, and about when you're in a hazard and have to drop the ball outside a certain area. So there are all these additional constitutive rules that place constraints on how we can get the ball into the hole. Those are the constitutive rules, and the lusory attitude is just a positive orientation towards the game: you accept the constitutive rules as the constraints on how you achieve the goal.

The short way of expressing Suits' view of what a game is, is that it is the voluntary triumph over arbitrary obstacles. That's the essence of what a game is. What I'm arguing for in the book is that we can actually use this as a model for a utopian form of existence, where what we should try to do is play games, create more games, and explore a landscape of different possible games. This holds within it the potential for utopia. But the key thing about that understanding is that it doesn't have to be computer simulated. We can be playing games in the real physical world, and that would count as a form of virtual existence. Because, to go back to the point I made about Harari: for me, what's wrong with Harari is that he doesn't acknowledge that some people don't see the rules and constraints on their behaviour as purely arbitrary. Whereas when you're playing a game, you are aware of the fact that they are arbitrary.

Mason: You discuss the virtual utopia in two ways. One instance is as a game, as you just described, but you also describe it as an opportunity for world building. I wonder if you could explain that second way of understanding the virtual utopia, and then bring the two together to help us understand what a virtual utopia might actually look like in practice.

Danaher: So you're right, there are two arguments that I have for a virtual utopia. One is based on this game-like model. The other one is a slightly more political understanding of what a virtual utopia is. I look at the work of the philosopher Robert Nozick, who wrote a famous book back in the 70s called Anarchy, State, and Utopia. That book is famous for the Anarchy and State parts, but most people ignore the last part of the book, which is the Utopia part—which to me is actually the most interesting part of the book, because it's the most novel part of it. He has this very interesting analysis of what a utopia is. What he says is that a utopian world is a world that is stable, and a world that is stable is a world in which every member of that world likes it more than any other possible world. Then he argues that you can't possibly realise that utopia in the real world, because everyone has different understandings of what an ideal form of existence would look like. They have different preferences, different ways in which they order what is valuable and important to them.

Some people might prioritise playing—to use the game analogy—one kind of game over another kind of game. We can't have a utopia in which everyone is forced to play chess. Or, to use a literary illustration, Hermann Hesse has this novel, The Glass Bead Game, where there's this one single game that everyone in society is oriented towards playing. It's the source of meaning and value in that society. That doesn't look utopian, because some people have different preferences. So Nozick says, "Well, you can't realise a stable world, a utopian world. So what can you do?" He says, "What you can do is try to create a meta-utopia." What that means is you create a world building mechanism: a way in which people can create the kind of world that they prefer, that matches their preferences, and then somehow they're kept isolated from people with competing preferences. He argues that a libertarian, minimal state is the meta-utopia. A minimal state allows people to create these different associations that have whatever value structure they prefer; they can live within those associations, and they can migrate between different associations if they like. All the state does is try to keep the peace between the different associations. That's what a meta-utopia is. It's a world building mechanism for people to create the associations that they prefer.

What I argue in the book is that I think that's an interesting proposal and model of what utopian existence would look like, but it faces some practical limitations, particularly if we're going to try to realise it in the real world, in the physical world. There are geographical limitations of space—how are we going to create all these different worlds, these different associations? How are you actually going to police the boundaries between the different associations? And what if one association prefers to convert everybody else to their cause—the missionaries or the imperialists, in the language that Nozick uses in his analysis? It seems it's going to be very practically difficult to do this. What I do suggest—and this is where I do rely heavily on the notion of a computer simulated model of utopia, virtual utopianism—is that we could create different worlds in a computer simulated environment, where we don't face the same kinds of physical constraints and practical difficulties that we would face in Nozick's vision of utopia. So I don't see those two different utopias—the utopia of games and the virtual meta-utopia—as two different things. I think they're complementary visions of what a virtual utopia is. You can play the games, and you can also create these different virtual, computer simulated associations in which you can consort with like-minded people.

I should also add, though, that when I argue for this utopian vision—one in which we can build different worlds and play different games—I don't mean that those are the only things we do. It's not that we only ever play games. There are still lots of other things open to you in life. You can have friendships, you can have families, you can have different kinds of social organisations, you can perform good moral deeds towards your neighbours. These things are all still accessible to us in this model. It's just that instead of work or traditional political structures being the main focus of our attention, we focus on games instead.

Mason: If these are possible utopias, then why don't we start them right now, here, on terra firma? Here on terrestrial Earth? There are so many problems, such as climate change, that we could solve through gamifying certain things, which would enable us to continue to live on this planet rather than go off and live our cyborg future out in space. I wonder, could what you're proffering in the book be applied to the real world as we live in it now, with the challenges that we're facing on the horizon? The biggest one being climate.

Danaher: To some extent, I think that what I'm proposing in the book is already happening. I use some examples that suggest that the amount of time people spend on leisure—playing computer games is one illustration of this—has increased, particularly among young people, because they find it more difficult to find employment. So it's already the case that there's this kind of gamification of life taking place. Whether it can be used to solve existential risks, like climate change? You know, there are people who are experimenting with ways of harnessing collective intelligence and artificial intelligence to solve some of these problems. I think Thomas Malone from MIT wrote this interesting popular book last year called Superminds, where he talks a lot about some of the ways in which his lab is trying to create games with a gamified structure that enable people to come up with policy proposals to solve real world problems. I think those proposals are interesting.

One of the assumptions that I do have in the book is that I think we're going to increasingly rely on artificial intelligence, machines, and automating technologies to address some of these problems over time. I spoke to a guy called Miles Brundage about this, actually, on my own podcast. He wrote this interesting paper, “The Case for Conditional Optimism about AI”. It's very conditional, but one of the main points he makes is that AI can actually help to solve global coordination problems that we have, including problems around arms control and climate change. We can use gamified structures to address some of these problems, but I think it's going to be partly a collaboration between humans and machines, and also increasingly something that we outsource to machines.

Mason: In that case, cyborg utopia or virtual utopia? If you had to pick, which one would you choose, John?

Danaher: I come down in favour of the virtual utopia, because I think it's more practically achievable in the short run. I think it also contains something genuinely post-work, and it allows for a serious kind of human flourishing. That's not something that we've addressed in this conversation, so let me just briefly say that when I initially present this notion of a utopia of games to people, they recoil from it because they think there's something trivial about that existence. But I try to point out that actually, there are lots of good things that you can achieve within a game. You can perform moral acts within a game-like structure. You can achieve mastery over certain skill sets. There are intrinsic goods associated with the activities that you perform in a game. It also provides this infinite landscape of possibility for us to explore, so it fits with the horizonal model of utopianism that I was outlining earlier on.

I'm not, however, completely opposed to the cyborg utopia, as it has come out in this conversation. There are certain ways of becoming cyborg-like that, I think, feed into this virtual model of utopia. It's about new kinds of entertainment, as we were saying, and new forms of existence, and not about doubling down on the worst features of human existence. On balance, though, I think that the cyborg utopia is less likely in the medium term, and so that's why I favour the virtual utopia.

Mason: I mean, that's the thing that links these two forms of utopia. Is it really the fact that a post-work society is going to give us so much more opportunity to explore a spectrum of difference in the ways in which we live in the future?

Danaher: Yeah, I think that's right. I like the way that you framed it—which I now wish I had used in the book—which is that we have two possibilities: experimenting with our bodies and minds, and experimenting with the environments in which we live. One corresponds to the cyborg utopia, and one corresponds to the virtual utopia. Even though I am sceptical about the medium term prospects of the cyborg utopia, that doesn't mean that we shouldn't pursue it. It's partly an issue of prioritisation of resources over time, and where we put things…so it can be put on the back burner to some extent.

Mason: How confident do you feel that either of these utopias will ever be achieved?

Danaher: Yeah, look, that's a great question. So, I don't necessarily feel confident that either of them will be achieved. One thing I say in the book—and I've said a lot in interviews that I've given—is that I'm not a technological determinist, or a fatalist. I don't think these things are just naturally going to happen. These are things that will require political effort and collective effort. It's not something that's going to happen as a matter of course. We'll have to agitate for it, reform our societies in favour of it. I had a very specific aim in this book, which was to evaluate the different possible post-work utopias, because I felt that this was something that was not being done in the literature on automation and the human future. There's a kind of assumption that these things will be great, and there are implied ethical principles and value principles that guide that claim, but they're not made explicit and they're not subjected to a rigorous analysis. That's what I was aiming to do in the book. The hope is that by articulating a vision of what would be a good post-work utopia, this will provide the motivation to think about how we can really, practically implement it.

Mason: So really, this is a book that is there to inspire a multitude of possibilities for a post-work future, and to encourage people not to be so pessimistic about the idea of human obsolescence in the workplace?

Danaher: Yeah, that's exactly right. So it's a book that's trying to motivate and inspire people towards a positive vision of the future.

Mason: John Danaher, thank you for your time.

Danaher: Thank you.

Mason: Thank you to John for sharing his insights into the developments that might massively transform the world of work. You can find out more by purchasing his book Automation and Utopia: Human Flourishing in a World Without Work, available now.

If you like what you've heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.

More episodes, transcripts and show notes can be found at Futures Podcast dot net.

Thank you for listening to the Futures Podcast.

Further Reference

Episode page, with introductory and production notes. Transcript originally by Beth Colquhoun, republished with permission (modified).

