Luke Robert Mason: You’re listening to the Futures Podcast with me, Luke Robert Mason.
On this episode, I speak to academic and law lecturer, John Danaher.
When I said that humans are obsolescing, that doesn’t mean that they’re going to become extinct or irrelevant to the future. It just means that their activities will be less significant.
John Danaher, excerpt from interview.
John shared his insights into the possibility of a post-work economy, the impacts of increasing automation, and how our future might be determined by either becoming a cyborg or retreating into the virtual.
This episode was recorded on location in London, England, before John’s book launch event at London Futurists.
Mason: You open the book with this phrase—this statement, really, “Human obsolescence is imminent.” What did you mean by that?
John Danaher: Yeah, I’ve been taking a bit of flak for using that phrase to open the book because it sounds so pessimistic and ominous. I have to confess it was a little bit of rhetorical hyperbole. I mean that humans are becoming less useful in making changes in the world, or to particular domains of activity. I try to trace out this trend towards human obsolescence across different domains over human history. Agriculture is an obvious example: once upon a time, the majority of people used to work in agriculture. We now see a significant decline in the number of people working in agriculture-related industries: less than 5% in most European countries, down from over 50% as recently as 100 years ago.
I also look at the decline of human activity in manufacturing, in medicine and the professions and law, and in scientific inquiry. I look at some new studies that have been done on robotic scientists who can create their own experiments and test their own hypotheses. In politics, in bureaucratic management and in policing. So, I look at the trend towards automation across all these domains of activity, that I think supports the claim that there is this growing obsolescence of humans.
There is one qualification to that, though, which is that when I say that humans are obsolescing, that doesn’t mean that they’re going to become extinct or irrelevant to the future. It just means that their activities will be less significant.
Mason: I mean, you actually go one step further, and you say that this could be an opportunity for optimism.
Danaher: So this is it. This is the kind of rhetorical strategy in a sense. You’re setting it up with this seemingly ominous claim that we’re obsolescing, and this is something that a lot of people will be worried about. They’ll view it in a pessimistic light, but I try to argue that it is actually an opportunity for optimism. Partly because it allows us to transcend our existing way of life—in particular, to escape from the drudgery of paid employment and to pursue a post-work utopia.
Mason: I think that is really important, because you’re not talking about the obsolescence of humanity from Planet Earth. You’re talking about the obsolescence of humanity within the workplace, in the workforce.
Danaher: Yes, exactly. Yeah.
Mason: So can we talk a little bit about this idea of a post-work future? In the book, you set up this notion that a post-work future will be a good thing. Do you think it really will be a utopian outcome to have a post-work future? Or do you think it could lead to boredom and chaos?
Danaher: If it’s the case that humans are no longer going to be useful in the workplace or are going to become less and less useful over time, such that more and more people will not pursue paid employment in their lives—this leads to two deprivations. One is a deprivation of income, which of course is essential nowadays because people need an income in order to survive, to pay for the goods and services that enable them to flourish. But also it could lead to a deprivation of meaning, because we live in societies where work is valorised, in the sense that it’s valued. Work ethic is seen as a positive thing. It’s how people make a contribution to their societies. It’s how people often define themselves. If we take that away from people, they’re going to have this crisis of meaning. So, how can we fill that gap in order to address that crisis of meaning? The real goal of the book was to examine that potential crisis of meaning, and whether there is actually an opportunity for optimism embedded in it.
Mason: I mean, there are some people who love their jobs. But in the book, you say you should really hate your job.
Danaher: What I argue in the book is not so much that everybody will necessarily hate their jobs, because hatred is a feeling you have towards work that you do. It’s quite possible that many people have positive feelings towards the work that they do. What I argue instead, is that work is structurally bad and that we’ve fallen into a pattern of employment that is bad for many workers, getting worse—partly as a result of technology. And so we should be concerned about the future of work, and we should look to possible ways to transform and possibly even transcend the need for work.
Mason: So what we’re talking about is really work that’s done for money. Exchanging time for money: that’s your definition of work.
Danaher: Yeah, you know, you have to be careful when you talk about the post-work future and the concept of work. The first thing you learn when you talk about the notion of a post-work future is that people have different definitions in mind of what work is. Some people have very expansive definitions of work. They think work is any physical or mental activity performed by humans. So if you talk about a post-work future with that definition in mind, then it really makes no sense, because humans are always going to perform some kinds of physical or mental activities. We’re always going to work in that broad and expansive sense. I try to adopt a narrower interpretation of work as paid employment. So that means that work, for me, is not any particular kind of activity. It is, rather, a condition under which activities are performed, namely a condition of economic reward of some sort. The economic reward does not necessarily have to be immediately realised. Sometimes there are unpaid forms of work that are done in the hope of receiving future rewards. There are lots of young students who take unpaid internships, for example, in the hope that they will secure paid employment.
Mason: The reason we’re able to talk about a post-work future is this possibility of automation. I mean, that’s at the core of the book—the fear that we’re going to become obsolete because of automation. Automation is going to be the thing that takes our jobs. I just wonder whether the automation of work is both possible and desirable.
Danaher: Lots of things have been written in the past decade or so about technological unemployment and the future of work, and there have been many interesting arguments and claims about the percentage of jobs that are computerisable or automatable. I try to engage with those kinds of studies and look at whether it’s really possible to automate work. I think there are a couple of points to bear in mind when you’re trying to evaluate that claim. One is that I think a lot of people approach this with the wrong set of concepts. They think about the displacement of workers or jobs, when they really should be thinking about the displacement of work-related tasks.
The kinds of automating technologies that we’re developing at the moment are—to use a somewhat technical term—they’re kind of narrow forms of artificial intelligence. They can be good at performing certain kinds of functions or tasks in the workplace. They’re not general forms of intelligence, they can’t choose to perform all the different tasks in the workplace. So what happens when you automate or introduce automating technology into the workplace is that you replace humans in the performance of certain tasks. That doesn’t necessarily mean that you eliminate jobs or eliminate workers, because oftentimes, workers can move into complementary tasks.
One of the examples I have of this in the book is to do with legal workplaces, let’s say. Within a given law firm there are lots of tasks that a lawyer or a team of lawyers will perform in order to provide a valuable service to their clients. They’ll engage in document review, reviewing contracts or other complex legal documents. They will engage in legal research looking at cases and statutes to see how the law can be used to the benefit of their client. They will entertain and schmooze with their clients to make them feel good about the service that they’re offering. They will present and argue in court on behalf of their clients. So there are all these different tasks that are performed within that workplace. Automating technologies at the moment can do some of those tasks. We’ve got pretty good technologies now for document review, and emerging technologies that enable some kinds of basic legal research and prediction of the outcome of cases for law firms. At the moment, we don’t have robots that are very good at schmoozing with clients and entertaining them. So if you introduce automating technologies into a legal workplace, you might find that human workers are displaced from the tasks of document review and certain kinds of legal research. They move into the more customer relations side of it, and maybe also then argue cases in court, in order to persuade a judge or a jury.
Automation changes the dynamic of the workplace. That might mean that some workers are eliminated because their jobs are purely defined in terms of the tasks that machines are good at. But other workers aren’t necessarily eliminated because they have these other things that they can perform that complement what machines do.
Mason: I think that’s the piece that we so easily forget. In actual fact, this automation of the workplace could lead to a complementary relationship between AI and the human. In fact, IBM Watson in the US, when they look at the work that they’re doing to review medical papers, they talk about it as a collaboration between the doctor and IBM Watson. IBM Watson never diagnoses a patient. It makes suggestions to a doctor—a human doctor—to then go and diagnose a patient based on the information that IBM Watson has ingested and has tried to understand. I think if we start seeing the future of work being a collaboration, then maybe there’s something more exciting about how we engage with this automation. Rather than see it as a threat, maybe we could see it as a potential collaborator.
Danaher: In most of these debates about technological unemployment, we focus on the displacement potential of automation, how it displaces workers. There’s also this related phenomenon of how automation can complement what human workers do, and we can collaborate with machines. That’s kind of the hope, I think, amongst the mainstream economic views—that really, technology won’t result in this massive decline in jobs. It’ll just involve this structural reorientation of the workplace so that we just collaborate with machines. We do what we’re good at and the machines do what they’re good at. This is the main objection to the claim that we’ll have widespread technological unemployment.
When I say that there’s a possibility of a post-work future, I don’t think that means that no one will work in the future. I just mean that a growing percentage of the adult human population will not work for a living. One of the ways in which I illustrate this is in terms of something called the labour force participation rate. The labour force participation rate is the percentage of working-age adults who are either in work or actively seeking work. In most Western European countries, that figure is somewhere between 60 and 70%. So it’s already the case that about 30 to 40% of the adult population don’t work for a living, or don’t even want to work for a living. So when I talk about a post-work future, I’m talking about a future in which that number of non-working adults continues to grow. What does it mean to reach a true post-work future? I don’t know if there’s an exact boundary line. But certainly if more than 50% of the population is not working, I think you’ve radically changed the kind of world that we live in.
In terms of this complementarity effect of automation, I’m a little bit sceptical about the potential for this to be a recipe for lots of jobs in the future. When we think about the complementarity effect, the assumption here is that machines will replace humans in some kinds of tasks, but this will open up a space of complementary tasks for human workers. But there’s a challenge here, which is, can you actually get the people who are displaced into these complementary tasks? It may turn out to be difficult to do that. They may need to be educated and retrained. There are certain workers who may be at a stage of their lives where it’s just not really feasible or possible to educate and retrain them. It may also be the case that there’s just not a huge amount of political will to do this, or political support for it, or educational support for it. So can we actually adapt to this new reality where we have to train different kinds of skills? That’s a serious challenge.
There’s also another challenge here, which is that the assumption is that we’ll be able to train humans into these new tasks at a rate that is faster than the rate at which technology is improving in those tasks. This is something that I think people get wrong when they think about automation and narrow AI. They assume that AI is only good at particular tasks. But of course, we’re developing multiple different streams of AI that are good at different tasks. So it could be the case that we can train machines to perform these complementary tasks faster than we can train humans. To give a practical illustration: it takes about 20 to 30 years to educate and train a human for the workplace, whereas machines can often be trained for new tasks far more quickly.
Mason: And to that point, you say in the book that it can actually lead to this thing called the cycle of immiseration—this cycle whereby it can never catch up.
Danaher: Yeah. So it’s this idea that automation can be particularly challenging for young people, because they need to train themselves to have the skills that are valued in the economy. That means they have to get an education that will give them those skills. But how can they get that education, when education is increasingly costly and expensive? Oftentimes, the way in which students pay for their education is by working part time, but an awful lot of the jobs they work in part time are the jobs that are most at threat of automation. So how are they going to be able to pay for the education that lets them escape from this threat of automation? This is the potential cycle of immiseration that they can never get out of—the rut that they’re in.
Mason: There’s a lot of scepticism around this idea of technological unemployment. In the book, you use the Luddite fallacy to explain some of that scepticism. I mean, what is the Luddite fallacy?
Danaher: So the Luddites were famously these protesters in the early part of the Industrial Revolution. They were followers of Ned Ludd, who some people claim is a fictional character; there’s an interesting history to that. They smashed these machines because they saw them as a threat to their employment. But looking back on their activities from the vantage point of 150 years later, it seems that they were wrong to do so, in the sense that the kinds of automation that existed in the early phases of the Industrial Revolution didn’t lead to widespread technological unemployment. In fact, there are probably more people in work in the world today than there ever have been before.
So it seems like a fallacy to assume that automation will lead to lasting unemployment. That then leads into this argument about the complementarity effect. There isn’t a fixed number of jobs out there to go around. We’re always creating new jobs in light of the new kind of socio-technical reality that we’ve created.
Mason: Even if some of those jobs, as David Graeber says, are “bullshit jobs.”
Danaher: Right, yeah. Even though they’re meaningless or pointless administrative jobs, they’re still jobs that are paid.
Mason: As you’ve just outlined, the automation of work is—to a degree—both possible and desirable. But you’re clear to state in the book that the automation of life, however, is not as desirable. Could you explain the difference between the two, and why that’s so important?
Danaher: If we look into how automated technologies affect life more generally, not just working life—I think there are reasons for pessimism. One of the ways in which I illustrate this in the book is to use the example of the Pixar movie, WALL‑E. Very roughly, WALL‑E depicts this kind of dystopian future for humanity where the Earth has become environmentally despoiled. Humans have had to leave the planet on these spaceships, which are bringing them to some other place where they can live, and there are lots of robots in this future. Lots of automating technologies. The humans on these interstellar spaceships are these obese, slug-like beings. They float around in these electronic chairs. They’re fed a diet of fast food and light entertainment, and there are all these robots around them scurrying about, doing all the work that needs to be done to fly these ships. This has been referred to by some technology critics as the sofalarity. We all just end up on our sofas, being fed entertainment and food and everything we need by automating technology. So we don’t really do anything; we just sit back and enjoy the ride. Even though this is an extreme and satirical depiction of the automated future, it does—I think—contain a kernel of truth and something that we should be concerned about.
An awful lot of how we derive meaning and value from our lives depends on our agency. The fact that we, through our activities, make some kind of difference to the world. We do things that are objectively valuable to the societies in which we live in and maybe in some other grander, cosmic sense of objective value, and that we are subjectively engaged and satisfied by the actions that we perform. The problem with automated technologies is that they kind of cut or sever the link between human action and what happens in the world. Because what you’re doing when you rely upon an automated technology is that you’re outsourcing either physical or cognitive activity to a machine, so that you’re no longer the agent that’s making the difference to the world. I think this is a serious threat to human meaning and flourishing and something that we should be concerned about.
Mason: In the book, you set up these two possible scenarios, these two possible utopias: The cyborg utopia, and the virtual utopia. First, I want to talk about this idea of this cyborg utopia. I mean, how would we build a cyborg utopia?
Danaher: People might be familiar with the origin of the term: it’s a neologism, short for “cybernetic organism”. The idea, which has taken hold in the biological and social sciences, is of something that humans can aspire to, namely that they can become more machine-like. What does that mean in practice? There are two different understandings of what a cyborg is, particularly in philosophy. One understanding is that a cyborg is a literal fusion between human biology and a machine: you’re integrating machine-like mechanisms into biological mechanisms so that they form one hybrid system. An example of this would be something like a brain-computer interface, where you’re incorporating electrical circuits or chips into neural circuits in order to perform some function from the combination of the two things.
For people who listen to this podcast: you interviewed one of the leading pioneers in cyborg technology early on—Kevin Warwick, right? He’s done all these interesting, pioneering studies on brain-computer interfaces and how you can implant chips in one person’s brain and send a signal to a robotic arm. That’s an illustration of this form of literal fusion between human biology and technology.
There’s another understanding of what a cyborg is, though, that’s quite popular in certain sectors of the philosophical community. It’s mainly associated with a figure called Andy Clark, who says that we’re all natural-born cyborgs. That is, we are, by our very natures, a technological species. One of the defining traits of humanity is that we’ve always lived in a technological ecology. We don’t live in the natural world—we live in a world that’s constructed by our technologies. We have these relationships of dependency with technology, and also interdependency. We use hammers and axes and so forth to do things in the world, and we’ve been increasing the technologisation of our everyday lives over the past several thousand years. So we’re more integrated with, and more dependent on, technology. We’re becoming more cyborg-like over time.
For Clark, the relationship that you have with your smartphone—let’s say if you’re using Google Maps to walk around a city—is a very interdependent relationship with the technology. You have a little avatar that you follow on screen, and your movements affect the image that you see on the screen. That kind of dependency relationship is an illustration of this other path to cyborg status. It doesn’t mean that you literally fuse your biological circuits with the machine circuits, but you have this kind of symbiotic relationship with the technology. That means you are a cyborg. The difference between these two kinds of cyborg is a difference of degree as opposed to a difference of fundamental type, I think. The more interdependency you have with an artefact, the more cyborg-like you become.
Mason: It’s surprising to me that you start with a cyborg artist, Neil Harbisson: a colourblind artist who has, for want of a better description, an antenna surgically implanted into the back of his skull that allows him to hear colour. Although it’s not quite hearing; it’s slightly more nuanced than that. It’s a form of electrical bone conduction, which vibrates his skull and gives him a sense of sound. What’s interesting about Neil Harbisson is that he’s a colourblind artist who’s now able to hear colour, and he now dreams in these sonochromatic dreams. He no longer sees this antenna as a device; he sees it as an organ, as part of his body. In my own interactions with Neil, if you go up to him and you watch people try to touch the antenna, it’s as if I came up to you, John, and tried to touch your nose. He has the same sort of revulsion to it. It feels to him very much like this organ has become—as Andy Clark would say—profoundly embodied. I just wonder why you started with that example of an artist exploring the cyborg-isation of his body, because what he’s doing seems to me the furthest thing from something that is practical for use in the workforce.
Danaher: I think he’s a good example, partly because he’s somebody who self-identifies as a cyborg. I use a quote from an interview with him where he says, “I don’t use technology, I am technology.”—that’s the phrase that he uses. He has also set up the Cyborg Foundation, which campaigns for the rights of cyborgs, and, more recently, the Transpecies Society, which argues for a post-human identity as a concept.
What I find interesting about what Neil is doing is that he is using technology to transcend the limitations of the human biological form. To me, what he’s doing is creating a new kind of sensory engagement with the world, which I find interesting. He’s experimenting with the limits of human form. To me, this is a utopian project, because one of the things I argue in the book is that we shouldn’t have a blueprint conception of what a utopia is—something like Plato’s Republic, or Thomas More’s Utopia, where there’s a very rigid formula for what the ideal society should look like. I think we should have a more horizonal understanding of what a utopia is. A utopian society is one that’s dynamic in the right ways. It’s not something that’s driven by interpersonal conflict and violence; that’s the wrong kind of dynamism to want in a society. So it’s stable in that respect, it’s peaceful. But there’s an open future for people, and we’re expanding into new horizons. What I think Neil is doing is expanding into a new horizon of possible human existence, and that’s what I find stimulating and exciting about what he’s doing.
Mason: It seems to me they’re trying to explore a spectrum of human possibilities. The cyborg is no longer, as Kevin Warwick or even Tim Cannon from Grindhouse Wetware would have it, about upgrading the human or making them better or stronger or faster or smarter. For Neil, or Moon, it’s really about exploring a multitude of differentiated sensory modalities, allowing themselves to be more similar to animals than to machines.
Danaher: It’s not necessarily that they’re trying to compete with machines in terms of cognitive ability. What they are doing is exploring different kinds of morphology, different kinds of phenomenology, and different ways of experiencing and engaging with the world. There are two different visions of what transhumanism is, let’s say. There is the humanity-on-steroids view, which is that we’re upgrading our existing abilities: you just want more intelligence, more strength, more happiness—that kind of thing. Maybe the David Pearce understanding of transhumanism, that it’s the three supers: super intelligence, super happiness and super longevity; super long lives. What Neil and Moon are doing is something different, which is trying to explore the adjacent possible, I guess: the other forms of human existence that might be possible out there.
Mason: So the question then becomes: are we creating that form of cyborg utopia to have something to do in a post-work society? Because that’s not really going to help us compete with machines. Versus what Tim Cannon is arguing for, which is to enhance humanity to a level at which it can be competitive with machine-like processes. If we’re going to be competing in the workplace against automation, robots and AI, then if we’re able to upgrade our brain and retain all of the fuzziness that makes humans special—but also do all the things that machines can do—then that makes us a much more useful worker.
Danaher: There are these different ways of pursuing the cyborg project, either the one of transcending what is possible for humans, and exploring new forms of sensory and embodied engagement with the world. I outline that as one of the main arguments in favour of the cyborg utopia. But the counterpoint to that, and one of the detractions from it, I think, is pursuing the other version of it, which is like upgraded humans, because I think what’s gonna happen if we do that is it’s just going to double down on the worst features of the economy that we have at the moment. So you know, instead of just competing on education for employability, you’re also going to be competing on having the right kinds of cyborg implants. Some people might think this holds a degree of hope for the future of work because what it might do is it might increase the power of labour relative to capital, because cyborg workers have more bargaining power than ordinary human workers. But I’m sceptical of that because it depends on how cyborg implants get distributed amongst the workforce, you know. Is this something that’s only going to be available to an elite few?
Also, if you think about the kinds of things that a cyborg worker could do better than a machine, based on what we see at the moment, it’s probably going to be something like a warehouse worker or physical worker with an exoskeleton that just enables them to perform dexterous physical tasks with greater speed, efficiency, that kind of thing. At the moment, it’s the case that those kinds of work are often the least valued and least pleasant forms of work in human society. So if that’s the way that the cyborg implants are going to go, it doesn’t seem to be a recipe for flourishing or utopia.
Mason: It does seem that the thing that’s on the near horizon is the sort of cyborg upgrade that’s similar to non-neural prosthetics: the exoskeletons that allow humans to lift heavier objects. But it also feels like there is going to be a race around the human brain, around brain-computer interfaces. It feels like Bryan Johnson’s company Kernel, in competition with Elon Musk’s Neuralink, might be the battle we see over the future of work. I just wanted your opinion on those sorts of cybernetic enhancements, the ones that look like they’re going to be on the market potentially very soon, if the ways in which they’re advocating for these sorts of technologies hold true.
Danaher: If these implants are created partly with the aim of upgrading humans in such a way that they’re competitive with machines, I think we’re going to double down on the worst features of the employment market. So this isn’t a recipe for a post-work utopia, in my sense. The other thing then, I suppose, is just a degree of scepticism about the claims that are made on behalf of these kinds of technologies, particularly in the short term. There are a lot of criticisms of the kinds of things that Elon Musk is coming up with. Whether they really will be this kind of transcendent implant. What I see at the moment is interesting experiments and proofs of concept. But I don’t really see anything that is genuinely transformative. I’m definitely open to being surprised in this field.
Part of my scepticism here stems from older research interests that I’ve had in the human enhancement debate around pharmacological enhancements. Philosophers spent a lot of time debating those things, and lots of interesting work was done on them. But let’s be honest: in reality, we haven’t really had any genuine pharmacological enhancements, just pretty minor improvements. We might be going down the same route when it comes to these kinds of cyborg enhancements. That’s another reason why I think the alternative pathway to the cyborg future—which is not one of upgrading humanity, but one of moving into this adjacent possible—is the more interesting pathway.
Mason: There is something interesting in what the potential of the cyborg utopia leads to. Whether it’s longevity and collective afterlife, or even cyborgs in space, which, oddly enough, features both as an advantage of the cyborg utopia and as a disadvantage of it. It was one of the most interesting possibilities in that chapter, and I just wonder if you could explain a little bit more about the possibility of cyborgs in space.
Danaher: Yeah, it does have that kind of cheesy 1980s science fiction title, or something even more dated. So within that chapter on the cyborg utopia, one of the arguments in favour of Cyborgism is space exploration and travel. This is kind of the original rationale for the cyborg, the original coining of the term was that it would help us to explore space. But why would exploring space be a utopian project? Well, part of it goes back to this notion of expanding the horizons of human possibility. So people like Neil Harbisson—they’re expanding the horizons of possible human embodied existence. That’s one horizon that we can explore. But there’s also genuine geographical horizons that we can explore. The sad reality is that we’ve explored most of the horizons here on earth and the horizons that are left to us are in space. So, space provides this almost infinite landscape that we can expand out into, and explore new possible forms of human existence in that infinite landscape. That’s interesting, I think. To me, it’s part of this need for dynamism and openness in the future.
There’s also an argument that I’m quite influenced by, from a guy called Ian Crawford, who is one of the leading proponents of human space exploration, where he outlines this intellectual argument for space travel. To the extent that we think that new knowledge and new intellectual challenges are part of what gives meaning to our lives, it seems like exploring space is going to be a recipe for that kind of intellectual excitement and engagement—both in terms of scientific exploration of space, scientific experimentation, scientific examination of interstellar environments and other planets, but also new forms of aesthetic expression.
One of the points that Crawford makes is that—to some extent, anyway—our aesthetic expression depends on the kinds of experiences that we have. As we expand out to explore new environments, we’re going to have new kinds of aesthetic experiences and new forms of aesthetic expression. It’s a recipe for enhanced cosmic artwork, for example. We’ll also have to explore new forms of political and social arrangement. How will we deal with multi-generational starships? How will we manage colonies on multiple planets? What kind of political organisation, what kind of ethical rules do we need for that? So there’s something interesting here. There are jobs for political and ethical philosophers in this world. It’s an intellectually stimulating project.
There’s also another point here, which is that it may in some sense be existentially necessary for us to explore space. It certainly seems to be true in the long run, that we’ll need to get off the planet if we want to survive. But maybe even in the short run, it’s something that we need to do to actually continue human existence…and continued human existence is a necessary condition for continued human flourishing.
The counterpoint to that is that there could be a lot of risks embedded in it. The philosopher Phil Torres—he’s written this interesting paper about the existential risks of space colonisation. One of the points he makes is that as we expand out onto different planets, it’s possible that humans will speciate, because they’ll be facing different kinds of selective pressures in different environments. So they’ll form different groups with different needs and different ideologies. That’s going to be a recipe for potential conflicts between the different groups on different planets. How do we manage conflict here on Earth? Well, going back to the work of the English political philosopher Thomas Hobbes, we need some kind of Leviathan—some kind of political institutional structure that keeps the peace between people. Torres’ point is that it’s gonna be very, very difficult to have a cosmic, solar-system-wide or intergalactic Leviathan. So what’s gonna happen then is that there’s a danger that these different colonies, with different interests and needs, perceive each other as a threat to their continued survival and flourishing. So they engage in these preemptive strikes to wipe out the threat. There’s no cosmic Leviathan to keep the peace, and so we’re gonna have this massive intergalactic war. This leads Torres to conclude that we should delay space colonisation and exploration as much as possible.
Some of what Torres says I think is fanciful and speculative. I think there are reasons to believe that, actually, settling on different planets might reduce the kinds of conflicts between different groups. I use this kind of glib phrase in the book, adapted from Robert Frost: “Good fences make for good neighbours, and what could be a better fence than a couple of light years of cold dark space.” But there are also going to be problems on individual colonies in space. Because they face such extreme conditions of existence—conditions that aren’t necessarily hospitable to creatures like us—they could create the conditions for very authoritarian forms of government. The astrobiologist Charles Cockell has written some very interesting papers on this phenomenon, about tyranny in space colonies being a serious problem. Those are some reasons to be cautious about the project of space colonisation being something that’s truly utopian.
Mason: I almost wonder if the work that Neil Harbisson is doing with trans-species identity—and the new political ways in which we’ll have to organise society here on Earth as we create a differentiated form of humanity, based on all of our different cybernetic additions and enhancements—will prepare us for dealing with the politics of sub-speciation.
Danaher: Yeah, no, I think that’s a weakness in the Torres argument. The assumption that he’s making is that staying on planet is better than going off planet, but actually there are lots of existential risks that we face when we’re on planet, and we could face very similar kinds of political strife. So we’re gonna have to confront those kinds of problems anyway, probably, even if we stay put on Earth.
Mason: What you were just saying suggests that the reason cyborgism isn’t really the utopia we’re looking for is that these developments feel so far away, whereas the utopia that could be just around the corner is the virtual utopia. Just help me to understand what you mean when you talk about this virtual utopia.
Danaher: This is the trickiest part of the book, by far. It’s also the bit that I think has confused most people. One thing I’ll just say at the outset is that I think the concept of a virtual form of existence is inherently problematic and nebulous. I don’t think there’s ever such a thing as a completely virtual way of life. But there is a way of life, I think, that has elements to it that qualify as virtual. Now, how I understand the concept of a virtual way of life…I’m better at defining what it’s not than necessarily defining what it is. The form of virtual utopia that I don’t agree with is what I call the stereotypical view of what a virtual utopia is, which is the computerised—the computer simulation—view. On that view, a virtual form of existence is one where you immerse yourself in a computer simulated environment. Something like, let’s say, the Holodeck from Star Trek, or the metaverse from Neal Stephenson’s popular early-90s novel Snow Crash—which was actually quite influential for people creating virtual reality technologies. That form of existence? That’s certainly virtual in some senses, because some of the things that happen within a computer simulated environment, or some of the objects and people you encounter, aren’t quite real.
One of the illustrations I have of this in the book is: imagine you’re in a computer simulated environment where there’s an apple on a table, let’s say. Clearly, the apple isn’t a real apple. It’s a visual representation of an apple. It doesn’t have the physical properties that a real apple has to have. It doesn’t have the right mix of proteins, and sugars, and all that. It exists as a simulation of a real world apple, and that’s what makes it virtual in that world. But it’s also true to say that lots of things that happen in a computer simulation will be real. You can have real conversations with other people through avatars in a virtual environment. We do this all the time already. We live an increasing amount of our lives in digital spaces, but I don’t think anyone would say that the kinds of interactions we have in those spaces are not real. In fact, they’re very real and very consequential. The emotional experiences that you can have in a computer simulated world can be real. You can be really afraid and really happy. You can be really traumatised by things that happen to you. People can “assault” you in a virtual environment—not in the sense that they physically harm you, but they can psychologically harm you. In law, we recognise psychological harm as a form of assault.
I think the stereotypical view of virtual reality is flawed, because it doesn’t make these distinctions between the things that are real within a virtual environment or a computer simulated environment, and things that are not real within a computer simulated environment.
Mason: You make it very clear that you’re not talking about virtual reality as we know it currently—the headsets and the Oculus Rift. You’re talking about this notion of the virtual whereby we’re comfortable with certain things which are not physically real, and yet still actual. So for example, fictional characters. You use the example of Sherlock Holmes in the book.
Danaher: Yeah. So the Sherlock Holmes example is that—how does he exist? Well, he clearly doesn’t exist as a real physical person, but he does exist as a real fictional character. You can make claims about Sherlock Holmes that are true and false. Sherlock Holmes lived at 221B Baker Street. That’s a true claim about the fictional character Sherlock Holmes. You know, you can describe actions that took place in the novels. So, he has a real form of existence—he just doesn’t exist as a real physical person. Different kinds of things in the world have different existence conditions attached to them. Things like apples and chairs—they have to have a real physical existence in order to count as an instance of an apple or a chair. But there are other things that don’t actually have to have a physical existence to count as a real thing, right? That’s actually one of the points I make about Sherlock Holmes. You could have detectives that exist in purely computer simulated form, because what a detective is, is really just a functional thing. It solves crimes.
There are already people trying to create AI that can help in solving crimes. Are those AIs not real? Are they virtual, simply because they exist inside a computer? No, because they are functional objects. What they need in order to really exist is to perform the right function. Again, this gets back to the point about things that exist in computer simulated environments: some of them are not real, some of them are purely virtual—but some of them are actually real, because they perform the functions that those things are supposed to perform.
Mason: They’re real insofar as they can have an effect on us, and our emotions and our experience.
Danaher: Yeah, so that’s another kind of reality. Yeah. So they do make a difference to the world in some way.
Mason: You use the author of Sapiens Yuval Noah Harari’s idea of how certain things that we perceive in real life, everyday reality as having some element of simulation to them. These fictions, these meta-fictions that we create, to us, become real—whether it’s religion or capitalism. These aren’t natural things. They’re artificial things that we’ve given a degree of agency. Therefore those fictions, again, become a real part of everyday lived reality.
Danaher: The Harari view is kind of the counterpoint to the stereotypical view. The stereotypical view of virtual reality is that we have this computer simulated thing. The Harari view—I refer to it in the book as the counterintuitive view—is that actually large chunks of our lives are already virtual. That’s his main claim, right. You know, there are two ways of making that claim. Harari makes it one way, but I’m going to make an adjacent claim that I think supports the same point, which is that a huge amount of our lives is lived in artificially constructed environments as is. Right now, as we’re speaking, we’re having this conversation in a room that shields us from the external environment, that has artificial lighting, artificial heating, and so forth. Humans have long been creating these artificial environments in which we can live out our lives, in which we are shielded from a lot of the consequences, a lot of the negative features, of the real world.
You could argue that the long term trend for civilisation is towards an increasingly virtual form of life, living inside increasingly artificial environments. This is kind of a parallel to Andy Clark’s point about us being natural born cyborgs. What I’m suggesting here is that we’re kind of natural born virtual beings as well. Harari’s point is slightly different, which is that, in addition to the artificiality of the environments we live in, a lot of the meaning and value that we attach to the activities we perform in these environments is a projection of our imagination. He uses the example of religion. He uses this illustration: if you look around Jerusalem, lots of people attach religious significance and meaning to artefacts in that physical environment, but that significance isn’t actually intrinsic or inherent in the objects. If you investigated them scientifically, you wouldn’t find their holiness, so to speak. It’s something that we project onto the environment through our minds. This is a more general point that has been made by others in more or less radical forms.
I use a quote from Terence McKenna in the book—one of the most extreme illustrations—which is that reality is a collective hallucination. But you know, philosophers as respectable as Immanuel Kant have essentially argued that a large part of what we experience in the world is something that we project onto that world. We’re running a kind of virtual reality simulator in our minds that we use to interpret our experiences. Harari goes a step further. When people worry about what the future holds—whether we’re all going to live inside virtual reality machines and play computer games all the time—he makes the claim that we’re already doing that, and he goes so far as to suggest that religion is itself a virtual reality game. He also uses consumerist capitalism as an illustration of this. Religion is a virtual reality game where you score points by performing the right behaviours, and you level up at the end by going to paradise. This is literally the claim he makes, right?
As provocative as Harari is, I think he’s right to say that a large part of what we currently do, and the way we currently live, is virtually simulated in our minds. But I think he goes a step too far, because if you asked religious believers whether what they’re doing is a virtual reality game, they would say, “Absolutely not. I really believe that these things are holy, and what I’m doing really matters. I don’t think that what I’m doing is inconsequential or trivial. It’s not a game to me.” So what I argue for instead is that we embrace this Harari-like counterintuitive view of what virtual reality is, but step back a little from his extreme interpretation that everything is a virtual reality game. Only certain kinds of things are virtual reality games—things that we know ourselves to be games, where we know that there is a kind of arbitrary set of rules that we’ve applied to the way in which we engage in and perform activities.
All games, to me, are a form of virtual reality. Take the example of chess—there’s nothing in the laws of physics that dictates that you have to move pieces around the chessboard in a particular way. We have constructed a set of rules that we apply to how we engage with the chessboard, and they constrain how we behave in that environment. We know that they are arbitrary rules. Nevertheless, people play these games, and there are good ways of playing them. There are ways of playing skillfully and well, and people derive great meaning and satisfaction from playing these games. Some people dedicate their entire lives to doing so, right? But they know that they are games. Just to finish the point about the virtual utopia chapter: we can use that as a model for a virtual utopia, where everything we do is, in a sense, a game.
Mason: When you set up the propositions at the beginning of the book, you’re talking about this virtual utopia. I wondered, “Is John suggesting that we will escape into virtual reality?”, but no—what you’re suggesting is something much more nuanced. You set up the qualities that a virtual utopia should have, which are very similar to rules of the game. I just wonder if you could share some of those qualities and why you think those are so important for creating this virtual utopia.
Danaher: My understanding of the virtual utopia is technologically agnostic, in that I think you can realise a virtual form of existence in many different kinds of environments. You can do it in a computer simulated environment—I don’t deny that; I’m open to that possibility, and I use examples of it in the book. But you can also realise it in the real world, and games are a way of doing this. So, you know, I rely in the book on a theory from a philosopher called Bernard Suits about what a game is. Suits wrote this very odd book back in the 70s. It’s a dialogue about what a game is and what a utopia is. What he argues is that a game is something that has three properties: it has a prelusory goal, a set of constitutive rules, and a lusory attitude. A prelusory goal is an end state that can be identified before you know what the game is, and that constitutes success in the game. In a sense, he argues, it’s kind of the scoring of points in a game.
To use the illustration I have in the book—the game of golf. The prelusory goal in golf is to get your ball into a hole; that’s the end state you want to reach. The constitutive rules are the way in which you have to go about achieving the prelusory goal. What the constitutive rules do is set up arbitrary obstacles to achieving the goal in the most efficient possible way. The most efficient way to get a ball into a hole is just to pick it up, walk down the fairway and drop it in the hole. But that, of course, is not how you’re supposed to play golf. There are limitations on what you can do: you have to use a club to hit the ball to get it in the hole. There are all sorts of other rules, about when you’re not allowed to ground your club in a hazard, or when you have to drop the ball within a certain area. So there are all these additional constitutive rules that place constraints on how we can get the ball into the hole. Those are the constitutive rules, and the lusory attitude is just a positive orientation towards the game—you accept the constitutive rules as the constraints on how you achieve the goal.
The short way of expressing Suits’ view of what a game is, is that it is the voluntary triumph over arbitrary obstacles. That’s the essence of what a game is. What I’m arguing for in the book is that we can actually use this as a model for a utopian form of existence, where what we should try to do is play games, create more games, and explore a landscape of different possible games. This holds within it the potential for utopia. But the key thing about that understanding is that it doesn’t have to be computer simulated. We can be playing games in the real physical world, and that would count as a form of virtual existence—because, to go back to the point I made about Harari, what’s wrong with Harari, for me, is that he doesn’t acknowledge that some people don’t see the rules and constraints on their behaviour as purely arbitrary. Whereas when you’re playing a game, you are aware of the fact that they are arbitrary.
Mason: The way in which you discuss virtual utopia: one instance is like a game as you just described, but you also describe it as an opportunity for world building. I wonder if you could explain that second form of understanding of virtual utopia, and then bring those together to help us understand what a virtual utopia might actually look like in practice.
Danaher: So you’re right, there are two arguments that I have for a virtual utopia. One is based on this game-like model. The other one is a slightly more political understanding of what a virtual utopia is. I look at the work of the philosopher Robert Nozick, who wrote a famous book back in the 70s called Anarchy, State, and Utopia. That book is famous for the Anarchy and State parts, but most people ignore the last part of the book, which is the Utopian part—which to me is actually the most interesting part of the book because it’s the most novel part of it. He has this very interesting analysis of what a utopia is. What he says is that a utopian world is a world that is stable. And a world that is stable is a world in which every member of that world likes it more than any other possible world. Then he argues that you can’t possibly realise that utopia in the real world because everyone has different understandings of what an ideal form of existence would look like. They have different preferences, different ways in which they will order what is valuable and important to them.
Some people might prioritise playing—to use the game analogy—one kind of game over another kind of game. We can’t have a utopia in which everyone is forced to play chess. Or, to use a literary illustration, Hermann Hesse has this novel, The Glass Bead Game, where there’s one single game that everyone in society is oriented towards playing. It’s the source of meaning and value in that society. That doesn’t look utopian, because some people have different preferences. So Nozick says, “Well, you can’t realise a stable world or utopian world. So what can you do?” He says, “What you can do is try to create a meta-utopia.” What that means is you create a world-building mechanism: a way in which people can create the kind of world that they prefer, that matches their preferences, and then somehow they’re kept isolated from people with competing preferences. He argues that a libertarian, minimal state is the meta-utopia. A minimal state allows people to create these different associations with whatever value structure they prefer; they can live within those associations, and they can migrate between different associations if they like. All the state does is try to keep the peace between the different associations. That’s what a meta-utopia is. It’s just a world-building mechanism for people to create the associations that they prefer.
What I argue in the book is that that’s an interesting proposal and model of what utopian existence would look like, but it faces some practical limitations, particularly if we’re going to try to realise it in the real, physical world. There are geographical limitations of space—how are we going to create all these different worlds, these different associations? How are you actually going to police the boundaries between the different associations? And what if one association prefers to convert everybody else to their cause—they’re missionaries or imperialists, in the language that Nozick uses in his analysis? It seems it’s gonna be very practically difficult to do this. What I suggest—and this is where I do rely heavily on the notion of a computer simulated model of utopia, virtual utopianism—is that we could create different worlds in a computer simulated environment, and then we don’t face the same kinds of physical constraints and practical difficulties that we would face in Nozick’s vision of utopia. So I don’t see those two utopias—the utopia of games and the virtual meta-utopia—as competing things. I think they’re complementary visions of what the virtual utopia is. You can play the games, and you can also create these different virtual, computer simulated associations in which you can consort with like minded people.
I should also add, though, that when I argue for this utopian vision—one in which we can build different worlds, and we can play different games—I don’t mean by that, that those are the only things that we do. It’s not that we only ever play games. There’s still lots of other things that are open to you, in life. You can have friendships, you can have families, you can have different kinds of social organisations, you can perform good moral deeds towards your neighbours. These things are all still accessible to us in this model. It’s just that instead of work being the main focus or traditional political structures being the main focus of our attention, we focus on games instead.
Mason: If these are possible utopias, then why don’t we start them right now, here, on terra firma? Here on terrestrial Earth? There are so many problems that we could solve through gamifying certain things, such as climate change, that would enable us to continue to live on this planet, rather than go off and live our Cyborg future out in space. I wonder, could what you’re proffering in the book be applied to the real world as we live in it now with the challenges that we’re facing on the horizon? The biggest one being climate.
Danaher: To some extent, I think that what I’m proposing in the book is already happening. I use some examples that suggest that the amount of time that people spend on leisure—playing computer games is one illustration of this—has increased, particularly in young people over time, because they find it more difficult to find employment. So it’s already the case that there’s this kind of gamification of life taking place. Whether it can be used to solve existential risks, like climate change? You know, there are people who are experimenting with ways of harnessing collective intelligence and artificial intelligence to solve some of these problems. I think Thomas Malone from MIT wrote this interesting popular book last year called Superminds, where he talks a lot about some of the ways in which his lab is trying to create these games that enable people to come up with policy proposals to solve real world problems, which have a gamified structure to them. I think those proposals are interesting.
One of the assumptions that I do have in the book is that I think we’re going to increasingly rely on artificial intelligence, and machines and automating technologies, to address some of these problems over time. I spoke to a guy called Miles Brundage about this, actually—on my own podcast. He wrote this interesting paper, “The Case for Conditional Optimism about AI”. It’s very conditional, but one of the main points he makes is that AI can actually help to solve global coordination problems that we have, including problems around arms control and climate change. We can use gamified structures to address some of these problems, but I think it’s going to be partly a collaboration between humans and machines, and also increasingly something that we outsource to machines.
Mason: In that case, Cyborg utopia or virtual utopia? If you had to pick, which one would you choose, John?
Danaher: I come down in favour of the virtual utopia, because I think it’s more practically achievable in the short run. I think it also contains something genuinely post-work, and allows for a serious kind of human flourishing. That’s not something that we’ve addressed in this conversation, so let me just briefly say that when I initially present this notion of a utopia of games to people, they recoil from it, because they think there’s something trivial about that existence. But I try to point out that actually, there are lots of good things that you can achieve within a game. You can perform moral acts within a game-like structure. You can achieve mastery over certain skill sets. There are intrinsic goods associated with the activities that you perform in a game. It also provides this infinite landscape of possibility for us to explore, so it fits with this horizonal model of utopianism that I was outlining earlier on.
I’m not, however, completely opposed to the cyborg utopia, as it has come out in this conversation. There are certain ways of becoming Cyborg-like that, I think, feed into this kind of virtual model of utopia. It’s about new kinds of entertainment, as we were saying, and new forms of existence, and not about doubling down on the worst features of human existence. On balance, though, I think that the cyborg utopia is less likely in the medium term, and so that’s why I favour the virtual utopia.
Mason: I mean these are the two things that link these forms of utopia. Is it really the fact that a post-work society is going to give us so much more opportunity to explore a spectrum of difference in the ways in which we live in the future?
Danaher: Yeah, I think that’s right. I like the way that you framed it—which I wish I had used in the book—which is that we have two possibilities: experimenting with our bodies and minds, and experimenting with the environments in which we live. One corresponds to the cyborg utopia, and one corresponds to the virtual utopia. Even though I am sceptical about the medium term prospects of the cyborg utopia, that doesn’t mean that we shouldn’t pursue it. It’s partly an issue of prioritisation of resources over time, and where we put things…so it can be put on the back burner to some extent.
Mason: How confident do you feel that either of these utopias will ever be achieved?
Danaher: Yeah, look, that’s a great question. So, I don’t necessarily feel confident that either of them will be achieved. One thing I say in the book—and I’ve said a lot in interviews that I’ve given—is that I’m not a technological determinist, or a fatalist. I don’t think these things are just naturally going to happen. These are things that will require political effort and collective effort. It’s not something that’s going to happen as a matter of course. We’ll have to agitate for it, reform our societies in favour of it. I had a very specific aim in this book, which was to evaluate the different possible post-work utopias, because I felt that this was something that was not being done in the literature on automation and the human future. There’s kind of an assumption that these things will be great, and there are implied ethical and value principles that guide that claim, but they’re not made explicit and they’re not subjected to a rigorous analysis. That was what I was aiming to do in the book. The hope is that by articulating a vision of what would be a good post-work utopia, this will provide the motivation to think about how we can really, practically implement it.
Mason: So really, this is a book that is there to inspire a multitude of possibilities for a post-work future—to encourage people not to be so pessimistic about the idea of human obsolescence in the workplace?
Danaher: Yeah, that’s exactly right. So it’s a book that’s trying to motivate and inspire people towards a positive vision of the future.
Mason: John Danaher, thank you for your time.
Danaher: Thank you.
Mason: Thank you to John for sharing his insights into the developments that might massively transform the world of work. You can find out more by purchasing his book Automation and Utopia: Human Flourishing in a World Without Work, available now.
If you like what you’ve heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook, or Instagram: @FUTURESPodcast.
More episodes, transcripts and show notes can be found at Futures Podcast dot net.
Thank you for listening to the Futures Podcast.
Further Reference
Episode page, with introductory and production notes. Transcript originally by Beth Colquhoun, republished with permission (modified).