Luke Robert Mason: You’re listening to the Futures Podcast with me, Luke Robert Mason.
On this episode I speak to philosopher of science Professor Steve Fuller.
So let’s say somebody undergoes a transhumanist-style treatment. They die in the process. But if they die, those of us who remain living probably learn a lot from that.
Steve Fuller, excerpt from interview
Steve shared his insights on transhumanism, how scientists should approach risk, and what it means to be human in the 21st century. This episode was recorded on location at the University of Warwick in Coventry, England, where Steve is a Professor in the Department of Sociology.
Luke Robert Mason: Okay, so Professor Steve Fuller. I have known your work for a long while now, but a key focus of the last couple of years has been this thing called Humanity 2.0. Could you explain what Humanity 2.0 is?
Steve Fuller: Okay. Well, first of all let’s start with Humanity 1.0 to make it simple. Humanity 1.0 is basically the conception of the human condition that you might say is enshrined in the UN Universal Declaration of Human Rights. Which is to say it’s an understanding of Homo sapiens as a kind of living, flourishing creature, but one who has certain kinds of limitations. For example, the human being will eventually die. The human being in a sense needs to be part of a larger social arrangement. And even though the human being is very much part of the world of science and technology, it is also part of a kind of natural, pre-scientific, pre-technological world. That’s Humanity 1.0. And it’s what we normally call a human being, actually.
However, Humanity 2.0 starts to challenge a lot of the assumptions of Humanity 1.0, especially in terms of issues having to do with limitations. So in other words, you might say there are two ways to go on Humanity 2.0. And in my writing, I associate these with the transhuman and the posthuman, respectively.
The transhuman condition basically wants Humanity 2.0 to in a sense explode a lot of the boundaries that have held back Humanity 1.0. So if we’re talking about things like doubling, tripling, quadrupling, maybe indefinitely extending the amount of time that people normally have on Earth, that is quite a challenge to our normal understanding of humanity. Just to give you a simple example of the challenge that provides: if you no longer believe that death is a necessity, or that it will only happen in the indefinite future, the whole idea of giving meaning to your life will start to acquire a new kind of significance. Because if you look at the history of philosophy, when people talk about the meaning of life they often talk about it in the context of the being-unto-death. The fact that you’re only on Earth for a limited amount of time, so you’d better get your act together and figure out what you think is important to do. However, if you have all the time in the world, literally almost, which some exponents of transhumanism believe, then the whole sense of what the meaning of life is changes. So that’s one way you might say Humanity 2.0 might go.
But there’s another sense of Humanity 2.0, and that’s one that you know, if the first one wasn’t radical enough, the second one is. And that has to do with the fact that we might actually want to in some sense abandon our biological bodies. And in that respect, we talk about uploading the mind and consciousness. And of course we can’t do this right now. But nevertheless there is an enormous amount of research and funding that is being dedicated to try to bring this about. Through artificial intelligence research, through various human-computer interfaces, where even if we can’t upload our minds directly into machines we in some sense might be able to merge with the machines in something like a cyborg existence. Which to a large extent already exists among people who would otherwise be regarded as disabled.
And this could nevertheless be very much part of our future in a much more robust way. And one of the consequences of that would be that the kinds of powers that human beings would start to acquire would be quite unlike those of our biological ancestors. And that would have some very interesting implications for how we relate to the world as a whole.
And here not only are we talking about humans living forever, but also being able to compute and to reason about things in an enormous kind of large-scale fashion that’s never been done before. Perhaps being able to create massive forms of technology that might be able to have us travel throughout the universe. All of these things could well happen if we got to that kind of version of Humanity 2.0.
And the point I want to make about all these different versions of Humanity 2.0 is that it is true, none of them are here now. They all sound like science fiction. But nevertheless, there is a lot of momentum heading in this direction. So if we do not reach Humanity 2.0, there will be a lot of people with egg on their face. Not only the Silicon Valley billionaires who are putting all their money into this, but also large segments of the scientific and technological community as it exists now. Not to mention Hollywood, which has invested enormous amounts in the idea that some version of Humanity 2.0 is bound to be realized.
Mason: I mean, where do you think some of these motivations come from? I know you’ve argued that transhumanism has an almost theological element to it.
Fuller: Yes. To my mind, if you want to make transhumanism in particular sound reasonable, in the sense that it sounds grounded in the Western intellectual tradition and doesn’t just sound like some kind of selfish indulgence on the part of Silicon Valley billionaires, which it often does sound like, then I think you have to go back to certain theological ideas about the way in which human beings have thought about themselves in relationship to God.
And this is especially true in the Abrahamic religions, by which I mean Judaism, Christianity, and Islam. Because in these religions, and especially in the case of Christianity, there is this definition of the human being as having been created in the image and likeness of God. You find this in the book of Genesis. And so then there’s this question: what does that mean, exactly?
Well see, Christianity’s quite interesting in this regard, because it puts forward the idea that there is a God-man called Jesus, okay. A being that is human but at the same time also godlike, both at once. And nobody denies that Jesus has been an incredibly influential figure in the history of culture and intellectual activity. That’s the kind of being Jesus was—a transhumanist being, okay. Jesus was a Humanity 2.0 being, right? At least these are the claims that are made for him. And it’s on that basis that he’s ended up having the significance he has. If he were just an ordinary human being, he wouldn’t be nearly as important as he is.
So there’s a sense in which this idea of the transhuman, Humanity 2.0, isn’t just some sort of science-fictional fantasy that got cooked up, but rather continues a line of self-understanding that human beings have had for thousands and thousands of years.
Mason: When transhumanists mention some of the ideas they have, whether it’s extending lifespan or enhancing the body, it creates a very visceral reaction from the general public. If we get past the sort of technofetishism, it’s almost perceived as something grotesque to be doing, extending the lifespan indefinitely. I wonder where that initial gut response comes from.
Fuller: Well, you know, in terms of what I said at the beginning about this business of the meaning of life, I think that’s where that kind of concern, or even revulsion, at transhumanism is coming from. One of the people who’s very anti-transhumanist is Leon Kass, who chaired George W. Bush’s bioethics council back when federal funding for embryonic stem cell research was restricted, over ten years ago. And he has this phrase, the “wisdom of repugnance”. In other words, the kinds of prospects for human beings that you recoil from, that you pull away from because you find them disgusting in some way, tell you something deep about what it means to be a human being. And the thing that I think causes this kind of repugnance when one thinks about living forever and so forth, is that the meaning of life as a human being has been historically and culturally tied, and there’s no doubt about that either, to mortality. The being-unto-death, right. And the point about having a meaning to your life is that you have to have a point to it, because it’s going to end in any case. If you don’t have a point to it, then your life is meaningless. And that presupposes limitations, finitude, termination, okay. And a lot of our philosophy, our deep philosophy about how you should conduct yourself in the world, is very much part of that.
In addition, of course, and connected to this from a biological standpoint, is the fact that the way in which the species survives is through reproduction, right. In other words biological longevity is primarily understood as a matter of successive generations. Not in terms of one generation extending indefinitely, but successive generations each replacing the other, and each one lasting a finite period of time.
And if you think about it, the way in which our social structures are organized, and the way our affective bonds are organized around the care of our children—why do we invest so much care in our children, why do we want to have children at all?—it’s because in some sense we recognize our own limitations, and we see that these other beings’ lives may be able to carry on what we regard as worth carrying on, and maybe even do better at things that we have not been able to do well.
And this kind of relationship is very fundamental to ideas of the family, and even to larger-scale notions of social structure. Think of the fact that you have things like elections for office. One of the reasons why dynasties in politics are so frowned upon is that the whole idea of having even just one family rule forever is considered repugnant, right.
So this idea of bringing in mortality and finitude as a way of bringing in fresh blood, bringing in new perspectives, that’s often been seen as part of what gives the human condition as a whole its meaning. And it seems to me that transhumanism does challenge all that, right?
Mason: I’ve heard you say that we require new generations for new thinking.
Fuller: Well, yes. This is in fact a historical point, right. And it’s not so hard to understand. And in fact one of the places where we actually see this is in the history of science itself. And that’s a very interesting example to look at, because of course we normally think about scientists as being these very rational beings. And so if someone comes up with a new idea, in a sense it doesn’t matter how old or how young they are. If the idea works they’ll just believe it, right. So there shouldn’t be this kind of problem.
But the problem is that even scientific ideas often outstay their welcome. Largely because the people who have been taught a certain way of doing science, certain sorts of theories, certain sorts of perspectives, have invested their entire lives in it. So in other words, they have no incentive to actually change their minds, right. And imagine if those people never had to leave their jobs or never had to die, and could just continue in place holding the views they were taught as students, indefinitely. You would end up with a completely ossified scientific community.
So take Max Planck, a great physicist of the early 20th century and one of the founders of quantum mechanics. He was in fact the guy who ran the journal that first published Einstein’s early papers. And you probably know Einstein was a young man in his twenties when he published his revolutionary papers. And Planck, who was an older guy, helped him do it. And the point Planck made in his autobiography, thinking about this episode again at the end of his own life, was: look, the reason the Einstein revolution occurred was not that Einstein managed to persuade all those old guys to change the foundations of physics. Rather, they just died. They disappeared, and the new people were open-minded because they had not yet invested all those years in the old ideas. And because they hadn’t invested in the old ideas, they had no particular reason to accept them. They could think about things fresh. They could think about things in their own terms.
And I think there’s an important lesson here. If you’re going to have people who are planning to live forever, I think they’re going to need to have their memories rebooted from time to time. I think the worst thing that could happen—and I say this in the context of the way in which people normally talk about extending life indefinitely—the worst thing you would want is a perfect memory, right. Because a perfect memory will mean you are locked in the past forever, okay. One of the things that in fact enables human progress is the fact that we forget stuff. We leave stuff behind.
You should read Nietzsche on this. Nietzsche’s very good about this, about the liberating effects of forgetting. And the problem with the way in which people are conceptualizing living forever is that you live forever—and I’ve heard Aubrey de Grey, the great guru of this, put it this way—like a vintage car. So let’s say a vintage car from the 1950s or something like this, some kind of convertible, the kind of thing Elvis might have been driving. Let’s say that’s the kind of being you imagine you can be forever, right. You’re stuck like that forever. You never change. You just remain that way forever.
And this is kind of like people who end up getting locked into their perfect memories, and all they can do is add to that. They have no way of subtracting. Now, it seems to me that those people are eventually going to suffer from some kind of cerebral ossification.
Mason: Well, I’ve heard Max More, the transhumanist philosopher, argue the exact opposite. The ability to have morphological freedom means we will constantly reinvent ourselves.
Fuller: Reinvent ourselves…materially. But if you look at the thing that actually gets transferred over, right, because you know, you could say, “Okay, I have morphological freedom. Today I’m an upright ape. Tomorrow I’m a silicon chip,” right. That’s morphological freedom. Fine. I understand that.
But what is it that makes the two things me? What is being transferred? And my guess is it’s going to be the memories. The memories are going to be the thing held constant in this kind of understanding. That is my sense of it. Otherwise, there is no morphological freedom. You just disappear and then something else comes about.
Mason: Well, there seem to be these two factions within transhumanism. The ones that want to preserve the body: the human, bipedal, breathing body. And the ones who really only care about the mind, you know. The body is just a transportation system for this thing called the brain. I mean, why do you think those factions exist? Is it just differentiated interests in different forms of technology, or something else?
Fuller: I think it reflects a really interesting kind of divide with regard to what is essential to being human. What is it that you really need, and what can you get rid of? That’s kind of what the question boils down to, right. And the people who think that you could be a silicon chip and still be a human are operating with a very, as you say, mental, maybe even spiritual, conception of a human being: that maybe a kind of unique digital code could be the human, right. And that digital code could be instantiated in the chip or in a piece of DNA. And then you could grow a person that way, right. You could do both of those things.
But it seems to me that if you’re talking in those terms, that then the continuity, what is the principle of continuity, becomes important. And this is where the memory thing actually becomes quite an important issue here, right. Because I think everybody’s kinda presupposing that you retain, and if anything enhance, the memories that you start out with, whatever form you’re in.
By the way, this distinction you raise, between being embrained and embodied versus being in a silicon chip, is of course the— You know, when we talk about the standing of the human in Christianity, you’ve got all these kinds of debates going on. And there’s a whole branch of theology, in fact, called Christology. Christology is about the metaphysics of Jesus. In other words, what makes Jesus the Christ? Is it the whole thing, right? In other words, do you actually need the human body? Or is the human body just for the benefit of dumb humans who weren’t able to see God otherwise? You see what I mean? That Jesus, in his human form, is in a sense an avatar of the god, and the god doesn’t need to be like this. The god could be something else.
Or other people say no, actually the human body is essential to what Jesus is, in that the god and the human are literally the same. And that sounds, in the transhumanist argument, a bit like Aubrey de Grey, in a sense, right. Where the real victory of transhumanism is to enable people to be as they are indefinitely, enhanced but quite recognizably as they are now.
Mason: I think that’s one of the things that scares people about transhumanism, when transhumanists themselves claim that we are now gods. We have the ability to create life, to create robots, to create new forms of fleshly experience. That creates a very visceral response from the general public.
Fuller: Well yes, I think so. And it’s interesting because, um…
Mason: They mean in a liberating sense…
Fuller: Yes—
Mason: They’re getting excited about it because it’s the ability for us to turn technology to our own ends. But then there’s also this challenge: do humans have the foresight to manage it in the right way?
Fuller: Well, this is a good point, and it’s one that I think is important for transhumanists to take on board in a much more serious and explicit manner. Namely: let’s say that the game plan of transhumanism is to turn us into gods. I’m willing to grant this, and as I was saying, I think there are theological reasons for thinking that this kind of thing is not crazy. And scientifically, increasingly, there are reasons for thinking it might not be crazy either.
But it’s not going to happen overnight in some seamless fashion, right. In other words, lives will be lost along the way. And I think this is the issue, right. Sometimes transhumanists talk as if the main obstacle to this transhumanist divinity coming about is the fact that you’ve got people preventing it from happening, right. Religious people, or very small-minded politicians and so forth. And that in fact we already know how to do this, we’re on the verge, and all we need is a little more freedom, right. You give us enough freedom and we’ll be able to get this off the ground immediately.
Mason: Well, the issue is that a lot of the technologies they’re talking about probably wouldn’t be ethically approved for testing on humans.
Fuller: Exactly. Exactly. And the reason why they’re not ethically approved is because it could harm somebody, right. We don’t actually know yet what are the relevant genetic treatments, what are the relevant drugs to take, what is the relevant way of uploading a mind, right. We’ve got a lot of theories about these things. But until we actually test these things on real people, right, rather than rats or in computer models as we typically do, we’re not going to get that far, okay.
So you do have to suspend the ethics codes. But the consequence of suspending the ethics codes is that you have got to be prepared for death, damage, and harm along the way before you get to the transhumanist paradise. So we actually need a culture that is prepared to accept that as a cost. And transhumanists do not want to talk about this. They make it seem like the ethics codes are just superstitious or something. But they’re not superstitious. They are actually protecting people from things that could cause them harm. And what the transhumanists ought to be saying is that the harm’s worth it. That should be the transhumanists’ line. And so there should be discussions about compensation for harm. There should be [inaudible] insurance policies. Who pays if some terrible thing goes wrong in some transhumanist experiment? This is what we should be talking about.
Mason: Because potentially in the long term there could be great benefits—
Fuller: Exactly. So you actually need a culture that is willing to engage in a certain level of self-sacrifice, effectively, okay. And I think that needs to be made much more explicit. Of course I’m not talking about everyone doing this, because most people would not want to put themselves under such risk. But clearly there are people who would, and I believe those people should first of all be allowed to do it, but there should also be some compensation, some recognition, right.
So let’s say somebody undergoes a transhumanist-style treatment. They die in the process. But if they die, those of us who remain living probably learn a lot from that. That’s how it always works, right? That’s what happens when we kill animals in labs, too, right? We learn a lot from it—the animal’s dead, unfortunately. Well, the same thing could happen with humans. In which case, the people who are benefiting, the people who remain living, ought to insure this, ought to subsidize it, ought to provide money for it, ought to be paying for it.
So the point is not that everybody ought to be undergoing risky treatments, but that we should allow it to happen. And then there should be some compensation, either to the families of these people or however you want to construct it. I mean, I see this very much on the model of how we deal with the military, okay. Because with the military, we operate on the assumption that we do have all of these potential foes out there. We actually need to have people who are willing to put their lives on the line. We don’t need everyone to do it. In fact the only people we need to do it are the people who could do it well, actually. But everybody else benefits from this. And of course when you win major wars, there are usually lots of casualties, okay. And the families of those soldiers feel that those lives were well spent, sacrificed for their country. Now see, we need a kind of culture—
Mason: Where you sacrifice yourself for your humanity.
Fuller: Something of that kind. Well yes, this is exactly right. In fact, one of the things that inspires me along these lines is a famous essay from 1906 by William James, the American philosopher-psychologist, called “The Moral Equivalent of War.” And so, ironically… I mean, people forget this now. But before World War I began, essentially by accident, in 1914, there was this general view in the air at the turn of the century that we were going to enter into a period of peace. Because imperialism was pretty stable— I know it’s hard to believe all this stuff, but there was a sense of Western domination of the world… It was a Pax Britannica, okay. You may have heard about this: the British Empire on which the sun never set, all that stuff from the latter days of Queen Victoria and the early days of Edward VII. Pax Britannica, this was a peaceful period.
So William James, in this context, is imagining: well, the thing about war is that it always tapped into something quite noble about the human spirit, about self-sacrifice, about being able, as it were, to see the value of one’s life above self-interest, to see a larger kind of species-based interest, or national interest. And so the question of his essay is: what’s going to replace that in the future once we end war, right? That’s the predicate of the essay. “We’re going to end war soon, guys. How’re we gonna remain noble and not just be these animals?” right. You know, this is a bit like Fukuyama’s “end of history” thing from twenty-five years ago, at the end of the Cold War, where he said, “Okay, we’ve now had all the great ideological struggles. Now what’re we gonna do with our time?” right.
Well, William James was asking a similar question in 1906. And he talks a lot in this essay about the kinds of moral virtues that war brings out in people: thinking in terms of the species, in terms of larger-scale interests. And I think you need to bring in something of that sensibility to justify this kind of transhumanist self-sacrifice that I’m talking about.
Mason: So we virtuously sign ourselves over to undergo the first experiments and the first explorations in life extension, is it?
Fuller: Yes, exactly. And you would have a kind of routine like you do when you go into the military. So like, the military does reject people, right? I mean it’s not like any old person can go into the military. You have to have a physical examination, you have a mental examination. All these things you have to go through. But certain people then get brought in, and they are trained, and they become the front line.
Mason: I’d say it’s closer to what’s happening with Mars right now: “We’ll send you, but we’re not going to bring you back.” You’re going to be sent there, and you’re going to die on Mars. It’s probably similar to that process, and thousands and thousands of people signed up for that.
Fuller: Sure, exactly. That’s right. And I think this should be allowed, but it needs to be properly supported, you might say. That’s the thing. And that’s why I think studying the military culture, and how that has developed, and how societies come to accept that kind of— You know, you might say we have come to accept a certain kind of periodic form of self-sacrifice as part of the national interest. I think we need to figure out how we can bring the transhumanist experimentation in to that kind of mindset.
Mason: The problem is when people hear these sorts of things, they immediately think of eugenics and the Nazi experiments that were done on certain communities—
Fuller: Yeah, but those were forced. I mean, this is the problem, right? The ethics issue for us now is really quite different than it was before the Nazi Holocaust. Because the Nazi business made all the difference in the world, in that that’s in fact how we got our ethics codes, okay. And in a sense what I think has happened is that in response to something like the Nuremberg trials, where it came out that people had been forced to undergo sterilization and torture and so forth in the name of genetics research (and of course it wasn’t just happening in Germany; it was happening throughout the Western world, actually throughout the world), we ended up overreacting. This is my point.
So in other words, if you look at the way in which the research ethics codes are constructed now, in light of the Nazi experience, it is actually impossible to give informed consent in research that is deemed to be intrinsically highly risky. In other words, even if you want to do it, and you know what’s involved and what the chances are that you’ll have a brain hemorrhage or whatever, you’re not allowed to do it. So in other words, they’re just prohibited, right. And ethics codes often function in this fashion, to basically rule out entire classes of experiment where it is believed that it is impossible for any sane individual to grant informed consent.
Mason: Well, do we know if those experiments are being done? I know another interest of yours is seasteading.
Fuller: Yeah, okay. So, well, we’ve got two issues here. I’ll get to seasteading in a moment. But the first issue, which I think in a way is the more realistic one, is that of course not every country in the world subscribes to the same stringent research ethics codes that we find common in the West. And I’m thinking of China. If you want to know a real place where this might be happening, it’s China. We don’t actually know the conditions under which research of this kind might be done on humans there.
And so it is possible right now that this research could be done. I think the problem, though, if that is what’s happening, is that it runs into problems of publication. Because most peer-reviewed publications in the sciences, virtually all of them if they’re “respectable” (and this is something that the publishers would ensure), also abide by the research ethics codes. In other words, you actually have to say in your article… If you look at these scientific articles, especially in the biomedical fields, you actually have to say not only where your money’s coming from, but also that you didn’t torture anybody, or even small animals. And that is a condition of publication, okay. So it actually would be quite difficult to get stuff published if it were not done in an ethically sound way.
Now, the reason I mention this is because the thing you led with was seasteading. And seasteading, as I understand it, runs into this problem even if it works as an idea. So let me explain. Seasteading is an idea that has actually been around in a general sense among libertarian thinkers for a long time. And the idea is that the jurisdiction covering the laws of a country extends, let’s say, twelve miles beyond its coast. And then outside of that, in the Pacific Ocean or the Atlantic Ocean, which is a lot larger than twelve miles, it’s a free zone in terms of what kind of rules apply.
And so if you were to park a ship, let’s say, outside of the territorial waters of the United States or the United Kingdom or Europe or whatever, then you could set up your own laws. And people talk about cities perhaps being organized this way. But the whole seasteading project as it’s been coming out of Silicon Valley in recent years has been about having a kind of floating laboratory where you could actually start to do this very adventurous kind of research that we’ve been talking about that the transhumanists are keen on.
And what would happen would be that people would volunteer to move to this big ship. So the ship would obviously have to have something like an apartment complex on it, not just laboratories, because people would have to be there for the duration of the experiments and all the rest of it. And they would engage in private contracts. You know, lawyers and all the rest would typically be involved. But it would just be a private arrangement: what are the terms under which you will undergo experimentation? How much might you get paid? What would be your compensation? What kind of insurance would need to be taken out? Blah blah blah, all these things would be negotiated privately by the parties involved.
So there wouldn’t be any kind of overarching legal arrangement that would limit what exactly you could agree to. So in principle, if you’re a guy who wants to take a lot of risks, you might get a scientist on board to say, “Okay. If you want to do this, we can set it up.” And so this is the idea, right. This is seasteading.
The problem is— In principle I think this could work at the practical level, in the sense that I think it could be done. You could set up such legal contracts. There would be people interested in doing this. And there would be scientists interested in doing this, in principle. All of that is true. What is hard is the final hurdle. Namely: okay, you do the research, you’ve got results. How do you publish it? Who’s going to touch this stuff?
Mason: Or what happens when you go back to land?
Fuller: Exactly. You get arrested, right? You get put in jail— I mean, this could happen, right? And so there is a kind of issue about how this would translate outward. Because if this seasteading stuff is actually meant to be a genuine contribution to science, then it’s going to be important that it get absorbed by the rest of the scientific community and the rest of humanity in some way. But at the moment, the obstacle is that it wouldn’t be legal to publish it. So even if you can get away… You know, you’re outside the long arm of the law in the sense that they can’t stop you from doing it, but you would never get recognized for doing it. So it’s that part of the law that you have a problem with.
Mason: But it’s not just an issue with the science; there’s a political issue with allowing a lot of this stuff to happen.
Fuller: Yeah yeah, that’s right. I mean, I’ve talked about what’s going on here as a move from left/right to up/down. The way in which politics has been organized since the French Revolution, basically from the National Assembly onward, has been on a left/right basis. Where the party of the right are basically the people who look to the past as providing the foundation for things, what we normally call conservatives. And they’re the people who would defend the church and the king and all that kinda stuff back in the time of the French Revolution.
And on the left was a kind of…back then an amalgam of liberals and socialists as we would now recognize them. And these were people who were basically anti-tradition for different reasons, perhaps. Either to open up markets or to create greater equality in the society. There are all kinds of motivations for being against tradition. And that was the left.
And of course those ideologies played themselves out over the course of the 19th and 20th centuries. They sort of distinguished themselves a bit more, especially liberal versus socialist with regard to the left. That became clearer as time went on.
But what all of these ideologies had in common, and this is why they’re in question now, is that they were all about taking power over the state. In other words, they were about controlling the state in some fashion, where the state was understood as the seat of power in society. And that’s why left and right really manifest themselves most clearly in terms of political parties contesting elections to run the government, right.
But we’re now living in a world, and I think this is very much true especially of younger people who don’t feel so invested in politics in its conventional sense, where what gives their lives meaning, where they get some kind of direction, isn’t necessarily any kind of left/right divide over who’s going to control the budget of the government next year. This doesn’t get younger people excited, right? They’re more concerned about issues that I think older generations would associate with lifestyle: What kind of world do you want to live in? What kind of being do you want to be? And the state is kind of neither here nor there with regard to their locus of concern.
And so the up-wingers are the people who in a way have a kind of libertarian tendency because they want to push the boundaries of the human condition altogether. And so what they want to do is they want to explore all these new possibilities—the morphological freedom, but also the possibility of blasting off into space and inhabiting other planets and space stations, and all of that kind of stuff. Where in a sense the sky’s the limit in a very literal way for these people. This is why they’re called “up-wingers.”
But then there are the down-wingers. And the down-wingers are people like environmentalists, a lot of the people who I would call post-humanists in the specific sense of believing that the locus of value in the world should not be just the human. That there is a kind of larger sense of value where the human is not so important, and you might talk about this in terms of the value of life itself. So you’re interested in biodiversity. You’re interested in having a lot of species around. You get very worried about extinction and about the way in which the planet, the climate is changing so much that in fact a lot of species aren’t able to survive and all of this kind of stuff.
And then there’s the Anthropocene, as it’s often called, where human beings are now seen as the biggest cause of physical change on the planet. All of this is down-winging stuff, okay. And what it does is basically get humans to think about themselves in a much more grounded fashion, a much more limited fashion, in a fashion that makes them perhaps think that we have gone too far with science and technology rather than not far enough. So up and down are really pulling in quite opposite directions with regard to science and technology.
And interestingly, I would say science and technology end up occupying the place that in the left/right divide the state occupied. So in other words the thing that you’re really fighting about is what are you going to do with science and technology? That’s where I think the up-wingers and down-wingers really are disagreeing.
Mason: And it seems, though, at least in the last couple of months since the American election, that those things… it’s not so binary anymore. So Trump is… Arguably, people think that Trump is very anti-science, and yet he seems to be obsessed with Mars.
Fuller: Yes.
Mason: I mean, how can you be both?
Fuller: Well, and remember it was Peter Thiel from Silicon Valley who was Trump’s big early supporter and managed to organize this whole roundtable of Silicon—
Mason: I mean, what do you think’s going on there? The fact that Thiel is very progressive when it comes to technology. But then—
Fuller: And also lifestyle as well.
Mason: And lifestyle, but was aligned with Trump. And then Trump’s anti-science but pro-Mars. And then Elon wants to terraform Mars but also wants to save this planet through Tesla. [crosstalk] It just seems like…the [interactions?] are on the table right now.
Fuller: Well, the thing is— Look, I think the attractive feature of Trump to a lot of these transhumanists is his Promethean character. Like, there is no limit, right. Trump leaves all the options open. And I think that’s very attractive— This is the libertarian streak in transhumanism coming out, right. That in some sense you don’t imagine that there’s some limit already there. Not even the laws of the government can stop me, right. This is why Trump in the beginning got into all this trouble with the judiciary in the United States. Because he was constantly making laws up on the hoof through executive orders.
But I think transhumanists kind of like this way of operating. Because what it means is it cuts through all the red tape, it cuts through the bureaucracy, it opens up spheres of freedom. At least that’s the theory in terms of backing Trump. And so Trump does seem in that respect to be very open-minded. I mean, what people often see as his inability to settle on a policy, his constant changing, is also a sign of his open-mindedness.
And I think the feeling among a lot of these Silicon Valley guys was that Hillary Clinton, for all of her— And I speak as someone who voted for Hillary Clinton, and even voted for her in 2008 against Obama. Whatever her strengths—and there are many strengths, many competences—it is quite predictable where she is going to operate. She is going to be operating in the normal political space. She is not going to be the person to turn to to open up new opportunities and to rethink radically our relationship to the planet or all the rest of it. That’s not where she’s at, okay. She is more a kind of high-grade version of business as usual. And the transhumanists, you know, are not that, right. And so there’s a sense in which it isn’t that Trump is intrinsically such an attractive figure; it’s that he sets himself off so clearly against someone like Hillary Clinton as being not Clinton. I think that makes a big difference.
Mason: Do you think there’s something both in politics and science and technology whereby just things are…the future is so uncertain and we’re kinda having to deal as individuals with uncertainty being the norm?
Fuller: Yes. I think that’s right. And that’s why we need to formally recognize that in a productive way. Because if you live in a world where uncertainty is the norm, you’ve got two ways to go on this. One way, and I’ve spoken about this a lot in my writing… what’s your attitude toward risk? Because when you say that there’s uncertainty in the world, that means you’re admitting there’s risk. There’s no way of avoiding risk. Risk’s there. It’s just there. There’s no way of—
And so the question then is: do we act cautiously? That’s the so-called precautionary principle, which in fact gets used to restrict innovation, or at least to stagger the way in which it gets introduced into society. That’s one way to deal with uncertainty. You recognize it’s there and then you move more cautiously than you have in the past.
Or do you take the opposite perspective, the proactionary principle? That’s a term coined by Max More, whom we talked about earlier, from the transhumanist movement, and one I’ve written about a lot. And that’s a different notion. That’s looking at uncertainty and risk as offering opportunities for new things to happen. The past does not have to repeat itself. We can in fact move to a different kind of world. It’s a world where we don’t know what all the consequences are going to be, but the one thing we do know is that it’s probably going to be different. And we could take advantage of those new opportunities if we are open to them. And I think that is a much more appropriately transhumanist attitude, and it’s one that admits at the outset, “Yes, the world is uncertain.” And that uncertainty is not going to go away. That’s the thing. It’s not going to go away. The question is: how do you roll with it?
And see, capitalism is very interesting as a backdrop for thinking about this issue. Look at something like the theory of entrepreneurship, right. In capitalism the entrepreneur is the guy with the innovation, the guy who comes up with the big idea that ends up transforming the market and so forth. The key thing, the assumption you might say is built into the idea of entrepreneurship, is that markets are by nature unstable. In other words, just because things have been done a certain way, or people have been buying a certain product for a long period of time, that doesn’t mean this can never change. You just have to figure out a clever way of leveraging the market.
But the point is there’s no reason to think that the future will reproduce the past. And an entrepreneur always comes in there. So look at somebody like Henry Ford with the automobile, in a world that was already saturated with horses and where all the roads were pretty much organized around horses. And yet in a very short period of time, within a little over ten years, he wiped all the horses off the roads, and all the roads got repaved and became automobile-friendly. Because he knew how to talk about, how to present, the new values being introduced by his innovation, presenting them as overshadowing whatever values the old dominant product had.
And I think transhumanism is playing this kind of game, right. And this explains a lot of the rhetoric of transhumanism, which is very much a rhetoric of innovation: we’ve got a new and improved human being that you could become. You don’t have to be the same human being that we’ve been for the last 40,000 years. We don’t need to have those 40,000-year-old brains anymore. We could have really jacked-up brains. We could do all kinds of wonderful things. And this is the—
Mason: But then you feel like… In that case we’re just sitting in the passenger seat while scientists look after our future. Do you think that uncertainty element that comes naturally with science… You know, what was proven yesterday is proven wrong tomorrow—
Fuller: Yeah.
Mason: Do you think that’s where some of the… both the joy and the mistrust—the recent mistrust—of science is coming from?
Fuller: Well, I think— See, what you’re bringing up there actually plays into the kind of issues that transhumanism has with the scientific community. Because remember, I think one thing that’s really important for listeners is that even though transhumanism is a very strongly pro-science and pro-technology ideology—in fact you will never find another ideology more pro-science or pro-technology—nevertheless, the scientific community doesn’t endorse it. Okay. This is a very important point.
They don’t necessarily trash it. I’m not saying that. But it is quite striking that for an ideology that really, you know, nails its colors to the mast of science and technology, there isn’t this reciprocal love going on. Scientists for the most part keep a certain distance from this. Some don’t, of course. Some embrace it. But considering the size of the scientific community, and considering the range of issues where transhumanist arguments are being made that involve science and technology, it is striking how relatively few scientists have felt compelled, or even interested, in endorsing this.
And I think what this goes to is not that they’re against transhumanism, but that what they’re more in favor of is protecting their authority as scientists. And so, as we’ve been discussing, there is a good chance that a lot of these transhumanist things—these treatments, whatever we’re talking about—will be shown to fail as they are tried out. A lot of this stuff will be shown to fail. This is not going to be a seamless ride into Utopia. There’s going to be, like I say, a lot of self-sacrifice. There’s going to be a lot of that.
The question the scientific community has is: does it want to have that blood on its hands? And this is one of the reasons why the scientific community doesn’t kick up a bigger fuss about those research ethics codes. Because that actually protects them.
Mason: It does feel to a degree like the more popular, media-savvy transhumanists are almost waiting around for scientists to kind of prove their predictions. They kinda sit there twiddling their thumbs until they can point at something and go, “Oh this is what I said was gonna happen in the mid-80s or mid-90s.”
Fuller: In fact, that is correct. And of course they’re impatient for this to happen, which is why they’re always interested in expediting the course of research. But I don’t think the scientific community itself feels in such a particular hurry. Let’s put it that way. Because if you get enough failure, you know… So here’s the thing, right. Elon Musk. He’s sending all these things off to the moon or whatever, and it seems they almost all fail, right. He rarely has a success.
Mason: He’s had successes recently. There have been successes—
Fuller: Yeah. But he’s got an enormous amount— My point is if this were a state-run agency, he would never have had this run.
Mason: Because it was taxpayers’ money.
Fuller: Yeah exactly, exactly. They would have stopped this immediately. They wouldn’t have allowed him to go on as long as he did. It’s only because this is his own money that he’s able to take these risks, okay. And this is the way you have to think; this is how the scientific community thinks about it, right. They’re not going to do this on their own nickel.
And so it strikes me that scientists are in fact quite sensitive to the issue you’re raising about the uncertainty and that there might be harms and things might not work and so forth. And that’s why they’re not pushing this transhumanist agenda. Because the reputation of the scientific community, which as you know is a very volatile thing already, for reasons not relating to transhumanism, could even become more volatile if they started jumping on a ship that ended up sinking.
I mean, you know, scientists are already dealing with things like creationism and climate change denial; they’re dealing with all these issues already on the table. They had the March for Science last week, right. Which was a little bit like a rain dance as far as I’m concerned. But the point is that it reflects the extent to which scientists are very concerned about their reputation in society. And so that’s going to make them err on the side of caution with regard to transhumanism.
Mason: And yet it’s interesting, you spoke about Elon and his ability to fund certain research. Companies like Facebook and Google all seem to be employing individuals from science to work specifically on products. And I just wonder, more generally, how that’s changing the scientific world, and what freedoms they have to explore certain things. I mean, the outcome Elon wants is essentially a product at the end of the day. And Google is a product at the end of the day. And they’re poaching the best guys from MIT and from Harvard and—
Fuller: Elon Musk’s great desire in life seems to be to become a travel agent, right. To bring people up into space. That’s what he’s going after—an interstellar travel agency.
Mason: No, but what I mean is what happens when we start poaching some of the best scientists doing the most interesting research and then taking them—
Fuller: Oh, but this has been the history of privately-funded research from day one, okay. And so that part of the story doesn’t strike me as so surprising.
Mason: But the rate at which it’s happening. I mean—
Fuller: Well, let me tell—
Mason: I mean, Facebook yesterday announced they’re going to do a brain-computer interface—
Fuller: Let me tell you something. Let’s look at the United States for a second, though the same also applies to Britain. Before the end of World War II, scientific research in America was always privately funded. The National Science Foundation, a formal government agency funding scientific research, only got established afterward, and it ended up becoming very dominant in the Cold War era. This is the point, okay. Rockefeller, Ford, Carnegie, all the big original industrial guys, were the ones with the big foundations who were funding the science.
And this was true even to a large extent after World War II. And in Britain it was the same way. In a sense the dominance of the state as the funder of science is declining now; it was really only in the Cold War period that it looked as though there was an extremely strong state/science connection. But otherwise there is a long history of private funding, often taking people off university campuses and putting them in special research parks… You know, Bell Labs, for example, was a very famous one in the early 20th century. To get people to work on stuff.
And you know, to give you an example, the Rockefeller Foundation funded the Cavendish Laboratory in Cambridge, which is where the DNA double helix was discovered in the 1950s. They basically hired all these scientists from Britain and the United States to come over. Watson of Watson/Crick is an American. He was brought over. And they just said, “Work on this.” That was how we got DNA, okay. And so this strategy that you’re talking about with Google and the rest of these guys, that itself is not unusual.
However, what is interesting in terms of the way these Silicon Valley companies are investing is… As you know, if you look at the full portfolio of things that transhumanists are interested in, there is a very strong bias in this funding toward the artificial intelligence stuff. Much more so than toward the biotech stuff, actually.
And why is that? It’s the ethics codes thing again. In other words, there are fewer ethical restrictions on getting involved in advanced artificial intelligence research than there are on getting involved in advanced biotechnology research. And that’s one of the reasons for this whole fascination that transhumanists, following Nick Bostrom, have with existential risk, right. Why they want to put this on the table is because at the moment artificial intelligence is not regulated in the way that, let’s say, biotechnology is. The ethics codes governing artificial intelligence research are not nearly as restrictive as those in biotechnology. And so as a result it is possible to do all kinds of crazy things, at least in principle, in artificial intelligence that you would not be able to do in biology, at least legally.
And this is where the existential risk thing has originated. Because on the one hand, Google and the rest of these companies are investing a lot of money in trying to advance artificial intelligence, to really, you know, take various step changes of all sorts to get closer to the Singularity or whatever. But at the same time they also realize: oh my god, we’re letting the genie out of the bottle and we don’t have any way of regulating this. And hence we have all these institutes all over the world devoted to existential risk. Because just in case these guys at Google do come up with something, who’s going to save the world from it?
Mason: That’s what fascinates me. It’s interesting to watch transhumanists who are very pro the idea of robots and merging with machines also being very anti-AI.
Fuller: Well, this is the Nick Bostrom thing, you know. Among those of us who are involved in the transhumanist community there was a little email exchange a few months ago, when Nick Bostrom was being brought over to the United Nations and Davos and all these other places to talk about existential risk. And there was somebody arguing in the transhumanist community, “Isn’t this great? Transhumanism is finally getting the kind of visibility it deserves. Look at Nick Bostrom, he’s traveling around the world talking to all these big deals.”
And I pointed out to this person: look, yeah, he’s big. But why’s he being brought out to talk about all this stuff? Because he’s talking about the potential risks and harms of it. Not because he’s telling you to invest in it. Come on, guys, get real. This is not the message transhumanism wants to send, namely, “Hey guys, we’re here and we’re your biggest nightmare.” No! But this is what Nick Bostrom is doing. And this is the way in which people are coming to know about transhumanism in the popular media: through this concept of existential risk.
Mason: Well, it goes back to our first point of this discussion; that’s probably where the fear of a lot of these technologies comes from. Because it only takes one thing. There’s an expectation, whether it’s because of the aesthetics of science fiction, that one thing goes rogue and then suddenly the whole thing collapses on itself.
Fuller: Sure! We’ve got all kinds of movies about this already. So it’s already feeding into a kind of cultural imaginary. But to my mind this is not doing transhumanism any favors at all. Because it’s making people fear this stuff.
Mason: So how does transhumanism fix the PR problem?
Fuller: Well, this is where I think— You know, this is why I always talk about the proactionary stuff, and about this need we’re going to have to think about self-sacrifice. We have to get sober about this. We have to say: yes, we really do want this stuff to happen. But it’s not going to be some straight-arrow way of getting there. It’s going to take a lot of blood, sweat, and tears. And there seems to be a lot of interest in doing it. There’s a lot of money backing it. And all of that is cool. But we have to see it in realistic terms.
And so yes, there are risks. They’re risks we should embrace. But we should also provide adequate support, adequate compensation, all the rest of it. What I think we cannot do is deny them. Deny them.
See, at the moment, I think we are in a kind of polarized position with regard to transhumanism. On the one hand, you’ve got guys like Nick Bostrom running around basically scaring people, right. And I know he doesn’t mean to do that. But certainly the kinds of things that people are interested in having him talk about move in that direction. They wouldn’t be interviewing him if they thought AI was so cool. They’re interviewing him because of superintelligence and its paperclip-collecting habits, right? This is why they’re interviewing him.
So we’ve got that kind of scaremongering side of transhumanism, which is getting a lot of public visibility. But then a lot of rank-and-file transhumanists—the kind of normal transhumanists—are just in denial that there’s any risk at all. They just think the only problem is lack of freedom. It’s this kind of mindless libertarian response that you get from transhumanists sometimes. And what we need is a grounded position that basically says: yeah, this is risky shit, but we ought to be taking the risks.
Mason: Thank you to Professor Steve Fuller for sharing his thoughts on how we might navigate an increasingly technologized future.
If you like what you’ve heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook or Instagram: @FuturesPodcast.
More episodes, transcripts and show notes can be found at futurespodcast.net.
Thank you for listening to the Futures Podcast.