Luke Robert Mason: You’re listening to the Futures Podcast with me, Luke Robert Mason.

On this episode I speak to philosopher of science Professor Steve Fuller.

So let’s say somebody undergoes a transhumanist-style treatment. They die in the process. But if they die, we probably—those of us who remain living—probably learn a lot from that.
Steve Fuller, excerpt from interview

Steve shared his insights on transhumanism, how scientists should approach risk, and what it means to be human in the 21st century. This episode was recorded on location at the University of Warwick in Coventry, England, where Steve is a Professor in the Department of Sociology.


Luke Robert Mason: Okay, so Professor Steve Fuller. I have known your work for a long while now, but one of the key focuses of the last couple of years has been on this thing called Humanity 2.0. Could you explain what Humanity 2.0 is?

Steve Fuller: Okay. Well, first of all let’s start with Humanity 1.0 to make it simple. Humanity 1.0 is basically the conception of the human condition that you might say is enshrined in the UN Universal Declaration of Human Rights. Which is to say it’s an understanding of Homo sapiens as a kind of living, flourishing creature, but one who has certain kinds of limitations. For example, the human being will eventually die. The human being in a sense needs to be part of a larger social arrangement. And even though the human being is very much part of the world of science and technology, it is also part of a kind of natural, pre-scientific, pre-technological world. That’s Humanity 1.0. And it’s what we normally call a human being, actually.

However, Humanity 2.0 starts to challenge a lot of the assumptions of Humanity 1.0, especially in terms of issues having to do with limitations. So in other words, you might say there are two ways to go on Humanity 2.0. And in my writing, I associate these with the transhuman and the posthuman, respectively.

The transhuman condition basically wants Humanity 2.0 to in a sense explode a lot of the boundaries that have held back Humanity 1.0. So if we’re talking about things like, for example, extending life expectancy, doubling, tripling, quadrupling, maybe indefinitely extending the amount of time that people normally have on Earth, that is quite a challenge to our normal understanding of humanity. Just to give you a simple example of the challenge that provides: if you don’t believe that death is any longer a necessity, or that it’ll happen only in the indefinite future, the whole idea of giving meaning to your life will start to acquire a new kind of significance. Because if you look at the history of philosophy, when people talk about the meaning of life they often talk about it in the context of the being-unto-death. The fact that you’re only on Earth for a limited amount of time, so you’d better get your act together and figure out what you think’s important to do. However, if you have all the time in the world—literally almost, which some exponents of transhumanism believe—then the whole sense of what the meaning of life is changes. So that’s kind of one way you might say that Humanity 2.0 might go.

But there’s another sense of Humanity 2.0, and that’s one that, you know, if the first one wasn’t radical enough, the second one is. And that has to do with the fact that we might actually want to in some sense abandon our biological bodies. And in that respect, we talk about uploading the mind and consciousness. And of course we can’t do this right now. But nevertheless there is an enormous amount of research and funding being dedicated to trying to bring this about. Through artificial intelligence research, through various human-computer interfaces, where even if we can’t upload our minds directly into machines we in some sense might be able to merge with the machines in something like a cyborg existence. Which to a large extent already exists among people who would otherwise be regarded as disabled.

And this could nevertheless be very much part of our future in a much more robust way. And one of the consequences of that would be that the kinds of powers that human beings would start to acquire would be quite unlike those of our biological ancestors. And that would have some very interesting implications for how we relate to the world as a whole.

And here not only are we talking about humans living forever, but also being able to compute and to reason about things in an enormous kind of large-scale fashion that’s never been done before. Perhaps being able to create massive forms of technology that might be able to have us travel throughout the universe. All of these things could well happen if we got to that kind of version of Humanity 2.0.

And the point I want to make about all these different versions of Humanity 2.0 is that it is true, none of them are here now. They all sound like science fiction. But nevertheless, there is a lot of momentum heading in this direction. So that if we do not reach Humanity 2.0, there will be a lot of people with egg on their face. Not only with regard to the Silicon Valley billionaires who are putting all their money into this, but also large segments of the scientific and technological community as it exists now. And not to mention Hollywood, which has invested enormous amounts in the idea that some version of Humanity 2.0 is bound to be realized.

Mason: I mean, where do you think some of these motivations come from? I know you’ve argued that transhumanism has an almost theological element to it.

Fuller: Yes, I think— To my mind, if you want to make transhumanism in particular sound reasonable, in the sense that it sounds grounded in the Western intellectual tradition and doesn’t just sound like some kind of selfish indulgence on the part of Silicon Valley billionaires, which it often does sound like, then I think you have to go back to certain theological ideas about the way in which human beings have thought about themselves in relationship to God.

And this is especially true in the Abrahamic religions, by which I mean Judaism, Christianity, and Islam. Because in these religions, and especially in the case of Christianity, there is this definition of the human being as having been created in the image and likeness of God. You find this in the book of Genesis. And so then there’s this question about, what does that mean exactly?

Well see, Christianity’s quite interesting in this regard, because it puts forward the idea that there is a God-man called Jesus, okay. A being that is human but at the same time also godlike, and both at the same time. And you see, when you get those kinds of figures—and nobody denies that Jesus has been an incredibly influential figure in the history of culture and intellectual activity—that’s the kind of being Jesus was: a transhumanist being, okay. Jesus was a Humanity 2.0 being, right? And at least these are the claims that are made for him. And it’s on that basis that he’s ended up having the significance he has. If he were just an ordinary human being, he wouldn’t be nearly as important as he is.

So there’s a sense in which this idea of the transhuman, Humanity 2.0, isn’t just some sort of science-fictional fantasy that got cooked up, but rather continues a line of self-understanding that human beings have had for thousands and thousands of years.

Mason: When transhumanists mention some of the ideas that they have, whether it’s extending lifespan or enhancing the body, it creates a very visceral reaction from the general public. It’s also sometimes perceived—if we get past the sort of technofetishism—as something grotesque to be doing, to extend the lifespan indefinitely. I wonder where that initial gut response comes from.

Fuller: Well, you know, in terms of what I said in the beginning about this business about the meaning of life, I think that’s kind of where that kind of concern or even revulsion at transhumanism is coming from. One of the people who’s very anti-transhumanist was the chair of the George W. Bush bioethics panel, back when federally funded embryonic stem cell research was restricted over ten years ago. And he has this phrase, “the wisdom of repugnance”. In other words, the kinds of prospects for human beings that you sort of recoil from, you pull away from because you find them disgusting in some kind of way, that tells you something deep about what it means to be a human being. And the thing that I think causes this kind of repugnance when one thinks about living forever and so forth, is that the meaning of life as a human being has been tied, historically and culturally, and there’s no doubt about that either, to mortality. The being-unto-death, right. And the point about having a meaning to your life is that you have to have a point to it because it’s going to end in any case. And if you don’t have a point to it, then your life is meaningless. And that presupposes limitations, finitude, termination, okay. And a lot of our philosophy, our deep philosophy about how you should conduct yourself in the world, is very much part of that.

In addition, of course, and connected to this from a biological standpoint, is the fact that the way in which the species survives is through reproduction, right. In other words, biological longevity is primarily understood as a matter of successive generations. Not in terms of one generation extending indefinitely, but successive generations each replacing the other, and each one lasting a finite period of time.

And if you think about it, the way in which our social structures are organized, and the way our affective bonds are organized with regard to our care for our children—why do we invest so much care in our children, why do we want to have children—is because in some sense we recognize our own limitations, and we see that in these other beings, their lives may be able to carry on what we regard as worth carrying on, and maybe even do better at things that we have not been able to do well.

And this kind of relationship is very fundamental to ideas of the family, and even fundamental to larger-scale notions of social structure. So if you talk about the fact that you have things like elections for office. You have—you know, one of the reasons why dynasties in politics are so frowned upon is that the whole idea of having even just one family rule forever is considered repugnant, right.

So this idea of bringing in mortality and finitude as a way of bringing in fresh blood, bringing in new perspectives, that’s often been seen as part of what gives the human condition as a whole its meaning. And it seems to me that transhumanism does challenge all that, right?

Mason: I’ve heard you say that we require new generations for new thinking.

Fuller: Well, yes. This is in fact a historical point, right. And it’s not so hard to understand. And in fact one of the places where we actually see this is in the history of science itself. And that’s a very interesting example to look at, because of course we normally think about scientists as being these very rational beings. And so if someone comes up with a new idea, in a sense it doesn’t matter how old or how young they are. If the idea works they’ll just believe it, right. So there shouldn’t be this kind of problem.

But the problem is that even scientific ideas often outstay their welcome. Largely because the people who have been taught a certain kind of way of doing science, certain sorts of theories, certain sorts of perspectives, have invested their entire lives in it. So in other words, they have no incentive to actually change their minds, right. And imagine if those people never had to leave their jobs or never had to die, and could just continue in place holding those views that they were taught as students, and hold them indefinitely. You would end up having a completely ossified scientific community.

So Max Planck, who was a great physicist in the early 20th century, one of the founders of quantum mechanics, he was the guy in fact who ran the journal that first published Einstein’s early papers. And you probably know Einstein was a young man in his twenties when he published his revolutionary papers. And Planck, who was an older guy, helped him do it. And the point that Planck made in his autobiography, when he was thinking about this episode again at the end of his own life, was: look, the reason why the Einstein revolution occurred was not because he managed to persuade all those old guys that they should be changing the foundations of physics. Rather, they just died. They disappeared, and the new people were open-minded because they had not yet invested all those years in the old ideas. And because they hadn’t invested in the old ideas, they had no particular reason to accept them. They could sort of think about things fresh. They could think about things in their own terms.

And I think there’s an important lesson here. So that if you’re going to have people who are planning to live forever, I think they’re going to need to have their memories rebooted from time to time. I think the worst thing that could happen—and I say this in the context of the way in which people normally talk about extending life indefinitely—the worst thing you would want is a perfect memory, right. Because a perfect memory will mean you are locked in the past forever, okay. And one of the things that in fact enables human progress is the fact that we forget stuff. We leave stuff behind.

You should read Nietzsche on this. Nietzsche’s very good about this, about the liberating effects of forgetting. And the problem with the way in which people are conceptualizing living forever is that you kind of live forever—and I’ve heard Aubrey de Grey, the great guru of this, say it this way—like a vintage car. So in other words, you remain in this kind— You know, so let’s say a vintage car from the 1950s or something like this, some kind of convertible, the kind of thing Elvis might have been driving. Let’s say that’s the kind of being you imagine you can be forever, right. You’re stuck like that forever. You never change. You just remain that way forever.

And this is kind of like people who end up getting locked into their perfect memories, and all they can do is add to that. They have no way of subtracting. Now, it seems to me that those people are eventually going to suffer from some kind of cerebral ossification.

Mason: Well, I’ve heard Max More, the transhumanist philosopher, argue the exact opposite. The ability to have morphological freedom means we will constantly reinvent ourselves.

Fuller: Reinvent ourselves…materially. But if you look at the thing that actually gets transferred over, right, because you know, you could say, “Okay, I have morphological freedom. Today I’m an upright ape. Tomorrow I’m a silicon chip,” right. That’s morphological freedom. Fine. I understand that.

But what is it that makes the two things me? What is being transferred? And my guess is it’s going to be the memories. The memories are going to be the thing held constant in this kind of understanding. That is my underst— That is my sense of this. Otherwise, there is no morphological freedom. You just disappear and then something else comes about.

Mason: Well, there seem to be these two factions within transhumanism. The ones that want to preserve the body, the human bipedal, breathing body. And the ones who really only care about the mind, you know. The body is just a transportation system for this thing called the brain. I mean, why do you think that those factions exist? Is it just differentiated interests in different forms of technology, or something else?

Fuller: I think it reflects a really interesting kind of divide with regard to what is essential to being a human. What is it that you really need, and what can you get rid of? I mean, that’s kind of what the question boils down to, right. And the people who think that you could be a silicon chip and still be a human are operating with a very, as you say, mental, maybe even spiritual kind of conception of a human being, that maybe a kind of unique digital code could be the human, right. And that digital code could be instantiated in the chip or in a piece of DNA. And then you could grow a person that way, right. You could do both of those things.

But it seems to me that if you’re talking in those terms, then the continuity, what is the principle of continuity, becomes important. And this is where the memory thing actually becomes quite an important issue, right. Because I think everybody’s kinda presupposing that you retain, and if anything enhance, the memories that you start out with, whatever form you’re in.

By the way, this kind of distinction you raise between being embrained and embodied versus being in a silicon chip is of course the— You know, when we talk about the standing of the human in Christianity, you’ve got all these kinds of debates going on. And there’s a whole branch of theology in fact called Christology. Christology is about the metaphysics of Jesus. In other words, what makes Jesus Christ? Is it the whole thing, right? In other words, do you actually need the human body? Or is the human body just for the benefit of dumb humans who weren’t able to see God otherwise? You see what I mean? That Jesus in a sense, in his human form, is an avatar of the god, and the god doesn’t need to be like this. The god could be something else.

Or other people say no, actually the human body is essential to what Jesus is, in that the god and the human are literally the same. And that sounds, you know, in the transhumanist argument, a bit like Aubrey de Grey, in a sense, right. Where the real victory of transhumanism is to enable people to be as they are indefinitely, enhanced but quite recognizably as they are now.

Mason: I think that’s one of the things that scares people about transhumanism, when transhumanists themselves claim that we are now gods. We have the ability to create life, to create robots, to create new forms of fleshly experience. That creates a very visceral response from the general public.

Fuller: Well yes, I think so. And it’s interesting because, um…

Mason: They mean it in a liberating sense…

Fuller: Yes—

Mason: They’re getting excited about it because it’s the ability for us to control technology to our own ends. But then there’s also this challenge of: do humans have the foresight to manage it in the right way?

Fuller: Well, this is a good point. Now, you see, you’ve raised a good point. It’s one that I think is an important one for transhumanists to take on board in a much more serious and explicit manner. Namely that…let’s say that the game plan of transhumanism is to turn us into gods. I’m willing to grant this, and as I was saying, I think there are theological reasons for thinking that this kind of thing is not crazy. And scientifically, increasingly, there are reasons for thinking it might not be crazy, either.

But it’s not going to happen overnight in some seamless fashion, right. In other words, lives will be lost along the way. And I think this is the issue, right, that… Sometimes transhumanists talk as if the main obstacle to this transhumanist divinity coming about is the fact that you’ve got people preventing it from happening, right. Religious people, or very small-minded politicians and so forth. And that in fact we already know how to do this, we’re on the verge, and all we need is a little more freedom, right. You give us enough freedom and we’ll be able to get this off the ground immediately.

Mason: Well, the issue is that a lot of these technologies they’re talking about probably wouldn’t be ethically approved to be tested on humans.

Fuller: Exactly. Exactly. And the reason why they’re not ethically approved is because it could harm somebody, right. We don’t actually know yet what the relevant genetic treatments are, what the relevant drugs to take are, what the relevant way of uploading a mind is, right. We’ve got a lot of theories about these things. But until we actually test these things on real people, right, rather than on rats or in computer models as we typically do, we’re not going to get that far, okay.

So you do have to suspend the ethics codes. But I think the point, the consequence of suspending the ethics codes, is that you have got to be prepared for death, damage, and harm along the way before you get to the transhumanist paradise. So we actually need a culture that is prepared to accept that as a cost. And transhumanists do not want to talk about this. They make it seem like the ethics codes are just superstitious or something. But they’re not superstitious. They are actually protecting people from things that could cause them harm. And what the transhumanists ought to be saying is the harm’s worth it. That should be the transhumanists’ line. And so there should be discussions about compensation for harm. There should be [inaudible] insurance policies. Who pays for something if some terrible thing goes wrong in some transhumanist experiment? This is what we should be talking about.

Mason: Because potentially in the long term there could be great benefits—

Fuller: Exactly. So you actually need a culture that is willing to engage in a certain level of self-sacrifice, effectively, okay. And I think that needs to be made much more explicit. Of course I’m not talking about everyone doing this. Because of course most people would not want to put themselves under such risk. But clearly there are people who would put themselves under such risk, and those people, I believe, should first be allowed to do it, but there should also be some compensation, some recognition, right.

So we should— The rest of the society that will benefit— So let’s say somebody undergoes a transhumanist-style treatment. They die in the process. But if they die, we probably—those of us who remain living—probably learn a lot from that. That’s how it always works, right? That’s what happens when we kill animals in labs, too, right? We learn a lot from the stuff—the animal’s dead, unfortunately. Well, the same thing could happen to humans. In which case, then, the people who are benefiting, the people who remain living, ought to insure this, they ought to subsidize this, they ought to provide money for this, they ought to be paying for this.

So the point is, it’s not that everybody ought to be undergoing risky treatments, but we should allow it to happen. And then there should be some compensation, either to the families of these people or however you want to construct this. I mean, I see this very much on the model of how we deal with military things, okay. Because with the military…we operate on the assumption that we do have all of these potential foes out there. We actually need to have people who are willing to put their lives on the line for this. We don’t need everyone to do it. In fact the only kind of people we need to do it are the people who could do it well, actually. But everybody else benefits from this. And of course when you win major wars, there are usually lots of casualties, okay. And the families of those soldiers feel like their lives were well spent in sacrificing themselves for their country. Now see, we need a kind of culture—

Mason: Where you sacrifice yourself for your humanity.

Fuller: Something of that kind. Well yes, this is exactly right. I mean, in fact one of the things that inspires me along these lines is a famous essay by William James, the American philosopher-psychologist, from 1906, called “The Moral Equivalent of War.” And so, ironically… I mean, people forget this now. But before World War I began, by accident essentially, in 1914, there was this general kind of view in the air at the turn of the century that we were going to enter into a period of peace. Because imperialism was pretty stable— I know it’s hard to believe all this stuff, but there was a sense in which, you know, there was a kind of Western domination of the world… It was a Pax Britannica, okay. You may have heard about this, the British Empire when the sun never set on the British Empire, all that stuff from the latter days of Queen Victoria and the early days of Edward VII. Pax Britannica, this was a peaceful period.

So William James, in this context, is imagining, well, you know, the thing about war is that it always tapped into something quite noble about the human spirit, about self-sacrifice and being able to, as it were, see the value of one’s life above self-interest, being able to see a larger kind of species-based interest, or national interest. And so his question in his essay is, what’s going to replace that in the future once we end war, right? That’s the premise of the essay. “We’re going to end war soon, guys. How’re we gonna remain noble and not just be these animals?” right. You know, this is a bit like the “end of history” thing of Fukuyama from twenty-five years ago at the end of the Cold War, where he said, “Okay, we’ve now had all the great ideological struggles. Now what’re we gonna do with our time?” right.

Well, William James was asking a similar question in 1906. And so he talks a lot in this essay about the kind of moral virtues that war brings out in people, in terms of thinking in terms of the species and larger-scale interests. And I think you need to bring in something of that sensibility to justify this kind of transhumanist sense of self-sacrifice that I’m talking about.

Mason: So we virtuously sign ourselves over for the first experiments and the first explorations in life extension, is it?

Fuller: Yes, exactly. And you would have a kind of routine like you do when you go into the military. So like, the military does reject people, right? I mean, it’s not like any old person can go into the military. You have to have a physical examination, you have a mental examination. All these things you have to go through. But certain people then get brought in, and they are trained, and they become the front line.

Mason: I’d say it’s closer to what’s happening with Mars right now. “We’ll send you, but we’re not going to bring you back.” You’re going to be sent there, but you’re going to die on Mars. It’s probably similar to that process, and thousands and thousands of people signed up to that.

Fuller: Sure, exactly. That’s right. And I think this should be allowed, but it needs to be properly supported, you might say. That’s the thing. And that’s why I think studying the military culture, and how that has developed, and how societies come to accept that kind of— You know, you might say we have come to accept a certain kind of periodic form of self-sacrifice as part of the national interest. I think we need to figure out how we can bring the transhumanist experimentation into that kind of mindset.

Mason: The problem is when people hear these sorts of things, they immediately think of eugenics and the Nazi experiments that were done on certain communities—

Fuller: Yeah, but those were forced. I mean, this is the problem, right? See, we’re in a di— Because we’re beyond— The ethics issue for us now is really quite different than it was before the Nazi Holocaust. Because the Nazi business made all the difference in the world, in that that’s in fact how we got our ethics codes, okay. And in a sense what I think has happened is that in response to something like the Nuremberg trials—where it came out that people had been forced to undergo sterilization and torture and so forth in the name of genetics research, and of course it wasn’t just happening in Germany, it was happening throughout the Western world, actually throughout the world—we ended up overreacting. This is my point.

So in other words, if you look at the way in which the research ethics codes are constructed now, in light of the Nazi experience, it is actually impossible to give informed consent to research that is deemed to be intrinsically highly risky. In other words, even if you want to do it, and you know what’s involved and what the chances are that you’ll have a brain hemorrhage or whatever, you’re not allowed to do it. So in other words, they’re just prohibited, right. And ethics codes often function in this fashion, to basically rule out entire classes of experiment where it is believed that it is impossible for any sane individual to grant informed consent.

Mason: Well, do we know if those experiments are being done? I know another interest of yours is seasteading.

Fuller: Yeah, okay. So, well, we’ve got two issues here. I’ll get to seasteading in a moment. But the first issue, which I think in a way is the more realistic issue, is that of course not every country in the world subscribes to the same stringent research ethics codes that we find common in the West. And I’m thinking of China. If you want to know a real place where this might be happening, it’s China. We don’t actually know the conditions under which research of this kind might be done on humans there.

And so it is possible right now that this research could be done. I think the problem here, though, if that is what’s happening, is that it runs into problems of publication. Because most peer-reviewed publications in the sciences, I think virtually all of them if they’re “respectable,” and this is something that the publishers would ensure, also abide by the research ethics codes. In other words, you actually have to say in your article… If you look at these scientific articles, especially in the biomedical fields, you actually have to say not only where your money’s coming from, but you also have to say you didn’t torture anybody, or even small animals. And that is a condition of publication, okay. So it actually would be quite difficult to get stuff published if it were not done in an ethically sound way.

Now, the reason I mention this is because the thing you led on was seasteading. And seasteading, as I understand it, runs into this problem, even if it works as an idea. So let me explain. Seasteading is an idea that’s actually been around in a general sense among libertarian thinkers for a long time. And the idea is that if you look at the jurisdiction that the laws of a country cover, there’s a kind of area it covers, let’s say twelve miles outside the coast of the country. And then outside of that, let’s say the Pacific Ocean or the Atlantic Ocean, which is a lot larger than twelve miles, it’s a free zone in terms of what kind of rules apply.

And so if you were to park a ship, let’s say, outside the territorial waters of the United States or the United Kingdom or Europe or whatever, then you could set up your own laws. And people talk about cities perhaps being organized this way. But the whole seasteading project as it’s been coming out of Silicon Valley in recent years has been about having a kind of floating laboratory where you could actually start to do this very adventurous kind of research that we’ve been talking about, that the transhumanists are keen on.

And what would happen would be that people would volunteer to move to this big ship. So the ship would obviously have to have like an apartment complex on it or something, not just laboratories, because people would have to be there for the duration of the experiments and all the rest of it. And they would engage in private contracts. You know, lawyers and all the rest would typically be involved. But it would just be a private arrangement. You know, what are the terms under which you will undergo experimentation? How much might you get paid for it? What would be your compensation? What kind of insurance would need to be taken out? Blah blah blah, all these things would be negotiated privately by the parties involved.

So there wouldn’t be any kind of overarching legal arrangement that would limit what exactly you could agree to. So in principle, if you’re a guy who wants to take a lot of risks, you might get a scientist on board to say, “Okay, guy. If you want to do this, we could set it up.” And so this is the idea, right. So this is seasteading.

The problem is, let’s say you do get— In principle I think this could work at the practical level. In the sense that I think it could be done. You could set up such legal contracts. There would be people interested in doing this. And there would be scientists interested in doing this, in principle. All of that is true. What is hard is the final hurdle. Namely: okay, you do the research, you’ve got results, how do you publish it? Who’s going to touch this stuff?

Mason: Or what happens when you go back to land?

Fuller: Exactly. You get arrested, right? You get put in jai— I mean, this could happen, right? And so there is a kind of issue about how this would translate outward. Because if this seasteading stuff is actually meant to be a genuine contribution to science, then it’s going to be important that it get absorbed by the rest of the scientific community and the rest of humanity in some way. But at the moment, the obstacle is that it wouldn’t be legal to publish it. So even if you can get away…you know, you’re outside of the long arm of the law in the sense that they can’t stop you from doing it, but you would never get recognized for doing it. So it’s that part of the law that you have a problem with.

Mason: But it’s not just an issue with the science; there’s also a political issue with allowing a lot of this stuff to happen.

Fuller: Yeah yeah, that’s right. I mean, I’ve talked about what’s going on here as a move from left/right to up/down. The way in which politics has been organized since the French Revolution, basically from the National Assembly after the French Revolution, has been on a left/right basis. Where the party of the right are basically the people who look to the past as providing the foundation for things—what we call conservatives, normally. And they’re the people who would defend the church and the king and all that kinda stuff back in the time of the French Revolution.

And on the left was a kind of…back then, an amalgam of liberals and socialists as we would now recognize them. And these were people who were basically anti-tradition, for different reasons, perhaps. Either to open up markets or to create greater equality in the society. There are all kinds of motivations for being against tradition. And that was the left.

And of course those ideologies played themselves out over the course of the 19th and 20th centuries. They sort of distinguished themselves a bit more, especially liberal versus socialist with regard to the left. That became clearer as time went on.

But what all of these ideologies had in common, and this is why they’re kind of in question now, is that they were all about taking power over the state. So in other words, they were about controlling the state in some fashion, where the state was understood as the seat of power in society. And that’s why left and right really manifest themselves most clearly in terms of political parties going for elections to run the government, right.

But we’re now living in a world, and I think this is very much true especially of younger people who don’t feel so invested in politics in its conventional sense, right, where what gives meaning to their lives, where they get some kind of direction, isn’t necessarily any kind of left/right divide over who’s going to control the budget of the government next year. This doesn’t get younger people excited, right? They’re more concerned about issues that I think older generations would associate more with lifestyle issues. What kind of world do you want to live in? What kind of being do you want to be? And the state is kind of neither here nor there with regard to their locus of concern.

And so the up-wingers are the people who in a way have a kind of libertarian tendency, because they want to push the boundaries of the human condition altogether. And so what they want to do is explore all these new possibilities—the morphological freedom, but also the possibility of blasting off into space and inhabiting other planets and space stations, and all of that kind of stuff. Where in a sense the sky’s the limit in a very literal way for these people. This is why they’re called “up-wingers.”

But then there are the down-wingers. And the down-wingers are people like environmentalists, a lot of the people who I would call posthumanists in the specific sense of believing that the locus of value in the world should not be just the human. That there is a kind of larger sense of value where the human is not so important, and you might talk about this in terms of the value of life itself. So you’re interested in biodiversity. You’re interested in having a lot of species around. You get very worried about extinction, and about the way in which the planet, the climate, is changing so much that in fact a lot of species aren’t able to survive, and all of this kind of stuff.

And so the Anthropocene, as it’s often called, where human beings are now seen as the biggest cause of physical change on the planet: all of this is down-winging stuff, okay. And what it does is it basically gets humans to think about themselves in a much more grounded fashion, a much more limited fashion, in a fashion that makes them perhaps think that we have gone too far with science and technology rather than not far enough. So up and down are really pulling in quite opposite directions with regard to science and technology.

And interestingly, I would say science and technology end up occupying the place that the state occupied in the left/right divide. So in other words, the thing that you’re really fighting about is: what are you going to do with science and technology? That’s where I think the up-wingers and down-wingers really are disagreeing.

Mason: And it seems, though, in the last—at least the last couple of months since the American election, that those things…it’s not so binary anymore. So Trump is… Arguably, people think that Trump is very anti-science, and yet he seems to be obsessed with Mars.

Fuller: Yes.

Mason: I mean, how can you be both?

Fuller: Well, and remember it was Peter Thiel from Silicon Valley who was Trump’s big early supporter and managed to organize this whole roundtable of Silicon—

Mason: I mean, what do you think’s going on there? The fact that Thiel is very progressive when it comes to technology. But then—

Fuller: And also lifestyle as well.

Mason: And lifestyle, but was aligned with Trump. And then Trump’s anti-science but pro-Mars. And then Elon wants to terraform Mars but also wants to save this planet through Tesla. [crosstalk] It just seems like…the [interactions?] are on the table right now.

Fuller: Well, the thing is— Well I think, look, the attractive feature about Trump, I think, to a lot of these transhumanists is his Promethean character. Like, there is no limit, right. Trump leaves all the options open. And I think that’s a very attractive— This is the libertarian streak in transhumanism coming out, right. That in some sense you don’t imagine that there’s some limit already there. So not even the laws of the government can stop me, right. This is why Trump in the beginning got into all this trouble with the judiciary in the United States. Because he was constantly just making laws up on the hoof through executive orders.

But I think transhumanists kind of like this way of operating. Because what it means is it cuts through all the red tape, it cuts through the bureaucracy, it opens up spheres of freedom. At least that’s the theory in terms of backing Trump. And so Trump does seem in that respect to be very open-minded. I mean, what people often see as his inability to settle on a policy, constantly changing, is also a sign of his open-mindedness.

And I think the feeling among a lot of these Silicon Valley guys was that Hillary Clinton, for all of her— And I speak as someone who voted for Hillary Clinton, and even voted for her in 2008 against Obama. That whatever her strengths—and there are many strengths, many competences—it is quite predictable where she is going to operate. She is going to be operating in the normal political space. She is not going to be the person to turn to to open up new opportunities and to rethink radically our relationship to the planet or all the rest of it. That’s not where she’s at, okay. She is more a kind of high-grade version of business as usual. And the transhumanists, you know, are not that, right. And so there’s a sense in which it isn’t that Trump is such an attractive figure intrinsically, but because he sets himself off against someone like Hillary Clinton so clearly as being not Clinton, I think that makes a big difference.

Mason: Do you think there’s something both in politics and in science and technology whereby just things are…the future is so uncertain, and we’re kinda having to deal as individuals with uncertainty being the norm?

Fuller: Yes. I think that’s right. And that’s why we need to formally recognize that in a productive way. Because if you live in a world where uncertainty is the norm, you’ve got two ways to go on this. One way, and I’ve spoken about this a lot in my writing…what’s your attitude toward risk? Because when you say that there’s uncertainty in the world, that means you’re admitting there’s risk. There’s no way of avoiding risk. Risk’s there. It’s just there. There’s no way of—

And so the question then is: do we act cautiously? Which is the so-called precautionary principle, which in fact gets used to restrict innovation, or at least to stagger the way in which it gets introduced into society. That’s one way to deal with uncertainty. You know, you recognize it’s there and then you kind of move more cautiously than you have in the past.

Or do you take the opposite perspective, which is the proactionary principle, a term coined by Max More, whom we talked about earlier, from the transhumanist movement, and which I’ve written about a lot. And that’s a different notion. That’s looking at uncertainty and risk as offering opportunities for new things to happen. That the past does not have to repeat itself. That we can in fact move to a different kind of world. It’s a world where we don’t know what all the consequences are going to be. But the one thing we do know is it’s probably going to be different. And we could take advantage of what those new opportunities are if we are open to them. And I think that is a much more appropriately transhumanist attitude, and it’s one that admits at the outset, “yes, the world is uncertain.” And that uncertainty is not going to go away. That’s the thing. It’s not going to go away. The question is how do you roll with it?

And see, capitalism is very interesting as a kind of backdrop for thinking about this issue. Because look at something like the theory of entrepreneurship, right. In capitalism the entrepreneur is the guy with the innovation, the guy who comes up with the big idea that ends up transforming the market and so forth. The key thing, the assumption you might say that’s sort of built into the idea of entrepreneurship, is that markets are by nature unstable. In other words, just because things have been done a certain way, or people have been buying a certain product for a long period of time, that doesn’t mean this can never change. You just have to figure out a clever way of leveraging the market.

But the point is there’s no reason to think that the future will reproduce the past. And an entrepreneur always comes in there. So for example, you look at somebody like Henry Ford with the automobile, in a world that was already saturated with horses and where all the roads were pretty much organized around horses. And yet in a very short period of time, within a little over ten years, he wiped all the horses off the roads, and all the roads got repaved and became automobile-friendly. Because he knew how to talk about, how to present, the new values that were being introduced by his innovation, which he presented as overshadowing whatever values the old dominant product had.

And I think transhumanism is playing this kind of game, right. And this explains a lot of the rhetoric of transhumanism, I think, which is very much a rhetoric of innovation: we’ve got a new and improved human being that you could become. You don’t have to be the same human being that we’ve been for the last 40,000 years. We don’t need to have those 40,000-year-old brains anymore. We could have really jacked-up brains. We could do all kinds of wonderful things. And this is the—

Mason: But then it feels like, in that case, we’re just sitting in the passenger seat while scientists kind of look after our future. Do you think that uncertainty element that comes naturally with science… You know, what was proven yesterday is proven wrong tomorrow—

Fuller: Yeah.

Mason: Do you think that’s where some of the…both the joy and mistrust—the recent mistrust of science—is coming from?

Fuller: Well, I think— See, what you’re bringing up there actually plays into the kind of issues that transhumanism has with the scientific community. Because remember, I think one thing that’s really important for listeners is that even though transhumanism is a very strongly pro-science and pro-technology ideology—in fact you will never find another ideology more pro-science or pro-technology—nevertheless, the scientific community doesn’t endorse it. Okay. This is a very important point.

They don’t necessarily trash it. I’m not saying that. But it is quite striking that for an ideology that really— You know, one that nails its colors to the mast of science and technology, there isn’t this reciprocal love going on. Scientists for the most part keep a certain distance from this. Some don’t, of course. Some embrace it. But considering the size of the scientific community, considering the range of issues where transhumanist arguments are being made that involve science and technology, it is striking how relatively few scientists have felt compelled or interested, even, in endorsing this.

And I think what this goes to is not that they’re against transhumanism, but that what they’re more in favor of is protecting their authority as scientists. And so, as we’ve been discussing, there is a good chance that a lot of these transhumanist things—these treatments, whatever we’re talking about—will, as they are tried out, be shown to fail. A lot of this stuff will be shown to fail. This is not going to be a seamless ride into Utopia. There’s going to be, like I say, a lot of self-sacrifice. There’s going to be a lot of that.

There is a question the scientific community has: does it want to have that blood on its hands? And this is one of the reasons why the scientific community doesn’t kick up a bigger fuss about those research ethics codes. Because that actually protects them. That protects them.

Mason: It does feel to a degree like the more popular, media-savvy transhumanists are almost waiting around for scientists to kind of prove their predictions. They kinda sit there twiddling their thumbs until they can point at something and go, “Oh, this is what I said was gonna happen in the mid-80s or mid-90s.”

Fuller: In fact. In fact that is correct. And of course they’re impatient about this, which is why they’re always interested in expediting the course of research. But I don’t think the scientific community itself feels it’s in any particular hurry. Let’s put it that way. Because if you get enough failure, you know… So here’s the thing, right. Elon Musk. He’s sending all these things off to the moon or whatever, and they almost all fail, it seems, right. He rarely has a success.

Mason: He’s had successes recently. There have been successes—

Fuller: Yeah. But he’s got an enormous amount— My point is, if this were a state-run agency, he would never have had this run.

Mason: Because it was taxpayers’ money.

Fuller: Yeah exactly, exactly. They would have stopped this immediately. They wouldn’t have allowed him to go on as long as he did. It’s only because this is his own money that he’s able to take these risks, okay. And this is the way you have to think, this is how the scientific community thinks about this, right. They’re not going to do this on their own nickel.

And so it strikes me that scientists are in fact quite sensitive to the issue you’re raising about the uncertainty, and that there might be harms and things might not work and so forth. And that’s why they’re not pushing this transhumanist agenda. Because the reputation of the scientific community, which as you know is a very volatile thing already, for reasons not relating to transhumanism, could become even more volatile if they started jumping on a ship that ended up sinking.

I mean, you know, scientists are already dealing with things like creationism and climate change denial; they’re dealing with all these issues already on the table. They had the March for Science last week, right. Which was a little bit like a raindance as far as I’m concerned. But the point is that it reflects the extent to which scientists are very concerned about their reputation in society. That they feel— And so that’s going to make them err on the side of caution with regard to transhumanism.

Mason: And yet it’s interesting, you spoke about Elon and his ability to fund certain research. Companies like Facebook and Google all seem to be employing individuals from science to work specifically on products. And I just wonder, more generally, how that’s changing the scientific world and how scientists have freedoms to explore certain things. I mean, what Elon wants as an outcome is essentially a product at the end of the day. And Google is a product at the end of the day. And they’re poaching the best guys from MIT and from Harvard and—

Fuller: Elon Musk’s great desire in life seems to be to be a travel agent, right. To bring people up into space. That’s what he’s going after—an interstellar travel agency.

Mason: No, but what I mean is, what happens when we start poaching some of the best scientists doing the most interesting research and then taking them—

Fuller: Oh, but this has been the history of privately-funded research from day one, okay. And so that part of the story doesn’t strike me as so surprising.

Mason: But the rate at which it’s happening. I mean—

Fuller: Well, let me tell—

Mason: I mean, Facebook yesterday announced they’re going to do a brain-computer interface—

Fuller: Let me tell you something. Before the US gover— Let’s look at the United States for a second. But the same also applies to Britain. Before the end of World War II, when the National Science Foundation got established as a formal government agency funding scientific research, which ended up becoming very dominant in the Cold War era, scientific research in America was always privately funded. This is the point, okay. Rockefeller, Ford, Carnegie, all the big original industrial kinda guys, were the ones with the big foundations who were funding the science.

And this was true even to a large extent after World War II. And in Britain it was the same way. So it was really— I would say it’s only been in the Cold War period—because in a sense the dominance of the state as the funder of science is declining now—that it looked as though there was a state/science connection that was extremely strong. But otherwise there is a long history of private funding, often taking people off university campuses, putting them in special research parks… You know, Bell Labs for example was a very famous one in the early 20th century. To get people to work on stuff.

And you know, to give you an example, the Rockefeller Foundation funded the Cavendish Laboratory in Cambridge, which is where the DNA double helix was discovered in the 1950s. They basically hired all these scientists from Britain and the United States to come over. Watson, of Watson and Crick, is an American. He was brought over. And they just said, “Work on this.” That was how we got DNA, okay. And so this strategy that you’re talking about with Google and the rest of these guys, that itself is not unusual.

However, what is interesting in terms of the way in which these Silicon Valley companies are investing is… As you know, at least if you look at the full portfolio of things that transhumanists are interested in, there is a very strong bias in this funding we’re talking about toward the artificial intelligence stuff. Much more so than toward the biotech stuff, actually. Much more of it is in artificial intelligence.

And why is that? It’s the ethics codes thing again. In other words, there are fewer ethical restrictions on getting involved in advanced artificial intelligence research than there are on getting involved in advanced biotechnology research. And that’s one of the reasons for this whole fascination that transhumanists after Nick Bostrom have with existential risk, right. Why they want to put this on the table is because at the moment artificial intelligence is not sufficiently regulated in the way that, let’s say, biotechnology is. The ethics codes governing artificial intelligence research are not nearly as restrictive as in biotechnology. And so as a result it is possible to do all kinds of crazy things, at least in principle, in artificial intelligence that you would not be able to do in biology, at least legally.

And this is why the existential risk thing has orig— Because on the one hand, Google and the rest of these companies are investing a lot of money in trying to advance artificial intelligence and to really, you know…take various step changes of all sorts to get closer to the Singularity or whatever. But at the same time they also realize: oh my god, we’re letting the genie out of the box and we don’t have any way of regulating this. And hence we have all these institutes all over the world devoted to existential risk. Because just in case these guys at Google do come up with something, who’s going to save the world from it?

Mason: That’s what fascinates me. It’s interesting to watch transhumanists who are kind of very pro the idea of robots and merging with machines being also very anti-AI.

Fuller: Well, this is the Nick Bostrom thing, you know. I mean, among those of us who are involved in the transhumanist community there was a little email exchange a few months ago, when Nick Bostrom was being brought over to the United Nations, and Davos, and all these other places, to talk about existential risk. And so there was somebody who was arguing in the transhumanist community, “Isn’t this great now? Transhumanism is finally getting the kind of visibility it deserves. Because look at Nick Bostrom, he’s like traveling around the world talking to all these big deals.”

And I pointed out to this person that look, yeah, he’s big. But why’s he being brought out to talk about all this stuff? Because he’s talking about the potential risks and harms of it. Not because he’s telling you to invest in it. Come on, guys, get real. This is not the message transhumanism wants to send, namely, “Hey guys, we’re here and we’re your biggest nightmare.” No! But this is what Nick Bostrom is doing. And this is kind of the way in which people are coming to know about transhumanism in the popular media, through this concept of existential risk.

Mason: Well, it goes back to our first point of this discussion; that’s probably where the fear of a lot of these technologies comes from. Because it only takes one thing, and there’s an expectation, whether it’s because of the aesthetics of science fiction, where one thing goes rogue! and then suddenly the whole thing collapses on itself.

Fuller: Sure! We’ve got some movies. We’ve got all kinds of movies about this already. So it’s already feeding into a kind of cultural imaginary. But to my mind this is not doing transhumanism any favors at all. Because it’s making people fear this stuff.

Mason: So how does transhumanism fix the PR problem?

Fuller: Well, this is where I think transhuman— You know, I believe this is why I always talk about the proactionary stuff. And I talk about this need that we’re going to have to think about self-sacrifice. We have to get sober about this. We have to say yes, we really do want this stuff to happen. But it’s not going to be some straight-arrow way of getting there. It’s going to take a lot of blood, sweat, and tears. And there seems to be a lot of interest in doing it. There’s a lot of money backing it. And all of that is cool. But we have to see it in realistic terms.

And so yes, there are risks. They’re risks we should embrace. But we should also provide adequate support, adequate compensation, all the rest of it. What I think we cannot do is deny them. Deny them.

See, at the moment, I think we are in a kind of polarized position with regard to transhumanism. On the one hand, you’ve got guys like Nick Bostrom running around basically scaring people, right. And I know he doesn’t mean to do that. But certainly the kinds of things that people are interested in having him talk about move in that direction. They wouldn’t be interviewing him if they thought AI was so cool. They’re interviewing him because of superintelligence and its paperclip-collecting habits, right? This is why they’re interviewing him.

So we’ve got that kind of scaremongering side of transhumanism, which is getting a lot of public visibility. But then on the part of a lot of rank-and-file transhumanists—the kind of normal transhumanists—they’re just in denial that there’s any risk at all. They just think the only problem is lack of freedom. This is the kind of mindless libertarian response that you get from transhumanists sometimes. And what we need is a kind of grounded position that basically says: yeah, this is risky shit, but we ought to be taking the risks.

Mason: Thank you to Professor Steve Fuller for sharing his thoughts on how we might navigate an increasingly technologized future.

If you like what you’ve heard, then you can subscribe for our latest episode. Or follow us on Twitter, Facebook or Instagram: @FuturesPodcast.

More episodes, transcripts and show notes can be found at futurespodcast.net.

Thank you for listening to the Futures Podcast.