Micah Saul: This project is built on a hypothesis. There are moments in history when the status quo fails. Political systems prove insufficient, religious ideas unsatisfactory, social structures intolerable. These are moments of crisis.

Aengus Anderson: During some of these moments, great minds have entered into conversation and torn apart inherited ideas, dethroning truths, combining old thoughts, and creating new ideas. They’ve shaped the norms of future generations.

Saul: Every era has its issues, but do ours warrant The Conversation? If they do, is it happening?

Anderson: We’ll be exploring these sorts of questions through conversations with a cross-section of American thinkers, people who are critiquing some aspect of normality and offering an alternative vision of the future. People who might be having The Conversation.

Saul: Like a real conversation, this project is going to be subjective. It will frequently change directions, connect unexpected ideas, and wander between the tangible and the abstract. It will leave us with far more questions than answers, because after all, nobody has a monopoly on dreaming about the future.

Anderson: I’m Aengus Anderson.

Saul: And I’m Micah Saul. And you’re listening to The Conversation.


Anderson: Did you get a chance to listen to the conversation with Reverend Fife?

Saul: I did. I’m impressed. I can’t imagine a better place to start this.

Anderson: For all that I was completely nervous about going into this and doing the first interview, because this is such a sort of watery project… It’s like, what’s the first question to ask?

Saul: Right.

Anderson: But it felt like once we got warmed up, I was pretty happy with it.

Saul: You guys got to the really important, big questions really quickly. You know, is the nation-state obsolete?

Anderson: Like, any one of these big fundamental ideas is so interesting, because you’re going along talking about why this immigration policy feels wrong in a lot of ways, and intellectually I can totally go, “Yeah!” But I kind of think, what does saying yeah mean there? That’s when I sort of realized how big that idea is. I can’t imagine what that world looks like without the nation-state. I don’t know anything else.

Saul: So let’s talk about tomorrow now.

Anderson: Yeah. And this is our first moment to really figure out how we bridge these conversations.

Saul: I think this is what’s going to make the project interesting, and I think we’re going to have to be learning as we go.

Anderson: Yes. And for the people who are listening to us, I hope they survive this transition as we sort of get our sea legs and figure out how to make this all work.

Saul: Even more importantly than that, I hope they tell us where we’re screwing up.

Anderson: Yes. Absolutely. But in gentle terms.

Saul: So, tomorrow. You’re going to be meeting with Max More at the Alcor Life Extension Foundation.

Anderson: Yeah. So do you want to tell the people who aren’t familiar with Alcor what they are?

Saul: So this is, you die, you have your body sent down, and it gets cryogenically frozen in these really cool-looking stainless steel tubes full of liquid nitrogen. Either your body or just your head is frozen, with the intent that at some point in the future the technology will be there to either bring you back or to download your brain into a computer, or something along those lines. This is a way to preserve your consciousness.

Anderson: Right away, they’re doing something that is fundamentally very different, and is also based on technology that doesn’t yet exist. It’s banking on a certain level of development in the future. But the ethical ramifications of what they’re doing, and more than that, of what they’re hoping for, are really big. And you can tell on their web site that they have had to deal with a lot of people not liking them. I mean, they’ve really thought about their position, and they frame it in good ethical arguments that are very persuasive. And this is where I think it’s going to be incredible to talk to Dr. More tomorrow, because his background is actually in philosophy, among other things.

Saul: He actually can claim credit for coining the phrase “transhumanism,” I believe in the early 90s. There was an essay in which he sort of coined that phrase, at least in the way that it’s now understood.

Anderson: So fingers crossed. Tomorrow should be good. Hopefully I don’t botch the conversation, but I think Dr. More’s going to be amazing and will probably be very interesting despite all of my incompetent question-asking.

Saul: I guess we should probably just put in a quick plug for the Kickstarter thing again. If this project seems interesting to you, it would be awesome if you could kick down a few bucks to help us get this happening.

Anderson: Yeah. So let’s see where this goes.

Saul: Sounds good.

Anderson: Very cool.

Saul: Alright. Take care, sir.

Anderson: Alright. Adios.

Saul: Vaya con carne.


Max More: I’ve been a member of the Alcor Life Extension Foundation for about 26 years, but I became CEO and President just about a year and a quarter ago. I’ve got a long history with life extension and transhumanism and cryonics, really getting interested in the idea of drastically extending human life even before I finished growing. I was still in my mid-teens when I got very serious about this idea. Really, its roots go even further back than that, because I’ve always been fascinated with overcoming limits. When I was 5 years old I watched the Apollo moon landing, and every one of them after that; when people lost interest, I was still watching. So this idea of getting off the planet, beating the gravity well, extending the human lifespan. I’m also interested in increasing human intelligence, being able to solve harder problems and think better. So there’s all this kind of common theme of overcoming limits.

So life extension and cryonics is a natural part of that. My main goal is not to die in the first place. I hope to keep living, hopefully long enough that science will have solved the aging problem and I won’t have to die. But since I don’t know how long that’s going to take, cryonics is the real backup policy for me. It’s like real life insurance in the true sense of the term. So if I don’t make it, it at least gives me a chance of coming back again in the future.

Anderson: What is transhumanism? I realized we were talking about that, and people listening may not know.

More: Transhumanism is essentially the idea that it is both possible and desirable to use advancing technologies to fundamentally alter the human condition for the better. Humanism had the same fundamental values: a belief in the possibility of progress, that by our own efforts, regardless of whether there’s a higher power or not, we could make the world better, and the championing of science and reason to do that. It’s a view that also requires goodwill. It requires overlooking artificial distinctions among people and focusing on our common humanity.

So transhumanism has incorporated that and built on that; it just takes it further with the idea that we have new technological tools emerging that can do that on a more fundamental level and alter the human condition itself. So that’s where the transhumanism comes in. That really is the idea that the human condition is not a fixed point. It’s something we can alter, and we’re now beginning to decode our genome and understand our neurology better.

All those things that have been mysteries in the past, things we couldn’t change: we are now just at the beginning point of making modifications to them. We can extend our lifespan, we can maybe improve the function of our brain, solve a lot of the problems that evolutionary design has brought along. So it’s really the idea that we’re at a pretty unique point in history. We are now just beginning to take charge of our own evolution and decide on our own constitution.

Anderson: So this is a historically unique moment.

More: Yeah, and that moment of course is smeared over several decades—

Anderson: Right.

More: But historically speaking, it’s a moment.

Anderson: Yeah. It’s just a point on the bigger scale. I always try to look at the present and say, “What is something we want to improve about the present?” before moving on to the question of, “How do we really want the future to look?” It’s funny. It sounds like such a fundamental thing, the idea of death. Is that the issue of the present, the thing that you are most interested in addressing?

More: Yes. I think overcoming aging and death to me is the central issue. Because if we solve that one, we have time to solve all the others.

Anderson: Okay, so that’s more pressing than changing oneself in terms of intelligence or…

More: I think they all matter, and they’re not necessarily exclusive. I think these may need to go together. But yeah, extending life seems to me a paramount issue, otherwise people are going to be lost forever. It’s in a sense a serial holocaust. One by one, millions of people are dying every year, and that’s pretty appalling. I think people will look back from the future and say it was just horrifying that people weren’t taking this problem more seriously.

I think essen­tial­ly what we are is psy­cho­log­i­cal con­ti­nu­ity. I’m not real­ly my body. I mean, I have to have a body right now to exist because my per­son­al­i­ty resides in my brain essen­tial­ly, and that requires a body. But it’s not the par­tic­u­lar atoms I’m made of because those get changed over over time, any­way. So I’m not my atoms. I’m real­ly the way they’re struc­tured. But even that’s not fun­da­men­tal­ly true, because you can do var­i­ous thought exper­i­ments. What if I tried replac­ing my neu­rons with syn­thet­ic neu­rons, which is already start­ing to hap­pen right now. Then grad­u­al­ly you might end up with a brain that’s entire­ly syn­thet­ic, but the syn­thet­ic neu­rons do the same job as the bio­log­i­cal ones but they’re made out of dif­fer­ent material.

So I’m not even essentially biological. It’s really the processing that goes on that supports my memory, my personality, my values and so on. I think that’s the core of who I am. And so that potentially can survive changing bodies. It could survive…possibly I won’t even be revived in a body through cryonics. Maybe my brain will be scanned and a new copy will be made, or a virtual self will be created. And I would consider that to be survival.

Anderson: That seems like it ultimately rests on a worldview that’s very materialistic. Yesterday I was talking to a reverend, and he was really excited about developments in technology. But for him the questions of how technology will be used are ultimately settled in a moral realm. And I know you can address that philosophically. You can also address that theologically. But for him, he has a point where he can…the argument stops when you get to this point of there are theological values, and there is the idea of a soul. And so I’m wondering, without a soul, in a materialistic worldview, where do you get those values about how we use technology?

More: Okay. I do have a soul. Actually, I have two soles, but they’re on the bottom of my feet. Those are the only soles I believe in. The term “materialistic” I wouldn’t use, because actually in philosophy the term “physicalist” rather than “materialist” is preferred. Materialist of course has the other meaning that—

Anderson: Of consumption.

More: Yeah, of consumption, money, that kind of thing. Whereas my view certainly says nothing about lack of values. It’s completely compatible with having strong meaning in life and purposes and goals and values and morals. But it’s fundamentally a metaphysical view that says I see no reason to believe in supernatural entities, supernatural forces. I can’t prove there aren’t such things, but you really can’t prove a negative like that. But I don’t see any evidence for them. So I’m essentially a physical being, and if you destroy every copy of my physical self then I’m gone. I don’t see any reason to think there is a soul that goes somewhere else.

Values are extremely important when it comes to thinking about advanced technologies and where we’re headed. And certainly in the transhumanist movement, we do spend a lot of time not just cheering on technology, although that needs to be done because there are a lot of anti-technology people around, but we also do a lot of critical thinking about the kinds of technologies we’d like, how to guide the development of technologies so that they actually are beneficial rather than harmful.

Because obviously technology has harmful side-effects. Whenever we create something…the automobile being a classic example. It freed up a lot of people, allowed them to change their lives. But it kills an awful lot of people. So while I think in general technology’s a good thing, it’s an extension of human reason and creativity and productivity, that doesn’t mean that any technology and any use of technology is good. So certainly my view is that we want to use technology to improve our health, to improve our intelligence, to become better people, even to improve our emotions and the way we react. We’ve evolved a certain way. Our bodies and brains produce certain hormones and aggressive reactions and territorial behaviors, and we just naturally have this in-group/out-group response. Those are all things that potentially could be modified, and we may do that in the future very cautiously. But we may become better people, perhaps in a way that’s not really possible without technological intervention.

Anderson: So as we look forward and we look at maybe improving as a species, how do we decide which attributes are good or which attributes are bad, and what do we want to cultivate in ourselves?

More: Well, that’s a very difficult question. It’s a difficult question to answer. I think the fundamental answer is that we each have to think about that very carefully and make our own decisions. And to me it’s critical that nobody make those decisions for us. If you go back to the early 20th century and move up through the century, you see a lot of technocratic people, starting with people like H.G. Wells, who had this view that the scientists should be in charge, they should make the decisions for everybody, they should decide how society is run. And you see even in the United States, eugenics movements were basically some elite group saying what kinds of people there should be. I’m fundamentally opposed to that approach. My approach is that it’s good to create these options, but then you have to let people choose which of those options they want.

And that’s very difficult. There are some very tough questions. There’s the example of some people in the blind community who actually want to have children who are blind, who would deliberately create blind children when they didn’t have to. So that raises a very difficult question. Is that something where we could step in and say, “You’re causing harm. We could prevent that”? Or is that something that should be their choice, as someone bringing new life into being? That’s a very tricky issue. I’m not sure what my answer is on that one.

Anderson: So it seems like there does have to be some sort of conversation about… Actually, when I was talking with the reverend yesterday, his characterization was an umpire. Someone who can sort of, on a global level, think about what are not permissible uses. I know there’s always that tension between individual liberty and collective good. How do we have the conversation about the umpire?

More: Well, I would hope it’s not actually a global umpire, because one reason we have the United States rather than the United State here is that we can actually have differences. If you don’t like the way one state operates, you can go to a different state and there are somewhat different rules.

Now again, there may have to be some kind of global rules. You can’t allow people, perhaps, to possess individual weapons that could destroy the entire planet very easily. That may be something you have to stop. But for the most part I think it’s good to allow diversity and have different communities which set their own rules to various degrees. So I think within those communities you’ve got to then decide what the rules are and how to enforce them and what your limits will be.

Anderson: I’ve read a bit about your thinking about the precautionary principle. Could you tell me a little bit more about that?

More: Yeah, I’ve created something called the proactionary principle as an alternative to the precautionary principle. The precautionary principle comes in a number of different forms, but essentially it says that before any new technology or process is allowed, you must be able to prove that it’s safe. Now, to me that’s kind of an insane requirement. It’s an impossible requirement.

Imagine applying that to fire, the first time we had fire. Could fire cause problems? Well, yes. You could burn your hand, you could burn your house down, you could have big problems. Okay, so no fire. You could go through all the major technological advances in history and show the same thing. So basically it’s a recipe for preventing technologies, and as such its proponents really use it selectively, because they don’t want to do it with everything but they want to be able to decide which technologies are okay. So if they don’t like genetic engineering they’re going to say this fails the test, but other things they do like they’re going to allow. So to me it’s very arbitrary, and really allows enemies of various technologies to claim a principled way of opposing them that actually is really quite arbitrary.

So the proactionary principle I developed is an alternative which is a lot more objective and balanced, and basically consists of ten sub-principles which require you to think objectively about the consequences: not just look for the possible downsides, but also look for the benefits and balance them. To use the best available rational methods that we know of instead of relying on intuition and public fears about what might happen; use the best critical and creative methods.

Anderson: Can it be that with newer technologies, because they are more and more powerful and have greater impact on us, the decisions to use them are perhaps not in everybody’s hands? Are we getting to a point where the precautionary principle becomes more sensible, because a small group can make a technological decision with large ramifications that the people who have to deal with them maybe did not want?

More: I don’t think the precautionary principle is ever a good decision rule, because it’s so arbitrary and is open to manipulation and to emotional thinking. A separate issue is who makes the decisions? I mean, you can decide whether it’s going to be everybody as a whole, which is not really feasible, or certain government groups or pressure groups, international policymakers. Whatever the level is, they get to choose between the precautionary principle, the proactionary principle, or something else. So it’s not really a matter of who’s deciding, it’s a matter of which decision rules they’re using, and I think something like the proactionary principle structures people’s thinking in a way that is more likely to lead to good outcomes.

So who makes the decisions is a whole separate thing, and I’m generally in favor of maximum input, but you also have to be careful that a lot of people expressing opinions may know nothing at all about the technology. So it’s really not realistic to say everybody should have an equal say. I think everybody should have a say, but you do need some kind of way of putting those opinions together and actually weighing up the likely truth. And that’s a very tough thing to do.

Anderson: With a project like this, that’s something I’m really interested in, because everybody has to live in whatever future we’re creating. I mean, this sounds sort of like you think some people should have more of a say in the future because they are more informed about the technological choices we’ll be making.

More: Well, I think they will tend to. People who are more informed will tend to be more persuasive than those who are not informed. But if it’s at the level of simply voting in a democracy, that’s kind of scary, because everybody has an opinion, everybody gets one vote. That doesn’t really lead to rational outcomes. Now, if we could somehow encourage politicians to base decisions not just on political popularity but on some more structured process, you might get better outcomes.

Anderson: You mentioned rationality, the idea that we can make more rational decisions, or maybe that, say, one person one vote will not lead to rational outcomes. But is there something to be said for the irrational? If people want the irrational, say they want to govern themselves badly or make decisions that honestly seem against their best interest, to what extent should we seek a future in which society can make those irrational, maybe self-destructive, decisions?

More: Well, I wouldn’t want to be in that society, but I’m quite happy if people want to make really irrational decisions; if they want to go off and form their own community, they’re welcome to do so, as long as they don’t start sending bombs back my way or something like that.

But sure, I’m all in favor of that kind of diversity, and there are already plenty of communities that I think are quite crazy, based on crazy ideas, and I’m not going to interfere with their way of living. But if it’s something they’re going to impose on me, then yeah, that’s a problem. In a real philosophical sense, I don’t think there really is any place for the irrational. But I have to qualify that by saying that doesn’t mean everything has to be rational, because they’re not exclusive. There are also things that are arational, or non-rational, where it’s really a matter of taste or…where there’s no real objective standard. If I ask you what your favorite color is and you say, “Oh, blue,” and I say, “Wrong!” Well, that doesn’t make any sense, right? It’s just purely a preference.

But when there’s something that you can actually test, when someone says, “This energy source will be less expensive,” or, “This vaccine will produce more benefit than harm,” those are things that you can actually test objectively.

Anderson: But there are moral assumptions beneath them.

More: Sure, yeah.

Anderson: So reason is a tool leading you towards an idea of the good.

More: It’s a way of testing your idea of the good. I don’t think reason can generate the idea of the good. I think we have to start with what we want, much of which is completely non-rational; it’s just based in the way we’ve evolved and our background. Reason comes in by saying, okay, given that I have this desire, does it make sense? Let me ask some questions about it. Let me consider alternative possibilities. Let me ask what kind of factual assumptions might influence my belief. So reason could come in there. It can kind of test our beliefs. But you can’t just start from reason and decide what values are rational. I don’t think that’s possible.

Anderson: And that’s a really intriguing thing, the idea that we’re using reason as a tool to test how to achieve a goal that may actually just be sort of non-rational…

More: Yeah. Like wanting to live, that’s non-rational. I can’t give you some kind of deductive argument that you must want to live. Either you do or you don’t.

Anderson: Is that the fundamental desire guiding your vision of the future?

More: That’s hard to say, because in some sense yes, but I’m not sure that that’s a desire that you can take on its own.

Anderson: Right.

More: It has to go along with other things. Would I want to live under any circumstances? No, definitely not. If I thought the rest of my life, for however long I was going to be, was going to be agony and pain and misery and inability to do anything productive or creative or enjoy relationships, then no. I would see no point.

So it’s got to be that I want to live because I see a life that has the possibility of joy and pleasure and productivity and creativity and good relationships and learning and improving.

Anderson: Okay, so that’s sort of the good. Okay.

I know I’m going to be talking to some deep ecologists down the line in this project, and I imagine that they would ask what we’ve lost in terms of the natural world, which of course has always been changed by us as long as we have been in it. But is there some intrinsic value to a relatively unmodified natural system? Can that confer meaning in some way?

More: I don’t think so. I don’t know what an intrinsic meaning is. I think meaning is only relative to conscious beings, and so it has meaning, but only in the sense that we choose to bestow meaning upon it or find meaning in it.

Anderson: I guess I’m thinking, because we were talking about wanting to live, that being a subjective, arational desire. I’m thinking maybe here’s a deep ecologist who has a subjective arational desire to somehow exist in this sort of holistic ecosystem that is relatively unchanged by man.

More: Again, that kind of thing to me is a personal choice. I’m a member of the Nature Conservancy. I actually do place a value on having large areas of undisturbed wilderness. I like that. I don’t think somebody else has to value that themselves, but it’s good that we have an organization that doesn’t force you to pay for it through your taxes but actually goes out and solicits money and buys up areas of land and protects them. I like that. I like to just know they’re there, and perhaps occasionally go visit and go hike and enjoy nature. So it’s not that I see there’s an intrinsic value there, it’s just something that I value, and quite a few other people value, and so we choose to support it.

Anderson: Okay.

More: Fundamentally, I don’t see that there’s a value in the natural state as it is.

Anderson: As it is.

If people have changed themselves in some way, do they become different as people? And do they apply that same attitude towards people who haven’t changed, in the same way that we maybe conserve nature when we enjoy it but aren’t too worried about it? When I think about the paranoia that I’ve encountered when I’ve read about futurist ideas, it seems like there’s a lot of worry about that.

More: Oh, I guess you’re thinking of the kind of worry that a new species will emerge and look down upon what we left behind [crosstalk]

Anderson: Or maybe it’s even not quite that dramatic, but say we have a higher class of people who have greater intellect and greater ability to maybe manage and control society, and there actually is a real difference.

More: Yeah, that’s quite a common theme. I know there’s one biologist who wrote a book where he really developed that theme in detail, where some people genetically engineer their children over a couple of generations, and society kind of divides into two quite different groups. I tend to think that’s not so likely to happen. There might be some transitional issues there if people who are wealthy or more educated are the first to use these new technologies and they start off being expensive.

But I think, just as with other technologies, if this follows the same trends we’ll tend to find people will catch up pretty quickly. It’s like with mobile phones. You could’ve said, “Well, we really shouldn’t let people have mobile phones, because the wealthy guys are going to have them first and they’re going to have all these advantages in terms of communication, and other people will be screwed.”

But what happens is, in a rather short period of time we go from a very few people carrying these suitcase-sized cell phones to everybody; it doesn’t matter how poor they are. You can go to the poorest parts of the city and you see people carrying cell phones. Maybe by actually encouraging the acceleration of that development, you can spread that technology. And I would expect and hope that advances in life extension and intelligence increase will go the same way.

Anderson: There’s sort of an economic theme that we haven’t really talked about yet that seems to weave a lot of this stuff together in terms of personal choice. And it seems to be very free market. I’m thinking of the cell phone example, specifically. That’s something that telescopes out very quickly across the population because of the market incentive to have everyone have some kind of phone. Do you think that’s possible with other technologies, maybe ones that are more lucrative to keep within groups? It makes good commercial sense to give everyone a cell phone. Does it make good commercial sense to offer the sort of technology that extends life to everyone?

More: I think it clearly does. I think a population of people who live longer and healthier and are smarter and more productive is clearly going to raise everybody’s level of wealth. People who are smarter are going to be more fun to interact with. If you’ve made yourself super smart, you don’t really want to spend a lot of time talking to someone who seems very dull by comparison, you know. If you can say, “Here. Here’s funding for your own augmentation,” I think a lot of organizations will subsidize those, just as we’ve had people like Bill Gates spending many millions of dollars, billions of dollars, to bring clean water to different parts of the world, which will improve their economies just because people won’t be dying so early and young. I think a lot of people will recognize that kind of almost Nietzschean approach to benevolence, if you like. Nietzsche basically said that the powerful person who’s overflowing with power will give to other people not out of obligation but because they feel they ought to, in some sense, because they can.

Anderson: Are you optimistic about the future?

More: Yes. My view is, if you look at the long run of human history, things overall tend to get better. It’s very popular and fashionable to complain about how awful the world is and how it’s going to hell. I’d like to take people who do that and just put them back in time a hundred years, two hundred years, a thousand years. At any point in the past, they’re going to find that they wish they could come back to the present. Even a simple thing like the invention of anesthesia I think has made a huge difference in life. It’s hard to imagine living without that now, but that was everybody’s experience. A quarter of women dying in childbearing: that was a common experience. It’s pretty hard to imagine how horrible the past was, frankly.

So yeah, we have these irritating things. We have computers that break down and drive us crazy and waste our time. But overall we’re living longer, we’re healthier, we’re less violent. In fact there have been a couple of interesting books out recently that look at that in detail. The level of violence in human society has gone down continuously. I think many measures of human well-being are improving.

Even things like pollution. People always pick on certain areas and say, “Oh, it’s getting worse.” But overall, if you actually look systematically at the trends, things are getting better. Partly because as we get wealthier and our technology improves, we can afford to make it better. We can afford to have cleaner air. When you’re poor and starving and just trying to get by, you’re not going to care about cleaning up the air or pollution. That’s not your top priority.

So I think the better off we get, the more we take care of our environment. The longer we live, hopefully the more foresight we develop. And I think if we start making some fundamental changes in the human condition that make us more intelligent and more refined in our emotions, then things can get better still.

If I were to worry about the future, my main concern is not that things will get worse on their own; it’s that they could if we do stupid things. We have almost had some pretty big disasters in the past, with the nuclear complex and so on, which we’ve managed to avoid. It could be that we’re going to invent some horrible pathogen that’s going to wipe out a large part of the species. One big concern that’s getting a lot of attention right now is maybe we’ll develop a superintelligent artificial intelligence that will just kind of take over, and in the crude Terminator scenario just wipe us all out, or just take control and make all our decisions for us in a way that we may not want. I think that kind of a thing is a real concern. We have to be quite careful about that.

Anderson: That’s interesting, because I always associate those sorts of criticisms with people who are kind of having a knee-jerk reaction mostly based on watching The Terminator.

More: Yeah. I think a lot of the scenarios are highly unlikely, but—

Anderson: But you do take those seriously.

More: Yeah. It’s something I have to watch out for. So we look at how we design these artificial intelligences and try to make sure that they actually are going to be benevolent. “Friendly” is kind of the common term being used.

Anderson: So, closing in here on the idea of the Conversation: we’ve got some amazing ideas on the table about technology and the future. Do you think we’re talking about these ideas adequately now?

More: Not really, no. I think it’s starting to improve, but for the most part when people talk about future stuff it’s generally in terms of fiction. It’s really what some science fiction movie has said. Which is unfortunate, because those tend to be very dystopian. They’re obviously written to be dramatic, not to be realistic. So people tend to get a very fearful view of the future. I think we need a lot more properly informed, rational discussions of future possibilities, both the possible benefits and the dangers. And we’re beginning to see more of that. Back in the late 80s when I started Extropy magazine, which was really sort of the first comprehensive publication about transhumanist futures, that was very much all about the positive possibilities, because those weren’t being emphasized so much. But that’s graduated to a more critical conversation. So that’s happening a lot more. I think people tend to be too polarized, still. They’re still too for or against.

Anderson: Do you think we’ve specialized so much that it’s actually impossible to have that sort of common conversation?

More: It’s pretty tough, and I think one problem is that even if you really do identify an expert, the trouble is they’re going to be an expert in one specific area. And almost all the interesting questions we can discuss are never limited to that one narrow area. I mean, even a question like “What kind of energy source should we be favoring right now?” Well, you may be an expert in physics. You might better know about the properties of solar panels, but do you know your economics? Do you know international affairs and strategic considerations? Do you really have a good idea of how to think about how things change in the future, which requires a different methodology? So all the big interesting questions really require a multi-disciplinary focus, and most people don’t have that. And the more expert they are in one area, the less time they may have to be well-informed in others.

So I think, rather than finding just the right people, we actually mostly need to focus on the process. Even something as simple as institutionalizing the devil’s advocate process would leave us a lot better off. But in almost every government decision, every corporate decision, every personal, individual, family decision, generally we think we know what we want, we argue for it, and then we go for it. How often do we actually deliberately invite someone to make their best case against it? And to encourage that, to honor the person who does that, separating our personalities from our ideas. That’s a very simple one.

Anderson: So I’m thinking about our big hypothetical round table here about the future. How do we bring in groups like transhumanists, or Reverend Fife, who I was speaking to yesterday, who’s really networked in with faith communities? It seems like both have metaphysically different ways of looking at the world, and different sorts of value schemes. Both are thinking about the future in different ways. How do we broker a conversation there, knowing there are a bunch of other communities that are similarly off in different directions? Do you think there can be common ground, or is this one of these things where there’s something so fundamentally different that it’s going to be very difficult to bridge?

More: I think that you can never be too sure until you work at it. You may just assume from the beginning that there isn’t any common ground, and sometimes there won’t be. I mean, it’s very hard for me to find any common ground with any kind of fundamentalist. But it’s not always clear who’s a fundamentalist. They may not use that term, they may not think of themselves that way, but you may, after a while of interacting and so on, realize that they truly are a fundamentalist, that there are things they just absolutely will not question.

So with someone who’s truly a fundamentalist in the sense of, say, Christian or Islamic fundamentalism, it’s going to be very, very hard for me to have any kind of useful, productive conversation about anything of interest, because their answer’s always going to be, “Well, let’s see what it says in the holy book.” And that’s just not the way I’m going to work. I want to say, “Well, let’s go look at reality. Let’s devise a test and see what reality says.”

So that’s a pretty fundamental difference. But hopefully that won’t usually be the case. Usually, while we seem to be radically different, if we work at it a little bit we can find some kind of commonality, some shared assumptions, and then clarify where we do disagree and then try to work on those and see if there’s not some way of resolving those differences.


Aengus Anderson: So that was the conversation I had today.

Micah Saul: Wow. I envy you. That sounded just fantastic.

Anderson: It was an amazing, amazing talk. It kind of started with Alcor and then suddenly we were into a lot of philosophy.

Saul: Yeah. That was definitely something I was hoping to get from him. It could also have been an interesting conversation just talking specifically about Alcor, but I think both of you really quickly got to the deeper philosophical questions that in many ways made the specifics a little unnecessary to even talk about.

Anderson: That’s kinda what I was actually hoping. There’s been so much written about Alcor, as with any thinker who’s doing something that really pushes boundaries like this.

Saul: Right.

Anderson: There’s a lot of circus around it. For me that wasn’t the conversation to be having.

Saul: No, exactly.

Anderson: I wanted to get into the implications of the ideas. So I was trying to steer clear of the specifics. But in terms of things that worked and things that didn’t, there are a couple of things that struck me. I really felt like Dr. More had a libertarian foundation to a lot of his stuff, and a lot of the personal empowerment of choice. And as we were going through the interview, and actually as I was riding home from it, I was thinking: we really needed to talk more about community.

Saul: Absolutely. That’s going to be a big theme running through all of these: the relationship between the individual and community. And especially when we’re talking to the more individualist, libertarian thinkers, community is something that we need to push them on, in the same way that I think when we’re talking to the more communalist thinkers we should be pushing them on individual rights.

Anderson: Yeah. If you’re kind of mapping which side the scale tips towards, generally our society leans more towards the libertarian right now than the communal.

Saul: I think so.

Anderson: I want to ask more hard questions about the value of community. When we were talking about the past, I kind of regret not trying to find out whether the past had some kind of community value. Sure, it was materially much worse, with shorter lives and lower quality of life, but maybe there is something communal there, and maybe that’s something we can talk to other interviewees about later.

Saul: I agree. A couple things jumped out at me. One of them he actually corrected you on, which I thought was useful. In our conversations and our planning for this, we’ve sort of been using the word “materialist” and stripping away a lot of the baggage that word carries when you and I are talking to each other, but the semantics of some of these words still matter—

Anderson: Yeah. I had that sort of embarrassed moment when I said materialist and he was like, “Well, you know, in philosophy we don’t quite use that word,” and here I am having these visions of shopping. And I’m like, yeah, materialist does conjure to mind shopping. So physicalist, I think that makes more sense.

Saul: The other one I was thinking about is when you were talking about the intrinsic value of nature and he was pushing back against the concept. The notion of intrinsic value is very often tied to a holdover from a religious way of thinking, because Western culture is predominantly Christian culture. Those things still have weight. But if you’re talking with atheists, intrinsic value is a loaded concept, and we need to come up with a better way to talk about it. Because there is a way to talk about it with someone who doesn’t believe in any sort of intrinsic value in a spiritual sense.

Anderson: Right. Let’s definitely think more about those, and hopefully as we get these things posted our participants online will help us think through them as well.

Saul: I definitely like that idea.

Anderson: So onwards and upwards. Next we’ll be doing Peter Warren.

That was Dr. Max More, recorded May 3, 2012 at the Alcor Life Extension Foundation office in Scottsdale, Arizona.

Saul: This is The Conversation. You can find us on Twitter at @aengusanderson and on the web at findtheconversation.com.

Anderson: So thanks for listening. I’m Aengus Anderson.

Saul: And I’m Micah Saul.

Further Reference

This interview at the Conversation web site, with project notes, comments, and taxonomic organization specific to The Conversation.