Micah Saul: This project is built on a hypothesis. There are moments in history when the status quo fails. Political systems prove insufficient, religious ideas unsatisfactory, social structures intolerable. These are moments of crisis.
Aengus Anderson: During some of these moments, great minds have entered into conversation and torn apart inherited ideas, dethroning truths, combining old thoughts, and creating new ideas. They’ve shaped the norms of future generations.
Saul: Every era has its issues, but do ours warrant The Conversation? If they do, is it happening?
Anderson: We’ll be exploring these sorts of questions through conversations with a cross-section of American thinkers, people who are critiquing some aspect of normality and offering an alternative vision of the future. People who might be having The Conversation.
Saul: Like a real conversation, this project is going to be subjective. It will frequently change directions, connect unexpected ideas, and wander between the tangible and the abstract. It will leave us with far more questions than answers because after all, nobody has a monopoly on dreaming about the future.
Anderson: I’m Aengus Anderson.
Saul: And I’m Micah Saul. And you’re listening to The Conversation.
Anderson: Did you get a chance to listen to the conversation with Reverend Fife?
Saul: I did. I’m impressed. I can’t imagine a better place to start this.
Anderson: For all that I was completely nervous about going into this and doing the first interview, because this is such a sort of watery project… It’s like, what’s the first question to ask?
Saul: Right.
Anderson: But it felt like once we got warmed up, I was pretty happy with it.
Saul: You guys got to the really important, big questions really quickly. You know, is the nation-state obsolete?
Anderson: Like, any one of these big fundamental ideas is so interesting because you’re kind of going along and you’re talking about it, you’re talking about why this immigration policy feels wrong in a lot of ways. Intellectually, I can totally go, “Yeah!” But then I kind of think, what does saying yeah mean there? That’s when I sort of realized how big that idea is. I can’t imagine what that world looks like without the nation-state. I don’t know anything else.
Saul: So let’s talk about tomorrow now.
Anderson: Yeah. And this is our first moment to really figure out how we bridge these conversations.
Saul: I think this is what’s going to make the project interesting, and I think we’re going to have to be learning as we go.
Anderson: Yes. And for the people who are listening to us, I hope they survive this transition as we sort of get our sea legs and figure out how to make this all work.
Saul: Even more importantly than that, I hope they tell us where we’re screwing up.
Anderson: Yes. Absolutely. But in gentle terms.
Saul: So, tomorrow. You’re going to be meeting with Max More at the Alcor Life Extension Foundation.
Anderson: Yeah. So do you want to tell the people who aren’t familiar with Alcor what they are?
Saul: So this is, you die, you have your body sent down, and it gets cryogenically frozen in these really cool-looking stainless steel tubes full of liquid nitrogen. The idea being either your body or just your head is frozen with the intent that at some point in the future the technology will be there to either bring you back or to download your brain into a computer, or something along those lines. This is the way to preserve your consciousness.
Anderson: Right away, they’re doing something that is fundamentally very different, and is also based on technology that doesn’t yet exist. It’s banking on a certain level of development in the future. But the ethical ramifications of what they’re doing, and more than what they’re doing but sort of what they’re hoping for, are really big. And you can tell on their web site that they have had to deal with a lot of people not liking them. I mean, they’ve really thought about their position, and they frame it in good ethical arguments that are very persuasive. And this is where I think it’s going to be incredible to talk to Dr. More tomorrow, because his background is actually in philosophy, among other things.
Saul: He actually can claim credit for coining the term transhumanism, I believe in the early ’90s. There was an essay in which he sort of coined that term, at least in the way that it’s now understood.
Anderson: So fingers crossed. Tomorrow should be good. Hopefully I don’t botch the conversation, but I think Dr. More’s going to be amazing and will probably be very interesting despite all of my incompetent question-asking.
Saul: I guess we should probably just put in a quick plug for the Kickstarter thing again. If this project seems interesting to you, it would be awesome if you could kick down a few bucks to help us get this happening.
Anderson: Yeah. So let’s see where this goes.
Saul: Sounds good.
Anderson: Very cool.
Saul: Alright. Take care, sir.
Anderson: Alright. Adios.
Saul: Vaya con carne.
Max More: I’ve been a member of the Alcor Life Extension Foundation for about 26 years, but I became CEO and President just about a year and a quarter ago. I’ve got a long history with life extension and transhumanism and cryonics, really getting interested in the idea of drastically extending human life even before I finished growing. I was still in my mid-teens when I got very serious about this idea. Really, its roots go even further back than that, because I’ve always been fascinated with overcoming limits. When I was five years old I watched the Apollo moon landing, and every one after that; when people lost interest, I was still watching. So this idea of getting off the planet, beating the gravity well, extending the human lifespan. I’m also interested in increasing human intelligence, being able to solve harder problems and think better. So all of this has a common theme of overcoming limits.
So life extension and cryonics is a natural part of that. My main goal is not to die in the first place. I hope to keep living, hopefully long enough that science will have solved the aging problem and I won’t have to die. But since I don’t know how long that’s going to take, cryonics is the real backup policy for me. It’s like real life insurance in the true sense of the term. So if I don’t make it, it at least gives me a chance of coming back again in the future.
Anderson: What is transhumanism? I realized we were talking about that and people listening may not know.
More: Transhumanism is essentially the idea that it is both possible and desirable to use advancing technologies to fundamentally alter the human condition for the better. Humanism had the same fundamental values: a belief in the possibility of progress, that by our own efforts, regardless of whether there’s a higher power or not, we could make the world better, and a championing of science and reason as the means to do that. It’s a view that also requires goodwill. It requires overlooking artificial distinctions among people and focusing on our common humanity.
So transhumanism has incorporated and built on that; it just takes it further with the idea that new technological tools are emerging that can do that on a more fundamental level and alter the human condition itself. So that’s where the transhumanism comes in. That really is the idea that the human condition is not a fixed point. It’s something we can alter, and we’re now beginning to decode our genome and understand our neurology better.
All those things that’ve been mysteries in the past, things we couldn’t change, we are now just at the beginning point of making modifications to those. We can extend our lifespan, we can maybe improve the function of our brain, solve a lot of the problems that evolutionary design has brought along. So it’s really the idea that we’re at a pretty unique point in history. We are now just beginning to take charge of our own evolution and decide on our own constitution.
Anderson: So this is a historically unique moment.
More: Yeah, and that moment of course is smeared over several decades—
Anderson: Right.
More: But historically speaking, it’s a moment.
Anderson: Yeah. It’s just a point on the bigger scale. I always try to look at the present and say, “What is something we want to improve about the present?” before moving on to the question of, “How do we really want the future to look?” It’s funny. It sounds like such a fundamental thing, the idea of death. Is that the issue of the present, the thing that you are most interested in addressing?
More: Yes. I think overcoming aging and death to me is the central issue. Because if we solve that one, we have time to solve all the others.
Anderson: Okay, so that’s more pressing than changing oneself in terms of intelligence or…
More: I think they all matter, and they’re not necessarily exclusive. I think these may need to go together. But yeah, extending life seems to me a paramount issue, otherwise people are going to be lost forever. It’s in a sense a serial holocaust. One by one, millions of people are dying every year and that’s pretty appalling. I think people will look back from the future and say it was just horrifying that people weren’t taking this problem more seriously.
I think essentially what we are is psychological continuity. I’m not really my body. I mean, I have to have a body right now to exist because my personality resides in my brain essentially, and that requires a body. But it’s not the particular atoms I’m made of, because those get changed over time anyway. So I’m not my atoms. I’m really the way they’re structured. But even that’s not fundamentally true, because you can do various thought experiments. What if I tried replacing my neurons with synthetic neurons? That’s already starting to happen right now. Then gradually you might end up with a brain that’s entirely synthetic, where the synthetic neurons do the same job as the biological ones but are made out of different material.
So I’m not even essentially biological. It’s really the processing that goes on that supports my memory, my personality, my values and so on. I think that’s the core of who I am. And so that potentially can survive changing bodies. It could survive…possibly I won’t even be revived in a body through cryonics. Maybe my brain will be scanned and a new copy will be made or a virtual self will be created. And I would consider that to be survival.
Anderson: That seems like that ultimately rests on a worldview that’s very materialistic. Yesterday I was talking to a reverend and he was really excited about developments in technology. But for him the questions of how technology will be used are ultimately settled in a moral realm. And I know you can address that philosophically. You can also address that theologically. But for him, he has a point where he can…the argument stops when you get to this point of there are theological values, and there is the idea of a soul. And so I’m wondering, without a soul in a materialistic worldview, where do you get those values about how we use technology?
More: Okay. I do have a soul. Actually, I have two soles, but they’re on the bottom of my feet. Those are the only soles I believe in. The term materialistic I wouldn’t use because actually in philosophy the term “physicalist” rather than materialist is preferred. Materialist of course has the other meaning that—
Anderson: Of consumption.
More: Yeah, of consumption, money, that kind of thing. Whereas my view certainly says nothing about lack of values. It’s completely compatible with having strong meaning in life and purposes and goals and values and morals. But it’s fundamentally a metaphysical view that says I see no reason to believe in supernatural entities, supernatural forces. I can’t prove there aren’t such things, but you really can’t prove a negative like that. But I don’t see any evidence for them. So I’m essentially a physical being, and if you destroy every copy of my physical self then I’m gone. I don’t see any reason to think there is a soul that goes somewhere else.
Values are extremely important when it comes to thinking about advanced technologies and where we’re headed. And certainly in the transhumanist movement, we do spend a lot of time not just cheering on technology, although that needs to be done because there are a lot of anti-technology people around, but we also do a lot of critical thinking about the kinds of technologies we’d like, how to guide the development of technologies so that they actually are beneficial rather than harmful.
Because obviously technology has harmful side-effects. Whenever we create something…the automobile being a classic example. It freed up a lot of people, allowed them to change their lives. But it kills an awful lot of people. So while I think in general technology’s a good thing, it’s an extension of human reason and creativity and productivity, that doesn’t mean that any technology and any use of technology is good. So certainly my views are we want to use technology to improve our health, to improve our intelligence, to become better people, even to improve our emotions and the way we react. We’ve evolved a certain way. Our bodies and brains produce certain hormones and aggressive reactions and territorial behaviors, and we just naturally have this in-group/out-group response. Those are all things that potentially could be modified, and we may do that in that future very cautiously. But we may become better people, perhaps in a way that’s not really possible without technological intervention.
Anderson: So as we look forward and we look at maybe improving as a species, how do we decide which attributes are good or which attributes are bad, and what do we want to cultivate in ourselves?
More: Well, that’s a very difficult question to answer. I think the fundamental answer is that we each have to think about that very carefully and make our own decisions. And to me it’s critical that nobody make those decisions for us. If you go back to the early 20th century and go up through the century, you see a lot of technocratic people, starting with people like H.G. Wells, who had this view that the scientists should be in charge, that they should make the decisions for everybody and decide how society is run. And you see even in the United States, eugenics movements were basically some elite group deciding what kind of people there should be. I’m fundamentally opposed to that approach. My approach is that it’s good to create these options, but then you have to let people choose which of those options they want.
And that’s very difficult. There are some very tough questions. There’s the example of some people in the blind community who actually want to have children who are blind, who would deliberately create blind children when they didn’t have to. So that raises a very difficult question. Is that something where we could step in and say, “You’re causing harm. We could prevent that”? Or is that something that should be their choice, as someone bringing new life into being? That’s a very tricky issue. I’m not sure what my answer is on that one.
Anderson: So it seems like there does have to be some sort of conversation about that. When I was talking with the reverend yesterday, his characterization was an umpire, someone who can sort of think on a global level about which uses are not permissible. I know there’s always that tension between individual liberty and collective good. How do we have the conversation about the umpire?
More: Well, I would hope it’s not actually a global umpire, because one reason we have the United States rather than the United State here is that we can actually have differences. If you don’t like the way one state operates, you can go to a different state and there are somewhat different rules.
Now again, there may have to be some kind of global rules. You can’t allow people, perhaps, to possess individual weapons that could destroy the entire planet very easily. That may be something you have to stop. But for the most part I think it’s good to allow diversity and have different communities which set their own rules to various degrees. So I think within those communities you’ve got to then decide what the rules are and how to enforce them and what your limits will be.
Anderson: I’ve read a bit about your thinking about the precautionary principle. Could you tell me a little bit more about that?
More: Yeah, I’ve created something called the proactionary principle as an alternative to the precautionary principle. The precautionary principle comes in a number of different forms, but essentially it says that before any new technology or process is allowed, you must be able to prove that it’s safe. Now, to me that’s kind of an insane requirement. It’s an impossible requirement.
Imagine applying that to fire, the first time we had fire. Could fire cause problems? Well, yes. You could burn your hand, you could burn your house down, you could have big problems. Okay, so no fire. You could go through all the major technological advances in history and show the same thing. So basically it’s a recipe for preventing technologies, and as such its proponents really use it selectively, because they don’t want to do it with everything but they want to be able to decide which technologies are okay. So if they don’t like genetic engineering they’re going to say this fails the test, but other things they do like they’re going to allow. So to me it’s very arbitrary and really allows enemies of various technologies to claim a principled way of opposing them that actually is really quite arbitrary.
So the proactionary principle I developed is an alternative which is a lot more objective and balanced, and basically consists of ten sub-principles which require you to think objectively about the consequences: not just to look for the possible downsides, but also to look for the benefits and to balance them, and to use the best available rational methods that we know of instead of relying on intuition and public fears about what might happen, to use the best critical and creative methods.
Anderson: With newer technologies, because they’re more and more powerful and have a greater impact on us, could the decisions to use them end up not being in everybody’s hands? Are we getting to a point where the precautionary principle becomes more sensible, because a small group can make a technological decision with large ramifications that the people who have to deal with it maybe did not want?
More: I don’t think the precautionary principle is ever a good decision rule, because it’s so arbitrary and is open to manipulation and to emotional thinking. A separate issue is who makes the decisions? I mean, you can decide whether it’s going to be everybody as a whole, which is not really feasible, or certain government groups or pressure groups, international policymakers. Whatever the level is, they get to choose between the precautionary principle, the proactionary principle, or something else. So it’s not really a matter of who’s deciding, it’s a matter of which decision rules they’re using, and I think something like the proactionary principle structures people’s thinking in a way that is more likely to lead to good outcomes.
So who makes the decisions is a whole separate thing, and I’m generally in favor of maximum input but you also have to be careful that a lot of people expressing opinions may know nothing at all about the technology. So it’s really not realistic to say everybody should have an equal say. I think everybody should have a say, but you do need some kind of way of putting those opinions together and actually weighing up the likely truth. And that’s a very tough thing to do.
Anderson: With a project like this, that’s something I’m really interested in, because everybody has to live in whatever future we’re creating. I mean, this sounds sort of like you think some people should have more of a say in the future because they are more informed about the technological choices we’ll be making.
More: Well, I think they will tend to. People who are more informed will tend to be more persuasive than those who are not informed. But if it’s at the level of simply voting in a democracy, that’s kind of scary, because everybody has an opinion and everybody gets one vote. That doesn’t really lead to rational outcomes. Now, if we could somehow encourage politicians to base decisions not just on political popularity but on some more structured process, you might get better outcomes.
Anderson: You mentioned rationality, the idea that we can make more rational decisions or maybe that say, one person one vote will not lead to rational outcomes, but is there something to be said for the irrational? If people want the irrational, say they want to govern themselves badly or make decisions that honestly seem against their best interest, to what extent should we seek a future in which society can sort of make those irrational, maybe self-destructive, decisions?
More: Well, I wouldn’t want to be in that society, but I’m quite happy if people want to make really irrational decisions. If they want to go off and form their own community they’re welcome to do so, as long as they don’t start sending bombs back my way or something like that.
But sure, I’m all in favor of that kind of diversity, and there are already plenty of communities that I think are quite crazy, based on crazy ideas, and I’m not going to interfere with their way of living. But if it’s something they’re going to impose on me, then yeah, that’s a problem. In a real philosophical sense, I don’t think there really is any place for the irrational. But I have to qualify that by saying that doesn’t mean everything has to be rational, because they’re not exclusive. There are also things that are arational, or non-rational, where it’s really a matter of taste, where there’s no real objective standard. If I ask you what your favorite color is and you say, “Oh, blue,” and I say, “Wrong!”, well, that doesn’t make any sense, right? It’s just purely a preference.
But when there’s something that you can actually test, when someone says, “This energy source will be less expensive,” or, “This vaccine will produce more benefit than harm,” those are things that you can actually test objectively.
Anderson: But there are moral assumptions beneath them.
More: Sure, yeah.
Anderson: So reason is a tool leading you towards an idea of the good.
More: It’s a way of testing your idea of the good. I don’t think reason can generate the idea of the good. I think we have to start with what we want, much of which is completely non-rational; it’s just based in the way we’ve evolved and our background. Reason comes in by saying, okay, given that I have this desire, does it make sense? Let me ask some questions about it. Let me consider alternative possibilities. Let me ask what kind of factual assumptions might influence my belief. So reason can come in there. It can test our beliefs. But you can’t just start from reason alone and decide what values are rational. I don’t think that’s possible.
Anderson: And that’s a really intriguing thing, the idea that we’re using reason as a tool to test how to achieve a goal that may actually just be sort of non-rational…
More: Yeah. Like wanting to live, that’s non-rational. I can’t give you some kind of deductive argument that you must want to live. Either you do or you don’t.
Anderson: Is that the fundamental desire guiding your vision of the future?
More: That’s hard to say because in some sense yes, but I’m not sure that that’s a desire that you can take on its own.
Anderson: Right.
More: It has to go along with other things. Would I want to live under any circumstances? No, definitely not. If I thought the rest of my life, for however long I was going to be, was going to be agony and pain and misery and inability to do anything productive or creative or enjoy relationships, then no. I would see no point.
So it’s got to be that I want to live because I see a life that has the possibility of joy and pleasure and productivity and creativity and good relationships and learning and improving.
Anderson: Okay, so that’s sort of the good. Okay.
I know I’m going to be talking to some deep ecologists down the line in this project, and I imagine that they would ask what we’ve lost in terms of the natural world, which of course has always been changed by us as long as we have been in it. But is there some intrinsic value to a relatively unmodified natural system? Can that confer meaning in some way?
More: I don’t think so. I don’t know what an intrinsic meaning is. I think meaning is only relative to conscious beings, and so it has meaning but only in the sense that we choose to bestow meaning upon it or find meaning in it.
Anderson: I guess I’m thinking because we were talking about wanting to live, that being a subjective, arational desire. I’m thinking maybe here’s a deep ecologist who has a subjective arational desire to somehow exist in this sort of holistic ecosystem that is relatively unchanged by man.
More: Again, that kind of thing to me is a personal choice. I’m a member of the Nature Conservancy. I actually do place a value on having large areas of undisturbed wilderness. I like that. I don’t think somebody else has to value that themselves, but it’s good that we have an organization that doesn’t force you to pay for it through your taxes but actually goes out and solicits money and buys up areas of land and protects them. I like that. I like to just know they’re there and perhaps occasionally go visit and go hike and enjoy nature. So it’s not that I see there’s an intrinsic value there, it’s just something that I value, and quite a few other people value and so we choose to support it.
Anderson: Okay.
More: Fundamentally, I don’t see that there’s a value in the natural state as it is.
Anderson: As it is.
If people have changed themselves in some way, do they become different as people, and do they apply that same attitude towards people who haven’t changed, in the same way that we maybe conserve nature when we enjoy it but aren’t too worried about it? When I think about the paranoia that I’ve encountered a lot when I’ve read about futurist ideas, it seems like there’s a lot of worry about that.
More: Oh, I guess you’re thinking of the kind of worry that a new species will emerge and look down upon what we left behind [crosstalk]
Anderson: Or maybe it’s even not quite that dramatic, but like say we have a higher class of people who have greater intellect and greater ability to maybe manage and control society, and there actually is a real difference.
More: Yeah, that’s quite a common theme. I know there’s one biologist who wrote a book actually where he really developed that theme in detail, where some people genetically engineer their children over a couple of generations, society kind of divides into two quite different groups. I tend to think that’s not so likely to happen. There might be some transitional issues there if people who are wealthy or more educated are the first to use these new technologies and they start off being expensive.
But I think, just as with other technologies, if this follows the same trends we’ll tend to find people will catch up pretty quickly. It’s like with mobile phones. You could’ve said, “Well, we really shouldn’t let people have mobile phones, because the wealthy guys are going to have them first and they’re going to have all these advantages in terms of communication and other people will be screwed.”
But what happens is in a rather short period of time, we go from a very few people carrying these suitcase-sized cell phones to everybody, it doesn’t matter how poor they are. You can go to the poorest parts of the city and you see people carrying cell phones. Maybe by actually encouraging the acceleration of that development, you can spread that technology. And I would expect and hope that advances in life extension and intelligence increase will go the same way.
Anderson: There’s sort of an economic theme that we haven’t really talked about yet that seems to weave a lot of this stuff together in terms of personal choice. And it seems to be very free market. I’m thinking with the cell phone example, specifically. That’s something that telescopes out very quickly across the population because of the market incentive to have everyone have some kind of phone. Do you think that’s possible with other technologies, maybe that are more lucrative to keep within groups? It makes good commercial sense to give everyone a cell phone. Does it make good commercial sense to offer the sort of technology to extend life to everyone?
More: I think it clearly does. I think a population of people who live longer and healthier and are smarter and more productive clearly is going to raise everybody’s level of wealth. People who are smarter are going to be more fun to interact with. If you’ve made yourself super smart you don’t really want to spend a lot of time talking to someone who seems very dull by comparison, you know. If you can say, “Here. Here’s funding for your own augmentation,” I think a lot of organizations will subsidize those, just as we’ve had people like Bill Gates spending many millions of dollars, billions of dollars, to bring clean water to different parts of the world, which will improve those economies just because people won’t be dying so early and young. I think a lot of people will recognize that kind of almost Nietzschean approach to benevolence, if you like. Nietzsche basically said that the powerful person who’s overflowing with power will give to other people not out of obligation but because they feel they ought to in some sense, because they can.
Anderson: Are you optimistic about the future?
More: Yes. My view is if you look at the long run of human history, things overall tend to get better. It’s very popular and fashionable to complain about how awful the world is and how it’s going to hell. I’d like to take people who do that and just put them back in time a hundred years, two hundred years, a thousand years. At any point in the past, they’re going to find that they wish they could come back to the present. Even a simple thing like the invention of anesthesia I think has made a huge difference in life. It’s hard to imagine living without that now. That was everybody’s experience. A quarter of women dying in childbearing. That was a common experience. It’s pretty hard to imagine how horrible the past was, frankly.
So yeah, we have these irritating things. We have computers that break down and drive us crazy and waste our time. But overall we’re living longer, we’re healthier, we’re less violent. In fact there have been a couple of interesting books out recently that look at that in detail. The level of violence in human society has gone down continuously. I think many measures of human well-being are improving.
Even things like pollution. People always pick on certain areas and say, “Oh it’s getting worse.” But overall, if you actually look systematically at the trends, things are getting better. Partly because as we get wealthier and our technology improves we can afford to make it better. We can afford to have cleaner air. When you’re poor and starving and just trying to get by, you’re not going to care about cleaning up the air or pollution. That’s not your top priority.
So I think the better off we get the more we take care of our environment. The longer we live, hopefully the more foresight we develop. And I think if we start making some fundamental changes in the human condition that make us more intelligent and more refined in our emotions, then things can get better still.
If I was to worry about the future, my main concern is not that things will get worse; it’s that they could if we do stupid things. We have almost had some pretty big disasters in the past, with the nuclear complex and so on, which we’ve managed to avoid. It could be that we’re going to invent some horrible pathogen that’s going to wipe out a large part of the species. One big concern that’s getting a lot of attention right now is that maybe we’ll develop a superintelligent artificial intelligence that will just kind of take over, and in the crude Terminator scenario is just going to wipe us all out, or just take control and make all our decisions for us in a way that we may not want. I think that kind of thing is a real concern. We have to be quite careful about that.
Anderson: That’s interesting, because I always associate those sort of criticisms with people who are kind of having a knee-jerk reaction mostly based on watching The Terminator.
More: Yeah. I think a lot of the scenarios are highly unlikely, but—
Anderson: But you do take those seriously.
More: Yeah. It’s something I have to watch out for. So we look at how we design these artificial intelligences and try to make sure that they actually are going to be benevolent. “Friendly” is kind of the common term being used.
Anderson: So closing in here on the idea of the Conversation, we’ve got some amazing ideas on the table about technology and the future. Do you think we’re talking about these ideas adequately now?
More: Not really, no. I think it’s starting to improve, but for the most part when people talk about future stuff it’s generally in terms of fiction. It’s really what some science fiction movie has said. Which is unfortunate because those tend to be very dystopian. They’re obviously written to be dramatic, not to be realistic. So people tend to get a very fearful view of the future. I think we need a lot more properly-informed rational discussions of future possibilities, both the possible benefits and the dangers. And we’re beginning to see more of that. Back in the late 80s when I started Extropy magazine, which is really sort of the first comprehensive publication about transhumanist futures, that was very much all about the positive possibilities because those weren’t being emphasized so much. But that’s graduated to a more critical conversation. So that’s happening a lot more. I think people tend to be too polarized, still. They’re still too for or against.
Anderson: Do you think we’ve specialized so much that it’s actually impossible to have that sort of common conversation?
More: It’s pretty tough, and I think one problem is that even if you really do identify an expert, the trouble is they’re going to be an expert in one specific area. And almost all the interesting questions we can discuss are never limited to that one narrow area. I mean, even a question like “What kind of energy source should we be favoring right now?” Well, you may be an expert in physics. You might know better about the properties of solar panels, but do you know your economics? Do you know international affairs and strategic considerations? Do you really have a good idea of how to think about how things change in the future, which requires a different methodology? So all the big interesting questions really require a multi-disciplinary focus, and most people don’t have that. And the more expert they are in one area, the less time they may have to be well-informed in others.
So I think rather than finding just the right people, we actually mostly need to focus on the process. Even something as simple as if we could just institutionalize the devil’s advocate process, we’d be a lot better off. In almost every government decision, every corporate decision, every personal, individual, family decision, generally we think we know what we want, we argue for it, and then we go for it. How often do we actually deliberately invite someone to make their best case against it? And to encourage that, to honor the person who does that, separating our personalities from our ideas. That’s a very simple one.
Anderson: So I’m thinking about our big hypothetical round table here about the future. How do we bring groups like transhumanists, or Reverend Fife who I was speaking to yesterday who’s really networked in with faith communities? It seems like both have sort of metaphysically different ways of looking at the world, and different sort of value schemes. Both are thinking about the future in different ways. How do we broker a conversation there, knowing there are a bunch of other communities that are similarly off in different directions? Do you think there can be common ground, or do you think that’s one of these things where there’s something that’s so fundamentally different it’s going to be very difficult to bridge?
More: I think that you can never be too sure until you work at it. You may just assume from the beginning that there isn’t any common ground, and sometimes there won’t be. I mean, it’s very hard for me to find any common ground with any kind of fundamentalist. But it’s not always clear who’s a fundamentalist. They may not use that term, they may not think of themselves that way, but you may after a while of interacting and so on realize that they truly are a fundamentalist, that there are things they just absolutely will not question.
So someone who’s truly a fundamentalist in the sense say, Christian or Islamic fundamentalism, it’s going to be very very hard for me to have any kind of useful, productive conversation about anything of interest because their answer’s always going to be, “Well, let’s see what it says in the holy book.” And that’s just not the way I’m going to work. I want to say, “Well, let’s go look at reality. Let’s devise a test and see what reality says.”
So that’s a pretty fundamental difference. But hopefully that won’t usually be the case. Usually while we seem to be radically different, if we work at it a little bit, we can find some kind of commonality, some shared assumptions, and then clarify where we do disagree and then try to work on those and see if there’s not some way of resolving those differences.
Aengus Anderson: So that was the conversation I had today.
Micah Saul: Wow. I envy you. That sounded just fantastic.
Anderson: It was an amazing, amazing talk. It kind of started with Alcor and then suddenly we were into a lot of philosophy.
Saul: Yeah. That was definitely something I was hoping to get from him. It could also have been an interesting conversation to just talk specifically about Alcor, but I think the two of you really quickly got to the deeper philosophical questions that in many ways made the specifics almost unnecessary to talk about.
Anderson: That’s kinda what I was actually hoping. There’s been so much written about Alcor, as with any thinker who’s doing something that really pushes boundaries like this.
Saul: Right.
Anderson: There’s a lot of circus around it. For me that wasn’t the conversation to be having.
Saul: No, exactly.
Anderson: I wanted to get into the implications of the ideas. So I was trying to steer clear of the specifics. But in terms of things that worked and things that didn’t, a couple of things struck me. I really felt like Dr. More had a libertarian foundation to a lot of his thinking, a lot of emphasis on the personal empowerment of choice. And as we were going through the interview, and actually as I was riding home from it, I kept thinking that we really needed to talk more about community.
Saul: Absolutely. That’s going to be a big theme running through all of these, is the relationship between the individual and community. And especially when we’re talking to the more individualist, libertarian thinkers, community is something that we need to push them on, in the same way that I think when we’re talking to the more communalists we should be pushing them on individual rights.
Anderson: Yeah. If you’re mapping which way the scale tips, right now our society leans more towards the libertarian than the communal.
Saul: I think so.
Anderson: I want to ask more hard questions about the value of community. When we were talking about the past, I kind of regret not trying to find out whether the past had some kind of community value. Sure, it was materially much worse, with shorter lives and a lower quality of life, but maybe there was something communal there, and maybe that’s something we can talk to other interviewees about later.
Saul: I agree. A couple things that jumped out at me. One of them, he actually corrected you on, which I thought was useful. In our conversations and our planning for this, we’ve sort of been using the word “materialist” and stripping away a lot of the baggage that word carries when you and I are talking to each other, but the semantics of some of these words still matter—
Anderson: Yeah. I had that sort of embarrassed moment when I said materialist and he was like, “Well, you know, in philosophy we don’t quite use that word,” and here I am having these visions of shopping. And I’m like, yeah, materialist does conjure to mind shopping. So physicalist, I think that makes more sense.
Saul: The other one I was thinking about is when you were talking about the intrinsic value of nature and he was sort of pushing back against the concept. The notion of intrinsic value is very often a holdover from a religious way of thinking, because Western culture is predominantly Christian culture. Those things still have weight. But if you’re talking with atheists, intrinsic value is a loaded concept, and we need to come up with a better way to talk about that. Because there is a way to talk about it with someone who doesn’t believe in any sort of intrinsic value in a spiritual sense.
Anderson: Right. Let’s definitely think more about those and hopefully as we get these things posted our participants online will help us think through them as well.
Saul: I definitely like that idea.
Anderson: So onwards and upwards. Next we’ll be doing Peter Warren.
That was Dr. Max More, recorded May 3, 2012 at the Alcor Life Extension Foundation office in Scottsdale, Arizona.
Saul: This is The Conversation. You can find us on Twitter at @aengusanderson and on the web at findtheconversation.com
Anderson: So thanks for listening. I’m Aengus Anderson.
Saul: And I’m Micah Saul.
Further Reference
This interview at the Conversation web site, with project notes, comments, and taxonomic organization specific to The Conversation.