Aengus Anderson: You're listening to the conversation. I'm Aengus Anderson.
Micah Saul: I'm Micah Saul.
Neil Prendergast: And I'm Neil Prendergast. And if you're just tuning into the series you may want to check out an earlier episode where we lay out the whole premise of The Conversation.
Saul: Yeah, these are our final episodes. They were all recorded in 2013, and we've just been horrifically lazy about getting them packaged up for you. But, they are now ready and here they are.
Anderson: So, if you're a fan of science fiction, an introduction to Kim Stanley Robinson is probably unnecessary. But if you're not, we'll keep things short, and let's just say that he's one of the biggest names in current science fiction. We were really excited to get a chance to talk to him. I mean, he sells lots and lots of books. And he wins lots and lots of awards. Because in addition to writing books that are narratively engaging, he does a tremendously good job of exploring ideas about alternate political, environmental, and economic scenarios in the relatively near future.
Saul: Yeah. If you haven't read Robinson before, I would suggest checking out the Mars Trilogy. The first book is called Red Mars. It's a near-future scenario, and it's the first…really…plausible investigation of what it might be like if we were to attempt to colonize Mars.
Additionally, the Three Californias Trilogy is fantastic. It's three different near-future versions of the same small town in Orange County. Three different potential worlds after three different potential crises. They're both fantastic trilogies, highly recommend checking them out.
Prendergast: And I think one thing that's important to mention before listening to the interview is that his work touches on nearly every element that we've incorporated into the project. Because of that, we reference throughout a great variety of previous interviews as well. And the interview was very long, nearly three hours. But of course, as always we've edited down to a size that well, we'd want to listen to.
Kim Stanley Robinson: Well, okay the problem is a…can be described as ecological. That we are now seven billion people on the planet, and that pure number is already a question. It looks like we’re asking for more than the planet can give on a steady basis, so we’re drawing down, as with, say, the aquifers of fossil water, or using fossil fuels. And then also we’re creating wastes of various kinds faster than the planet can recycle them. So there are two problems, of input and output, both overtaxing the system in ecological terms.
Some of these problems are becoming acute. And people are focusing in on the carbon problem, and I think that may be a good way to look at it. But it’s always I think important to remember that that’s just a kind of metonymy standing for all the rest of the problems that we’re creating. Because A, it seems the biggest, and B it also looks like it could be amenable to correction, as in a thermostat in a room. Like we could turn down the temperature again, having turned it up. So there’s this perhaps false idea that not only is this a really big and immediate problem, but we also could do something about it.
I still think that Paul Ehrlich’s formulation of impact on the environment is a multiplicative thing, with population times our desires as individuals, our appetites (he called that A), multiplied by technology. So that there are dirtier or cleaner technologies. And one could postulate it might be possible to create a technology so clean that a rather high population with a moderate set of desires could live in a sustainable balance with the biosphere.
So we’ve got the immediate problem, and then we’ve got the possibility of a solution. And they sit there together. And different people—I’m sure you’re finding this in who you talk to—have different assessments of how big and immediate the dangers are, and how powerful we might become in solving them. And I vary in both these assessments as I do my reading, as I listen to other people. I’m in a somewhat similar position to yourself in that I do a lot of listening and processing, and then write my books based on what I hear from others and try to judge as cleanly as I can. And since my judgment changes from time to time, I write different novels based on how I feel at the moment.
And there are some things that are so scary that you think well, the worst is almost inevitable. Then, looking at other sources, listening to other people, I began to think that the rapid shift to clean energy production and transport fueling is not at all impossible, it is just an economic problem—a problem with capitalism and its own badly-formatted algorithms that are imposed upon us.
And so I vacillate there, between thinking that we’re doomed because we have given ourselves over to a stupid system that’s now backed up by guns. And then a much more utopian view that we’ve always lived in stupid systems and that we’re always making them better. And that in this case, the power that the science, technology, engineering, mathematical complex is giving us— And then you have to add the social technologies, or the social sciences—the humanities and the arts, the increasing analytical power that we have and increasing philosophical power…might combine together to simply force a rapid change and get things going in a better direction, a cleaner technology, instituted faster than ordinary neoclassical economics would allow it to happen.
Anderson: So I kind of want to explore both of those scenarios, kind of our worst-case and our…very conservatively best-case scenario. In the worst-case one, where the train is already off the rails ecologically, is that sort of what we’re looking at? Like, so there’d be no way to turn the thermostat back down regardless of what changes you made, regardless of how clean your technology got?
Robinson: I don’t think we’re there yet. I don’t think that’s right. I think we have legitimate reasons to hope that we can avoid that kind of ecological crash. Life is robust. We’re going to have extinctions, there’s no doubt about it. But with some major attention given to avoiding extinctions, especially of the mammals, and then also the amphibians, we could go through a couple centuries of kind of rescue work, saving everything we can, and deliberately and on purpose like zookeepers, acknowledging that we’re now managers of a system bigger than we truly understand and yet we still have to manage. And we could go forward on that basis.
So I guess what I’m saying is the things that would mean that we were already irrevocably cast into a different planet would be the ocean acidification being sharply higher than it is. Or the parts per million CO2 in the atmosphere being already up around 500 now. If those two things were worse than they are, I think it would be truly frightening. It would be interesting at that point to see if people would then try to react. It would be harder. We’d have to do things even more quickly. But I don’t think people will just throw up their hands and say, “Oh we’re doomed.” I think there will be a civilization-wide effort to recover.
And when I say recovery, what do I mean by that? What I’m thinking is that you can draw CO2 down from the atmosphere, turn it into dry ice, and sequester it on the sea floor or underground. And this is a huge industrial project, but it’s no huger than the civilization we’ve already put in place. So if it became a necessity, in other words, we could say that every windmill, as it generates electricity, would also contain a little vacuum cleaner that was vacuuming CO2 out of the atmosphere and draining it into a storage system that would then have to be taken to another place, as part of a huge industrial civilization that was sucking CO2 out of the atmosphere.
This is a geoengineering proposal that is technologically possible, and there’s no particular side-effect damage involved in it. It’s just that it’s massive. So that becomes interesting, I think.
Anderson: In your estimation, we have agency. Do you think that we can preemptively act? Or do you think we have to react?
Robinson: I think we’re at that galvanizing moment. Really this is a multi-decade project either way, but now we’re talking about it in a way that we’ve never talked about it before. And I’ve noticed a couple of shifts along the way, because I’ve been talking about it now for about fifteen years, following the scientific community that was interested in climate. And my work on Mars, my work in Antarctica, they all were making me aware this early on. And I’ve seen a big shift in how much this has become a major topic of our civilization’s conversation in these last ten years.
And then I’ve also seen a major shift in people’s attitudes towards capitalism since 2008. Which was kind of an “emperor has no clothes” moment, where suddenly government stepped in and had to “establish value,” as the economists put it. This free market mentality was a kind of religious sensibility, a fundamentalism, what I’ve been calling monocausotaxophilia, which is a Karl Popper word, “love of single causes that explain everything.” We all have this love. But when we give in to it, we become fundamentalists. And there was this one idea that supposedly explained everything, which is the free market, which is like a version of God, an invisible hand, and things were going to be alright.
Well that all shattered in 2008, and there’s hardly anybody that seriously believes it or tries to defend it anymore. So now capitalism itself is up for questioning in a way that was not true before 2008. And I know this because I have been questioning it since the late 80s. And I’m not alone in that, but the push for a post-capitalist economics that is more ecological has been in existence for a few decades now.
But it’s never been as strong as it is now because of these various forces bearing down on us that’re saying A, capitalism is a kind of multi-generational Ponzi scheme that will eventually crash. And B, the crash we may be at the beginning of now, because we’re not paying the true costs of burning carbon. Part of the Ponzi scheme is that we’re consistently selling carbon for less than it costs to produce it in the first place, and that underpricing exists because we don’t pay the cost of the wastage. We’re not putting the CO2 capture devices on our production plants, at which people just immediately freak out and say, “Oh, that would make energy twice as expensive.” Well, energy should be twice as expensive.
So, you can use economic terms to describe what we’re doing, but we don’t yet have a system of laws that would enforce the true costs of things being charged.
Anderson: So I’m thinking about that in kind of the popular mindset. It takes an openness to a certain type of knowledge to acknowledge that there is that waste that we’re going to have to deal with later. And because we’re part of a democratic electorate, we have to have a conversation with a lot of people who don’t believe in the science at all. So in that case, how do we begin to address the economic side of that?
Robinson: I don’t believe that these people don’t believe it. I think they are pretending. I think they’re lying. I think they know, but they don’t want to admit that they know because that would mean more government. That would mean that people they’ve been politically against were right and they were wrong, and nobody likes that. Nobody likes to admit they were wrong. So I think they ignore the data and pretend to be against science, but here’s where I think they’re pretending. When they are sick they run to a doctor, and hope that that doctor will cure them.
That doctor is a scientist. And that doctor could say to them, “Well, you’re asymptomatic now; you don’t feel a thing. But I’m gonna have to poison you within an inch of your life or else you’re going to be dead within five years.” And they believe it and take the poisons.
So these are people who believe profoundly in science. When their life’s on the line, when they’re scared for their life—for their kids’ lives—they believe in scientists and in science. They might be lying to themselves, too, and not quite realize the incoherency of their thought. Because they don’t realize all the sciences are consilient with each other. And there are areas where they get very problematic, like studying human beings, but they are still consilient sciences that everybody in the modern world believes in.
Anderson: Though I think of…if there’s one thing science seems to be revealing more and more, it’s that we’re not…rational, necessarily, right, so we might not see that.
Robinson: Sure. Well, we often hold two ideas—contradictory ideas—in our head at once. And we’re not rational. But, even in the emotional part of ourselves, when we’re scared, we run to a scientist. There are very few people who don’t go to the doctor, and a lot of people who go to the doctor rightly think this is a scary thing to do, because we know that we’re more complicated than science has fully managed to understand, but it’s a better bet than anything else.
So, I think the analogy is important, and this is what I’m doing in my public speaking. I’m often talking about how medicine is a science, and tells us of things that are invisible to us that we nevertheless act on. So I think the figure is this, that we can burn 500 more gigatons of carbon before we’ve really cooked ourselves and cast ourselves over the edge into what James Hansen calls “game over.” Although he’s a very politicized character at this point, he still is hard to refute. And yet we’ve identified 2,500 gigatons of carbon already accessible to us—
Anderson: Well, actually that jumps right to mind with the conversation I had with a guy named John Fullerton, who was the head of JP Morgan. The conversation we had was…very disturbing because you know, he comes to this from a high finance perspective. And he was saying that if you look at the amount of carbon that’s in the ground and think well, all of that has value to a lot of companies on Wall Street. And that if you say, preemptively, we’re just not gonna burn that because the cost of burning that is possibly the environment, then you write off that value, and you end up with…a market catastrophe.
And if you don’t write off that value you do end up with an environmental problem which down the road leads to a different type of market catastrophe. You know, the question is of course, how do you write off that amount of value from the market without imploding this really tangled economic system now?
Robinson: Well but that’s future value, because that’s not a value until they’ve got it out of the ground and sold it and burned it. If that future value can be accounted, then you also ought to be able to calculate the future value of functioning ecosystems. And then you would have much more than $160 trillion. Nature published a paper, Costanza et al., about the $33 trillion a year of value out of the environment that isn’t usually costed or priced or valued, and yet it gives us that as just natural processes.
And what I’m saying is there’s no economics that does a decent job at this point because capitalist economics is used to having its externalities. And when you begin to count the externalities as costs…well yes, it crushes all of their value. But that doesn’t mean that that’s not still true. It means that they’re just that far out of whack.
The crucial missing piece in this is an actively ecological economics, a post-capitalist economics, or a reformed capitalism. I believe in capital, which is you know, the useful residue of human labor and we need it bad; it’s another name for technology and comfort. Capital in other words is great. And the market is probably something that we cannot escape in a modern, complicated civilization where everybody’s doing different things and there needs to be some way to trade and share the work that we do.
So when I speak against capitalism, I’m not speaking against capital, or even against the market, as a large thing. What I’m saying is that the rules that’re manipulating market and capital right now are hierarchical, unjust, inefficient…
Anderson: So thinking about the massive complexity of the ecological system and how we need a market system that can sort of start to deal with that—
Robinson: Mm hm.
Anderson: And it’s making me think of my conversation with George Lakoff last week. And something Lakoff was talking about was that metaphor originates in the body. When it comes to systemic things, whether it’s economic or ecological, we don’t have the metaphorical language to deal with that. Like, the brain doesn’t have metaphors to work with abstractions in the same way that we work with simple tasks.
Robinson: Well…Lakoff is fantastic, and he’s taught all of us a lot. I’ve used his books my whole career, or his concepts to understand the world better. So I’m in total agreement with him on all but this final statement, because I think we have the metaphor sets, which are again as he said, the body, but also the forest. Everything is metaphorical to natural things around us. So it’s not just your body but also you’re comparing things to trees, to verticality, to landscapes. Then you can begin to use metaphors out of the forest and the body. Capitalism is like some invasive biology, like kudzu, that is a neat little plant until it takes over everything and strangles it.
So, I think he’s right in general but wrong in this particular. And what would be interesting is to start playing the game of making metaphors, of describing our civilization as something that can be comprehended by us by way of metaphor, and then worked on effectively after that. It would be a grand challenge for all storytellers. I think of it as part of what I do as a science fiction writer. We need everybody doing it, really. I mean, Fritjof Capra has been talking about this for a long time, saying we’ve got to go from the world as a machine to the world as an organism.
So I think all that’s right, but I would like to also get specific about economics. I’d like there to be a story that is mathematicized and quantized and turned into a set of prospective laws that would rectify economics as we’re running it right now. And what I’m seeing is that economics as a discipline is bereft of that, and is very chicken-hearted about doing projective economics, speculative economics, utopian economics. They merely describe what is. They analyze what is, and then say, “things can’t be other,” as if what is were the rules of nature rather than laws. So they do legal analysis and claim they’re doing physics, in a way.
When I do these speculations as a science fiction writer, I’m thinking why isn’t there somebody out there doing quantitative work on this and proposing the actual laws so that you can get from here to there. Because very often the weakness of utopian thinking is you set up a system—it looks grand. Well that’s fine. But we live in this world that we’re in right now, and it’s massively entrenched. A lot of laws, a lot of trade organizations, a lot of treaties, a lot of guns backing it all up. And so you need to reform it by way of persuasive stories, legal reforms, and then democratic action. Maybe the storytelling becomes crucial to this but it would be really nice to then be able to back it up with a complete legal package. And that’s I think a missing element right now.
One story that I try to tell, and when you talk about history it comes to my mind immediately, is that science and capitalism began together as kind of conjoined twins that’ve been supportive of each other throughout, but that they are actually very very different in their goals and their methods. And that science has been an underground or proto-utopian force all along, trying to make things better but always under the control of capitalist money and guns. So that it’s only been partially effective, and yet it’s the most effective counterforce to capitalism in world history.
So what I’ve been saying in this story of the giant, like, Hindu mythic battle between science and capitalism is that every time science gets a little clearance to do its own thing and pursue how nature really works and then also find out what we can do to manipulate nature (which are two slightly different questions), it has been working for human good.
And what’s interesting is to take a scientific view of well, what do humans really want. So then you get sociology, psychology, sociobiology. Science says, “Well, look. Everybody seems to be happiest when they make about $80,000 a year.” And you say well, oh, there’s a level beyond which wealth equals obesity and worry and ill health and unhappiness. Whereas if you stay at the mean, the kind of Goldilocks—not too little, not too much—you end up being happier, and health and happiness begin to look like synonyms. When you study the two scientifically, together, they’re almost just two ways of talking about the same thing in human terms.
Well at that point, economics, especially capitalist economics where more is always better and growth is always good, is actually simply wrong. In human terms. And it’s wrong in ecological terms, too, because the system can’t actually provide the goods.
So then you look at human civilization, you say well look, what about if everybody had a sufficiency? Adequate food, water, shelter, clothing, healthcare and education…and employment, that work as a right, also—to have some kind of work—
Anderson: And meaning, through work, right?
Robinson: Well, exactly. Because that’s what I’m thinking, is that meaning comes out of your project, and you need meaning as much as the rest of these things. So this is the utopian vision that I see that seems to me to be sort of scientifically based. And there, my dichotomy between capitalism and science is somewhat reinforced, saying if we just studied ourselves scientifically as primates that would like to live as long and as healthy and as happy a lifetime as possible, how would that look? Then it begins to look like this vision that I’ve been constructing.
Anderson: That’s also really interesting because it runs into another enormous myth, right, which is I think our sense of the individual. And I think here we’re talking about two very different types of freedom. One in that utopian myth, and one in this myth of individualism in which when you say, have a wealth cap or a wealth floor, you’re denied the freedom to achieve above and beyond, presumably to reach your full potential. And you’re also denied the risk of falling low. In a way, in that myth your agency is reduced by being constrained into this band of what someone else has told you is kind of the ideal. And in this case there’s a huge authority question in that well, science may say this, science may have empirical data, but shouldn’t I ultimately make this choice for myself.
Robinson: Well, I’ve thought about that, and that is a good question. And I’m just wondering if we democratically decided that this was the kind of situation that we’ve set up. If that isn’t in fact the human band, and below that you get a kind of subhuman misery that not many people are going to voluntarily choose. But if they do, they probably could, because that’s going downward.
And then what if there was the possibility for a kind of spike of people for whom their self-actualization just requires making billions? This part of freedom, of indulging the individual at the expense of everybody else and of the planet actually ends up with a lot of sick, crazy people as an end result, not with happy, fulfilled supermen who we look at and think, oh god I wish I could be like that.
They’re actually dysfunctionals, and not anywhere near as self-actualized as your ordinary working scientist. There you see the happiest people on this planet, not amongst the rich and famous. Relatively undramatic activities are more satisfying than these big mysteries that we tell each other of what would be satisfying. And that’s the problem with addictions. You do something that’s supposed to be satisfying. It doesn’t satisfy you. Then you do more of it, because maybe more of it will actually make it work. And in a way, Western consumer society is addicted to consumption. Because it’s not actually satisfying, and so what we’re trying right now—some of us—is doing more of it, because then maybe that would be satisfying.
Anderson: And is that ultimately what science is doing, as an institution? Is it not an addiction itself?
Robinson: Well that’s a good question, but science is such a powerful tool that we can kind of aim it in what direction we want. And I think where science as it plunges forward under its own momentum is most impressive is in health, in medicine. And part of that is understanding what we are. So you get into behaviors, you get into these questions, you even get into economics as just simply a health question.
So the project is incomplete, and it will maybe never be completed, but what science does is what it’s paid to do, in part. And then it’s a very powerful instrument. So if you say to science well, “You know, we need to go to Mars. That’s absolutely the salvation of humanity.” Well, we could pay for that. We could do that. And then we would find oh, oops, it’s like Antarctica. It’s not the salvation of humanity at all.
But if you said to science, “Let’s make up a viable economic system that we’ll all agree to live by that will maximize the health and happiness of the entire population,” and science went after that with its same intensity of study and of innovation…its powers are really quite fantastical at this point, as I think we all know. Then that too could be accomplished, I think. I don’t think we’ve fully explored this question of what we do in a truly civilized civilization, in a true high-tech that’s in balance with the planet and actually makes the human animal happy. We’re not really close yet.
So, I stab around in my guesswork on this. But I think I’m on to something here. I really think that the sciences are telling us that we are primates that evolved to do certain activities. And when we do them we’re healthy and happy. And when we don’t do them, thinking we’ve found something better, we fall away from health and happiness.
Anderson: In a way, there is a… You build this massive scientific institution that gets you to the point of having a scientific understanding of what you were before you built the institution.
Robinson: Yes, and then onward to longevity, onward to kind of the global village. I have to admit that it would be very cool to jump in a completely clean tech, maybe something like a dirigible that kind of floats across to go see Venice, without feeling like you’re burning an inappropriate amount of carbon and polluting the planet to do so. I’d love to talk with people all around the world. So the high-tech includes a lot of stuff that would necessitate mining, and a stupendous material base. So therefore factories and production plants. Now, how that combines with the daily life of the sort of paleolithic is to me an open question. And maybe they don’t match up very well, but maybe they do.
Anderson: What if you just can’t have it without that pollution? I mean, what if the price of creating some of that new technology is always having a mine? What if you just can’t get it clean beyond a certain point? It’s always doing some sort of environmental damage… Is it worth it?
Robinson: Some things would be and some things wouldn’t. You know, an extra decade of life on average for the human population would be worth it. An extra amount of fidelity in your iPod headphones would not be worth it. And so you go on like that making these kinds of distinctions, I think.
But also I’m wondering if the tech is clean enough that the planet can recycle the waste products. That you get a full cycle, birth to grave, and then back-into-the-Earth type cycles for these things. There’s always going to be room for improvements. And so the whole project of civilization could be cleaner and cleaner tech, better and better priorities, which is a philosophy question, as you’re pointing out. What do we want out of ourselves?
But then also balancing with the biosphere. Because I really think that the bulk of the story will always be here on Earth. That the rest of the solar system is all that’s within human reach. And the rest of the solar system is a little bit paltry in terms of resources that we’re going to need. So it keeps coming back to Earth.
Anderson: Is that taking into account other creatures as… Like, what is their philosophical place in this? What rights do they have? One of the most provocative conversations in the series was with Gary Francione, who’s a law professor at Rutgers and a really eloquent spokesman of veganism. For him that’s part of a much larger platform of non-violence towards other living things. One of his critiques of capitalism is that there’s just really no way to have this much growth and development. And even if you make it cleaner, you still have a big environmental footprint. You’re still destroying habitat. Your forms of transit still kill birds and animals on the roads. And that they matter, and that you need to not do that.
Robinson: Well, I think there should be no extinctions. That this is what we don’t have the right to do, is to take up so much of the planet that the other animals, our fellow creatures on the planet, go extinct because of us. That this is the moral wrong that I would object to. That we need to run the planet such that there is a cap on the number of humans and their impact on the planet so that we’re sharing it with all the rest of the biosphere. And that actually is for our own good. It’s not clear that we can live successfully without the rest of the biosphere. But the way the mammals are going, the way the amphibians are going, it’s a sign that we’re doing things wrong, and that we should not have extinctions.
Now, I would disagree with this person, although I admire the impulses behind it. I think that humans have been omnivores. That we kill things and eat them. That this has been part of our species from its very beginning. It’s not an either/or question; part of our self-actualization is the happiness of the rest of the mammals. I feel this very strongly.
At the same time, if you had certain animals being treated as a kind of crop that we killed as painlessly as possible and then ate, I personally don’t see that as much of a problem as other problems we’re facing. It’s almost stepwise, a matter of scaffolding. If we got ourselves to a certain level of prosperity and health, then we could think again about these things, and maybe many people would then say, “Look, we might soon have vat-grown meat, where there was never any animal that grew, suffered, and was killed.” And then you know, what’s the problem? At that point you have reduced a moral problem, and by way of technology.
Anderson: Which is a really fascinating thing.
Robinson: I don’t believe we have had many chances to think about that philosophically, what that means.
Anderson: There’s a guy I want to bring in here who I think is interestingly germane. His name’s John Zerzan. He’s a neoprimitivist, he’s an antitechnologist. He threw out a couple of things that have stayed with me throughout this project, one of which is, is there a bias towards technology itself? Does it essentially lead you down this rat race where, with every technological development, you are one step behind, trying to ameliorate its ramifications or repercussions? Each new technology, each atom bomb, each genetic engineering, each thing that heats the atmosphere. And you’re always trying to catch up to fix that. And his feeling is, why are you doing that?
Robinson: Mm hm. Well, I have a lot of sympathy for that view, except in the end I don’t agree. I think it’s a matter of individual choices as to what’s appropriate technology. So I think the phrase “appropriate technology” is important here, as well as clean tech. There is this Jevons paradox, the idea that the better we get at things, the more efficient our technologies get, the more bad we do with them. And so there’s never been an improvement in technology that hasn’t actually created more harm in the end. Does that mean that we don’t do medicine? Do we not do public health? Because those are technologies, too. And yet, I don’t think we would want to be in a world where an infectious disease could catch your six-year-old and then he’s dead two weeks later.
Anderson: And do you think there’s any going back, for someone like him who says you know, what’s really essential here is just an unmediated experience of the world, and all of the technology is a distraction?
Robinson: Well, that’s not right for humanity. That would be saying, can I live like a chimpanzee? Because humanity has been high-tech from the very beginning. We’ve coevolved with our tech. Take this. This piece of technology was stable for about a half-million years. This is an Acheulean hand-axe that I’m holding here. Well, this is a rock, an oval rock, it’s been knocked in places until it’s sharp on one end and maybe sharper on the edges. There’s literally hundreds of thousands of them scattered around the Old World. And they know that they were pretty much like this from 500 thousand years ago to about 150 thousand years ago, when they began to sharpen up and change and get more symmetrical and differentiated. It’s a mystery. But it’s also a piece of technology. And it was exactly when we were evolving from pre-human to Homo sapiens that we were using techs. And as the techs got better we got more human.
So, what I think, though, is that it’s possible to say there are techs that really help me become more human, and there are other techs that distract me by being this pseudo-superhuman.
Anderson: And how do we differentiate those things?
Robinson: I guess you try them out, and see how they feel. It’s an emotional reaction. And you have to live them for a while.
Anderson: Many people in this project have questioned…what are we going to unlock? And one of the ways people have framed that is the tension between the proactionary principle and the precautionary principle. Should there be a bias toward demonstrating safety beforehand, in terms of preserving the status quo? Or should there be a bias towards always the exploration, because it will get us somewhere…better?
Robinson: I think that we need to have close collaboration between the scientific communities that are working on some of these transformative technologies (particularly I’m thinking of genetic engineering) and the legal bodies that are establishing the rules, so that we know fully, as a civilization, what the risks really are, not as sensationalism, but as true risk assessment as to how these things can spread, or go wrong, and then try to guard against that. So I believe in the precautionary principle, that there ought to be tested trials that are very limited and controlled before we do anything that we might later regret. But I’m also wondering if there is anything we can do in those regards that we are going to regret later.
I also think it’s really important to de-strand our worries about the technology with our worries about the ownership of the technology. And here again—
Anderson: That’s huge.
Robinson: —I’m thinking of genetic engineering.
Anderson: Mm hm.
Robinson: Because all of these objections to say, Monsanto’s seeds, these seeds oftentimes represent exactly what human beings have been doing all along with hybridization and making of new and more effective food seeds for ourselves. But what hasn’t been happening is for some company to claim that they own them and won’t give them to the rest of the populace without taking a massive cut of them. In other words, ordinary capitalist profit motive. Then that’s ugly. That’s Wells’ Eloi and Morlocks. That’s the beginning of speciation, where a certain part of the population gets to become a different species, more technologically boosted into certain abilities that might indeed be useful, like longevity or robustness or better health, whereas the rest are being consigned to some lesser fate. And this is not a scientific distinction being made. This is an economic distinction.
Anderson: Though what’s interesting there, and I think for a lot of people as they maybe conflate science and capitalism…there’s a question of the inevitability of human nature. I mean, if you look at the historical record there’ve always been people who’ve grabbed power and have used tech, but at no point have we had the tech to split the species. And so maybe that’s part of where it becomes a fear of science, because they assume that there’s always going to be that human force for badness in the world, and that as progress goes forward that is amplified in a way that’s greater than the force for goodness.
Robinson: Well, but here’s where you have to have some kind of belief in democracy, and a sense of the self-management of the species by itself. If we govern ourselves of the people, by the people, and for the people, you put a damper on that kind of selfish profit-taking and exclusivity of the [goodness?] that we’re making for ourselves. And also on the choice of which directions to go, in terms of what gets researched and what gets worked on.
People can get very cynical about that, but at that point they are screwing us in terms of historical possibility. Because the more you say well, people will always be bad, and selfishness is inherent to the human character, and there’s no way of stopping the powerful from being powerful, the more you enable that to be true. And the less you believe in democracy and equality as things that have risen over time. If you wanted things to go right, you would do a Gramscian “pessimism of the intellect but optimism of the will” and you would will that people be better.
And you would also look to science again to say what are human beings? We evolved in a situation that was relatively egalitarian, and we succeeded in the world by being quite cooperative. I mean, there’s seven billion of us on this planet and very often I’m in crowds of thousands of people without a single policeman in sight. And not only is there no violence but there’s really very little in the way of public drunkenness or disorderliness of any kind. I mean, it’s quite an achievement what a peaceful and cooperative species we are.
Anderson: Do we risk running into the limits of what we are at some point? A lot of people will say in this project, “Don’t worry about the biological limits, we can solve this stuff as what we are.” Other people have said, “No, there are biological limits to what we can know and that’s part of the challenge that we have. Now, we need to simplify.” There are other people who’ve said, “The problems we face are so big we have to reengineer ourselves biologically to move forward.”
So there’s sort of three courses of action that I’ve seen throughout the series in terms of moving forward and kind of the limits of the mind. What do you think about issues like that?
Robinson: Well, I think that it is hubris and actually a bad mistake to think that we can engineer the mind. We don’t understand the mind well enough to describe it, much less engineer it. So this is a terrible science fiction story; it’s basically a fantasy. And here’s why. The brain can be studied when it’s dead, with electron microscopes, right down to the cellular level. That’s fine. But it only does its thing in living bodies. And when the brain is living and acting, we can only study it indirectly by way of scans that show blood levels or electrical activities. Or maybe, in certain brains, a probe in there that is finding things at a little more local level.
The action of our thought is happening at a level that is magnitudes tinier than what we can study, than what we can ever study as a living system. And this is what all the brain engineers, transhumanists, Singularitarians, etc. are ignoring. The idea that we’re going to be able to study the brain at a more and more detailed level just because we have been making progress so far ignores asymptotic limits and physical limits of all kinds.
We don’t understand how the brain works on levels that are so profound. We don’t know what consciousness is. We don’t know what will is. We know a lot more than we used to, but we’re going to run into certain limits that mean we’re never going to engineer it to be better. So that’s a kind of fantasy that is told, and a part of the science fiction field that I’m not sympathetic to, because they are doing futurology. They’re claiming, “This will come,” and so it becomes like Scientology, or scientism more generally. It’s somewhat of a scam. They’re either deluded or they’re actively scamming people.
Anderson: If we make an analogy there, and if we can’t understand the brain, if there’s some things that we can’t get to, why is there any hope that we can understand the environment with something like clean tech, you know, which is something that must be just as complicated, with so many variables? There’s been a big argument in this project about what we can know in an environmental sense, you know, with some people…Robert Zubrin, for instance…asserting that human creativity is limitless, and therefore there’s a lot of stuff that we can do environmentally because we can know it.
And kind of the interview immediately following upon the heels of his was with Wes Jackson from the Land Institute, who said “This is absurd. There’s no way you can know any of this,” you know. Recognize the limits and work within them.
But those two episodes were like this…distilled point/counterpoint of what’s been happening in the whole series.
Robinson: I don’t mean to cast any personal aspersions here, but I really think it’s Jackson who is right. We’re not limitless at all; that’s absurd. And yet the brain is much more complicated than many an environment on Earth. But the brain is also secreted away, so that it can’t be ripped apart and studied while it’s still working. Whereas an environment can be jumped into you know, with your Wellingtons on, and you can begin to take measurements and you can bend to study it. And you can also begin to go into the lab and look at what these bacteria do without understanding everything at all about such complex systems. You might be able to get good enough to kind of coexist with it in a clean way, which I…
You know, this is kind of a Wes Jackson project. He’s doing high tech. He’s doing clean tech. But he’s saying, in such a complex system we need to pay closer attention and we need to think about us as expressions of the land. That we human beings are bubbles of Earth, and that the ecology throws us up and there’s this brain, which is evolved to the point where we can have abstract conversations like we’re having right now. But that it belongs to a mammal that has relationships with the rest of the biosphere and coexists with the 80% of DNA in our bodies that is bacterial. And that this ecology, this biome that we are as individuals, is complicated enough that all you can do is hope to get along and maybe extend your life a little bit further. So you’re not going to be immortal.
So I guess what I’d like to do is kind of be the person that bridges the gap between these views and speaks for high tech, clean tech, the possibility of a sophisticated utopian civilization that is still working within physical reality and limits, and never goes transcendent. And maybe has no need to want to go to transcendence, because we’re already in such a good space, you know. We have this animal reality that when you’re in good health, when you’re not in pain, it’s all you really need.
Anderson: And that kind of brings us to the good. You know, there’ve been lots of different goods advanced in this project, and it feels like if there’s anything you see across them, it’s the role of emotion in deciding how to apply reason.
Robinson: The two are just ways of naming parts of brain functions that coalesce into a whole being, a whole consciousness. It’s really whole thought that in the end we need to be concerned with. And that has aspects of reasonability and aspects of emotion. And so scientific study of emotion is great. We’re learning all kinds of good things, and things that will help us and will contribute to the human good.
I was thinking about what you were saying there about how various people define the good, and I was wondering whether, being a utopian science fiction writer, my work has been an attempt to define a good state. Individually, you’re not in a good state if you’re in a pocket utopia. In other words, if it’s good for you, but other people are suffering, it’s not good anymore. And so then you have to get to this notion of everybody. And that’s why you come back to utopia, from my way of thinking. You say look, if everybody alive, and all the species, are doing well, then you can enjoy your doing well in a way that you couldn’t if there’s suffering to create your doing well.
And that can be pushed in all kinds of different ways. Maybe that does make you a vegetarian or whatnot, but in human terms—just sticking with human beings—it means that you need a just society, where everybody’s got adequacy. When you get to that space, then what you want is for everybody to have the good, and then you can enjoy your good.
Anderson: And that’s what’s tricky, right? How do you have…like, a utopian vision…but still kind of acknowledge pluralism?
Robinson: Utopia is not an end state but a name for a particular kind of history. So that it’s a dynamic state. You’re never going to get to perfection. That’s not humanly, biologically, or in this universe possible.
Anderson: Does it become meaningless, then, because it’s so big?
Robinson: But we’re not there. It’s just global. All it is is talking about sustainability, and about a civilization that can get by and give on to the next generation what it was given, and not trash the planet or cause a mass extinction event. It’s big, but it’s not too big. It’s not physically impossible to create a decent life for everybody and for all the species on the planet; it’s just very difficult. It’s an engineering problem and a distribution problem and an economic problem.
I would be interested to see somebody who would disagree with the notion that everybody ought to have an adequacy, or be able to live their own life freely and maximize their own ability to be themselves, whatever that might be. I mean, what’s the objection to that? And if that’s my definition of utopia, isn’t that big enough to encompass all these various objections?
Anderson: And there’s a big fundamental question beneath that, which makes me think of a conversation I had with a moral philosopher named Lawrence Torcello. And we talked about sort of the tension between pluralism, which you want; but not relativism, because some things are still bad. To what extent can you weld all of these different people and different desires into something that is…anything, anything utopian? And can you do that voluntarily?
Robinson: I think you can. It’s the global civilization. It’s democracy in action. There are certain cultures that are heavily unjust for certain of their members. There are old patriarchies that are wickedly cruel to their women. But the interesting question is, if they were democratic, and the oppressed second-class citizens were voting, so then is democracy just a trans value, or is that just one culture trying to impose its values on others [crosstalk] and you get into those—
Anderson: Right. Does it have some status as the good?
Robinson: Yes, exactly. And I think to myself, is there any human being that you can justify having less representation over their own affairs than others? And I keep thinking no, there isn’t, so this is a trans value. And maybe it goes back to the Paleolithic, where in a group everybody had their role; there was not this hierarchy of power of one group over another.
Anderson: So that could be one of your arational assumptions.
Robinson: Yes, although I would like to scientize it. I would like to say that it is more than that. But on the other hand, maybe not. Maybe this is just a matter of philosophy that’s emotional. But emotion— Um, emotions are crucial anyway. Meaning is a result of a certain feeling.
So my ultimate gut feeling is that everybody is equal to me. This is a kind of novelist’s feeling: there’s not a single person anywhere that has more or less rights than me. Well, this does go very deeply, and that’s a sort of individualism, and yet I believe in the collective, also. So there are balances all over the place.
Anderson: Do you think that conversation is possible? You know, because we do exist in a pluralistic world where we’re going to have people who will always say, “I don’t believe you.” Is there conversation there, or is there sort of a battle royale in politics, and culture and media, that sort of has to…hash it out?
Robinson: Well, I think it’s a little more the latter, especially at the crux points where cultural change happens. I think what you do is you try to make your stories plausible to the 51% or the 55% or the 60% that you need to convince in order to take political action, depending on your political setup. And then what happens is, if you can gather that many people together and you make political decisions and you just override the minority that doesn’t believe you and say, “Look, this is for the greater good. We’re voting these into place,” to try to make a response, it also could happen that quickly the culture will change to where suddenly everybody believes that to be true.
And so, I guess the story that I tell is that we’re doing really well when we pay attention to science and use it, and yet keep it in its place and continue to tell our stories with more and more data to back them. And so I think of science as being the utopian effort. That back in the 1600s we began to try to make a better world and a more just world and a less-suffering…simply a more comfortable and less-suffering world for humans, and that that’s been a stepwise process with lots of setbacks and lots of successes that created worse problems. And moments of hubris where science tried to go too far and say, “What should be done?” and at that point it had to get slapped down and revise itself. And that that whole process has been so interesting and so powerful, so suggestive of further progress.
So I tell a science fiction. It makes my project cohere, so that I’m not just writing one book after another or telling stories randomly but trying to tell a bigger story that is the story of science and society together.
Aengus Anderson: So as Neil mentioned up front, this is an interview that does connect to everything. And that's really exciting but it's also a little bit daunting to talk about when you're at the end of the conversation. Before we record these things we always make our little bullet point lists of like, "well what are the things we want to talk about here," knowing full well that we don't really want to take more than a few minutes of your time. And with Kim Stanley Robinson that's really hard, because we kind of want to talk about all of this. This is a conversation you could talk about all day.
Micah Saul: Because it connects with everything, this…like, I'm really happy that this came so late in the project. You're able to get this like amazing synthesis of the project as a whole through this one conversation.
I'm just going to jump in and say like, what I enjoyed most about this conversation was that here's somebody willing to go to bat for utopia.
Anderson: [laughs] Yeah! I think between Kim and Claire Evans, those are our people. But it's not utopia…in the sense that other people in the project have dismissed offhand. I mean, he's redefining what utopia is into something that's not the island off the coast of nowhere…but it's something else.
Neil Prendergast: Right, and it doesn't have to be this perfect world. It's something that you work toward, it seems.
Anderson: And it's got this iterative quality.
Actually that made me think of Jason Kelly Johnson a little bit, with his sort of…his amazing architecture. And all of these architectural projects are iterative. And Kim Stanley Robinson is doing that with science fiction and all of society. He's just playing out these scenarios and tweaking stuff and it's like he runs the experiment again. And I think that's…really cool.
Prendergast: Yeah, one thing I want to mention too is, following on Micah's observation there about you know, the timing of this interview, that it's later in the project. And for me there's kind of a turn and it happens right around Charles Hugh Smith where…we're really kind of I think thinking a little bit more about well what's the action? What's the next step following the insight from whoever Aengus interviewed.
And there's something here that dovetails with Smith. Robinson's not so much concerned with some sort of like, grassroots like, everybody involved, everybody in agreement kind of change. You know, he's really talking about democracy being something where it's that majority that counts. And he has something in there where he says, oftentimes once a majority that's interested in the greater good establishes something, it then becomes sort of an unquestioned truth following that. And so I think there's a real hope there for change. Because the barrier is a little bit less high.
Anderson: Yeah, I can think of a couple people earlier in the project who touched on that, but it's cool to see this coming up again, and it's really cool to see it coming up in the context of a science fiction author. I think a lot of people think oh, science fiction you know. He's exploring scenarios that are so far beyond conception…what could be pragmatic about this? But it's deeply pragmatic.
Saul: As someone who's read a decent amount of Robinson that's something that's always…really jumped out at me about his writing. He's not writing space operas. He's not writing far future. In fact, his books sometimes read as like…I don't know, narrative histories of a future that's you know, happening next year. Pragmatic is exactly the way to word it. His science fiction is pragmatic, and it seems like his philosophy is pragmatic. There's a really interesting sort of…pragmatic approach to that conflict between pessimism and optimism, I found here as well. Would you agree with me on that?
Anderson: Well that's kind of interesting, because as I was thinking about pragmatism there, and about pessimism and optimism, I kept wondering: can you be pragmatic and utopian?
Prendergast: I think he suggests you can.
Anderson: You probably hear this in my voice in the interview like, this is something that I went after a lot. Because instinctually, I have a hard time feeling like…someone who's really talking about utopia in a serious way can really be that pragmatic? He is…but, I want to pull that apart more. Because I feel like maybe if we were to put it on poles—like we've got tech utopian thinking on one end and we've got a belief in limits on the other end. And like, those are both poles of Kim's thought in this thing. And there aren't a lot of people in this project who've had both of those, right. We've had utopian thinkers; we've had Robert Zubrin. We've had people who are really interested in limits, like immediately after Robert Zubrin we get Wes Jackson, who talks about limits eloquently. And yet Kim has both of these things in his conversation. And I think I was having a hard time squaring them. And I think I still am. Like, I see where he's walking a line, but I also feel like I'm not really convinced that you can.
Prendergast: I think key to all that is a sense of humility, as the two of you discussed. And that comes right out of Wes Jackson following in the tradition of a land ethic. It comes out of environmental thinking in the United States. But that sense of humility, I think, says, "Yeah, I can do pragmatic things. I can use science. But I understand that there are other species that matter, too." And that's where the limits to growth come in.
Anderson: If I can really spell out the thing that like, didn't quite ring right for me, it was sort of…Kim's description of the fulfilled life. He talks about a lot of things that sound really good to me, and might sound really good to you guys and to our listeners. You know, a life which is not necessarily full of electronic doodads, but maybe is lived in a world with a phenomenally advanced healthcare system, a world that also gives you time and space to go play in nature. Which lets you be the most of what you are as an animal. There's actually a little reference there that takes us back to Rebecca Costa.
But, I keep thinking like, well yeah that does speak to me, but I need to ask hard questions. What about the people for whom that doesn't work? What about the people who are like, "I want my own F-16 and I really wanna—I mean, I just want to bomb other countries with it? Like, that's my utopia." It's hard for me to ask the question from that guy's perspective, but I feel that that guy…is out there. In any utopia it's like, how do you really deal with that guy? And we talk a little bit about that. And we talked more about it in stuff that didn't make the final cut, but I don't think it settled the question, you know. And so what do you guys think about some of those things?
Saul: In many ways that comes back to the same sort of questions and problems we had with Torcello, right? It's what happens when there's that one person in the room that's just dedicated to throwing chairs. That is of course the major…failing (if you want to call it a failing) of talking about pluralism. One has to assume there's an answer?
Prendergast: Well maybe I'll try to bring us a little bit full circle, at least in my own thoughts. I think this is where, you know, he does rely on the social organization of democracy. Yeah, you're gonna have that one guy with the F-16s, right? Or wanting the F-16s. And what're you going to do about that guy?
Well, hopefully there are enough people in the democracy who are interested in a greater good, and they can form a majority, and they can use the power of government to make certain that that person doesn't get to have the F-16.
Saul: I think that's a great ending point, but I kind of feel like I need to just throw this out there for personal reasons. This might be my favorite conversation in the project. Because…I am a recovering tech utopianist. There's a— [Aengus laughs] I mean, you guys know this.
Anderson: [still laughing] It's like suddenly we stumbled into the AA meeting. [all laugh]
Saul: You know, but for our listeners, if we have any listeners left: years ago Aengus and I sat in my living room in San Francisco— Or actually I guess it was our living room in San Francisco at the time. And he interviewed me about what I was most excited about, or something…I think that was the prompt. And I started talking about tech, and started talking about the transformative power of technology and the Internet, in a way that now, after…growing, and after this project, I look back at and just sort of hang my head in shame. I sound…like a lot of the people that I…maybe beat up a little bit in this project.
But still, I work in tech. I now work for the government, in tech, trying to make things better for people. So I obviously still believe that tech has the power to make the world a better place. And Robinson finally gave me somebody in the project I can point to that feels like he's answering some of the questions that I have on a daily basis. And he's giving me a new way to think about how to balance my more recently-found distrust of scientism and tech, with my continued belief that these tools have the possibility of really helping.
So…yeah. Little personal note aside, I really really appreciated this.
Anderson: Well, if Neil gave us a good place to end I think that's an even better place to end, because that's a pretty [crosstalk] cool anecdote.
Prendergast: Agreed. Much agreed, that's great.
Anderson: This is The Conversation, and that was Kim Stanley Robinson, recorded June 17th, 2013 in Davis, California.
You can find this interview at The Conversation web site, with project notes, comments, and taxonomic organization specific to The Conversation.