Micah Saul: This project is built on a hypothesis. There are moments in history when the status quo fails. Political systems prove insufficient, religious ideas unsatisfactory, social structures intolerable. These are moments of crisis.
Aengus Anderson: During some of these moments, great minds have entered into conversation and torn apart inherited ideas, dethroning truths, combining old thoughts, and creating new ideas. They’ve shaped the norms of future generations.
Saul: Every era has its issues, but do ours warrant The Conversation? If they do, is it happening?
Anderson: We’ll be exploring these sorts of questions through conversations with a cross-section of American thinkers, people who are critiquing some aspect of normality and offering an alternative vision of the future. People who might be having The Conversation.
Saul: Like a real conversation, this project is going to be subjective. It will frequently change directions, connect unexpected ideas, and wander between the tangible and the abstract. It will leave us with far more questions than answers because after all, nobody has a monopoly on dreaming about the future.
Anderson: I’m Aengus Anderson.
Saul: And I’m Micah Saul. And you’re listening to The Conversation.
Micah Saul: So here we are, sitting in an apartment in Brooklyn together.
Aengus Anderson: We’re not squatting.
Saul: We’re not talking on the phone, and…
Anderson: No, we can actually hear each other, for the most part. Whether or not we listen to each other is of course…all bets are off there.
Saul: Basically. So, today’s episode is with Tim Cannon at Grindhouse Wetware in Pittsburgh, Pennsylvania. Grindhouse Wetware are a group of DIY basement biohackers. They build implantables. They are very much transhumanists, similar to Max More.
Anderson: And that’s a theme that actually I’m really glad we have back, because it’s informed a lot of our conversations. Maybe an inordinate number of our conversations, because it is one of the most fundamentally new ideas on the block.
Saul: But these guys are actually doing it. They are building things and putting them in their bodies that move them beyond the just human. They are adding extrasensory organs, basically.
Anderson: And we get into that a little bit in the beginning of the conversation. But something that we do need to sort of talk a little bit more about, that ties into broader transhumanist themes is the Singularity, which kind of pops up throughout this conversation, and I only realized as I was editing it that we didn’t define it anywhere.
Saul: So, in physics, a singularity is the center of a black hole. In math, a singularity is…basically it’s a vertical asymptote on a graph. It’s where the slope of the graph reaches infinity. In technology, the technological Singularity is when the slope of technological progress reaches near-verticality.
Anderson: Right, and this is an idea that’s really been popularized by Ray Kurzweil.
Saul: Right. It originally came from some science fiction writing, but Kurzweil’s really taken it and run with it. So, the idea is that if you were to chart technological progress against time, it is not a linear curve. It is a rapidly increasing curve. And at some point, technological progress over time is happening so fast that we lose the ability to know what it is.
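[A worked sketch of those two shapes, added for illustration rather than taken from the episode: plain exponential growth can be written as $P(t) = P_0 \, 2^{t/\tau}$, where the slope keeps growing but never actually becomes infinite, while the hyperbolic form some Singularity writers use, $P(t) = \frac{C}{t_s - t}$ with $\lim_{t \to t_s^-} P(t) = \infty$, has a true vertical asymptote at $t = t_s$, which is the “singularity” in the strict mathematical sense described above.]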
Anderson: Which ties in with a lot of other themes that we’ve seen. The idea of things picking up, the idea of so much information that it actually outstrips our ability to know. So, the Singularity as sort of an abstract concept actually ties into a lot of tangible ideas that we’ve been batting around in this project.
Saul: Right.
Anderson: So that’s something that you should definitely keep in mind as you go into this. Because Tim will mention it at a couple of points.
Saul: I think that’s the best point now to just give you Tim Cannon at Grindhouse Wetware.
Tim Cannon: My name’s Tim Cannon, and I’m a software engineer and designer, and I do biohacking and anything that is based on do-it-yourself human augmentation. I guess you’d call me a transhumanist. You know, I definitely believe that biology is something that we should be trying to transcend. Not even improve. Just get over it and that sort of thing and move on from it. And I think that that’s going to start in small steps, and it’s going to have two very different vectors, scientific and social. And I think that the social vector is going to be a lot more difficult to overcome than the scientific vector. So yeah, mainly my purpose is to kind of get this technology socially accepted, and get it into the hands of people, open the code, teach them to use it themselves. Really put their fate in their own hands. Because the concern of course is that these devices are coming, right, and you can have them brought to you by Apple and Halliburton, or you can have them brought to you by the open source community who encourages you to crack them open.
Anderson: How did you get interested in this?
Cannon: I would say that I have been interested in this since I was a child. I think I realized that I was probably living at the cusp of a time where, at some point in my life it would be that way. I thought it would be a little later. I thought I would be too old to get involved. But I thought I’d see it happen.
Cut to twenty years later and I see a video from Lepht Anonym, finger magnets. And I’d seen a TED talk from Kevin Warwick about how cybernetics is within our grasp, and he said, “Don’t be a chicken. Get the implant.” And then Lepht Anonym said, “Hey, don’t be a chicken, get the implant.” No, this was about April, and by May I had a finger magnet. I mean it was that— I mean I… Really? It’s started? Let’s go! You know what I mean? Like, I wish I’d known.
I had somebody ask me like, “Did you wanna start in baby steps with the magnet?” I said no if I had the devices I’ve got in my basement now, they’d be in me, too. It wasn’t about small starts, it was about that was what was available because I wanted it all.
Anderson: What is the finger magnet?
Cannon: Well, I have a neodymium finger magnet implanted in my left index finger. And when there are electromagnetic fields in my presence, I can feel them. And so microwaves kick these things off, high tension power lines, really powerful magnets are quite intimidating, actually, nowadays. It’s really funny how mundane you think these things are, and then you get this level of enhancement where you truly realize the power that’s in that little piece of metal. Like, what’s in there. And you know, I’ve held these hard drive magnets where I’ll put my finger over it and I’m whoa, okay I’m not getting any closer, that feels awkward.
Anderson: Really.
Cannon: Yeah, it really does, and it gives you a whole new respect. And it’s another layer of data. It’d be like if you just went into a place one day and got to see this extra color. Your whole world would be just slightly more colorful, but that’s enough for you to find different and unique patterns, you know.
Anderson: So, what are the projects you’re working on now?
Cannon: Well, our primary focus, we—
Anderson: And we should probably say who we is.
Cannon: Oh, I’m sorry. Grindhouse Wetware is the name we work under, basically. And it’s just a group of guys and girls. Right now we work on about three projects, and we have maybe another five devices in the chute.
So, we have one called Thinking Cap, and it passes voltage through your brain. Small amounts of voltage and amperage, two milliamp, which is like nothing. And basically this raises the potential firing rate of your neurons, so a lot of people call it overclocking your brain. And it causes a state of hyperfocus, and it’s been proven to increase memory retention, concentration, these sorts of things. So, as you can imagine, that device leads us to be able to create other devices because, you know, pop it on and then study, or whatever. The effect lasts about two hours. It’s pretty interesting.
And then we have a device called Bottlenose, which if you have a finger magnet, it has a range finder that converts the range data into a pulse delay so that when things are closer it’s pulsing faster, and when things are further away it’s pulsing slower. And it just lets out a little electromagnetic field, so it makes no sound, nobody unaugmented can use the device. It’s built by cyborgs for cyborgs. And it basically just allows you to use your finger magnet to get the sonar, so you can kind of close your eyes and navigate a room or something like that, just with your finger magnet as your only sensory organ.
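[A minimal sketch, in Python, of the distance-to-pulse mapping described above; the constants and the read_range_cm/pulse_coil hardware calls are illustrative assumptions, not Grindhouse Wetware’s actual firmware.]

import time

MIN_RANGE_CM = 5      # assumed nearest distance the range finder reports
MAX_RANGE_CM = 300    # assumed farthest distance it reports
MIN_DELAY_S = 0.05    # fast pulsing when something is very close
MAX_DELAY_S = 1.0     # slow pulsing when nothing is nearby

def range_to_delay(range_cm):
    """Linearly map distance to the delay between electromagnetic pulses."""
    clamped = max(MIN_RANGE_CM, min(range_cm, MAX_RANGE_CM))
    fraction = (clamped - MIN_RANGE_CM) / (MAX_RANGE_CM - MIN_RANGE_CM)
    return MIN_DELAY_S + fraction * (MAX_DELAY_S - MIN_DELAY_S)

def run(read_range_cm, pulse_coil):
    """Read the range finder, fire a pulse the finger magnet can feel, wait, repeat."""
    while True:
        delay = range_to_delay(read_range_cm())
        pulse_coil()
        time.sleep(delay)

# Example with stand-in hardware functions:
# run(read_range_cm=lambda: 120.0, pulse_coil=lambda: None)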
And then the third device, which we haven’t released to the public yet, is called HeLED[?]. It’s going to be implanted in my arm. My model, which will be like the uber-prototype model, is going to have eight LEDs that shine up through my skin displaying the time in binary. Because I’m a giant nerd and we love binary because it makes us feel like we know things that other people don’t know. And so, it’ll be displaying the time in binary, but additionally it’s Bluetooth-enabled. It will have four gigs of storage. And then we have a temperature sensor and a pulse monitor, so that you can collect biomedical data on the device and then kick it up to an Android phone which will kick it up to servers, and then we can kind of analyze all this health data and quantify it, and these sorts of things. So it’s kind of a multi-purpose device to capture your body modification people, because they’re going to be like, “Can you turn that into a circle that looks like a gear and glows up to—” and then you’re going to have the quantified self people being like, “More sensors, more data!” And then you’re going to have the biohackers in general just being like, “Wow, I can’t wait to make this device way more awesome because you guys aren’t that smart.”
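[A similar minimal sketch of the binary time display; the bit layout is an assumption, since eight LEDs force a compromise, here four bits for the hour modulo 16 and four bits for the minute in four-minute steps, and none of this is the real HeLED firmware.]

from datetime import datetime

def time_to_led_bits(now):
    """Return eight 0/1 values, one per LED, most significant bit first."""
    hour_bits = [(now.hour >> i) & 1 for i in range(3, -1, -1)]             # hour mod 16
    minute_bits = [((now.minute // 4) >> i) & 1 for i in range(3, -1, -1)]  # 0 through 14
    return hour_bits + minute_bits

print(time_to_led_bits(datetime(2012, 8, 23, 13, 37)))   # [1, 1, 0, 1, 1, 0, 0, 1]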
Anderson: So, where does all of this go? What’s the ideal state?
Cannon: I mean, the really long-range goal is to transcend humanity in general, transcend biology. I know there’s a lot of biohackers who want to do stuff with genetics. I tend to feel that that’s a waste of time. Basically to me genetics seems like you found a lump of mud that just happened from a storm, and now you’re going to try to turn it into a house instead of using building materials, you know what I mean. And it’s like…just use building materials. Well, you know of course there’s going to be problems no matter what vessel you choose to occupy. But I think that we can make a much more practical and durable and modular and easier to improve situation without biology. I think you cut that out of the equation.
So, I would imagine eventually we’ll just start offloading pieces of our brain and biology into electronic— integrating it slowly until no human remains. And then at that point I mean, you’ve got thousands of years to figure out how to live tens of thousands of years.
Anderson: So is the ultimate goal more life? I almost started this project— Second interview was Max More at Alcor Life Extension.
Cannon: Yeah
Anderson: And for him, life extension is sort of…that’s the end game. It’s being longer.
Cannon: Yeah, I mean… I don’t want to die, and I don’t have any intention of dying. I don’t like sleeping. Because I’m…I go out. You only get this tiny amount of life and so many people hit their death bed with regrets and wishes that they could fix things and… I think that the idea that you should accept death is just ridiculous. I mean like “Well of course you should accept death,” and you’re like, “Yeah, exactly. Just like we accept our poor eyesight. So we don’t ever get ourselves glasses, or…” We are always using technology to improve and enhance our experience, and we’re going to continue to do that. That’s definitely the goal, the long-term goal.
The short-term goal is probably to put human enhancement in the hands of the general populace. I don’t like the idea that some guy in the ghetto doesn’t get the artificial heart, but the rich asshole, he does get it. And why? It’s not because he’s a better person. It’s not because…any other vector other than the fact that he just managed to greedily collect more beans than the rest of the tribe.
Anderson: Is there a hurry to do this? Is this something we should be moving slowly into because it has massive ramifications for actually questioning what it means to be human?
Cannon: I wouldn’t say hurry, but I just think that it should be done with all deliberate haste. If you don’t transcend the problem, you’re going to… People are going to die and you don’t want to lose those minds.
I try not to get caught up with predicting what could happen, because I think the term Singularity is very appropriate. Because, if you think about what scientists and enthusiasts call the Singularity… You know, a lot of people view it as a messianic event, or like you know, “Oh, it’s going to be the golden age—the age of Aquarius!” you know.
Anderson: I’m glad you’re bringing that up, because I wanted that term specifically in this project.
Cannon: Yeah, and I don’t think people understand that a singularity in a black hole, which is from whence the idea comes, it’s where you’re at this event horizon and you can’t even predict what’s going on inside there. All physics is broken down, so there’s just absolutely— I mean, it could be clowns juggling bowling balls, you know what I mean, in the center of a black hole, and it’s just as likely as anything else. Because you just have no—there’s not enough data.
Anderson: Could there be danger in there?
Cannon: Absolutely. That’s what I’m getting at. But—
Anderson: I guess clowns juggling bowling balls is definitely dangerous.
Cannon: Clearly you’ve never seen Stephen King’s It, sir.
Anderson: Clearly.
Cannon: But, I just think that the world is a very dangerous place, and there are people with very dangerous ideas. And as good people, you just want to try as hard as you can to keep the balance in the favor of people who want to help the poor, advance humanity, educate themselves and others around them. And I think that’s the best you can do, because not making the technology just means that Halliburton makes it first. And then they put it in a whole bunch of guys who then go kill people in other countries whose resources we need.
Anderson: So you see this technology as inevitable?
Cannon: Oh man, yeah. Yeah, I don’t even… It’s never even occurred to me that it wouldn’t be this way.
Anderson: A theme that comes up in this project a lot, because I’ve talked to people from all different walks of life—
Cannon: Right.
Anderson: —and one theme is sort of crisis and collapse. If we have some sort of gigantic social unraveling, does that put the brakes on technology?
Cannon: I see humans as marvelously resilient creatures. I think it would be impossible to now root it out of our culture. There are too many things that we know that are highly pervasive facts. And the more intelligent and educated you are, the easier it is to very quickly build those things back up. I’m not worried. I don’t worry about collapse, because even if it does come, I just know that there’ll be a rebuilding effort and that it’s completely out of our hands and it’s impossible to predict.
Anderson: These technologies, which you see as inevitable, a lot of people I think are afraid of them because it seems like there’s no way to opt out. And that’s something I talked to John Zerzan, who is kind of a neoprimitivist thinker. And he was talking about, people ask him, “Why do you use a computer?” And he’s like, “You can’t not use a computer if you want to get a message out. I would love to live in a cave, and I can’t opt out.” And it seems like you can make sort of a similar analogy with transhumanism. At some point, you have people who have made themselves better than other people—
Cannon: Right.
Anderson: —and it’s kind of like, the people who choose not to do that can’t really not choose. Because then they lose. A couple people can make a decision that everyone effectively has to follow. It’s like being the first on the block in the 1400s with a gun.
Cannon: Right. I mean, I would say that the Amish aren’t winning. But they seem happy. There is a decided cultural minimum, and that’s been forever, and it’s been unfair forever. I think that if neoprimitives want to be neoprimitives, we should definitely set up a reservation for them, you know? Go! Be free range, Neo, and that sort of thing. And the people who want to live a slower lifestyle, like I said, watching Teen Mom on MTV and getting fat, great. Enjoy that, too. I need to hurtle myself out into space because I’ve got to know what the hell the center of the universe looks like, and I’d like you to stay out of my way, so here’s a bunch of free shit.
Anderson: But there’s a huge power dynamic there, right?
Cannon: Yeah, you definitely—
Anderson: Because they may have the material goods, but they don’t have the agency in that case.
Cannon: Well, right, and another problematic situation is that if they do get in your way, you know what I mean… Transhumanism, there is a problem, because you begin to leave your humanity behind—
Anderson: Right, and I think that’s a frightening thing for a lot of people, right, because you lose the definition?
Cannon: Well… They’ve all lost their humanity, given enough steps back. Most people go to zoos and they’re like, “This is fine. And you know, that monkey becomes a problem…we’re not going to hear it out, we’re just going to shoot it.” We put down dogs. And these are very close, clearly more fit species, as their longevity implies.
Anderson: So there’s nothing lost if you change out of being the monkey, then? Like, there’s nothing intrinsically good about being a human?
Cannon: I can… I mean, intrinsically good? No, I mean it’s all relative. I mean, we are a communal animal that’s developed to believe that it’s the center of the universe. And we behave as such. You know, we want to conquer, because our brain is wired to want to eat and fuck another day, you know what I mean. That’s what we’re wired to do. That’s where our evil comes from. That’s our ambi— It’s our animal roots that cause us to need things, and desire things.
Anderson: So is overcoming those animal sorts of desires, is that part of this?
Cannon: I think so. I think it’d be great to be like, “You know, I don’t want to be hungry anymore.” So for example if I’m hungry, I may end up overeating or eating something that is bad for me. If I’m really hungry my blood sugar goes low and then I start making poor decisions and I am not as intellectually acute. And so I may end up behaving badly.
I don’t like the governance that my physical body has over my behavior. And so when I want to transcend humanity, I don’t necessarily think that I’m talking about the intellectual software which was born of the hardware. I’m talking about removing the problems that exist in the hardware that can override the software that we’ve developed. Because the bottom line is our current social software says, “Don’t murder people, and really don’t eat people,” right. And yet, if I’m hungry on a cold mountain, and it’s you and me… I mean, I am a realist enough to admit that I’m going to do what I gotta do, right. And that sucks, you know what I mean? Like, I hate that, you know. I’d rather just not need to eat, or not need to survive.
I see all of the good things about us, I see as the parts that we’ve already transcended.
Anderson: Hmm. What do you mean by that?
Cannon: In other words, our realization that you know hmm, maybe we should check if a woman gives consent before having sex with her. I think that’s a great advancement. Not how we’re naturally built. We are not built to know what’s right or wrong. We’ve developed what’s right or wrong. We’re built to be communal. Which is different. We’re built to skirt the edges of what is tolerated.
Anderson: Well, and that’s what I was wondering. Like, is part of being a communal animal, having kind of an inborn morality, almost?
Cannon: Yes. I believe altruism is an evolutionary characteristic. But… Okay see, I think that we had hardware forever. And we were trying to upgrade using hardware. And then language came. And then it became a software upgrade. And software is a lot easier and a lot faster to develop than hardware, right. I mean, I think people missed…like, could not understand the idea that there were humans without language, which means they had no inner monologue. Try to imagine nothing in there. I mean just nobody talking inside your head—
Anderson: I mean, that almost seems like an earlier Singularity.
Cannon: Right, exactly. It’s this major leap where we were able to go, “Hey guys, maybe we should think about this,” and you were able to break it into pieces, and conquer it, and that sort of thing. And I think, so for example, our bark, our lash, you know our lashing out. We’ve started to tamp that down. But when you snap at your spouse or girlfriend, you might go back and apply logic to that. And that’s good for you. But the fact is you just snapped like a dog. You got angry and you snapped. And you didn’t mean to do it, you couldn’t help it. And you’re so bound by that, and it’s frustrating. It’s frustrating to be bound by that. It is preventing me from always being tolerant, and loving, and you know, that sort of thing.
The software upgrade was the beginning of transcending the hardware limitations. And that’s why I think that when people say “human” and is there anything intrinsically good about being a human, I would say no. Because that’s the beast. That’s the thing that commits incest and rape and murder—
Anderson: But isn’t it also the software, isn’t that part of what’s being human? I mean, that is now what’s human, right?
Cannon: I think it’s accidenta— I think it was a fortuitous event, you know what I mean, that took place because of the right things coming together. I don’t think it’s intentional. Like, if you look at—
Anderson: But it can still be accidental like evolution and still be what you are, right?
Cannon: Um… Let’s just put it this way. If we killed everybody and just left babies that were blank slates, I don’t know that communication would evolve again for quite some time. And I think that we would go on just fine being those base humans without that. And when I think of what it requires to be human, I think of what’s integral. You know, transcending your needs to do horrible things because you’re an animal responding to stimuli, it’s not requi— Enlightenment is not necessary. Goodness is not nec—
Anderson: But it seems like it is for some, right? Because if that wasn’t a requirement on some level, why would there be transhumanists? Like, why would you need to do that?
Cannon: You know what? You may have something there? Because I will say this: The chemical reaction that goes off in your brain when you’re inspired by complexity leads to curiosity, which then of course is allowing us to transcend these baser things. So perhaps, if there is one good thing about humanity, it’s that intrinsically we have accidentally been tuned towards awe and complexity.
Anderson: Hm. Let’s go back to the primitive world.
Cannon: Mm hm.
Anderson: These undomesticated people. Could they have similarly rich lives, even though they lack all of the sort of…the many many layers of civilization and thought that we have?
Cannon: I don’t think that that’s even quantifiable. But, that being said, I think that it’s just as likely that they can sit in a cave and transcend their worldly desires, and I think that they can make analogies and have these rich lives on simpler levels. I don’t think that intellect leads to happiness or richness in your life, it just leads to knowledge.
Anderson: So, if the primitive people are neither happier nor less happy, and yet for you there’s definitely a case to be made for being more happy, by transcending biology…
Cannon: Less miserable.
Anderson: Less miserable. So, do the people who have no conception of happiness or misery ultimately kind of win because they’re not thinking about this stuff at all? And are we just sort of running away from this, trying to…design our way out of it?
Cannon: No, I wouldn’t…I wouldn’t say that, because they’re going to have low points and they’re going to have high points. And they may experience the loss of a relative, and they’re deeply saddened. Well yes, they can’t be deeply saddened by stock market crashes, right, which is a bonus. But also they can’t formulate the words to express themselves and get empathy from another human being as efficiently as somebody who’s mastered complex language, or something like that, let’s say. Or define things into shades of gray. I mean, I really honestly think that that’s what we’re really talking about, is the granularity at which… You know, do you want a high-def TV, because yes, you will see it all in clarity. All of it. Which isn’t always pleasant.
Anderson: That’s interesting. So, there’s kind of a ratcheting up of everything.
Cannon: Yes.
Anderson: I think that’s a really deep underlying theme in a lot of the conversations I’ve had about the future with people. I’m thinking of a guy whose interview I just posted today, and he works at The Land Institute. His name’s Wes Jackson. He knows his science, and he is a scientist. And yet when I talked to him, he was also concerned about technological fundamentalism. And I think kind of underneath a lot of that conversation was the sense that like, by always seeking more, you are ratcheting up everywhere the ability for more pleasure but also for more risk. And risk is something that a few people can make the decisions to take. But collectively, we all share the burden of it. And he’s thinking more in terms of food systems and energy systems, kind of an overreach that leads to a famine.
If we take that idea and we sort of look over at transhumanism, which is kind of a different but related conversation, and we ask, is it in its ratcheting up of everything…and we talk about the Singularity being the point at which there’s nothing known, and it could be amazing, but it could obliterate us—
Cannon: Absolutely. Yeah. We could be marching headlong into our own destruction. I mean, perhaps I’m just a little more realistic, but I just, I mean… If we’re talking about our own destruction and how likely it is, taking a look at the chaos and absolute lack of concern for life that the universe clearly has… I mean massive destruction everywhere we look. And those are punctuations of the massive amounts of nothing—
Anderson: But I think what scares people is the idea, it’s easier to be okay with a natural disaster than it is with a man-made disaster. Because it feels like we’re culpable for it.
Cannon: I think it’s bizarre to recognize that the fear that you have is based on a psychological truth and not a real truth, and then go, “But let’s go with the psychological truth.”
Anderson: Hm. Explain that.
Cannon: In other words, what people are saying is like, “Yes, of course we’re out of control and we could be killed at any minute. But, it feels worse when it’s us.”
Anderson: Well, but it is something… It’s avertible. Whereas if a supernova happens, there’s really nothing we can do about it. But if this is something—
Cannon: But I don’t think it’s avertible. We’re in such a complex system that you can’t. There’s no way. I mean, DIY bio is just the tip of the iceberg. I mean, try to imagine when you’re capable of… I mean extremely soon if not already…biohackers working with genetics will be fully capable of creating viruses that just wipe everyone out, right. And there is not a thing that anybody can do to stop it. Nothing. Our destruction at our own hands is not avoidable.
Anderson: But that’s…I think that’s given a certain cultural setting, right. But if that world unravels, as say Wes Jackson is concerned about, then is it a given? All of these things require a massive technological infrastructure to be able to do, you know. [crosstalk] If you can’t go to the store and get a breadboard—
Cannon: We’ll build it back. I mean, we’ll build it back up. I mean, you know—
Anderson: So you’re thinking long-term. Eventually kind of a Canticle for Leibowitz sort of thing. You nuke yourself and you re-evolve and you nuke yourself and you re-evolve—
Cannon: Yeah, there’s just no… Well, I mean venom evolved something like five separate times. So clearly venom is a winner. It’s unavoidable, and particularly with… I mean if we’re not talking about severely changing the biology of a human so that their brain doesn’t secrete dopamine, which is— I mean, dopamine is the chemical that says, “Go here. Do this.” Monkeys will choose cocaine over food until they starve themselves to death, and it’s because it pretty much floods the brain with dopamine, right. Dopamine is released when we are inspired by ideas and complexity. We’re going to continue to follow that road until it ends. Bonus: it doesn’t end. And if we kill ourselves, this planet is going to shit out some other species with intelligence, just like venom. And they’re going to reach, and maybe they’ll get it right if we don’t, you know what I mean. And that’d be great. I don’t—
Anderson: There’s a real sense of determinism there. I mean, it almost feels like the fix is in and the universe just moves towards complexity. And the way we’ve sort of framed it here, is that transhumanism is inevitable, because complexity is inevitable. There’s a strong statement in framing anything as inevitable, because then you take it off the table for discussion. But if we do frame it as inevitable, then in that future how do we decide what is a good value or what is a bad value? And this is something that I like to push people on, because it always ends up with the arational.
Cannon: Yeah, I make no apologies for the fact that I do a lot of things that… I wouldn’t say they’re irrational. I have a perfectly great rationale, which is that my hardware is guiding me, and I have no idea how I’ll behave once I remove that. But the fact is right now I’m seeking dopamine. And how we train ourselves to acquire that feeling tends to be what shapes our personalities and behaviors, I think in a big way.
Anderson: And yet there’s still wiggle room in there, right. You are different from me. So it seems like there is some play in that values can’t just be derived from what the dopamine pushes you towards.
Cannon: No, that would not be a good idea at all, as I mentioned in the monkeys/cocaine example. You don’t want to be guided by that, but we’re beginning to tune ourselves, attempting to leave the least footprint. And I think that of all of the things that you can kind of do, being happy, whatever that means, that feeling that you crave, without affecting other things that might have competing desires I think is probably the direction that we should try to go. You look at Jains and the religion of Jainism, and it’s all about leaving the least footprint and doing the least harm, and those sorts of things. And I think—
Anderson: But isn’t that very different from the conversation we’ve been having where we’re talking about going into a future where all bets are off? You know, which is a decision that a few can make for the many?
Cannon: Absolutely. Yeah, I know. But when we’re talking about what makes a good value and what makes a bad value, I think that that’s a pretty good guidepost, you know. I don’t necessarily know if that’s the way we are headed. I think that’s the way we should be headed.
And I would assert that these things are a product of our biology. I think that the reason that we ask ourselves these deep, penetrating philosophical questions about why we do the things we do… I mean, it’s a giant shade over things to kind of not admit that we’re doing the things that we do because we’re electricity running across hardware. And that’s rough. I think it discounts the idea that there could not be a why. And there could not be a direction, and you know this— I don’t think there’s an endgame. I think we are just, we’re here. None of this means anything. We’re a bunch of… We’re…soup that’s moving, you know what I mean? I think just the fact that we can conceive of a direction and morality is…whoo, we’re already doing way better than we should be, you know. And I think that if we continue in that direction, we… There’s no benefit. You can’t…you how, how are you going to make the universe better? Is it better to not exist, or exist? What if just life as a process in the universe actually is completely destructive to it? And then [in] that case what do you say? You know, there’s no philosophical argument like, “Well everybody, cash in your chips!”
Anderson: Well, and then why not? Because it seems like if you get to that point of such relativity, you do get to nihilism. Things can’t necessarily have meaning because they’re relative.
Cannon: Because the why not ends up being completely—like you said, arational, you know what I mean. When you say why or why not, it’s because well, I really like orgasms and you can’t have them when you’re dead, you know. Ten out of ten people prefer experience to not experiencing shit, you know. I mean, it just seems like everybody’s in agreement, but you know, I don’t…I think that you know, the why of it is truly just because fun is fun.
For me, I’m so aware of how meaningless we are in the grand scheme of things, and how meaningless our survival or achievement would be, that to me none of that provokes fear. I’m not afraid of that sort of stuff because it’s like, “Society collapsed!” Well, clearly we fucked up, you know what I mean. Like, we got what we deserved. I kind of view myself as very detached from the morality of it all, because I don’t know that any of this stuff will pan out. I don’t know what the Singularity’s going to hold. I can’t know, you know, if, you know, or even if that will be—
Anderson: Right. By definition.
Cannon: Right, yeah. And so I kind of don’t think that it’s productive to venture guesses. I mean, it’d be like asking your dog advice on how to pilot an aircraft, you know. It’s like, you know, it’s just not, his input is not going to be valid at all. It’s not even going to be helpful, it’ll be counterproductive to attempt. So I mean, I think that that tends to be the problem, is that we want to be able to control this outcome. And so in order to satiate that desire, we talk about it. Whereas there are closer future issues which we can control, where the goalposts are going to agilely move as we find out new information.
And maybe this is just too much of the software developer in me, but we use a process, a lot of software developers use a process called agile development. And the idea is you don’t know what the end product is going to be, because the customer’s always gonna change their mind, you’re going to run into showstoppers, and you’re going to hit all these walls. But if every two weeks, you re-evaluate where you are and what your priorities are, and you’re constantly, iteratively, trying to make the best right decision with the best information that you have, then the end result will probably be something that people are happy with. And I think that that’s what we’re talking about here.
Right now what we do is the old model of software development. It’s called the waterfall model. We take requirements, and then we go through this other phase, and then we go through a design phase, and then we have this grandiose plan, and we’re gonna develop the project over seven, eight months, and then the finished product is definitely going to be this and there will be no showstoppers, and if there are we’ll just start at square one.
And that’s what you’re talking about. You’re talking about these zeitgeist moments, these…you know, movements or giant shifts in paradigm. And that’s the problem, is that we’re not taking that iterative approach. We’re not having the Conversation regularly. We’re having it in sparse two hundred-year periods, where we then plan our next move.
And so what you have is the pressure cooker, right. And the way we’re doing things now is that we turn up the pressure until it blows off a giant chunk of steam, and then kind of renormalizes, and then there’s resistance, and— Rather than just opening the damn lid. And so I think that that’s the problem, is that these turns would come gradually if we were to manage them gradually. And scientifically, I find this super easy to do, right, because there’s all this evidence. And it’s a lot harder on the…you know, philosophical and value scale, to quantify those things. But ultimately, I think it’s good to make plans, but not plan the results.
Aengus Anderson: So, make plans for the short term, but don’t worry about the long term.
Micah Saul: Well, there’s a smackdown to Alexander Rose.
Anderson: Yeah. Iterative thinking. I don’t think we’ve seen anything like that yet.
Saul: We’ve certainly talked about making short, iterative changes, but I’ve personally never heard the concept of agile programming being applied to society.
Anderson: This is one of those moments where an idea sort of short-circuits and jumps across, and it’s really exciting to see that, because I had never even thought about those ideas. And suddenly here’s Tim applying them to our project, and the hypothesis of our project.
Saul: Yeah. Very cool.
Anderson: So there’s a lot of stuff in this conversation. It is packed. And it’s a really fun one, too. And Tim was also doing this on almost no sleep. So, kudos to him.
Saul: It’s also…it was so clearly a conversation. And that’s just really cool. It’s so easy to slip into the interview mode. And this one did not. This was two people having a chat.
Anderson: What big idea should we start here with? We’ve got so many. I’m kind of, well…the big theme for me was determinism and inevitability. And I don’t feel that we ever settled on anything. It was kind of a shapeshifting idea.
Saul: Let’s start just with determinism. I think his ideas of determinism very much lead to the idea of inevitability. So, with determinism, we’ve actually been debating for a couple hours now, off tape, about the hardware/software analogy he uses for body and mind. And we’ve arrived at the crux of that matter. In some ways metaphors can shape the way you look at the world.
Anderson: Right. The metaphor has really real implications, and it’s rooted in sort of the things at hand. Your technology, your cultural context, in the same way that in earlier eras, and actually now, people have talked about the land as an organism, or as a body. Or, maybe in the 19th century or 18th century, thinking about the body as clockwork. Or now the body as something beyond electronics, a computer specifically.
And how does that lead you to think about what we really are as people? Does that lead you to sell us short as agents and to maybe see us as more programmed?
Saul: That’s where the hardware/software analogy gets uncomfortable.
Anderson: Right. We don’t want to think of ourselves as just machines.
Saul: Right.
Anderson: And if we do think of ourselves as being essentially machines, dopamine-seeking machines, where does that get us?
Saul: I mean, it gets us, as he says…I mean, if you let the machine run, you have a really troubling state of nature sort of world that he’s painting for us.
Anderson: And it also gets you to a sense that some things may in fact be inevitable. He uses the venom example. He also uses the idea of complexity, that we are organisms that get dopamine when we create or understand complex things. And in a way that almost suggests that whatever happens, if your timeframe is long enough, we are always going to follow down the road of complexity and technology.
Saul: Which leads us inevitably towards the Singularity.
Anderson: Exactly. And towards changing what we are, which is what this conversation is really about. Getting into who gets to make that decision, what are the implications of that decision, what are other options? Is that decision really inevitable?
Saul: I have a question. Does that sense of inevitability—and you talk about this in your conversation where you say once you frame something as being inevitable you’ve sort of taken it off the table for discussion. So, does the concept of inevitability running through this… Is that a way to shirk responsibility for the choices that are being made that are affecting the rest of the world that they can’t opt out of? I mean, if it’s inevitable that we are going to become more than human, that we are going to transcend biology, then there’s no moral culpability for being the ones that make that decision now for everyone else.
Anderson: Absolutely. And I think that’s something that we kind of got to in little pieces. The question of, is this a thing that you can opt out of? Well, yes, but you’ll be living on a reservation with John Zerzan grazing for berries. Well, that’s not really an option because then there’s such a power dynamic that you’re left out. And he mentions that with the monkey example, the monkey in the zoo. You don’t really listen to it. It’s a different species now.
Saul: Right
Anderson: And the idea of that being a prospect with humanity… If you say it could lead to that outcome, and that outcome is also bad for a lot of people, you need the inevitability because you feel that you’re not comfortable saying, “I’m just going to make this decision for all of you, and it will be bad for you.”
Saul: This leads to one of the biggest tensions, I think, in this conversation. Because…so, there’s that idea, right? At the same time he describes how best to be. And how best to be is, let everybody sort of make their own decisions, and I think everybody can kind of just be okay.
Anderson: Right.
Saul: Don’t harm.
Anderson: Don’t harm, period.
Saul: Right.
Anderson: So how do we reconcile those ideas? On one hand you have a small group making a decision that affects the large group in a way that maybe the large group doesn’t necessarily want. On the other hand, you have a moral idea of just, let everyone sort of find their own path, try not to tread on each other’s feet. It’s the individual/community tension we’ve been talking about in a lot of our conversations lately.
Saul: Absolutely.
Anderson: And it doesn’t feel like my conversation with Tim is on one side of the spectrum or the other. It feels like it goes back and forth.
Saul: Absolutely. No no, it definitely does. It’s a similar sort of tension that there was in Ariel Waldman’s conversation. You know. Have fun. Do your own thing. But then also, the idea that we would cede control to these greater technological forces, like the self-driving car. These things that we yield agency to, to make the general community better.
Anderson: That’s just a hornet’s nest of a problem. No one can get around that.
Saul: Right
Anderson: It’s the problem of governance.
Saul: Exactly. I was going to say this is not a new problem.
Anderson: No, the Greeks were really dealing with this head on. Let’s see, so we just talked about individual/community, inevitability, moral responsibility, determinism in the mind. We’ve got some great stuff there.
Saul: I’ve got a question for you.
Anderson: Okay.
Saul: Is Tim a nihilist?
Anderson: That’s a tough one. There’s a particular moment in my mind where he says, “But none of this means anything anyway, we’re just soup that moves.” This is one of my favorite lines in the conversation, the idea of, is the universe better with us? Is it better without us? That feels like nihilism.
Saul: Absolutely.
Anderson: These things are unanswerable questions. And yet, the counter-argument in my mind, is well he gives us all these examples of preferable conditions, right. It feels like you can’t even be seeking a transhumanist future without throwing nihilism away. You’ve chosen an option, and in choosing an option you’re choosing one that ultimately you think is better.
Saul: Right. And the option you’ve chosen is continued existence.
Anderson: So I like that he actually does something that we’ve seen in a lot of other conversations and he attacks it a little differently. But, here’s an example. Frances Whitehead talks about the idea of being overwhelmed in complexity, and the artist just having to go forward and do. And you acknowledge your subjectivity and you just do, because that sort of postmodern deconstruction ad nauseam gets you nowhere. That’s the nihilism she’s fighting against.
Saul: Right.
Anderson: Tim talks about the same thing. He does it in somewhat more colorful language. He says, “Well, you can’t keep having orgasms after you’re dead. Ten out of ten people prefer existence to non-existence.” I mean, in a very strange way, this resonates with a lot of our other conversations for people who are talking about being post-irony.
Saul: Interesting. Yes.
Anderson: So in that sense I would say he is not a nihilist at all, even though philosophically he may kind of work himself into the same nihilistic corner that a lot of us have to. You follow that road down there. You say well, everything’s constructed, everything’s subjective. Then you say, “And I’m not satisfied with that. Throw it all out.” But I think what’s interesting about Tim is that he traces that back to neurochemistry, and he says, “The reason I make this a rational assumption is because I’m just a dopamine machine.” Which is very different than Frances’ rationale for throwing out nihilism, or her rationale for embracing the arational. (Her rationale for embracing the arational…) I should not be allowed to speak anymore. But that is what I meant.
Saul: Yeah.
Anderson: That was Tim Cannon of Grindhouse Wetware, recorded August 23, 2012 at his house outside of Pittsburgh, Pennsylvania.
Saul: This is The Conversation. You can find us on Twitter at @aengusanderson and on the web at findtheconversation.com
Anderson: So thanks for listening. I’m Aengus Anderson.
Saul: And I’m Micah Saul.
Further Reference
This interview at the Conversation web site, with project notes, comments, and taxonomic organization specific to The Conversation.