Micah Saul: This project is built on a hypoth­e­sis. There are moments in his­to­ry when the sta­tus quo fails. Political sys­tems prove insuf­fi­cient, reli­gious ideas unsat­is­fac­to­ry, social struc­tures intol­er­a­ble. These are moments of crisis. 

Aengus Anderson: During some of these moments, great minds have entered into con­ver­sa­tion and torn apart inher­it­ed ideas, dethron­ing truths, com­bin­ing old thoughts, and cre­at­ing new ideas. They’ve shaped the norms of future generations.

Saul: Every era has its issues, but do ours war­rant The Conversation? If they do, is it happening?

Anderson: We’ll be explor­ing these sorts of ques­tions through con­ver­sa­tions with a cross-section of American thinkers, peo­ple who are cri­tiquing some aspect of nor­mal­i­ty and offer­ing an alter­na­tive vision of the future. People who might be hav­ing The Conversation.

Saul: Like a real con­ver­sa­tion, this project is going to be sub­jec­tive. It will fre­quent­ly change direc­tions, con­nect unex­pect­ed ideas, and wan­der between the tan­gi­ble and the abstract. It will leave us with far more ques­tions than answers because after all, nobody has a monop­oly on dream­ing about the future.

Anderson: I’m Aengus Anderson.

Saul: And I’m Micah Saul. And you’re lis­ten­ing to The Conversation.


Micah Saul: So here we are, sitting in an apartment in Brooklyn together.

Aengus Anderson: We’re not squatting.

Saul: We’re not talk­ing on the phone, and…

Anderson: No, we can actu­al­ly hear each oth­er, for the most part. Whether or not we lis­ten to each oth­er is of course…all bets are off there.

Saul: Basically. So, today’s episode is with Tim Cannon at Grindhouse Wetware in Pittsburgh, Pennsylvania. Grindhouse Wetware are a group of DIY base­ment bio­hack­ers. They build implanta­bles. They are very much tran­shu­man­ists, sim­i­lar to Max More.

Anderson: And that’s a theme that actu­al­ly I’m real­ly glad we have back, because it’s informed a lot of our con­ver­sa­tions. Maybe an inor­di­nate num­ber of our con­ver­sa­tions, because it is one of the most fun­da­men­tal­ly new ideas on the block. 

Saul: But these guys are actu­al­ly doing it. They are build­ing things and putting them in their bod­ies that move them beyond the just human. They are adding extrasen­so­ry organs, basically. 

Anderson: And we get into that a little bit in the beginning of the conversation. But something that we do need to sort of talk a little bit more about, that ties into broader transhumanist themes, is the Singularity, which kind of pops up throughout this conversation, and I only realized as I was editing it that we didn’t define it anywhere.

Saul: So, in physics, a singularity is the center of a black hole. In math, a singularity is…basically it’s a vertical asymptote on a graph. It’s where the slope of the graph reaches infinity. In technology, the technological Singularity is when the slope of technological progress reaches near-verticality.

Anderson: Right, and this is an idea that’s real­ly been pop­u­lar­ized by Ray Kurzweil.

Saul: Right. It orig­i­nal­ly came from some sci­ence fic­tion writ­ing, but Kurzweil’s real­ly tak­en it and run with it. So, the idea is that if you were to chart tech­no­log­i­cal progress against time, it is not a lin­ear curve. It is a rapid­ly increas­ing curve. And at some point, tech­no­log­i­cal progress over time is hap­pen­ing so fast that we lose the abil­i­ty to know what it is.
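A rough way to picture that “slope reaches infinity” idea is hyperbolic growth, where the curve has a vertical asymptote at a finite date. The formula below is only an illustration of the math being borrowed, not a claim about how technological progress actually behaves:

```latex
% Toy model of a finite-time singularity: P(t) stands for
% "technological capability" and t_s for a hypothetical singularity date.
\[
  P(t) = \frac{C}{t_s - t},
  \qquad
  \frac{dP}{dt} = \frac{C}{(t_s - t)^2} \;\to\; \infty
  \quad \text{as } t \to t_s^{-}.
\]
% Unlike ordinary exponential growth, which stays finite at every
% finite time, this curve blows up at t = t_s; past that point the
% model makes no predictions, which is the sense in which the term
% is borrowed from math and physics.
```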

Anderson: Which ties in with a lot of oth­er themes that we’ve seen. The idea of things pick­ing up, the idea of so much infor­ma­tion that it actu­al­ly out­strips our abil­i­ty to know. So, the Singularity as sort of an abstract con­cept actu­al­ly ties into a lot of tan­gi­ble ideas that we’ve been bat­ting around in this project.

Saul: Right.

Anderson: So that’s some­thing that you should def­i­nite­ly keep in mind as you go into this. Because Tim will men­tion it at a cou­ple of points.

Saul: I think that’s the best point now to just give you Tim Cannon at Grindhouse Wetware.


Tim Cannon: My name’s Tim Cannon, and I’m a soft­ware engi­neer and design­er, and I do bio­hack­ing and any­thing that is based on do-it-yourself human aug­men­ta­tion. I guess you’d call me a tran­shu­man­ist. You know, I def­i­nite­ly believe that biol­o­gy is some­thing that we should be try­ing to tran­scend. Not even improve. Just get over it and that sort of thing and move on from it. And I think that that’s going to start in small steps, and it’s going to have two very dif­fer­ent vec­tors, sci­en­tif­ic and social. And I think that the social vec­tor is going to be a lot more dif­fi­cult to over­come than the sci­en­tif­ic vec­tor. So yeah, main­ly my pur­pose is to kind of get this tech­nol­o­gy social­ly accept­ed, and get it into the hands of peo­ple, open the code, teach them to use it them­selves. Really put their fate in their own hands. Because the con­cern of course is that these devices are com­ing, right, and you can have them brought to you by Apple and Halliburton, or you can have them brought to you by the open source com­mu­ni­ty who encour­ages you to crack them open.

Anderson: How did you get inter­est­ed in this?

Cannon: I would say that I have been inter­est­ed in this since I was a child. I think I real­ized that I was prob­a­bly liv­ing at the cusp of a time where, at some point in my life it would be that way. I thought it would be a lit­tle lat­er. I thought I would be too old to get involved. But I thought I’d see it happen.

Cut to twenty years later and I see a video from Lepht Anonym, finger magnets. And I’d seen a TED talk from Kevin Warwick about how cybernetics is within our grasp, and he said, “Don’t be a chicken. Get the implant.” And then Lepht Anonym said, “Hey, don’t be a chicken, get the implant.” Now, this was about April, and by May I had a finger magnet. I mean it was that— I mean I… Really? It’s started? Let’s go! You know what I mean? Like, I wish I’d known.

I had somebody ask me like, “Did you wanna start in baby steps with the magnet?” I said no, if I had the devices I’ve got in my basement now, they’d be in me, too. It wasn’t about small starts; it was just that that was what was available, because I wanted it all.

Anderson: What is the fin­ger magnet?

Cannon: Well, I have a neodymium finger magnet implanted in my left index finger. And when there are electromagnetic fields in my presence, I can feel them. And so microwaves kick these things off, high-tension power lines… Really powerful magnets are quite intimidating, actually, nowadays. It’s really funny how mundane you think these things are, and then you get this level of enhancement where you truly realize the power that’s in that little piece of metal. Like, what’s in there. And you know, I’ve held these hard drive magnets where I’ll put my finger over it and I’m, “Whoa, okay, I’m not getting any closer, that feels awkward.”

Anderson: Really.

Cannon: Yeah, it real­ly does, and it gives you a whole new respect. And it’s anoth­er lay­er of data. It’d be like if you just went into a place one day and got to see this extra col­or. Your whole world would be just slight­ly more col­or­ful, but that’s enough for you to find dif­fer­ent and unique pat­terns, you know.

Anderson: So, what are the projects you’re work­ing on now?

Cannon: Well, our pri­ma­ry focus, we—

Anderson: And we should prob­a­bly say who we is.

Cannon: Oh, I’m sor­ry. Grindhouse Wetware is the name we work under, basi­cal­ly. And it’s just a group of guys and girls. Right now we work on about three projects, and we have maybe anoth­er five devices in the chute. 

So, we have one called Thinking Cap, and it passes voltage through your brain. Small amounts of voltage and amperage, two milliamps, which is like nothing. And basically this raises the potential firing rate of your neurons, so a lot of people call it overclocking your brain. And it causes a state of hyperfocus, and it’s been proven to increase memory retention, concentration, these sorts of things. So, as you can imagine, that device leads us to be able to create other devices because, you know, pop it on and then study, or whatever. The effect lasts about two hours. It’s pretty interesting.

And then we have a device called Bottlenose, which if you have a fin­ger mag­net, it has a range find­er that con­verts the range data into a pulse delay so that when things are clos­er it’s puls­ing faster, and when things are fur­ther away it’s puls­ing slow­er. And it just lets out a lit­tle elec­tro­mag­net­ic field, so it makes no sound, nobody unaug­ment­ed can use the device. It’s built by cyborgs for cyborgs. And it basi­cal­ly just allows you to use your fin­ger mag­net to get the sonar, so you can kind of close your eyes and nav­i­gate a room or some­thing like that, just with your fin­ger mag­net as your only sen­so­ry organ.
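For readers who want that mapping in concrete terms, here is a minimal sketch of a closer-means-faster pulse loop. The range limits, delay values, and function names are all assumptions made for the example; this is not Grindhouse’s actual Bottlenose firmware.

```python
import time

# Illustrative constants; the real device's ranges and timings aren't given here.
MIN_RANGE_CM = 5     # closest object the range finder reports
MAX_RANGE_CM = 300   # farthest object the range finder reports
MIN_DELAY_S = 0.05   # pulse very fast when something is close
MAX_DELAY_S = 1.0    # pulse slowly when nothing is near

def range_to_delay(distance_cm):
    """Map a distance reading to the gap between pulses:
    closer object -> shorter delay -> faster pulsing."""
    clamped = max(MIN_RANGE_CM, min(MAX_RANGE_CM, distance_cm))
    fraction = (clamped - MIN_RANGE_CM) / (MAX_RANGE_CM - MIN_RANGE_CM)
    return MIN_DELAY_S + fraction * (MAX_DELAY_S - MIN_DELAY_S)

def pulse_coil():
    """Stand-in for energizing the coil that the implanted magnet feels."""
    print("pulse")

if __name__ == "__main__":
    # Fake sensor sweep: an object approaching from 3 m down to 10 cm.
    for distance_cm in (300, 200, 120, 60, 30, 10):
        delay = range_to_delay(distance_cm)
        print(f"object at {distance_cm} cm -> pulse every {delay:.2f} s")
        pulse_coil()
        time.sleep(0.05)  # shortened for the demo; a real loop would sleep `delay`
```

The only real design decision here is the shape of the mapping; a linear interpolation is the simplest choice, and a logarithmic curve would give finer resolution up close.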

And then the third device, which we haven’t released to the public yet, is called HeLED[?]. It’s going to be implanted in my arm. My model, which will be like the uber-prototype model, is going to have eight LEDs that shine up through my skin displaying the time in binary. Because I’m a giant nerd and we love binary because it makes us feel like we know things that other people don’t know. And so, it’ll be displaying the time in binary, but additionally it’s Bluetooth-enabled. It will have four gigs of storage. And then we have a temperature sensor and a pulse monitor, so that you can collect biomedical data on the device and then kick it up to an Android phone which will kick it up to servers, and then we can kind of analyze all this health data and quantify it, and these sorts of things. So it’s kind of a multi-purpose device to capture your body modification people, because they’re going to be like, “Can you turn that into a circle that looks like a gear and glows up to—” and then you’re going to have the quantified self people being like, “More sensors, more data!” And then you’re going to have the biohackers in general just being like, “Wow, I can’t wait to make this device way more awesome because you guys aren’t that smart.”
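“Time in binary on eight LEDs” is easy to make concrete. The sketch below uses one plausible encoding, alternating the hour and the minute as 8-bit patterns; the conversation doesn’t specify the device’s real encoding, so treat this purely as an illustration.

```python
from datetime import datetime

NUM_LEDS = 8

def to_led_pattern(value):
    """Return NUM_LEDS bits, most significant first, e.g. 13 -> [0, 0, 0, 0, 1, 1, 0, 1]."""
    return [(value >> bit) & 1 for bit in range(NUM_LEDS - 1, -1, -1)]

def binary_clock_frames(now):
    """Alternate frames: first the hour (0-23), then the minute (0-59), each as 8 bits."""
    return [to_led_pattern(now.hour), to_led_pattern(now.minute)]

if __name__ == "__main__":
    for frame in binary_clock_frames(datetime.now()):
        # On the implant these bits would drive LEDs shining through the skin;
        # here we just print them.
        print("".join(str(bit) for bit in frame))
```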

Anderson: So, where does all of this go? What’s the ide­al state?

Cannon: I mean, the real­ly long-range goal is to tran­scend human­i­ty in gen­er­al, tran­scend biol­o­gy. I know there’s a lot of bio­hack­ers who want to do stuff with genet­ics. I tend to feel that that’s a waste of time. Basically to me genet­ics seems like you found a lump of mud that just hap­pened from a storm, and now you’re going to try to turn it into a house instead of using build­ing mate­ri­als, you know what I mean. And it’s like…just use build­ing mate­ri­als. Well, you know of course there’s going to be prob­lems no mat­ter what ves­sel you choose to occu­py. But I think that we can make a much more prac­ti­cal and durable and mod­u­lar and eas­i­er to improve sit­u­a­tion with­out biol­o­gy. I think you cut that out of the equation. 

So, I would imag­ine even­tu­al­ly we’ll just start offload­ing pieces of our brain and biol­o­gy into elec­tron­ic— inte­grat­ing it slow­ly until no human remains. And then at that point I mean, you’ve got thou­sands of years to fig­ure out how to live tens of thou­sands of years. 

Anderson: So is the ulti­mate goal more life? I almost start­ed this project— Second inter­view was Max More at Alcor Life Extension.

Cannon: Yeah.

Anderson: And for him, life extension is sort of…that’s the end game. It’s being [around] longer.

Cannon: Yeah, I mean… I don’t want to die, and I don’t have any intention of dying. I don’t like sleeping. Because I’m…I go out. You only get this tiny amount of life and so many people hit their death bed with regrets and wishes that they could fix things and… I think that the idea that you should accept death is just ridiculous. I mean like, “Well of course you should accept death,” and you’re like, “Yeah, exactly. Just like we accept our poor eyesight. So we don’t ever get ourselves glasses, or…” We are always using technology to improve and enhance our experience, and we’re going to continue to do that. That’s definitely the goal, the long-term goal.

The short-term goal is prob­a­bly to put human enhance­ment in the hands of the gen­er­al pop­u­lace. I don’t like the idea that some guy in the ghet­to does­n’t get the arti­fi­cial heart, but the rich ass­hole, he does get it. And why? It’s not because he’s a bet­ter per­son. It’s not because…any oth­er vec­tor oth­er than the fact that he just man­aged to greed­i­ly col­lect more beans than the rest of the tribe. 

Anderson: Is there a hur­ry to do this? Is this some­thing we should be mov­ing slow­ly into because it has mas­sive ram­i­fi­ca­tions for actu­al­ly ques­tion­ing what it means to be human?

Cannon: I would­n’t say hur­ry, but I just think that it should be done with all delib­er­ate haste. If you don’t tran­scend the prob­lem, you’re going to… People are going to die and you don’t want to lose those minds.

I try not to get caught up with predicting what could happen, because I think the term Singularity is very appropriate. Because, if you think about what scientists and enthusiasts call the Singularity… You know, a lot of people view it as a messianic event, or like you know, “Oh, it’s going to be the golden age—the age of Aquarius!” you know.

Anderson: I’m glad you’re bring­ing that up, because I want­ed that term specif­i­cal­ly in this project.

Cannon: Yeah, and I don’t think peo­ple under­stand that a sin­gu­lar­i­ty in a black hole, which is from whence the idea comes, it’s where you’re at this event hori­zon and you can’t even pre­dict what’s going on inside there. All physics is bro­ken down, so there’s just absolute­ly— I mean, it could be clowns jug­gling bowl­ing balls, you know what I mean, in the cen­ter of a black hole, and it’s just as like­ly as any­thing else. Because you just have no—there’s not enough data.

Anderson: Could there be dan­ger in there?

Cannon: Absolutely. That’s what I’m get­ting at. But—

Anderson: I guess clowns jug­gling bowl­ing balls is def­i­nite­ly dangerous.

Cannon: Clearly you’ve nev­er seen Stephen King’s It, sir.

Anderson: Clearly.

Cannon: But, I just think that the world is a very dangerous place, and there are people with very dangerous ideas. And as good people, you just want to try as hard as you can to keep the balance in the favor of people who want to help the poor, advance humanity, educate themselves and others around them. And I think that’s the best you can do, because not making the technology just means that Halliburton makes it first. And then they put it in a whole bunch of guys who then go kill people in other countries whose resources we need.

Anderson: So you see this tech­nol­o­gy as inevitable?

Cannon: Oh man, yeah. Yeah, I don’t even… It’s nev­er even occurred to me that it would­n’t be this way.

Anderson: A theme that comes up in this project a lot, because I’ve talked to peo­ple from all dif­fer­ent walks of life—

Cannon: Right.

Anderson: —and one theme is sort of cri­sis and col­lapse. If we have some sort of gigan­tic social unrav­el­ing, does that put the brakes on technology?

Cannon: I see humans as marvelously resilient creatures. I think it would be impossible to now root it out of our culture. There are too many things that we know that are highly pervasive facts. And the more intelligent and educated you are, the easier it is to very quickly build those things back up. I’m not worried. I don’t worry about collapse, because even if it does come, I just know that there’ll be a rebuilding effort and that it’s completely out of our hands and it’s impossible to predict.

Anderson: These technologies, which you see as inevitable, a lot of people I think are afraid of them because it seems like there’s no way to opt out. And that’s something I talked about with John Zerzan, who is kind of a neoprimitivist thinker. And he was talking about how people ask him, “Why do you use a computer?” And he’s like, “You can’t not use a computer if you want to get a message out. I would love to live in a cave, and I can’t opt out.” And it seems like you can make sort of a similar analogy with transhumanism. At some point, you have people who have made themselves better than other people—

Cannon: Right.

Anderson: —and it’s kind of like, the peo­ple who choose not to do that can’t real­ly not choose. Because then they lose. A cou­ple peo­ple can make a deci­sion that every­one effec­tive­ly has to fol­low. It’s like being the first on the block in the 1400s with a gun.

Cannon: Right. I mean, I would say that the Amish aren’t winning. But they seem happy. There is a decided cultural minimum, and that’s been forever, and it’s been unfair forever. I think that if neoprimitives want to be neoprimitives, we should definitely set up a reservation for them, you know? Go! Be free range, Neo, and that sort of thing. And the people who want to live a slower lifestyle, like I said, watching Teen Mom on MTV and getting fat, great. Enjoy that, too. I need to hurtle myself out into space because I’ve got to know what the hell the center of the universe looks like, and I’d like you to stay out of my way, so here’s a bunch of free shit.

Anderson: But there’s a huge pow­er dynam­ic there, right?

Cannon: Yeah, you definitely—

Anderson: Because they may have the mate­r­i­al goods, but they don’t have the agency in that case.

Cannon: Well, right, and anoth­er prob­lem­at­ic sit­u­a­tion is that if they do get in your way, you know what I mean… Transhumanism, there is a prob­lem, because you begin to leave your human­i­ty behind—

Anderson: Right, and I think that’s a fright­en­ing thing for a lot of peo­ple, right, because you lose the definition?

Cannon: Well… They’ve all lost their humanity, given enough steps back. Most people go to zoos and they’re like, “This is fine. And you know, that monkey becomes a problem…we’re not going to hear it out, we’re just going to shoot it.” We put down dogs. And these are very close, clearly more fit species, as their longevity implies.

Anderson: So there’s noth­ing lost if you change out of being the mon­key, then? Like, there’s noth­ing intrin­si­cal­ly good about being a human?

Cannon: I can… I mean, intrin­si­cal­ly good? No, I mean it’s all rel­a­tive. I mean, we are a com­mu­nal ani­mal that’s devel­oped to believe that it’s the cen­ter of the uni­verse. And we behave as such. You know, we want to con­quer, because our brain is wired to want to eat and fuck anoth­er day, you know what I mean. That’s what we’re wired to do. That’s where our evil comes from. That’s our ambi— It’s our ani­mal roots that cause us to need things, and desire things. 

Anderson: So is over­com­ing those ani­mal sorts of desires, is that part of this?

Cannon: I think so. I think it’d be great to be like, “You know, I don’t want to be hungry anymore.” So for example if I’m hungry, I may end up overeating or eating something that is bad for me. If I’m really hungry my blood sugar goes low and then I start making poor decisions and I am not as intellectually acute. And so I may end up behaving badly.

I don’t like the governance that my physical body has over my behavior. And so when I want to transcend humanity, I don’t necessarily think that I’m talking about the intellectual software which was born of the hardware. I’m talking about removing the problems that exist in the hardware that can override the software that we’ve developed. Because the bottom line is our current social software says, “Don’t murder people, and really don’t eat people,” right. And yet, if I’m hungry on a cold mountain, and it’s you and me… I mean, I am realist enough to admit that I’m going to do what I gotta do, right. And that sucks, you know what I mean? Like, I hate that, you know. I’d rather just not need to eat, or not need to survive.

I see all of the good things about us as the parts that we’ve already transcended.

Anderson: Hmm. What do you mean by that?

Cannon: In oth­er words, our real­iza­tion that you know hmm, maybe we should check if a woman gives con­sent before hav­ing sex with her. I think that’s a great advance­ment. Not how we’re nat­u­ral­ly built. We are not built to know what’s right or wrong. We’ve devel­oped what’s right or wrong. We’re built to be com­mu­nal. Which is dif­fer­ent. We’re built to skirt the edges of what is tolerated.

Anderson: Well, and that’s what I was won­der­ing. Like, is part of being a com­mu­nal ani­mal, hav­ing kind of an inborn moral­i­ty, almost?

Cannon: Yes. I believe altru­ism is an evo­lu­tion­ary char­ac­ter­is­tic. But… Okay see, I think that we had hard­ware for­ev­er. And we were try­ing to upgrade using hard­ware. And then lan­guage came. And then it became a soft­ware upgrade. And soft­ware is a lot eas­i­er and a lot faster to devel­op than hard­ware, right. I mean, I think peo­ple missed…like, could not under­stand the idea that there were humans with­out lan­guage, which means they had no inner mono­logue. Try to imag­ine noth­ing in there. I mean just nobody talk­ing inside your head—

Anderson: I mean, that almost seems like an ear­li­er Singularity.

Cannon: Right, exactly. It’s this major leap where we were able to go, “Hey guys, maybe we should think about this,” and you were able to break it into pieces, and conquer it, and that sort of thing. And I think, so for example, our bark, our lash, you know our lashing out. We’ve started to tamp that down. But when you snap at your spouse or girlfriend, you might go back and apply logic to that. And that’s good for you. But the fact is you just snapped like a dog. You got angry and you snapped. And you didn’t mean to do it, you couldn’t help it. And you’re so bound by that, and it’s frustrating. It’s frustrating to be bound by that. It is preventing me from always being tolerant, and loving, and you know, that sort of thing.

Cannon: The software upgrade was the beginning of transcending the hardware limitations. And that’s why I think that when people say “human” and ask is there anything intrinsically good about being a human, I would say no. Because that’s the beast. That’s the thing that commits incest and rape and murder—

Anderson: But isn’t it also the soft­ware, isn’t that part of what’s being human? I mean, that is now what’s human, right?

Cannon: I think it’s acci­den­ta— I think it was a for­tu­itous event, you know what I mean, that took place because of the right things com­ing togeth­er. I don’t think it’s inten­tion­al. Like, if you look at—

Anderson: But it can still be acci­den­tal like evo­lu­tion and still be what you are, right?

Cannon: Um… Let’s just put it this way. If we killed every­body and just left babies that were blank slates, I don’t know that com­mu­ni­ca­tion would evolve again for quite some time. And I think that we would go on just fine being those base humans with­out that. And when I think of what it requires to be human, I think of what’s inte­gral. You know, tran­scend­ing your needs to do hor­ri­ble things because you’re an ani­mal respond­ing to stim­uli, it’s not requi— Enlightenment is not nec­es­sary. Goodness is not nec—

Anderson: But it seems like it is for some, right? Because if that was­n’t a require­ment on some lev­el, why would there be tran­shu­man­ists? Like, why would you need to do that?

Cannon: You know what? You may have something there. Because I will say this: The chemical reaction that goes off in your brain when you’re inspired by complexity leads to curiosity, which then of course is allowing us to transcend these baser things. So perhaps, if there is one good thing about humanity, it’s that intrinsically we have accidentally been tuned towards awe and complexity.

Anderson: Hm. Let’s go back to the prim­i­tive world.

Cannon: Mm hm.

Anderson: These undo­mes­ti­cat­ed peo­ple. Could they have sim­i­lar­ly rich lives, even though they lack all of the sort of…the many many lay­ers of civ­i­liza­tion and thought that we have?

Cannon: I don’t think that that’s even quan­tifi­able. But, that being said, I think that it’s just as like­ly that they can sit in a cave and tran­scend their world­ly desires, and I think that they can make analo­gies and have these rich lives on sim­pler lev­els. I don’t think that intel­lect leads to hap­pi­ness or rich­ness in your life, it just leads to knowledge. 

Anderson: So, if the primitive people are neither happier nor less happy, and yet for you there’s definitely a case to be made for being more happy, by transcending biology…

Cannon: Less miserable.

Anderson: Less miserable. So, do the people who have no conception of happiness or misery ultimately kind of win because they’re not thinking about this stuff at all? And are we just sort of running away from this, trying to…design our way out of it?

Cannon: No, I wouldn’t…I would­n’t say that, because they’re going to have low points and they’re going to have high points. And they may expe­ri­ence the loss of a rel­a­tive, and they’re deeply sad­dened. Well yes, they can’t be deeply sad­dened by stock mar­ket crash­es, right, which is a bonus. But also they can’t for­mu­late the words to express them­selves and get empa­thy from anoth­er human being as effi­cient­ly as some­body who’s mas­tered com­plex lan­guage, or some­thing like that, let’s say. Or define things into shades of gray. I mean, I real­ly hon­est­ly think that that’s what we’re real­ly talk­ing about, is the gran­u­lar­i­ty at which… You know, do you want a high-def TV, because yes, you will see it all in clar­i­ty. All of it. Which isn’t always pleasant. 

Anderson: That’s inter­est­ing. So, there’s kind of a ratch­et­ing up of every­thing.

Cannon: Yes. 

Anderson: I think that’s a really deep underlying theme in a lot of the conversations I’ve had about the future with people. I’m thinking of a guy whose interview I just posted today; he works at the Land Institute. His name’s Wes Jackson. He knows his science, and he is a scientist. And yet when I talked to him, he was also concerned about technological fundamentalism. And I think kind of underneath a lot of that conversation was the sense that like, by always seeking more, you are ratcheting up everywhere the ability for more pleasure but also for more risk. And risk is something that a few people can make the decisions to take. But collectively, we all share the burden of it. And he’s thinking more in terms of food systems and energy systems, kind of an overreach that leads to a famine.

If we take that idea and we sort of look over at tran­shu­man­ism, which is kind of a dif­fer­ent but relat­ed con­ver­sa­tion, and we ask, is it in its ratch­et­ing up of everything…and we talk about the Singularity being the point at which there’s noth­ing known, and it could be amaz­ing, but it could oblit­er­ate us—

Cannon: Absolutely. Yeah. We could be march­ing head­long into our own destruc­tion. I mean, per­haps I’m just a lit­tle more real­is­tic, but I just, I mean… If we’re talk­ing about our own destruc­tion and how like­ly it is, tak­ing a look at the chaos and absolute lack of con­cern for life that the uni­verse clear­ly has… I mean mas­sive destruc­tion every­where we look. And those are punc­tu­a­tions of the mas­sive amounts of nothing—

Anderson: But I think what scares people is the idea that it’s easier to be okay with a natural disaster than it is with a man-made disaster. Because it feels like we’re culpable for it.

Cannon: I think it’s bizarre to recognize that the fear that you have is based on a psychological truth and not a real truth, and then go, “But let’s go with the psychological truth.”

Anderson: Hm. Explain that.

Cannon: In other words, what people are saying is like, “Yes, of course we’re out of control and we could be killed at any minute. But, it feels worse when it’s us.”

Anderson: Well, but it is some­thing… It’s avert­ible. Whereas if a super­no­va hap­pens, there’s real­ly noth­ing we can do about it. But if this is something—

Cannon: But I don’t think it’s avert­ible. We’re in such a com­plex sys­tem that you can’t. There’s no way. I mean, DIY bio is just the tip of the ice­berg. I mean, try to imag­ine when you’re capa­ble of… I mean extreme­ly soon if not already…biohackers work­ing with genet­ics will be ful­ly capa­ble of cre­at­ing virus­es that just wipe every­one out, right. And there is not a thing that any­body can do to stop it. Nothing. Our destruc­tion at our own hands is not avoidable.

Anderson: But that’s…I think that’s given a certain cultural setting, right. But if that world unravels, as say Wes Jackson is concerned about, then is it a given? All of these things require a massive technological infrastructure to be able to do, you know. [crosstalk] If you can’t go to the store and get a breadboard—

Cannon: We’ll build it back. I mean, we’ll build it back up. I mean, you know—

Anderson: So you’re thinking long-term. Eventually kind of a Canticle for Leibowitz sort of thing. You nuke yourself and you re-evolve and you nuke yourself and you re-evolve—

Cannon: Yeah, there’s just no… Well, I mean venom evolved something like five separate times. So clearly venom is a winner. It’s unavoidable, and particularly with… I mean if we’re not talking about severely changing the biology of a human so that their brain doesn’t secrete dopamine, which is— I mean, dopamine is the chemical that says, “Go here. Do this.” Monkeys will choose cocaine over food until they starve themselves to death, and it’s because it pretty much floods the brain with dopamine, right. Dopamine is released when we are inspired by ideas and complexity. We’re going to continue to follow that road until it ends. Bonus: it doesn’t end. And if we kill ourselves, this planet is going to shit out some other species with intelligence, just like venom. And they’re going to reach, and maybe they’ll get it right if we don’t, you know what I mean. And that’d be great. I don’t—

Anderson: There’s a real sense of deter­min­ism there. I mean, it almost feels like the fix is in and the uni­verse just moves towards com­plex­i­ty. And the way we’ve sort of framed it here, is that tran­shu­man­ism is inevitable, because com­plex­i­ty is inevitable. There’s a strong state­ment in fram­ing any­thing as inevitable, because then you take it off the table for dis­cus­sion. But if we do frame it as inevitable, then in that future how do we decide what is a good val­ue or what is a bad val­ue? And this is some­thing that I like to push peo­ple on, because it always ends up with the arational.

Cannon: Yeah, I make no apolo­gies for the fact that I do a lot of things that… I would­n’t say they’re irra­tional. I have a per­fect­ly great ratio­nale, which is that my hard­ware is guid­ing me, and I have no idea how I’ll behave once I remove that. But the fact is right now I’m seek­ing dopamine. And how we train our­selves to acquire that feel­ing tends to be what shapes our per­son­al­i­ties and behav­iors, I think in a big way.

Anderson: And yet there’s still wig­gle room in there, right. You are dif­fer­ent from me. So it seems like there is some play in that val­ues can’t just be derived from what the dopamine push­es you towards.

Cannon: No, that would not be a good idea at all, as I mentioned in the monkeys/cocaine example. You don’t want to be guided by that, but we’re beginning to tune ourselves, attempting to leave the least footprint. And I think that of all of the things that you can kind of do, being happy, whatever that means, that feeling that you crave, without affecting other things that might have competing desires I think is probably the direction that we should try to go. You look at Jains and the religion of Jainism, and it’s all about leaving the least footprint and doing the least harm, and those sorts of things. And I think—

Anderson: But isn’t that very dif­fer­ent from the con­ver­sa­tion we’ve been hav­ing where we’re talk­ing about going into a future where all bets are off? You know, which is a deci­sion that a few can make for the many? 

Cannon: Absolutely. Yeah, I know. But when we’re talking about what makes a good value and what makes a bad value, I think that that’s a pretty good guidepost, you know. I don’t necessarily know if that’s the way we are headed. I think that’s the way we should be headed.

And I would assert that these things are a product of our biology. I think that the reason that we ask ourselves these deep, penetrating philosophical questions about why we do the things we do… I mean, it’s a giant shade over things to kind of not admit that we’re doing the things that we do because we’re electricity running across hardware. And that’s rough. I think it discounts the idea that there could not be a why. And there could not be a direction, and you know this— I don’t think there’s an endgame. I think we are just, we’re here. None of this means anything. We’re a bunch of… We’re…soup that’s moving, you know what I mean? I think just the fact that we can conceive of a direction and morality is…whoo, we’re already doing way better than we should be, you know. And I think that if we continue in that direction, we… There’s no benefit. You can’t…how, how are you going to make the universe better? Is it better to not exist, or exist? What if just life as a process in the universe actually is completely destructive to it? And then [in] that case what do you say? You know, there’s no philosophical argument like, “Well everybody, cash in your chips!”

Anderson: Well, and then why not? Because it seems like if you get to that point of such rel­a­tiv­i­ty, you do get to nihilism. Things can’t nec­es­sar­i­ly have mean­ing because they’re relative.

Cannon: Because the why not ends up being completely—like you said, arational, you know what I mean. When you say why or why not, it’s because well, I really like orgasms and you can’t have them when you’re dead, you know. Ten out of ten people prefer experience to not experiencing shit, you know. I mean, it just seems like everybody’s in agreement, but you know, I don’t…I think that, you know, the why of it is truly just because fun is fun.

For me, I’m so aware of how meaningless we are in the grand scheme of things, and how meaningless our survival or achievement would be, that to me none of that provokes fear. I’m not afraid of that sort of stuff because it’s like, “Society collapsed!” Well, clearly we fucked up, you know what I mean. Like, we got what we deserved. I kind of view myself as very detached from the morality of it all, because I don’t know that any of this stuff will pan out. I don’t know what the Singularity’s going to hold. I can’t know, you know, if, you know, or even if that will be—

Anderson: Right. By definition.

Cannon: Right, yeah. And so I kind of don’t think that it’s productive to venture guesses. I mean, it’d be like asking your dog advice on how to pilot an aircraft, you know. It’s like, you know, it’s just not, his input is not going to be valid at all. It’s not even going to be helpful, it’ll be counterproductive to attempt. So I mean, I think that that tends to be the problem, is that we want to be able to control this outcome. And so in order to satiate that desire, we talk about it. Whereas there are closer future issues which we can control, where the goalpost is going to agilely move as we find out new information.

And maybe this is just too much of the soft­ware devel­op­er in me, but we use a process, a lot of soft­ware devel­op­ers use a process called agile devel­op­ment. And the idea is you don’t know what the end prod­uct is going to be, because the cus­tomer’s always gonna change their mind, you’re going to run into show­stop­pers, and you’re going to hit all these walls. But if every two weeks, you re-evaluate where you are and what your pri­or­i­ties are, and you’re con­stant­ly, iter­a­tive­ly, try­ing to make the best right deci­sion with the best infor­ma­tion that you have, then the end result will prob­a­bly be some­thing that peo­ple are hap­py with. And I think that that’s what we’re talk­ing about here. 

Right now what we do is the old mod­el of soft­ware devel­op­ment. It’s called the water­fall mod­el. We take require­ments, and then we go through this oth­er phase, and then we go through a design phase, and then we have this grandiose plan, and we’re gonna devel­op the project over sev­en, eight months, and then the fin­ished prod­uct is def­i­nite­ly going to be this and there will be no show­stop­pers, and if there are we’ll just start at square one.

And that’s what you’re talk­ing about. You’re talk­ing about these zeit­geist moments, these…you know, move­ments or giant shifts in par­a­digm. And that’s the prob­lem, is that we’re not tak­ing that iter­a­tive approach. We’re not hav­ing the Conversation reg­u­lar­ly. We’re hav­ing it in sparse two hundred-year peri­ods, where we then plan our next move. 

And so what you have is the pres­sure cook­er, right. And the way we’re doing things now is that we turn up the pres­sure until it blows off a giant chunk of steam, and then kind of renor­mal­izes, and then there’s resis­tance, and— Rather than just open­ing the damn lid. And so I think that that’s the prob­lem, is that these turns would come grad­u­al­ly if we were to man­age them grad­u­al­ly. And sci­en­tif­i­cal­ly, I find this super easy to do, right, because there’s all this evi­dence. And it’s a lot hard­er on the…you know, philo­soph­i­cal and val­ue scale, to quan­ti­fy those things. But ulti­mate­ly, I think it’s good to make plans, but not plan the results.


Aengus Anderson: So, make plans for the short term, but don’t wor­ry about the long term.

Micah Saul: Well, there’s a smack­down to Alexander Rose

Anderson: Yeah. Iterative think­ing. I don’t think we’ve seen any­thing like that yet.

Saul: We’ve certainly talked about making short, iterative changes, but I’ve personally never heard the concept of agile programming being applied to society.

Anderson: This is one of those moments where an idea sort of short-circuits and jumps across, and it’s real­ly excit­ing to see that, because I had nev­er even thought about those ideas. And sud­den­ly here’s Tim apply­ing them to our project, and the hypoth­e­sis of our project.

Saul: Yeah. Very cool.

Anderson: So there’s a lot of stuff in this con­ver­sa­tion. It is packed. And it’s a real­ly fun one, too. And Tim was also doing this on almost no sleep. So, kudos to him.

Saul: It’s also…it was so clear­ly a con­ver­sa­tion. And that’s just real­ly cool. It’s so easy to slip into the inter­view mode. And this one did not. This was two peo­ple hav­ing a chat.

Anderson: What big idea should we start here with? We’ve got so many. I’m kind of, well…the big theme for me was deter­min­ism and inevitabil­i­ty. And I don’t feel that we ever set­tled on any­thing. It was kind of a shapeshift­ing idea.

Saul: Let’s start just with deter­min­ism. I think his ideas of deter­min­ism very much lead to the idea of inevitabil­i­ty. So, with deter­min­ism, we’ve actu­al­ly been debat­ing for a cou­ple hours now, off tape, about the hardware/software anal­o­gy he uses for body and mind. And we’ve arrived at the crux of that mat­ter. In some ways metaphors can shape the way you look at the world.

Anderson: Right. The metaphor has real­ly real impli­ca­tions, and it’s root­ed in sort of the things at hand. Your tech­nol­o­gy, your cul­tur­al con­text, in the same way that in ear­li­er eras, and actu­al­ly now, peo­ple have talked about the land as an organ­ism, or as a body. Or, maybe in the 19th cen­tu­ry or 18th cen­tu­ry, think­ing about the body as clock­work. Or now the body as some­thing beyond elec­tron­ics, a com­put­er specifically. 

And how does that lead you to think about what we real­ly are as peo­ple? Does that lead you to sell us short as agents and to maybe see us as more programmed?

Saul: That’s where the hardware/software anal­o­gy gets uncomfortable. 

Anderson: Right. We don’t want to think of our­selves as just machines.

Saul: Right.

Anderson: And if we do think of our­selves as being essen­tial­ly machines, dopamine-seeking machines, where does that get us?

Saul: I mean, it gets us, as he says…I mean, if you let the machine run, you have a real­ly trou­bling state of nature sort of world that he’s paint­ing for us.

Anderson: And it also gets you to a sense that some things may in fact be inevitable. He uses the venom example. He also uses the idea of complexity, that we are organisms that get dopamine when we create or understand complex things. And in a way that almost suggests that whatever happens, if your timeframe is long enough, we are always going to follow down the road of complexity and technology.

Saul: Which leads us inevitably towards the Singularity.

Anderson: Exactly. And towards chang­ing what we are, which is what this con­ver­sa­tion is real­ly about. Getting into who gets to make that deci­sion, what are the impli­ca­tions of that deci­sion, what are oth­er options? Is that deci­sion real­ly inevitable?

Saul: I have a ques­tion. Does that sense of inevitability—and you talk about this in your con­ver­sa­tion where you say once you frame some­thing as being inevitable you’ve sort of tak­en it off the table for dis­cus­sion. So, does the con­cept of inevitabil­i­ty run­ning through this… Is that a way to shirk respon­si­bil­i­ty for the choic­es that are being made that are affect­ing the rest of the world that they can’t opt out of? I mean, if it’s inevitable that we are going to become more than human, that we are going to tran­scend biol­o­gy, then there’s no moral cul­pa­bil­i­ty for being the ones that make that deci­sion now for every­one else.

Anderson: Absolutely. And I think that’s something that we kind of got to in little pieces. The question of, is this a thing that you can opt out of? Well, yes, but you’ll be living on a reservation with John Zerzan grazing for berries. Well, that’s not really an option, because then there’s such a power dynamic that you’re left out. And he mentions that with the monkey example, the monkey in the zoo. You don’t really listen to it. It’s a different species now.

Saul: Right.

Anderson: And the idea of that being a prospect with humanity… If you say it could lead to that outcome, and that outcome is also bad for a lot of people, you need the inevitability because you feel that you’re not comfortable saying, “I’m just going to make this decision for all of you, and it will be bad for you.”

Saul: This leads to one of the biggest tensions, I think, in this conversation. Because…so, there’s that idea, right? At the same time he describes how best to be. And how best to be is, let everybody sort of make their own decisions, and I think everybody can kind of just be okay.

Anderson: Right.

Saul: Don’t harm.

Anderson: Don’t harm, period.

Saul: Right.

Anderson: So how do we reconcile those ideas? On one hand you have a small group making a decision that affects the large group in a way that maybe the large group doesn’t necessarily want. On the other hand, you have a moral idea of just, let everyone sort of find their own path, try not to tread on each other’s feet. It’s the individual/community tension we’ve been talking about in a lot of our conversations lately.

Saul: Absolutely.

Anderson: And it does­n’t feel like my con­ver­sa­tion with Tim is on one side of the spec­trum or the oth­er. It feels like it goes back and forth.

Saul: Absolutely. No no, it definitely does. It’s a similar sort of tension that there was in Ariel Waldman’s conversation. You know. Have fun. Do your own thing. But then also, the idea that we would cede control to these greater technological forces, like the self-driving car. These things that we yield agency to, to make the general community better.

Anderson: That’s just a hor­net’s nest of a prob­lem. No one can get around that.

Saul: Right.

Anderson: It’s the prob­lem of governance.

Saul: Exactly. I was going to say this is not a new problem.

Anderson: No, the Greeks were real­ly deal­ing with this head on. Let’s see, so we just talked about individual/community, inevitabil­i­ty, moral respon­si­bil­i­ty, deter­min­ism in the mind. We’ve got some great stuff there.

Saul: I’ve got a ques­tion for you.

Anderson: Okay.

Saul: Is Tim a nihilist?

Anderson: That’s a tough one. There’s a particular moment in my mind where he says, “But none of this means anything anyway, we’re just soup that moves.” This is one of my favorite lines in the conversation, the idea of, is the universe better with us? Is it better without us? That feels like nihilism.

Saul: Absolutely.

Anderson: These things are unan­swer­able ques­tions. And yet, the counter-argument in my mind, is well he gives us all these exam­ples of prefer­able con­di­tions, right. It feels like you can’t even be seek­ing a tran­shu­man­ist future with­out throw­ing nihilism away. You’ve cho­sen an option, and in choos­ing an option you’re choos­ing one that ulti­mate­ly you think is better.

Saul: Right. And the option you’ve cho­sen is con­tin­ued existence.

Anderson: So I like that he actually does something that we’ve seen in a lot of other conversations, and he attacks it a little differently. But, here’s an example. Frances Whitehead talks about the idea of being overwhelmed in complexity, and the artist just having to go forward and do. And you acknowledge your subjectivity and you just do, because that sort of postmodern deconstruction ad nauseam gets you nowhere. That’s the nihilism that she’s fighting against.

Saul: Right.

Anderson: Tim talks about the same thing. He does it in somewhat more colorful language. He says, “Well, you can’t keep having orgasms after you’re dead. Ten out of ten people prefer existence to non-existence.” I mean, in a very strange way, this resonates with a lot of our other conversations for people who are talking about being post-irony.

Saul: Interesting. Yes.

Anderson: So in that sense I would say he is not a nihilist at all, even though philosophically he may kind of work himself into the same nihilistic corner that a lot of us have to. You follow that road down there. You say well, everything’s constructed, everything’s subjective. Then you say, “And I’m not satisfied with that. Throw it all out.” But I think what’s interesting about Tim is that he traces that back to neurochemistry, and he says, “The reason I make this a rational assumption is because I’m just a dopamine machine.” Which is very different than Frances’ rationale for throwing out nihilism, or her rationale for embracing the arational. (Her rationale for embracing the arational…) I should not be allowed to speak anymore. But that is what I meant.

Saul: Yeah.

Anderson: That was Tim Cannon of Grindhouse Wetware, record­ed August 23, 2012 at his house out­side of Pittsburgh, Pennsylvania.

Saul: This is The Conversation. You can find us on Twitter at @aengusanderson and on the web at findtheconversation.com

Anderson: So thanks for lis­ten­ing. I’m Aengus Anderson.

Saul: And I’m Micah Saul.

Further Reference

This inter­view at the Conversation web site, with project notes, com­ments, and tax­o­nom­ic orga­ni­za­tion spe­cif­ic to The Conversation.