Luke Robert Mason: For those of you who are here for the first time, the Virtual Futures Conference occurred at the University of Warwick in the mid-90s, and to quote its cofounder it arose at “a tipping point in the technologization of first-world cultures.”

Now, whilst it was most often portrayed as a techno-positivist festival of accelerationism towards a posthuman future, “the Glastonbury of cyberculture” as The Guardian put it, its actual aim, hidden behind the brushed steel, the silicon, the jargon, the designer drugs and the charismatic prophets, was much more sober and much more urgent. What Virtual Futures did was try to cast a critical eye over how humans and nonhumans engage with emerging scientific theory and technological development.

This salon series—and it has been a series, we’ve been running it for about two and a half years now—completes the conference’s aim to bury the 20th century and begin work on the 21st. So, let’s begin.

Luke Robert Mason: For many in this crowd Adam Greenfield needs no introduction. He spent over a decade working in the design and development of networked digital information technology, and his new book from Verso, Radical Technologies, a field guide to the technologies that are transforming our lives, tackles almost every buzzword that’s been forced down our throats by the so-called digital gurus and innovation directors over the last six months.

But unlike those evangelists, Adam confronts these problematic promises with a fresh and critical voice. From the smartphone to the Internet of Things, augmented reality, digital fabrication, cryptocurrency, blockchain, automation, machine learning, and artificial intelligence, every technology is deconstructed to reveal the colonization of everyday life by information processing. This book is one step in revealing the hidden processes that occur when the intentions of designers are mutated by the agency of capital. And anybody who joined us for our event with Douglas Rushkoff and Richard Barbrook knows that this may to some degree be a continuation of that discussion.

So in an age where our engagement with technology is one of unquestioning awe and wonder, when we find out about each new advanced tool through the language structured by the PR team, and where the commercial news outlets have to sell us the future, this book is an essential read. So to help us better navigate the future, please put your hands together and join me in welcoming Adam Greenfield to the Virtual Futures stage.

So Adam, what are the radical technologies? What do you define as the radical technologies, and why did you select this particular set of technologies?

Adam Greenfield: That’s a great question. So, do you know who Verso is in general? Do you have a sense of who Verso is? Yeah, I figured you probably did. No, I see one shaking head. Verso likes to represent themselves to the world as the premier radical publisher in the English language. So they’re forthrightly left wing. They think of themselves as a publishing house of the left. And you know, for all of the different perspectives and tensions that are bound up in the left, I think they do a pretty good job of representing that tradition.

So in the first instance it makes a fair amount of sense: if you’re going to confront a title called Radical Technologies from an avowedly left-wing publishing house, you would be forgiven for assuming that the intent of the author is to insinuate that these technologies have liberatory, progressive, or emancipatory effects when deployed in the world.

And I don’t actually mean anything of the sort. I mean that these are radical in the truer sense, in the “original” sense. In, if you will, the root sense of the word “radical,” which is that these are technologies which confront us at the very root of our being. They’re not merely add-ons. They’re not merely things which kind of get layered over everyday life. They’re things which fundamentally transform the relationship between ourselves and the social, political, economic, and psychic environment through which we move.

And it wasn’t very hard to identify the specific “technologies” that I wanted to engage in the book because, you know, as we’ve already established these are the ones that are first and foremost in the popular culture, in the media right now—literally. And this is a torment and a torture for somebody who’s working on a book that’s intended to be kind of a synoptic overview of something which is evolving in real time. Literally every day as I was working on the book, I would open up my laptop and there would be The Guardian, there would be The New York Times, there would be the BBC with oh you know, cutting-edge new applications of the blockchain beyond Bitcoin. Or driverless cars are being tested in Pittsburgh. Or indeed somebody whose Tesla was equipped with an autonomous piloting device was actually killed in a crash.

So I am profoundly envious of people who get to write about settled domains or sort of settled states of affairs in human events. For me, I was dealing with a set of technologies which are either recently emerged or still in the process of emerging. And so it was a continual Red Queen’s race to keep up with these things as they announce themselves to us and try and wrap my head around them, understand what it was that they were proposing, understand what their effects were when deployed in the world.

And the additional challenge there is that I’m kind of an empiricist. I mean, one of the points of this book is to not take anything on faith. Do not take the promises of the promoters and the vendors and the people who have a financial stake in these technologies on faith. And neither take the prognostications of people who’re inclined towards the doomy end of the spectrum on faith. Do not assume anything. Look instead to the actual deployments of these technologies in actual human communities and situations, and see what you can derive from an inspection of those circumstances. And the trouble is that we don’t have a lot of that to go on. So that’s the mission of the book.

Mason: So the thing that to a degree unites all of those technologies, all the things you speak about in the book, is something that you’ve called the drive for computation to be embedded into every single aspect of the environment. You also call it the colonization of everyday life by information processing. Could you just explain that core thesis?

Greenfield: Yeah, sure. I guess in order to do that concretely and properly I have to go back to about 2002. I was working as a consultant in Tokyo. I was working at a shop called Razorfish. And Razorfish’s whole pitch to the world was “everything that can become digital will.” That was literally their tagline. Very arrogant shop to work in. Everybody was just suffused with the excitement of the millennial period and we all thought that we were like, so far ahead of the curve and so awesome for living in Tokyo.

And frankly, after September 11th of 2001 I was bored to death in my job and I was really frustrated with it. Because that was a moment in time in which everybody I knew kind of asked ourselves well, what is it that we’re doing? Is it really that important? It was a real gut check moment. Everybody I knew including myself, we all asked ourselves you know, we live in times where everything that we aspire to, everything we dream about, everything that we hope for, everything we want to see realized in the world, could end in a flash of light—in a heartbeat. So, we should damn well make sure that what it is that we’re doing on a day-to-day basis is something meaningful and something true.

And at that time I was mostly involved in the design of the navigational and structural aspects of enterprise-scale web sites, and I had done about fifty of them for like, Fortune 500 clients. And I hated the work and I hated myself for doing the work.

And so I asked myself what comes next after these web sites. Surely this cannot be the end state for the human encounter with networked information technologies. And I asked the smartest people around me you know, “What’s next after the Web? What’s next after the ecommerce sites that we’re doing?”

And given that it was 2002 in Tokyo, everybody said mobile. Everybody held up their little i-mode devices and they said, “This green screen with the four lines of type on it, that’s the future.” And I couldn’t quite believe that we were going to force everyday life with all of its texture and variability and wild heterogeneity, that we were going to force all of that and boil all of that down to the point that it was going to be squeezed to us through the aperture of this little green screen with its four or five lines of text.

And I was just not particularly satisfied with the answers I was getting. And one person said something different, a woman named Anne Galloway. She said to me, “Actually, there’s this thing called ubiquitous computing. And as it happens there’s a conference on ubiquitous computing in Gothenburg in Sweden in about three weeks’ time. And it’s a little bit late, but why don’t you see if your company will pay for you to fly there and check it out and see what’s going on.” And so I trusted her and I said you know, she’s onto something here. This ubiquitous computing project feels like the future.

Now, what was ubiquitous computing? It was the name for the Internet of Things before the Internet of Things. It was essentially the attempt to literally embed sensing, transmission, display, storage, and processing devices into every fabric, every physical component, every situation of everyday life. All of the buildings, all of the vehicles, all of the clothing, all of the bodies, all of the social circumstances. It was a very aggressive vision.

It was predicated on Moore’s Law. It was basically the idea that these computing devices are getting so cheap that we can essentially scatter them through the world like grass seed. We can treat them promiscuously. It doesn’t matter if some percentage of them fails, because they’re so cheap. We’re gonna put processing into everything. And we’re going to derive knowledge about the world, and we’re going to instill analytics on top of this knowledge, and we’re going to figure out how to make our lives finally more efficient. We’re going to realize all of our hopes and dreams by capturing the signals of the activities of our own body, of the dynamics of the city, of the wills and desires of human beings. And by interpreting and analyzing those desires, we’re finally going to bring harmony and sense to bear in the realm of human affairs. That was ubiquitous computing circa 2002.

Mason: But then the reality was we didn’t discover shit. All we found was that this ubiquitous data collection was being used against us. We were the form of media that was being consumed, almost.

Greenfield: You anticipate me. That’s absolutely correct. You know, we were the product, it turned out. But that wasn’t clear for another couple of years yet. It didn’t really get— I mean, maybe I’m just very stupid and maybe it took me longer to figure out what I ought to have.

But that didn’t actually become clear to me until around 2008, right. 2010, even. There was something else that happened in the interim, which was kind of the last moment of hope that I myself personally remember having around information technologies. It was June 29th, 2007. It was the launch of the original Apple iPhone. And in this single converged device, I thought, was the realization of an awful lot of ambition about making information processing human. I was still… I still believed that in those days, as recently as 2007. So as recently as ten years ago I still believed that.

And I went to work at Nokia in Finland to realize a competitor to that device. I was so inspired by that that I thought you know, that’s great for the first world. That’s great for the Global North. But Apple is really only speaking to a very limited audience of people in the relatively wealthy part of the world. Nokia is where the future is. Nokia at that point had 70% of the Chinese market share in mobile devices, 80% of the Indian market share in mobile devices. And I thought this is where we’re going to take all of these ambitions and force them to justify themselves against the actual circumstances of the lives and conditions that most people on Earth experience. I had a lot of hope about that. And as it turns out, that’s not what happened.

We were told that fishermen in East Africa would use their mobile devices to find out about market conditions and the latest available spot prices for the fish that they were about to dredge up out of the sea before they went to market. We were told that canonically, women would use this to learn about family planning and take control of the circumstances of their own fertility and increase their agency vis-à-vis their own communities. We were told that the canonical queer kid in Kansas was going to find other people like themselves and not feel so isolated anymore, and not feel like they were just one in a million that was arrayed against them—that they were going to find solidarity and life and voices that resembled them.

And it is possible that all of those things happened, anecdotally, on a small scale. But something else happened in the meantime. Which was the capture of all of these technologies and all these ambitions by capital.

Mason: Well that was going to be my next question. If 2008 was the day the Internet died, I mean, what was driving the obsession up to that point? What was driving the obsession to collect this data, to make everything ubiquitous? The obsession to model the world. I mean, were these done with very kind of egalitarian viewpoints, and capital just happened to get involved and cause the mess that we’ve had over the last sort of six years?

Greenfield: In retrospect I want to say that those were the last years of the Enlightenment. I really do. It’s a pretty big claim, but I think that the technologies that we attempted to bring to bear in those years were sort of the last gasp of Enlightenment thought. I mean think about it for a second, right. The idea that with this device that each one of you I assume has in your pocket or your hand right now, it gives you essentially the entirety of human knowledge, instantaneously, more or less for free, on demand, wherever you go. And you can do with it whatever you will. How is that not a realization of all of the ambitions that are inscribed in the Enlightenment project? It’s really something pretty utopian to me. And a fact, right. It exists now.

But we forgot to disentangle some things. I mean you know, much of this was done with, again, the best intentions. If you look back at John Perry Barlow and A Declaration of the Independence of Cyberspace. If you look back at— Again, the Californian ideology that suffused the early years of the Web and web development. The move towards openness, the move toward standardization. All of these things were done with the deepest dedication to the democratization of access to information. And if you think about for example the slogan of the Whole Earth Catalog you know, “access to tools and information,” again this was something that was realized in the smartphone project, and delivered to people by the hundreds of millions.

The trouble is that, as I say in my presentations, something else happened. And it wasn’t the thing that those of us who were invested in making this happen imagined or actually believed would happen. It wasn’t any kind of emancipation except perhaps the kind that Marcuse would’ve called “repressive desublimation,” where all of these things that people had thought were unsayable in public were suddenly validated by their peer groups or suddenly validated in their echo chambers. And all of a sudden the most antidemocratic, the most reactionary sentiments became expressible in public. So in a sense we got what we asked for, but it wasn’t what we expected it would be.

Mason: Do you think there’s a degree of mid-90s retrieval in technologies such as blockchain? I mean these guys, the evangelists of blockchain, say that they’re going to build Web 3.0, and it’s almost as if they forgot that was John Perry Barlow’s original mission, the decentralized Web. And these guys want to build a decentralized web, but 50% of them are very young kids—my peers—getting into cryptocurrency trading and actually forgetting what that underlying technology could potentially do. Or do you think we’ve already lost when it comes to blockchain?

Greenfield: Well… [laughs] I don’t think there’s 90s retrieval going on in the blockchain so much as a direct line of continuity from a 1980s project. People in this crowd— I’m reasonably familiar with the people in this audience— Raise a hand, everybody who’s ever heard of the Extropians. Oh my goodness, none. No!

Mason: My first interaction with an Extropian was Max More.

Greenfield: [laughs]

Mason: So he was the transhumanist philosopher, and I met him at 18 years old in a hotel room in London.

Greenfield: I’m so sorry.

Mason: And he told me that I could ask him any question apart from about the time they cryogenically froze his best friend’s mother. So this was the Extropian philosophy, and a lot of those guys went and became CEOs of cryonics companies and wanted to live forever. I mean, there was that corruption within what happened. The philosophy never matches the execution, and I wonder why.

Greenfield: Except, except, in the blockchain. So let me explain to you who I think the Extropians are. This is a beautiful vignette that illustrates something about it. These were technolibertarians in, but not primarily of, the Bay Area in the 1980s. They were hardcore Randians. They were hardcore believers in individual sovereignty. They thought of the state as an absolutely unacceptable intrusion on the affairs of free, sovereign individuals. They thought that the only valid relations that ought to exist in the world were relationships of contract between free, willing, consenting adults.

And like other libertar— Are there any libertarians in the audience that I’m going to offend terribly by making fun of? No? Good. Okay. Because I think this is fundamentally an adolescent and specifically an adolescent male worldview. It’s a view that suggests that I’m gonna do whatever I want and Mommy and Daddy can’t tell me that I can’t. And there’s something kind of like, pissy about it.

But these were people who would swan around the Bay Area in ankle-length leather trenchcoats. They gave themselves names like Max More, because they were all about the positive future and you know, our positive aspirations in that future. They believed in the absolute unlimited ambit of human progress. And they would give themselves… You know, they had acronyms like SMILE, which was… What was SMILE? I’m forgetting this. But the “LE” was something about life extension, right. Ah! Yeah, smart drugs, intelligence amplification, and life extension. And they thought they were going to live forever. They literally thought they were going to live forever, and one of the ways—

Mason: They still do.

Greenfield: Yeah. Yeah. And one of the ways that they thought they were going to do this was by cryonically freezing themselves when they thought they were about to die, until nanotechnology had advanced to the point that their bodies could be resurrected, their personalities could be downloaded into the newly-revivified bodies, and they were going to go on and live immortal lives in the paradise to come that was realized through technology. These people really believed this stuff. And they were mostly, and rightfully, forgotten. Because this philosophy— You’ll forgive me, I personally believe this philosophy is a joke.

Except a couple of them went more or less underground and set about building a part of this vision. Not the cryonic part. Not the smart drugs part. Not the infinite intelligence expansion, or the bush robots, or the Dyson spheres around the sun. Or the Computronium. They set about building the financial infrastructure that would be required by a universe that was populated by sovereign individual immortal entities.

And that’s how we get the blockchain. We literally get the whole infrastructure of the smart contract, and smart property, and the calculational establishment of trust, and the whole trustless architecture and infrastructure of the blockchain, from people who didn’t believe that the state—or any central authority—had any rightful business interfering with our affairs. So they built an infrastructure to substantiate the way of life that they believed in. And it worked.

Mason: The crazy thing is I don’t think the cryonicists are there just yet. I don’t think they’ve even discovered blockchain. The funny thing about a lot of the Extropy folks you talk about is, they’ve got a chip on their shoulder about the fact that they didn’t make a bunch of money in what happened in the 90s. And Kurzweil then took their Singularity term and made it marketable, and now Elon’s running around and Peter Thiel’s running around doing a lot of the stuff that they prophesied, but they don’t get the credit for it. And they’ve got a weird sort of chip on their shoulder. There’s a lot of quiet blogs in the dark corners of the Internet where they go, “We said that in the 80s but you know, these guys are building it. Screw them.”

Greenfield: And honestly, if I were Max More and Natasha Vita-More, his partner, I would feel the same way. They were. They were saying these things before Peter Thiel thought to infuse his veins with virgin blood. They were saying these things before…yeah, before Elon came around to say that he got verbal government approval for a vacuum-evacuated tube beneath Washington DC. Yeah, they were. Whether it’s credit or blame that they’re looking for, they deserve it.

Mason: Well, we’ll leave it at that. I do want to go back to blockchain, though. So do you think it’s a get-rich-quick scheme at the moment for cryptocurrency traders? Or do you think perhaps, just maybe, there’s something more hopeful there? Can we build the decentralized web that John Perry Barlow had— I mean [inaudible] the blockchain folks, and the pain with speaking to them is they so desperately want to be taken seriously like the Web 2.0 folks. Well, they call it Web 3.0, but they borrow the language from Web 2.0. So they call their apps DAPs—

Greenfield: DAPs.

Mason: Decentralized Apps, which is the most fucking stupid term I’ve ever heard. Like, “Yeah, we’ve got a DAP!” I’m like, what the fuck is a DAP? It’s a decentralized app. They’re trying to make it look, sound, and market it like Web 2.0.

Greenfield: You know, I didn’t know when I came here that I was going to be in such you know…comfortable— This is like you know, we’re having— I hope somebody in the crowd really radically disagrees with the opinions that we’re expressing up here. Because that’s the only way this can ultimately be of value for anybody.

Mason: Alright, sorry.

Greenfield: Because if we agree with each other on—

Mason: So I can make a lot of money off of Ether! What’s wrong with Ethereum? It went up $120 after the crash last week, but.

Greenfield: Okay, so the thing about Ponzi schemes is that the people who’re invested in them believe in them, right? It’s entirely legitimate from their perspective. Any multi-level marketing organization relies, after the first couple of people, on people who are true believers. And they propagate the value framework of the multi-level marketing organization or the Ponzi scheme out into the world. And they’re very— You know, like any other religion, we get invested in things. I mean, I’ve probably got things that I’m invested in where you could confront me with objective evidence that I was wrong, and it would only reinforce me in my insistence that I was correct. Because that’s the way the human psyche appears to work. We now know this. You can’t use logic or reason to argue people out of a position that they didn’t get into by way of logic or reason. And the secret is that most of the things we believe we didn’t arrive at rationally.

So a lot of the enthusiasm for blockchain is being propagated by people who are invested in it. And to me the interesting question is why are they invested in it? What vision of the future are they trying to get to? There are… The most heartbreaking thing for me is the people on the horizontalist left who are really invested in blockchain psychically, because they think it will realize the kind of utopian left anarchist future. Which is a future that I personally… You know, my politics are…you know, libertarian socialist. Or you know, democratic confederalist. Whatever you wanna call it, it’s horizontalist, you know, all that stuff.

So yeah, do I want to believe that blockchain can make that happen? Of course I would love to believe that. But I’ve done just enough digging to find out that the odds of that happening are not terribly great. And if you want to achieve those goals… Goals of confederalism or municipalism or horizontalism or participatory democracy. You’re much better off trying to realize them directly rather than automating the achievement of that goal by embracing blockchain technology.

Mason: So what do you mean, realize them directly?

Greenfield: It’s not going to be nearly as sexy. But I mean having neighborhood councils, neighborhood committees. Affinity groups that you work in. The most amazing thing to me at this time is to look at the real-world examples of confederalists and municipalists who are making headway in the world, who aren’t basing their actions and their efforts on utopian technologies but are actually going out and doing the hard work of organizing people. Almost as if it were the 1930s, right.

Of course, are they using their smartphones? Yes. Are they using, you know, Telegram? Yeah, of course they are. Are they using text messages and Google Docs? Are they using cloud-based applications to suture people and communities together? Of course they are, because we’re not in the 1930s and we do have tools that we didn’t use to.

But the real hard work is the work of retail politics. It’s the work of engaging people eye to eye, directly, and accounting for their humanity, their reality, their grievances, their hopes, their desires. That is not something as yet that I can see being instantiated on any infrastructure, blockchain or otherwise, and having the same kind of impact in the world.

Mason: Firstly, you’ve written a lot about the city, and I want to go back to IoT, the Internet of Things. So you said you were seeing it in 2008; I was seeing it about 2012, the excitement over smart fridges, which seems to repeat itself every three years ad infinitum. And we never got it. And yet there’s still a drive towards this thing called the smart city. But with things that are happening in the UK, specifically with the NHS hack, I mean, are we thinking about the cybersecurity implications of networking an entire city?

Greenfield: No, we’re not. And the reason is, as I say in the book, as I argue in the book, it’s an artifact of business model. And here again is why it distresses me specifically that capital captured the Internet of Things. When you go to… Oh, what’s the name of the big British chain, Cellphone Warehouse or whatever. The one that’s on you know, Tottenham Court Road.

Mason: Carphone Warehouse.

Greenfield: Fine. Yeah. Okay. You go in there and you buy a webcam, right. And that webcam may be ten quid at this point. The fact that it was engineered so that it could be delivered to you at ten quid, and the manufacturer and the vendor are still going to be able to make a profit on it, means absolutely no provision for security could be incorporated into that device. It would simply cut into somebody’s profit margin—it will not happen. And so the technical capability exists to provide each one of these devices with some kind of buffer against the worst sort of eventualities. But for reasons of profit, that hasn’t been done.

And so you can go there, and you can buy a webcam, and you can slap it up in your nursery or in your living room or in your garage. And odds are that unless you’re very thoughtful, very knowledgeable, you know what you’re doing, you read the manual and you configure the thing properly… You know, guess what, there are search engines that are going to automatically scan the Internet for open ports, for cameras that are speaking to the Internet through that port and that don’t have a password, or have the default password, securing that feed. And you know, literally somebody 8,000 miles away can search for open webcams and find them. And we’re talking about webcams that are looking onto babies’ cribs. We’re talking about webcams that are looking onto weed grow ops. We’re talking about the back offices of fast food restaurants. You name it. It’s out there.

And the reason that you can see all of these things from the safety and comfort of your room is that the manufacturer—probably in Shenzhen—you know, they’re making two or three pennies on each one of these cameras sold. If they had bothered to actually engineer it so that it could be secured, that profit would have evaporated.

And it’s the same thing… You know, there’s always this motive. Wherever you look in the Internet of Things you run up against this. And frankly, I’ll be very honest with you, I wish this weren’t so. It is actually boring for me at this point to open up the paper and see the latest example— You know, over the last couple of days everybody’s probably seen the thing about Roombas. Have you all seen the thing about Roombas now? You know what Roombas are doing?

Everybody loves Roombas because they’re seen as being these harmless robots that kind of humbly vacuum your home. It turns out that Roombas, by definition and in order to do what they do, have the ability to map your home in high resolution. And now, in search of another revenue stream, the vendor of Roombas is selling that information—or is, excuse me, contemplating selling that information—to the highest bidder.

You didn’t know when you put that little hockey puck thing down to vacuum up the cat hair in your house that you were mapping every contour of your existence in high resolution and selling that to somebody. And oh, by the way, not deriving any financial advantage from that yourself, but giving up that financial advantage to the vendor and the third party. You had no idea. You were never asked for your consent. You were never notified. But that’s what’s happening.

And I promise you it is no fun at this point to be the anticapitalist Cassandra who sits up here and says, "Guess what, you guys. This is what's going on." Because people are like, "Ah, God, you again. You again. You're so… You're no fun. Why won't you let us have our robots? What's wrong with having a webcam in the house?" And I'm like, fine. If you don't mind the idea of a hacker in Kazakhstan looking into your kid's playroom at will, be my guest. But I wouldn't do that.

Mason: There's the micro scale of the home, but there's the macro scale of the city itself, and there's a lot of excitement around autonomous vehicles and self-driving cars. Some of the most troubling stuff that I've seen written is about what happens when all of these cars are connected. Because whether it's driven by a human or it's driven by a machine, every single one will have to have a beacon (at least in the UK policy currently) to identify where it is on the road, and so the ability to take control of those cars is opened up. And we won't have just one London Bridge event, where we have someone careening a truck that they hired into a bunch of people. We could have sixteen simultaneously, done by a truck that was driven by someone who had no agency over the fact that it was going to go kill people.

My issue is cybersecurity on a wide scale. Why are we not there yet? Why are we not just running petrified from a lot of this IoT stuff going, "Are you fucking kidding me?"

Greenfield: Because you've already answered the question. I mean, as a matter of fact it would be easier to do it by powers of two simultaneously, right. It would be easier to do sixty-four trucks simultaneously, or 256 trucks simultaneously. Because they're all the same standard model and they all have the same security package, right. You can capture multiple cameras at once because they don't have security on them. I promise you that there's going to be a vendor of automobile networking that is going to have a similar lack of attention to detail, and it will simply be easier to do it all at once.

Why are we not running screaming from these things? Well…we believe in the future. And we believe that the future is going to be better. And we believe… I mean, putting the question of terrorism to the side, why is it that we never talk about autonomous public transport? Why is it that when we imagine the driverless car, the autonomous vehicle, we always imagine it as simply the car that people own now, but without a steering wheel?

Mason: Because of the manufacturers. Nissan are fucking terrified that nobody's going to buy cars.

Greenfield: And they’re right.

Mason: Yeah. And the insurance companies are even more petrified. If you can prove you're never going to have a crash, buy a viva.

Greenfield: No, you're right. You're right. So again, this is kind of a drumbeat that I'm sure gets tiring for people. Capitalism is the problem, right. Capitalism is the ultimate framework in which our imaginary is embedded. And we have a really, really hard time seeing outside that framework and saying, well, maybe these things could be collective goods. Maybe these things could be municipally owned. Maybe these things don't have to replicate all of the mistakes that we've made over the last hundred years. Wouldn't that be amazing?

The trouble is that— You know, it's the most enormous cliché on the left: it is easier to imagine the end of the world than it is to imagine the end of capitalism. Like, this is such a cliché that it's like one of these inspirational quotes on Facebook. Nobody's quite sure who said it originally, and there are multiple people who've— You know, Abraham Lincoln probably said it. And we need to begin urgently imagining what that looks like. Because if we don't, we're never going to be able to imagine a place for these technologies in our lives that responds to the most basic considerations of human decency and the kind of world that we want to live in. It's that simple.

And if you don't already agree with me, I certainly don't expect to convince you tonight. This is simply my opinion. But it is…you'll forgive me, it's an opinion that is bolstered by a depressingly consistent litany of evidence over what is now fifteen or twenty years. Every single fucking time we seize on a technology that looks as though it might be used for something interesting, something outside the envelope of everything that we expect, everything that we're accustomed to, it gets captured and turned back—and in amazingly short periods of time. Like, one of you is going to have to do better. You're going to have to go out there and rip this envelope of constraints to shreds and imagine something that doesn't look like everything that we've already been offered. Because otherwise it's just going to be more of the same, over and over and over again. And you know, I'm old now, right. I don't want to live the declining years of my life in an environment where I've seen this all before and it's all— You know, somebody come at me with something profoundly new and different, and I will be the very first person to applaud you.

Mason: I just wanted… From the floor, I mean: who still believes in the future? Welcome—

Greenfield: One hand. Yay!

Mason: Welcome to Virtual Futures. We found the others. And we're all on God knows what. I mean, we spoke a lot about depression in the last one.

Greenfield: You want to kick the mic into the crowd and see what happens?

Mason: I do, but before we do that I have one other question, which I think— and let's jump— Should we just embrace accelerationist thought? Should we just go, you know what? If capital is the thing that's driving all this, let's just accept it. Let's run for it. Let's accept that humans are just here to train the machines to take over, when we finally are killed off by them or we no longer have the biology to survive the environments we're in because we fucked it up. And it would be okay for some of the humans, because those would be the guys who fly off to Mars and have their own little species—their subspeciation planets there. I just wonder, should we embrace the accelerationist viewpoint, and should we allow some humans to just subspeciate, or aspeciate?

Greenfield: Uh, well…you're all welcome to, but I can't, and I couldn't bear myself if I did. Because honestly? Accelerationism feels to me like a remarkably privileged position. It's something that people who are already safe and comfortable can say: "throw caution to the winds; let it all fly," right. You can say that if you've got a roof over your head and food in your belly and healthcare for the rest of your life. It's easy to say that.

If you're any closer to the edge than that— If you have any real amount of what we now call precarity, fear, in your life. If you have fear in your belly because you've watched the people around you struggle with their health, or their mental health. If you've been touched in any way by the economic downturn that's kind of taken up residence in our lives since the introduction of austerity. If you perceive yourself to in any way not have been advantaged by the past forty years of neoliberal hegemony across the Western world, it's impossible to embrace accelerationism if you have a beating heart and anything resembling a soul. It's my own personal opinion. I hope I'm not insulting any of you. But that is— You know, accelerationism to me is an abdication of responsibility for the other human beings you share the planet with, and also, by the way, the nonhuman beings and sentiences that you share the planet with.

Luke Robert Mason: So on that note, whether you believe in the future or not, we are going to throw out to audience questions. We're gonna see if this might work. So we're going to hand this mic around. We're so understaffed it's incredible, so if anybody wants to run our mic that would be great. Or we could work as a collaborative unit and pass this mic between folk—

Adam Greenfield: We could make it happen. I'm sure we can make it work.

Mason: Or sometimes we have to just grab mics off of people. By the way, a question has an intonation at the end. So if you have any questions…

Greenfield: Oh right. Yeah no, that's a really really good point. I do a lot of talks where people make reflections. I'm sure you've all got fascinating things to say but I would love to hear those things afterwards over a beer? And right now it's literally for questions that we will attempt to answer. If you have a reflection to make, maybe the time for that is later on.

Mason: Wonderful. Any questions?

Matthew Chalmers: Hello.

Greenfield: Howdy. What's your name, man?

Chalmers: My name is Matthew Chalmers. I'm an academic from the University of Glasgow.

Greenfield: There you go.

Chalmers: There you go. And I just came—I just walked out now from a meeting in Her Majesty's Treasury where there are people from government trying to find out about distributed ledger technologies and what they might do about them. They're skeptical but interested, and they're being hit by this wave of hype. And I was one of the people throwing rocks. Because I think the hype is just going to become totally overblown.

Greenfield: You've been throwing rocks since I've known you.

Chalmers: Why change the habit of a lifetime? So I wonder whether Adam and the others would like to… What would their message be to the people from the Justice Department, and the Treasury, and the banks I just talked to? Because it was really freaky.

Greenfield: I would love to pick your brain over a beer as to what that meeting looked like, to the degree that you're comfortable sharing it. You said they were skeptical, and that's fascinating to me. Like, I assume… My default assumption is that those people are not stupid. And they have a certain ability to know when they're being pushed into a corner. But they don't always have the tools to resist that. And so my question to them would be what is it that people are asking of them? Why is it that distributed ledger… Which is not identical with blockchain—we need to be very careful with the terminology here. But what is it that they hope to achieve with a distributed ledger? And are there not possibly other ways of achieving those ends that don't involve the transition to an entirely new and unproven technology? That would be— I mean, yeah seriously. I mean like, I'm jealous that you got to be in that room; I'm grateful that it was you in that room.

Chalmers: I wasn't the only one.

Greenfield: I'm sure. But I think that… You know, dare I hope that— I'm knocking, you can see knocking the chair here instead of knocking on— Here's wood; knocking on wood. Dare I hope that we have been burned enough at this point and we have plenty of case studies to point to where some multi-billion or tens of billions of pounds of investment was made in the technology, and the technology vendor turned out to not have the best interests of the public entirely… Dare I hope? I don't know. It's an amazing circumstance to think of and I would love to catch up with you more afterwards and find out what that conversation went like.

Mason: Any other questions?

Greenfield: Say your name, please.

Mason: Also, if anyone wants to earn themself a beer, I really need someone to run that mic. So if anybody can help, that would be great. Sorry.

Audience 2: My name is Jaya and I am writing a PhD on blockchain technology. And I would also love to hear more about what happened in that meeting. My question is not about blockchain, though.

Greenfield: Thank you.

Audience 2: I'm more curious about… The conversation that the two of you were having was very much kind of focused on accidents and potential security problems with digital technologies. And usually when that framing happens, it kind of turns the problem into just another problem for technology to solve as in okay, there's a security problem. Let's get some cryptographers involved, let's get some— You know, it's another problem to be solved by more technology.

So I was wondering if there's a different kind of angle or some other kind of aspects of the critique. I mean, you mentioned a kind of general critique of capitalism, which sounds fantastic—

Greenfield: Pretty broad.

Audience 2: —and makes sense. But I was wondering like, some of the more specific angles that you cover in the book.

Greenfield: I do wonder, you know… In the 1960s (and I'm going to forget and not be able to cite this appropriately) there was a body of thought, in what was then called human factors research, about normal accidents. And you can look this up right now and find the canonical paper on normal accidents. But the idea was that—and I think that the canonical example here was a nuclear power plant—in any of the complex processes that we've installed at the heart of the ways in which we do life under late capitalism at this point in time…accidents aren't accidents. We can expect that our processes are inherently braided enough and complicated enough and thorny enough and counterintuitive enough that errors will arise at predictable intervals, or at least, you know, predictably.

And I thought in the seed of that was something profound and not merely amenable to technical resolution. Because as I understood it, the point of that argument was to say not to slap a quick technical fix on a system that you know is going to throw errors at intervals. But in a sense to redefine processes around what we understand about who we are and what we do and how we approach problems. It isn't simply to build backups and cascading redundancies into complicated systems, it's to accept that we make mistakes.

And I think it's that acceptance of human frailty that I found particularly radical and particularly refreshing. That ultimately any of our institutions are going to be marked by…you know, it's no longer done to say "human nature" so I won't say human nature. But anything that we invent, anything that we devise, anything that we come up with, is going to be marked by our humanness. And instead of running from that, it might be best to try and wrap our heads around what that implies for ourselves, and to cut ourselves a goddamn break, you know, and to not ask that we be these sort of high-performance machines that are simply made out of flesh and blood but that are slotted into other networks of machines that don't happen to be made out of flesh and blood.

I thought that there was a hopeful moment in there that could have been retrieved and developed. And I think frankly that there still could be. I think that most of what gives me hope at this point are processes which are not at all sexily high technology but are precisely about understanding how people arrive at decisions under situations of limited information and pressure. And I think that's why I got involved in what was then called human factors in the first place, was because the world is complicated, and it is heterogeneous, and there's not going to be any critical path to a golden key solution to any of this. We have to work at it together, and it's a process that is painstaking and involved and frustrating—oh my God is it frustrating. And to my mind, the more that we understand that, and the more our technologies inscribe that lesson for us in ways that we can't possibly miss, the better off we are. Is that…a reasonable answer? Groovy.

Mason: My flip side to that is that we can't prepare for it, and we need the catastrophe to occur. Philosophers have been arguing about the trolley problem with regards to self-driving cars for God knows how long. We won't give two shits until a car actually kills someone, and blood is actually spilled. And my critique of the Extropy folks is that they thought they were going to get their living-forever futures without anybody dying. If you're going to experiment with certain types of medical technology on individuals to help them live longer, then you're going to have to experiment on human individuals eventually, and there will be mistakes. The history of science shows us that.

Now Professor Steve Fuller, who we've had here a lot at Virtual Futures, has argued that maybe the only way to actually make some of these crazy visions possible is that we sign up for our humanity. In the same way in the 1930s you signed up for queen and country to go to war, you'd sign up your humanity and you'd go and get your weird biotech experiment to see if it made you live longer, because if it did you would be a pioneer for the future of humanity. And if you died, well you died in the service of the future of the human race…whether we'd ever get there or not.

Greenfield: I think you make a really good point, though, which is that when the Extropians did have, literally, their heads cut off and frozen in liquid nitrogen and they entrusted their heads to these repositories that they thought were going to last for 10,000 years, the holding company went bankrupt and defaulted on their electric bill. The electric bill on the coolers wasn't paid. The coolers were shut off by the electric company. The facility reached room temperature. The coolant leaked out of the vessels and the heads rotted.

Mason: You know their solution for that?

Greenfield: No pun intended, go ahead.

Mason: Yeah. They want to send them to space.

Greenfield: [laughs] 'Course they do.

Mason: They need more space to bury dead people, so the coldest vacuum is space, so why don't you just have them orbiting—

Greenfield: So, but…

Mason: —ad infinitum? I'm fucking—I'm serious.

Greenfield: I believe you. I completely believe you. But the point is that human institutions you know, they're not transhuman, they're not posthuman, we're all too human, right. We go bankrupt and we don't pay the power bill. And then the power company cuts off the po—this is what happens. The space launch system you know, somebody transposes something that was in metric to Imperial, and the capsule that was supposed to orbit in a comfortably tolerable environment and keep your head frozen for ten million years is launched into the sun. Who knows?

The people who believe these things believe in the perfectibility of things which have never in our history ever once been perfect before, and they're betting everything on that perfection. And I find it touchingly naïve and childlike. But as a political program culpably naïve and to be fought with every fiber of my being.

Mason: Is the other piece of your book the thing that unites those technologies, this drive for optimization? Whether it's the city, the human, or anything else in between.

Greenfield: I hope again I'm not insulting any of you. None of us in this room are optimal. Like I'm not optimal—I'll never be optimal. I'll never be anything close to optimal and I'm not sure I would want to be optimal. You may have different ambitions and I wish you the best of luck. But I think it's going to be a rough road.

Mason: I found someone who's kind enough to run this mic, thank you ever so much.

Greenfield: You're not yourself asking a question?

Mason: Thank you.

Audience 3: Well…

Greenfield: Say your name.

Audience 3: My name's Tara. Hello. On that note about optimization, could you not say that it's somehow linked to capitalism? That you're always chasing this goal that you can never achieve, and we're now bringing that to ourselves physically, you know. You could say the same thing about the sort of gym craze that everyone seems to be going through as a cure for finding this optimal being.

Greenfield: Yeah. I think that capitalism is almost too easy a bugbear, though. Because the desire to optimize or to perfect is older than capitalism. And it's almost as if it has vampirized capitalism to extend itself. That logic of wanting to perfect ourselves, to measure ourselves against the gods you know, it's not new. And it's not shallow, either. I understand why it exists.

But the fact of the matter is that when we go to the gym—I go to the gym. You know, I will spend ninety minutes tomorrow on an elliptical machine. Why will I spend ninety minutes on an elliptical machine? Well, because I want to be fitter. Why do I want to be fitter? I want to look better in my clothes. I want people to think that I'm more attractive. I want people to think that I'm more attractive so that they are more likely to want to invite me to things, because my financial future depends on me being invited to things.

I mean, all of these… You know, these things are not innocent. And the motivations and the desires that we recognize in ourselves aren't there by accident. And I'm not going to say that they're always 100% there because of you know, capitalism—that's kinda shallow. But they're invidious. And what I would ask is that we each have the courage to ask of ourselves why it is that we feel that we need to be like some gung-ho NASA astronaut of the 1960s, "kick the tires and light the fires." Why is it that we feel called upon to operate in these high-performance regimes when we're after all simply human.

Mason: I think [alternative?] failure of the Extropians… So the morphological freedom thesis was we're going to be stronger, better, faster, more optimized. The thing they forgot is in actual fact that doesn't make us better as an entire species. The thing that we should do is embrace difference.

Greenfield: Yes.

Mason: It wasn't survival of the fittest, it was survival of the mutant, the individual, and the animal that actually survived. The weird ways in which the environment would manipulate them. And it wasn't the fittest ones that survived. And I wonder if we embrace difference instead of driving towards optimization that we'd have a more interesting experience. Or, will it go fully the other way and we subspeciate and we will have those guys who go off-planet and the rest of us will be left here.

Greenfield: Yeah, I think you're hitting on something true and real and interesting. Before Boing Boing was a web site it was a fanzine. And I think its tagline was something like "Happy facts for happy mutants," or something like that. And the happy mutants part was important, right. It was the idea that we weren't going to be constrained by the human body plan. And that we were going to invent or discover or explore new spaces. Like, not merely new expressions of self: new genders, new identities, new personas, new ways of being human, new ways of being alive.

And that was startlingly liberatory. It really was, um… You know, in 1985 or so that felt like something worth investing in, and something worth betting on. And I think it is sort of a failure of the collective imagination that we now interpret freedom to mean essentially the freedom to oppress and exploit other human beings, and the nonhuman population of this planet. Because it did at one point mean… Every single time I see somebody who still like, they're body hacking or they're putting a chip into their wrist or something like that, I have mixed feelings. Because on the one hand I see the last surviving note that somebody's hitting of something that was much bigger and more hopeful, and I also see the totality of the ways in which that's been captured and turned against the original ambition. That's a melancholy and a complicated feeling. But you're right. I mean maybe there's something in that to be retrieved and brought forward to the present moment.

Mason: We can only hope. Another question.

Audience 4: Hey, my name is Henri. I'm French but I'm sure you've already heard that. Anyway—

Greenfield: [laughs]

Mason: What a wonderful opening.

Greenfield: Well done, yeah.

Audience 4: So we know robots are taking more and more jobs everywhere. And there's a belief that creativity's one of the only sectors that won't be touched by automation. But do you think that robots can be creative? And if yes does that mean we've reached a kind of singularity?

Greenfield: So I don't believe in singularities, right. Bang. So let's dispense with that.

Weirdly enough, though, there's some tension between the two parts of my answer. I think the Singularity is a human ideology. I think it doesn't correspond to the nature of nonhuman intelligence. I do think nonhuman intelligences are capable of being creative.

And let me not, for the second, talk specifically about machinic intelligences. I think that we know, by analogy to other forms of nonhuman intelligence that are capable of creating…using the world as an expressive medium, that you cannot tell me that the informational content of whalesong is all that it's about. You cannot tell me that birdsong is simply about conveying information. It is a presentation of self. It is an embroidery on the available communication channel, and there is pleasure that is taken in that act. So I would interpret that—birdsong, whalesong, the communications of animals in general—as expressive and creative acts. Right here, right now, without even having to think about machinic intelligence.

So, do I believe that we will—relatively soon—arrive at a place in which algorithmic systems are generating semantic structures, communicative structures, expressive structures for their own pleasure? Or something indistinguishable from pleasure—yeah, I do. I absolutely do. I do not think that creativity is the last refuge of the human. I think for all that I am in many ways a humanist in the old-fashioned way, it's very difficult for me to draw any line at any point and say this is the unique thing about humans that nothing else in the universe is capable of.

And as a matter of fact, what converted me to this position was in fact an attempt to do that; it was the attempt to find something uniquely and distinctively human. And you know, if you have any intellectual integrity at all, if you go down this path you find pretty quickly there's nothing that we do that other species don't do. There's nothing that we do that other complex systems in the universe don't do. Very, very, very little, it turns out, is distinctively human.

So yes I do believe in relatively short order we will be confront— If in fact they don't already exist and we're just simply not perceiving them, in the way that an ant doesn't perceive a superhighway that's rushing past its anthill, right. It is possible that these expressive and communicative structures are already in existence at a scale or at a level of reality that we do not perceive.

But even putting that possibility to the side, yes I think that we will invent and create machinic systems which will to all intents and purposes realize things which we can only understand as art or as creative or as expressive. And then the question becomes what rights does our law provide for those sentient beings—because they will be sentient. What space do we make for them that is anything but slavery? And how do we treat them that is in any way different than the way that we treat people at present?

You know, Norbert Wiener, in, I'm going to say, 1949 (and somebody will Google this and tell me that I'm wrong), wrote one of his first works of thought in cybernetic theory, called The Human Use of Human Beings. And I come back to that framing a lot. It is about the use of things that are regarded as objects, and not things which are accorded their own subjectivity, their own interiority, their own perso—their own being. And I think that we're going to have to confront that in our law, in our culture, and in our ways of interacting with one another, sooner rather than later.

Mason: Any other questions?

Audience 5: Hey, I'm Matt from Scotland.

Greenfield: Hey. What's up?

Audience 5: You mentioned you want to see the end of the capitalism, and I'm all for it. I actually want to work on that. Do you have any ideas for me?

Greenfield: Yeah. I do. Öcalan. The founder of the PKK in Turkey, he wrote a book called Democratic Confederalism, go read it. Great book.

Audience 6: Hi, I'm Simon from Brighton.

Greenfield: Hi, Simon.

Audience 6: What's your view on how employment's going to be affected over the next twenty years by all of these changes we've been talking about?

Greenfield: Yeah, oh God.

Audience 6: Sort of how the Extropians were going to have their futures without having to die, will we get our futures and still get to keep our jobs?

Greenfield: I think we need to accept that our language around this stuff is braided and interwoven with assumptions which are no longer tenable. So, what is a job? A job is a thing that we do during the hours of our days that is remunerative to us and that generates value for the economy. And that somehow most of us are expected to have as a consequence of being adults, in a culture that expects full employment or something close to full employment. And in which a metric of the healthy functioning of the economy is that there is something close to full employment of human beings.

And I think that all of those assumptions are becoming subject to challenge if they haven't been challenged already. So the notion that a job is a thing that you go to is already you know, it's already been exploded and disassembled by the past thirty or forty years of experience. Like, we have tasks now rather than jobs. We no longer—

Audience 6: [inaudible]

Greenfield: Gig economy, absolutely. That was the first assault on these ideas. But then comes the idea that there are tasks which automated systems can perform at much lower cost than human beings. And particularly if we accept the thesis that I've just argued to the gentleman who asked the question before last, there are very few tasks in the economy that cannot ultimately be performed by machinic systems, right.

Like, I used to make this argument to like ad agency people. And they would say, "Oh you know, a guy who puts together cars on an assembly line yeah, that can be automated away. And a nurse. Well, the job of a nurse can be automated away. We'll find people to wipe the butts of people in nursing homes and robots will do that and algorithms will do the rest. But I'm the creative director of an ad agency, and you'll never automate away the things that I do. The spark of creative fire that I bring."

And I'm like, dude, do you understand what a Markov chain is? Do you understand how I could take the whole corpus of 20th-century advertising and generate entirely new campaigns out of what worked in the past? So there's very little that I see, again, as being beyond the ability to be automated. And I think that when that happens, we really, really have to wrestle with the idea that the assumptions by which a healthy economy is assessed are misguided. The whole notion of economic growth, the whole notion of the wise stewardship of a nation-state being one that's coextensive with economic growth, expressed in something close to full employment: we need to devise systems that replace all of that, because it's all on its way out.
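[The Markov-chain idea invoked above can be sketched in a few lines. This is a minimal illustration, not anything shown at the talk: a first-order, word-level chain is trained on an invented toy "corpus" of slogans, then random-walked to emit new copy that locally resembles its training text. Every string and name below is made up for the example.]

```python
import random

def build_chain(corpus, order=1):
    # Map each state (a tuple of `order` consecutive words) to the list of
    # words observed to follow that state in the corpus.
    words = corpus.split()
    chain = {}
    for i in range(len(words) - order):
        state = tuple(words[i:i + order])
        chain.setdefault(state, []).append(words[i + order])
    return chain

def generate(chain, length=12, seed=None):
    # Random-walk the chain: repeatedly sample a word that followed the
    # current state, so the output locally resembles the training text.
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    while len(out) < length:
        followers = chain.get(state)
        if not followers:               # dead end: restart from a random state
            state = rng.choice(list(chain))
            followers = chain[state]
        word = rng.choice(followers)
        out.append(word)
        state = tuple(out[-len(state):])
    return " ".join(out)

# Invented toy corpus standing in for "the whole corpus of 20th-century advertising"
corpus = ("just do it because you are worth it "
          "think different because the future is now "
          "the best a man can get is the real thing")

chain = build_chain(corpus, order=1)
print(generate(chain, length=12, seed=7))
```

In practice one would train on a far larger corpus and use longer states (two or three words) for more coherent output, but the mechanism is the same: each state stores the words observed to follow it, and generation is just repeated sampling.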

At this point most people talk about UBI. They say the universal basic income is going to save us all. And I say well that's great. I love the UBI. But surely you're talking also about the UBI in a context of universal healthcare, and the right to housing, and you know, the right to shelter, aren't you? Because if you're not, the UBI will wind up getting siphoned back off of people in the form of user fees for services which used to be provided by the public and are now suddenly privatized. If we simply have the UBI in the usual neoliberal context, we haven't really gotten anything at all.

So, jobs, the economy, employment…hobbies. You know…craft. I mean, all of these terms have been defined in a context in which all of the assumptions that govern that context are no longer tenable. How do we begin to be human in a time when none of these things are any longer true? I have some ideas but I don't have any answers. All I have are my own instincts and the things that I've learned. And all you have are your own instincts and the things that you've learned. And together all that we have is our collective sense of what we've seen happen when automation happens. As I say elsewhere—not in the book—we're entering a post-human economy, and a post-human economy implies and requires a post-human politics. And we now have to discover what a post-human politics looks like.

Mason: I want to quickly return to UBI. So there have been two key ideas that I've heard which are quite attractive with regards to how UBI would actually work. One is the ability to sell our own data. So…I hate to return to blockchain, but the idea is that you're about to produce a whole bunch of new data that shouldn't be taken by the stacks. So you can produce genetic data and neuro data. And what do platforms want? Well, they want attention data, and that's neuro data. So if you can store that data locally and then sell it or micro-sell it back to the platforms that make the money off us in the first place, that's a way to basically bring in a small amount of income every time you're sitting there searching through Facebook, i.e. the advertisers pay us to watch their thirty-second bits of rubbish.

Or the second one is, we just need to come to terms with the fact that the employees of the sorts of companies who actually advocate the UBI, such as Google…you see the very young employees going, "Yes, UBI's a great idea!" But they forget they work for a company that's not paying its taxes in this country, and that tax money would be how UBI gets funded in the first place. So should they not be more accountable, and actually turn around to Amazon or Google or wherever the hell they work and go, "Jesus, I'm gonna need this UBI. Fucking pay your taxes."

Greenfield: If corporations paid their taxes we wouldn't be talking about UBI, period, end of sentence.

Mason: Yeah.

Greenfield: Yes, yes. Absolutely.

Mason: So the second one's more tenable.

Greenfield: Yeah. I mean, let's dispense with that first, dystopian vision.

Mason: Right.

Greenfield: Let's simply say that in the United States at least, if corporations paid their fair share of the taxes, you could afford the welfare state and a whole lot more. You could afford basic infrastructure. You could afford decent quality of life for every single human in the country and a whole lot more besides, and that's just the United States. Corporations should pay their damn taxes.

Mason: That didn't get a round of applause. I'm slightly concerned now. Any other questions?

Greenfield: Let's have this be the last question, if that's okay.

Audience 7: A lot of pressure. Thank you.

Greenfield: A lot of pressure on my bladder, particularly.

Audience 7: Oh, okay. Then I'll make it short. I want to talk a little bit (I'm Pete, by the way) about the market, and maybe the taste of people who use technology. I'm thinking particularly about augmented reality. Let's take Pokémon GO as an example, which everyone's a little embarrassed about now. Candy Crush has outlived Pokémon GO. And maybe, in terms of the market, there isn't the taste for the future that transhumanists want. People just want to play Candy Crush and wait for death. And because of that, we're actually defended against certain types of dystopia.

Greenfield: Bless you. I'm so glad somebody used the word "taste." So one of the things that happened when we did successfully democratize access to these tools, services, networks, ways of being in the world, was that we lost control of taste, right. I mean, when you had a concentrated decision nexus in the 60s, you could essentially impose high corporate modernism on the world, because there was a very, very concentrated number of people who were making decisions that governed the ways in which everyday life was to be designed.

And I gotta tell you, me personally, I think high corporate modernism was the high point of human aspiration. Like, Helvetica to me is the most beautiful thing that's ever been created. And the International Style, and monochrome, you know…everything is to me the epitome of taste. But it turns out that 99.8% of the people on the planet disagree with me. And that they would rather reality be brightly colorful, animated, kawaii, happy, fun, you know…literally animated pieces of shit, talking. And that they express themselves to one another by sending each other animated images of pieces of shit with eyes stuck into them. This is just a neutral and uninflected description of 2017, right.

Audience 7: [inaudible]

Greenfield: Well, okay. But you know, the thing is that um…

Audience 7: It makes people happy.

Greenfield: It makes people happy and who am I— There's not a damn thing wrong with that. That's ultimately where I'm going, is that it turns out that if what I'm arguing for is a radical acceptance of what it is to be human, it turns out that we like Dan Brown novels. And it turns out that we like anime porn. And it turns out that we spend a lot of time in spreadsheets, right. This is what humanity is. It's not…what I would like to believe that we are but that is what humanity is. And if I'm arguing for radical acceptance of that and a radical democratization of things, I have no…I have no choice but to accept that.

Now, what I can do is ask, and I think it's fair to ask, why people want those things. And why people think that these things are funny. And why people think that these are expressions of their own personality. How is it that we got there—this is the cultural studies student in me. How is it that these things became hegemonic? Why is it that we internalize those desires? Why is it that we interpret this as some kind of um…why do we think that these are expressions of our individuality when literally seven billion other people are doing the same thing? And for that matter, why do I think of the way that I'm dressed as an expression of my individuality when there are a million people who are doing the same thing?

These are deep questions. But I think that the only ethically tenable thing is to accept that taste is a production of cultural capital and that the taste that I particularly appreciate and enjoy was never anything but an infliction of a kind of elitism on people who neither wanted nor needed it.

I love brutalism. We see what happens when brutalism is the law of the land. I love Helvetica. You know, I love that stuff. I do. It makes me…it makes my heart sing. But it's not what humanity wanted. I guess…

I am literally bursting. So do you think we could end it there? Will you forgive me if we do? Nobody does this.

Mason: Before I return this audience to their spreadsheets of anime porn and put you out of your misery, I was going to ask you, if you can do it really really quickly…really quickly, how do we build a desirable future? Or should we just wait for the imminent collapse?

Greenfield: No, no. I think we get into the streets. I think we do. I think we get political. I think we get involved in a way that it's no— I would say up until about three, four years ago, I would have said that it was no longer fashionable. Thankfully it's becoming fashionable again to be involved in this way.

I really do think that the emergence in liberated Kurdistan of the YPJ, the YPG, these people are the most realized human beings of our time. They are doing things which are amazing and they're doing so on the basis of feminism and democratic confederalism. It's fucking awesome and inspires me every day of my life, and if they can do that under the insane pressures that they operate under, we can do that in the Global North in the comfort of our own homes.

Mason: Great. So, Adam doesn't have an optimized bladder, so we're gonna finish here. Radical Technologies is now available pretty much everywhere. It's available through Amazon, but I recommend you buy it from Verso.

Greenfield: If you buy it from Verso, literally it's 90% off today. They're having a promotion. Ninety percent. Go pay like 10p for it, today. It's awesome. It's a good book. Buy from Verso.

Mason: So, very quickly I want to thank the Library Club for hosting us. To Graeme, to the gentleman here…I don't even know him. To Dan on our audio, and everybody who makes Virtual Futures possible.

Greenfield: And Sophia with the microphone.

Mason: Sophia, thank you for the microphone, for actually… We're a skeletal team. And if you like what we do, we don't make any money. So please support us on Patreon and find out about us at "virtual futures" pretty much anywhere.

And I want to end with this, and it's with a warning, the same warning I end every single Virtual Futures with, and it is short, don't worry. And it's this: the future is always virtual, and some things that may seem imminent or inevitable never actually happen. Fortunately, our ability to survive the future is not predicated on our capacity for prediction, although on those much rarer occasions, something remarkable comes of staring the future deep in the eyes and challenging everything that it seems to promise. I hope you feel you've done that this evening. The bar is now open. Please join me in thanking Adam Greenfield.

Greenfield: Thanks, Luke. That was awesome. Thank you. It was awesome. Cheers.
