Luke Robert Mason: For those of you who are here for the first time, the Virtual Futures Conference occurred at the University of Warwick in the mid-90s, and to quote its cofounder, it arose "at a tipping point in the technologization of first-world cultures."

Now, whilst it was most often portrayed as a techno-positivist festival of accelerationism towards a posthuman future, "the Glastonbury of cyberculture" as The Guardian put it, its actual aim, hidden behind the brushed steel, the silicon, the jargon, the designer drugs, and the charismatic prophets, was much more sober and much more urgent. What Virtual Futures did was try to cast a critical eye over how humans and nonhumans engage with emerging scientific theory and technological development.

This salon series—and it has been a series, we've been running it for about two and a half years now—completes the conference's aim to bury the 20th century and begin work on the 21st. So, let's begin.


Luke Robert Mason: For many in this crowd Adam Greenfield needs no introduction. He spent over a decade working in the design and development of networked digital information technology, and his new book from Verso, Radical Technologies, a field guide to the technologies that are transforming our lives, tackles almost every buzzword that's been forced down our throats by the so-called digital gurus and innovation directors over the last six months.

But unlike those evangelists, Adam confronts these problematic promises with a fresh and critical voice. From the smartphone to the Internet of Things, augmented reality, digital fabrication, cryptocurrency, blockchain, automation, machine learning, and artificial intelligence, every technology is deconstructed to reveal the colonization of everyday life by information processing. This book is one step in revealing the hidden processes that occur when the intentions of designers are mutated by the agency of capital. And anybody who's joined us for our event with Douglas Rushkoff and Richard Barbrook knows that this may to some degree be a continuation of that discussion.

So in an age where our engagement with technology is one of unquestioning awe and wonder, when we find out about each new advanced tool through language structured by the PR team, and where the commercial news outlets have to sell us the future, this book is an essential read. So to help us better navigate the future, please put your hands together and join me in welcoming Adam Greenfield to the Virtual Futures stage.

So Adam, what are the radical technologies? What do you define as the radical technologies, and why did you select this particular set of technologies?

Adam Greenfield: That's a great question. So, do you know who Verso is in general? Do you have a sense of who Verso is? Yeah, I figured you probably did. No, I see one shaking head. Verso likes to represent themselves to the world as the premier radical publisher in the English language. So they're forthrightly left wing. They think of themselves as a publishing house of the left. And you know, for all of the different perspectives and tensions that are bound up in the left, I think they do a pretty good job of representing that tradition.

So in the first instance, if you're going to confront a title called Radical Technologies from an avowedly left wing publishing house, you would be forgiven for assuming that the intent of the author is to insinuate that these technologies have liberatory, progressive, or emancipatory effects when deployed in the world.

And I don't actually mean anything of the sort. I mean that these are radical in the truer sense, in the "original" sense. In, if you will, the root sense of the word "radical," which is that these are technologies which confront us at the very root of our being. They're not merely add-ons. They're not merely things which kind of get layered over everyday life. They're things which fundamentally transform the relationship between ourselves and the social, political, economic, and psychic environment through which we move.

And it wasn't very hard to identify the specific "technologies" that I wanted to engage in the book because, you know, as we've already established, these are the ones that are first and foremost in the popular culture, in the media right now—literally. And this is a torment and a torture for somebody who's working on a book that's intended to be kind of a synoptic overview of something which is evolving in real time. Literally every day as I was working on the book, I would open up my laptop and there would be The Guardian, there would be The New York Times, there would be the BBC with, oh you know, cutting-edge new applications of the blockchain beyond Bitcoin. Or driverless cars are being tested in Pittsburgh. Or indeed somebody whose Tesla was equipped with an autonomous piloting device was actually killed in a crash.

So I am profoundly envious of people who get to write about settled domains or sort of settled states of affairs in human events. For me, I was dealing with a set of technologies which are either recently emerged or still in the process of emerging. And so it was a continual Red Queen's race to keep up with these things as they announce themselves to us and try and wrap my head around them, understand what it was that they were proposing, understand what their effects were when deployed in the world.

And the additional challenge there is that I'm kind of an empiricist. I mean, one of the points of this book is to not take anything on faith. Do not take the promises of the promoters and the vendors and the people who have a financial stake in these technologies on faith. And neither should you take the prognostications of people who're inclined towards the doomy end of the spectrum on faith. Do not assume anything. Look instead to the actual deployments of these technologies in actual human communities and situations, and see what you can derive from an inspection of those circumstances. And the trouble is that we don't have a lot of that to go on. So that's the mission of the book.

Mason: So the thing that to a degree unites all of those technologies, all the things you speak about in the book, is something that you've called the drive for computation to be embedded into every single aspect of the environment. You also call it the colonization of everyday life by information processing. Could you just explain that core thesis?

Greenfield: Yeah, sure. I guess in order to do that concretely and properly I have to go back to about 2002. I was working as a consultant in Tokyo. I was working at a shop called Razorfish. And Razorfish's whole pitch to the world was "everything that can become digital will." That was literally their tagline. Very arrogant shop to work in. Everybody was just suffused with the excitement of the millennial period and we all thought that we were like, so far ahead of the curve and so awesome for living in Tokyo.

And frankly, after September 11th of 2001 I was bored to death in my job and I was really frustrated with it. Because that was a moment in time in which everybody I knew kind of asked ourselves, well, what is it that we're doing? Is it really that important? It was a real gut check moment. Everybody I knew including myself, we all asked ourselves: you know, we live in times where everything that we aspire to, everything we dream about, everything that we hope for, everything we want to see realized in the world, could end in a flash of light—in a heartbeat. So, we should damn well make sure that what it is that we're doing on a day-to-day basis is something meaningful and something true.

And at that time I was mostly involved in the design of the navigational and structural aspects of enterprise-scale websites, and I had done about fifty of them for like Fortune 500 clients. And I hated the work and I hated myself for doing the work.

And so I asked myself what comes next after these websites. Surely this cannot be the end state for the human encounter with networked information technologies. And I asked the smartest people around me, you know, "What's next after the Web? What's next after the ecommerce sites that we're doing?"

And given that it was 2002 in Tokyo, everybody said mobile. Everybody held up their little i‑mode devices and they said, "This green screen with the four lines of type on it, that's the future." And I couldn't quite believe that we were going to force everyday life with all of its texture and variability and wild heterogeneity, that we were going to force all of that and boil all of that down to the point that it was going to be squeezed through to us through the aperture of this little green screen with its four or five lines of text.

And I was just not particularly satisfied with the answers I was getting. And one person said something different, a woman named Anne Galloway. She said to me, "Actually, there's this thing called ubiquitous computing. And as it happens there's a conference on ubiquitous computing in Gothenburg in Sweden in about three weeks' time. And it's a little bit late, but why don't you see if your company will pay for you to go there and fly there and check it out and see what's going on." And so I trusted her and I thought, you know, she's onto something here. This ubiquitous computing project feels like the future.

Now, what was ubiquitous computing? It was the name for the Internet of Things before the Internet of Things. It was essentially the attempt to literally embed sensing, transmission, display, storage, and processing devices into every fabric, every physical component, every situation of everyday life. All of the buildings, all of the vehicles, all of the clothing, all of the bodies, all of the social circumstances. It was a very aggressive vision.

It was predicated on Moore's Law. It was basically the idea that these computing devices are getting so cheap that we can essentially scatter them through the world like grass seed. We can treat them promiscuously. It doesn't matter if some percentage of them fails, because they're so cheap. We're gonna put processing into everything. And we're going to derive knowledge about the world, and we're going to build analytics on top of this knowledge, and we're going to figure out how to make our lives finally more efficient. We're going to realize all of our hopes and dreams by capturing the signals of the activities of our own body, of the dynamics of the city, of the wills and desires of human beings. And by interpreting and analyzing those desires, we're finally going to bring harmony and sense to bear in the realm of human affairs. That was ubiquitous computing circa 2002.

Mason: But then the reality was we didn't discover shit. All we found was that this ubiquitous data collection was being used against us. We were the form of media that was being consumed, almost.

Greenfield: You anticipate me. That's absolutely correct. You know, we were the product, it turned out. But that wasn't clear for another couple of years yet. It didn't really get— I mean, maybe I'm just very stupid and maybe it took me longer to figure out what I ought to have.

But that didn't actually become clear to me until around 2008, right. 2010, even. There was something else that happened in the interim, which was kind of the last moment of hope that I myself personally remember having around information technologies. It was June 29th, 2007. It was the launch of the original Apple iPhone. And in this single converged device, I thought, was the realization of an awful lot of ambition about making information processing human. I was still…I still believed that in those days, as recently as 2007. So as recently as ten years ago I still believed that.

And I went to work at Nokia in Finland to realize a competitor to that device. I was so inspired by it that I thought, you know, that's great for the first world. That's great for the Global North. But Apple is really only speaking to a very limited audience of people in the relatively wealthy part of the world. Nokia is where the future is. Nokia at that point had 70% of the Chinese market share in mobile devices, 80% of the Indian market share in mobile devices. And I thought, this is where we're going to take all of these ambitions and force them to justify themselves against the actual circumstances of the lives and conditions that most people on Earth experience. I had a lot of hope about that. And as it turns out, that's not what happened.

We were told that fishermen in East Africa would use their mobile devices to find out about market conditions and the latest available spot prices for the fish that they were about to dredge up out of the sea before they went to market. We were told that, canonically, women would use this to learn about family planning and take control of the circumstances of their own fertility and increase their agency vis-à-vis their own communities. We were told that the canonical queer kid in Kansas was going to find other people like themselves and not feel so isolated anymore, and not feel like they were just one in a million arrayed against them—that they were going to find solidarity and life and voices that resembled them.

And it is possible that all of those things happened, anecdotally, on a small scale. But something else happened in the meantime. Which was the capture of all of these technologies and all of these ambitions by capital.

Mason: Well that was going to be my next question. If 2008 was the day the Internet died, I mean, what was driving the obsession up to that point? What was driving the obsession to collect this data, to make everything ubiquitous? The obsession to model the world. I mean, were these done with very kind of egalitarian viewpoints, and capital just happened to get involved and cause the mess that we've had over the last six years or so?

Greenfield: In retrospect I want to say that those were the last years of the Enlightenment. I really do. It's a pretty big claim, but I think that the technologies that we attempted to bring to bear in those years were sort of the last gasp of Enlightenment thought. I mean, think about it for a second, right. The idea that this device, which each one of you I assume has in your pocket or your hand right now, gives you essentially the entirety of human knowledge, instantaneously, more or less for free, on demand, wherever you go. And you can do with it whatever you will. How is that not a realization of all of the ambitions that are inscribed in the Enlightenment project? It's really something pretty utopian to me. And a fact, right. It exists now.

But we forgot to disentangle some things. I mean, you know, much of this was done with, again, the best intentions. If you look back at John Perry Barlow and A Declaration of the Independence of Cyberspace. If you look back at— Again, the Californian ideology that suffused the early years of the Web and web development. The move towards openness, the move toward standardization. All of these things were done with the deepest dedication to the democratization of access to information. And if you think about, for example, the slogan of the Whole Earth Catalog, you know, "access to tools and information," again this was something that was realized in the smartphone project, and delivered to people by the hundreds of millions.

The trouble is that, as I say in my presentations, something else happened. And it wasn't the thing that those of us who were invested in making this happen imagined or actually believed would happen. It wasn't any kind of emancipation except perhaps the kind that Marcuse would've called "repressive desublimation," where all of these things that people had thought were unsayable in public were suddenly validated by their peer groups or suddenly validated in their echo chambers. And all of a sudden the most antidemocratic, the most reactionary sentiments became expressible in public. So in a sense we got what we asked for, but it wasn't what we expected it to be.

Mason: Do you think there's a degree of mid-90s retrieval in technologies such as blockchain? I mean these guys, the evangelists of blockchain, say that they're going to build Web 3.0, and it's almost as if they forgot that was John Perry Barlow's original mission, the decentralized Web. And these guys want to build a decentralized web, but 50% of them are very young kids—my peers—getting into cryptocurrency trading and actually forgetting what that underlying technology could potentially do. Or do you think we've already lost when it comes to blockchain?

Greenfield: Well… [laughs] I don't think there's 90s retrieval going on in the blockchain so much as a direct line of continuity from a 1980s project. People in this crowd— I'm reasonably familiar with the people in this audience— Raise a hand, everybody who's ever heard of the Extropians. Oh my goodness, none. No!

Mason: My first interaction with an Extropian was Max More.

Greenfield: [laughs]

Mason: So he was the transhumanist philosopher, and I met him at 18 years old in a hotel room in London.

Greenfield: I’m so sorry.

Mason: And he told me that I could ask him any question apart from about the time they cryogenically froze his best friend's mother. So this was the Extropian philosophy, and a lot of those guys went and became CEOs of cryonics companies and wanted to live forever. I mean, there was that corruption within what happened. The philosophy never matches the execution, and I wonder why?

Greenfield: Except, except, in the blockchain. So let me explain to you who I think the Extropians are. This is a beautiful vignette that illustrates something about it. These were technolibertarians in, but not primarily of, the Bay Area in the 1980s. They were hardcore Randians. They were hardcore believers in individual sovereignty. They thought of the state as an absolutely unacceptable intrusion on the affairs of free, sovereign individuals. They thought that the only valid relations that ought to exist in the world were relationships of contract between free, willing, consenting adults.

And like other libertar— Are there any libertarians in the audience that I'm going to offend terribly by making fun of? No? Good. Okay. Because I think this is fundamentally an adolescent, and specifically an adolescent male, worldview. It's a view that suggests that I'm gonna do whatever I want and Mommy and Daddy can't tell me that I can't. And there's something kind of, like, pissy about it.

But these were people who would swan around the Bay Area in ankle-length leather trenchcoats. They gave themselves names like Max More, because they were all about the positive future and, you know, our positive aspirations in that future. They believed in the absolutely unlimited ambit of human progress. And they would give themselves… You know, they had acronyms like SMILE, which was… What was SMILE? I'm forgetting this. But the "LE" was something about life extension, right. Ah! Yeah: smart drugs, intelligence amplification, and life extension. And they thought they were going to live forever. They literally thought they were going to live forever, and one of the ways—

Mason: They still do.

Greenfield: Yeah. Yeah. And one of the ways that they thought they were going to do this was by cryonically freezing themselves when they thought they were about to die, until nanotechnology had advanced to the point that their bodies could be resurrected, their personalities could be downloaded into the newly-revivified bodies, and they were going to go on and live immortal lives in the paradise to come that was realized through technology. These people really believe this stuff. And they were mostly, and rightfully, forgotten. Because this philosophy— You'll forgive me, I personally believe this philosophy is a joke.

Except a couple of them went more or less underground and set about building a part of this vision. Not the cryonic part. Not the smart drugs part. Not the infinite intelligence expansion, or the bush robots, or the Dyson spheres around the sun. Or the Computronium. They set about building the financial infrastructure that would be required by a universe that was populated by sovereign, individual, immortal entities.

And that's how we get the blockchain. We literally get the whole infrastructure of the smart contract, and smart property, and the calculational establishment of trust, and the whole trustless architecture and infrastructure of the blockchain, from people who didn't believe that the state—or any central authority—had any rightful business interfering with our affairs. So they built an infrastructure to substantiate the way of life that they believed in. And it worked.

Mason: The crazy thing is I don't think the cryonicists are there just yet. I don't think they've even discovered blockchain. The funny thing about a lot of the Extropy folks you talk about is, they've got a chip on their shoulder about the fact that they didn't make a bunch of money in what happened in the 90s. And Kurzweil then took their Singularity term and made it marketable, and now Elon's running around and Peter Thiel's running around doing a lot of the stuff that they prophesied, but they don't get the credit for it. And they've got a weird sort of chip on their shoulder. There's a lot of quiet blogs in the dark corners of the Internet where they go, "We said that in the 80s but, you know, these guys are building it. Screw them."

Greenfield: And honestly, if I were Max More and Natasha Vita-More, his partner, I would feel the same way. They were. They were saying these things before Peter Thiel thought to infuse his veins with virgin blood. They were saying these things before…yeah, before Elon came around to say that he got verbal government approval for a vacuum-evacuated tube beneath Washington DC. Yeah, they were. Whether it's credit or blame that they're looking for, they deserve it.

Mason: Well we'll leave it at that. I do want to go back to blockchain, though. So do you think it's a get rich quick scheme at the moment for cryptocurrency traders? Or do you think perhaps, just maybe, there's something more hopeful there? Can we build the decentralized web that John Perry Barlow had— I mean, [inaudible] the blockchain folks, and the pain with speaking to them is they so desperately want to be taken seriously like the Web 2.0 folks. Well, they call it Web 3.0, but they borrow the language from Web 2.0. So they call their apps DApps—

Greenfield: DApps.

Mason: Decentralized apps, which is the most fucking stupid term I've ever heard. Like, "Yeah, we've got a DApp!" I'm like, what the fuck is a DApp? It's a decentralized app. They're trying to make it look, sound, and market it like Web 2.0.

Greenfield: You know, I didn't know when I came here that I was going to be in such, you know…comfortable— This is like, you know, we're having— I hope somebody in the crowd really radically disagrees with the opinions that we're expressing up here. Because that's the only way this can ultimately be of value for anybody.

Mason: Alright, sorry.

Greenfield: Because if we agree with each other on—

Mason: So I can make a lot of money off of ether! What's wrong with Ethereum? It went up $120 after the crash last week, though.

Greenfield: Okay, so the thing about Ponzi schemes is that the people who're invested in them believe in them, right? It's entirely legitimate from their perspective. Any multi-level marketing organization relies, after the first couple of people, on people who are true believers. And they propagate the value framework of the multi-level marketing organization or the Ponzi scheme out into the world. And they're very— You know, like any other religion, we get invested in things. I mean, I've probably got things that I'm invested in where you could confront me with objective evidence that I was wrong, and it would only reinforce me in my insistence that I was correct. Because that's the way the human psyche appears to work. We now know this. You can't argue people out of a position with logic or reason if they haven't gotten into it by way of logic or reason. And the secret is that most of the things we believe we didn't arrive at rationally.

So a lot of the enthusiasm for blockchain is being propagated by people who are invested in it. And to me the interesting question is: why are they invested in it? What vision of the future are they trying to get to? There are… The most heartbreaking thing for me is the people on the horizontalist left who are really invested in blockchain psychically, because they think it will realize the kind of utopian left-anarchist future. Which is a future that I personally… You know, my politics are…you know, libertarian socialist. Or, you know, democratic confederalist. Whatever you wanna call it, it's horizontalist, you know, all that stuff.

So yeah, do I want to believe that blockchain can make that happen? Of course I would love to believe that. But I've done just enough digging to find out that the odds of that happening are not terribly great. And if you want to achieve those goals… goals of confederalism or municipalism or horizontalism or participatory democracy, you're much better off trying to realize them directly rather than automating the achievement of that goal by embracing blockchain technology.

Mason: So what do you mean, realize them directly?

Greenfield: It's not going to be nearly as sexy. But I mean having neighborhood councils, neighborhood committees. Affinity groups that you work in. The most amazing thing to me at this time is to look at the real-world examples of confederalists and municipalists who are making headway in the world, who aren't basing their actions and their efforts on utopian technologies but are actually going out and doing the hard work of organizing people. Almost as if it were the 1930s, right.

Of course, are they using their smartphones? Yes. Are they using, you know, Telegram? Yeah, of course they are. Are they using text messages and Google Docs? Are they using cloud-based applications to suture people and communities together? Of course they are, because we're not in the 1930s and we do have tools that we didn't used to.

But the real hard work is the work of retail politics. It's the work of engaging people eye to eye, directly, and accounting for their humanity, their reality, their grievances, their hopes, their desires. That is not something, as yet, that I can see being instantiated on any infrastructure, blockchain or otherwise, and having the same kind of impact in the world.

Mason: Firstly, you've written a lot about the city, and I want to go back to IoT, the Internet of Things. So you said you were seeing it in 2008; I was seeing it around 2012, the excitement over smart fridges, which seems to repeat itself every three years ad infinitum. And we never got it. And yet there's still a drive towards this thing called the smart city. But with things that are happening in the UK, specifically with the NHS hack, I mean, are we thinking about the cybersecurity implications of networking an entire city?

Greenfield: No, we're not. And the reason is, as I say in the book, as I argue in the book, it's an artifact of the business model. And here again is why it distresses me specifically that capital captured the Internet of Things. When you go to… oh, what's the name of the big British chain, Cellphone Warehouse or whatever. The one that's on, you know, Tottenham Court Road.

Mason: Carphone Warehouse.

Greenfield: Fine. Yeah. Okay. You go in there and you buy a webcam, right. And that webcam may be ten quid at this point. The fact that it was engineered so that it could be delivered to you at ten quid, and the manufacturer and the vendor are still going to be able to make a profit on it, means absolutely no provision for security could be incorporated into that device. It would simply cut into somebody's profit margin—it will not happen. And so the technical capability exists to provide each one of these devices with some kind of buffer against the worst sort of eventualities. But for reasons of profit, that hasn't been done.

And so you can go there, and you can buy a webcam, and you can slap it up in your nursery or in your living room or in your garage. And odds are that unless you're very thoughtful, very knowledgeable, you know what you're doing, you read the manual and you configure the thing properly… You know, guess what: there are search engines that will automatically scan the Internet for open ports, for cameras that are speaking to the Internet through that port and that don't have a password, or have the default password, securing that feed. And, you know, literally somebody 8,000 miles away can search for open webcams and find them. And we're talking about webcams that are looking onto babies' cribs. We're talking about webcams that are looking onto weed grow ops. We're talking about the back offices of fast food restaurants. You name it. It's out there.
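What Greenfield is describing here is mechanically simple: indexers like Shodan do the equivalent of a TCP connect scan across the address space, then try default credentials against whatever answers. A minimal sketch of the scanning half, in Python (the port list is illustrative, not a claim about any particular camera model):

```python
import socket

# Ports commonly exposed by consumer webcams: HTTP admin pages and RTSP streams.
# (Illustrative; real devices vary.)
CAMERA_PORTS = [80, 554, 8080]

def open_ports(host, ports, timeout=1.0):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        try:
            # A completed handshake means something is listening there.
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            # Refused or timed out: nothing reachable on this port.
            continue
    return found

# An indexer simply repeats this loop across the whole IPv4 space, then
# tries factory-default logins ("admin"/"admin") on whatever responds.
```

A ten-quid camera with no password sitting behind one of these ports is discoverable by anyone running this loop at scale; the missing defense is exactly the on-device authentication the business model priced out.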

And the reason that you can see all of these things from the safety and comfort of your room is that the manufacturer—probably in Shenzhen—you know, they're making two or three pennies on each one of these cameras sold. If they had bothered to actually engineer it so that it could be secured, that profit would have evaporated.

And it's the same thing… You know, there's always this motive. Wherever you look in the Internet of Things you run up against this. And frankly, I'll be very honest with you, I wish this weren't so. It is actually boring for me at this point to open up the paper and see the latest example— You know, everybody over the last couple of days, everybody's probably seen the thing about Roombas. Have you all seen the thing about Roombas now? You know what Roombas are doing?

Everybody loves Roombas because they're seen as being these harmless robots that kind of humbly vacuum your home. It turns out that Roombas, by definition and in order to do what they do, have the ability to map your home in high resolution. And now, in search of another revenue stream, the vendor of Roombas is selling that information—or, excuse me, contemplating selling that information—to the highest bidder.

You didn't know, when you put that little hockey puck thing down to vacuum up the cat hair in your house, that you were mapping every contour of your existence in high resolution and selling that to somebody. And, oh by the way, not deriving any financial advantage from that yourself, but giving up that financial advantage to the vendor and the third party. You had no idea. You were never asked for your consent. You were never notified. But that's what's happening.

And I promise you it is no fun at this point to be the anticapitalist Cassandra who sits up here and says, "Guess what, you guys. This is what's going on." Because people are like, "Ah, God, you again. You again. You're so… You're no fun. Why won't you let us have our robots? What's wrong with having a webcam in the house?" And I'm like, fine. If you don't mind the idea of a hacker in Kazakhstan looking into your kid's playroom at will, be my guest. But I wouldn't do that.

Mason: There’s the micro scale of the home, but there’s the macro scale of the city itself, and there’s a lot of excite­ment around autonomous vehi­cles and self-driving cars. And some of the most trou­bling stuff that I’ve seen writ­ten is, when all of these cars are con­nect­ed, because whether it’s dri­ven by human a or it’s dri­ven by a machine every sin­gle one will have to have a bea­con (at least in the UK pol­i­cy cur­rent­ly) to iden­ti­fy where it is on the road, the abil­i­ty to take con­trol of those cones is opened up. And we won’t have just one London Bridge event where we have some­one careen­ing a truck that they hired into a bunch of peo­ple. We could have six­teen simul­ta­ne­ous­ly done by a truck that was dri­ven by some­one who had no agency over the fact that that was going to go kill people. 

My issue is cybersecurity on a wide scale: why are we not there yet? Why are we not just running petrified from a lot of this IoT stuff going, "Are you fucking kidding me?"

Greenfield: Because you’ve already answered the ques­tion. I mean, as a mat­ter of fact it would be eas­i­er to do it by pow­ers of two simul­ta­ne­ous­ly, right. It would be eas­i­er to do sixty-four trucks simul­ta­ne­ous­ly, or 256 trucks simul­ta­ne­ous­ly. Because they’re all the same stan­dard mod­el and they all have the same secu­ri­ty pack­age, right. You can cap­ture mul­ti­ple cam­eras at once because they don’t have secu­ri­ty on them. I promise you that there’s going to be a ven­dor of auto­mo­bile net­work­ing that is going to have a sim­i­lar lack of atten­tion to detail, and it will sim­ply be eas­i­er to do it all at once. 

Why are we not run­ning scream­ing from these things? Well…we believe in the future. And we believe that the future is going to be bet­ter. And we believe… I mean…putting the ques­tion of ter­ror­ism to the side, why is it that we nev­er talk about autonomous pub­lic trans­port? Why is it that when we imag­ine the dri­ver­less car, the autonomous vehi­cle, we always imag­ine it as sim­ply the car that peo­ple own now but with­out a steer­ing wheel?

Mason: Because of the man­u­fac­tur­ers. Nissan are fuck­ing ter­ri­fied that nobody’s going to buy cars.

Greenfield: And they’re right.

Mason: Yeah. And the insurance companies are even more petrified. If you can prove you're never going to have a crash, bye-bye Aviva.

Greenfield: No, you’re right. You’re right. So again, this is kind of a drum­beat that I’m sure gets tir­ing for peo­ple. Capitalism is the prob­lem, right. Capitalism is the ulti­mate frame­work in which our imag­i­nary is embed­ded. And we have a real­ly real­ly hard time see­ing out­side that frame­work and say­ing well, maybe these things could be col­lec­tive goods. Maybe these things could be munic­i­pal­ly owned. Maybe these things don’t have to repli­cate all of the mis­takes that we’ve made over the last hun­dred years. Wouldn’t that be amazing? 

The trou­ble is that— You know, it’s the most enor­mous cliché on the left. It is eas­i­er to imag­ine the end of the world than it is to imag­ine the end of cap­i­tal­ism. Like this is such a cliché that it’s like one of these inspi­ra­tional quotes on Facebook. Nobody’s quite sure who said it orig­i­nal­ly and there are mul­ti­ple peo­ple who’ve— You know, Abraham Lincoln prob­a­bly said it. And we need to begin urgent­ly imag­in­ing what that looks like. Because if we don’t we’re nev­er going to be able to imag­ine a place for these tech­nolo­gies in our lives that responds to the most basic con­sid­er­a­tions of human decen­cy and the kind of world that we want to live in. It’s that simple. 

And if you don’t already agree with me, I cer­tain­ly don’t expect con­vince you tonight. This is sim­ply my opin­ion. But it is…you’ll for­give me, it’s an opin­ion that is bol­stered by a depress­ing­ly con­sis­tent litany of evi­dence over what is now fif­teen or twen­ty years. Every sin­gle fuck­ing time we seize on a tech­nol­o­gy that looks as though it might be used for some­thing inter­est­ing, some­thing out­side the enve­lope of every­thing that we expect, every­thing that we’re accus­tomed to, it gets cap­tured and turned back—and in amaz­ing­ly short peri­ods of time. Like, one of you is going to have to do bet­ter. You’re going to have to go out there and rip this enve­lope of con­straints to shreds and imag­ine some­thing that does­n’t look like every­thing that we’ve already been offered. Because oth­er­wise it’s just going to more of the same over and over and over again. And you know, I’m old now, right. I don’t want to live the declin­ing years of my life in an envi­ron­ment where I’ve seen this all before and it’s all— You know, some­body come at me with some­thing pro­found­ly new and dif­fer­ent and I will be the very first per­son to applaud you. 

Mason: I just want­ed… From the floor I mean, who still believes in the future? Welcome—

Greenfield: One hand. Yay! 

Mason: Welcome to Virtual Futures. We found the oth­ers. And we’re all on God knows what. I mean, we spoke a lot about depres­sion in the last one.

Greenfield: You want to kick the mic into the crowd and see what happens?

Mason: I do, but before we do that I have one other question, which I think—and let's jump— Should we just embrace accelerationist thought? Should we just go, you know what? If capital is the thing that's driving all of this, let's just accept it. Let's run with it. Let's accept that humans are just here to train the machines to take over when we finally are killed off by them, or when we no longer have the biology to survive the environments we're in because we fucked it up. And it would be okay for some of the humans, because those would be the guys who fly off to Mars and have their own little species—their subspeciation planets there. I just wonder, should we embrace the accelerationist viewpoint, and should we allow some humans to just subspeciate, or aspeciate?

Greenfield: Uh, well…you’re all wel­come to but I can’t, and I could­n’t bear myself if I did. Because hon­est­ly? Accelerationism feels to me like a remark­ably priv­i­leged posi­tion. It’s some­thing that peo­ple who are already safe and com­fort­able can say throw cau­tion to the winds; let it all fly,” right. You can say that if you’ve got a roof over your head and food in your bel­ly and health­care for the rest your life. It’s easy to say that. 

If you’re any clos­er to the edge than that— If you have any real amount of what we now call pre­car­i­ty, fear, in your life. If you have fear in your bel­ly because you’ve watched the peo­ple around you strug­gle with their health, or their men­tal health. If you’ve been touched in any way by the eco­nom­ic down­turn that’s kind of tak­en up res­i­dence in our lives since the intro­duc­tion of aus­ter­i­ty. If you per­ceive your­self to in any way not have been advan­taged by the past forty years of neolib­er­al hege­mo­ny across the Western world, it’s impos­si­ble to embrace accel­er­a­tionism if you have a beat­ing heart and any­thing resem­bling a soul. It’s my own per­son­al opin­ion. I hope I’m not insult­ing any of you. But that is— You know, accel­er­a­tionism to me is an abdi­ca­tion of respon­si­bil­i­ty for the oth­er human beings you share the plan­et with, and also by the way the non­hu­man beings and sen­tiences that you share the plan­et with.


Luke Robert Mason: So on that note, whether you believe in the future or not, we are going to throw out to audience questions. We're gonna see if this might work. So we're going to hand this mic around. We're so understaffed it's incredible, so if anybody wants to run our mic that would be great. Or we could work as a collaborative unit and pass this mic between folk—

Adam Greenfield: We could make it hap­pen. I’m sure we can make it work.

Mason: Or sometimes we have to just grab mics off of people. By the way, a question has an intonation at the end. So if you have any questions…

Greenfield: Oh right. Yeah no, that’s a real­ly real­ly good point. I do a lot of talks where peo­ple make reflec­tions. I’m sure you’ve all got fas­ci­nat­ing things to say but I would love to hear those things after­wards over a beer? And right now it’s lit­er­al­ly for ques­tions that we will attempt to answer. If you have a reflec­tion to make, maybe the time for that is lat­er on. 

Mason: Wonderful. Any questions?

Matthew Chalmers: Hello.

Greenfield: Howdy. What’s your name, man?

Chalmers: My name is Matthew Chalmers. I’m an aca­d­e­m­ic from the University of Glasgow.

Greenfield: There you go. 

Chalmers: There you go. And I just came—I just walked out now from a meeting at Her Majesty's Treasury, where there are people from government trying to find out about distributed ledger technologies and what they might do about it. They're skeptical but interested, and they're being hit by this wave of hype. And I was one of the people throwing rocks. Because I think the hype is just going to become totally overblown.

Greenfield: You’ve been throw­ing rocks since I’ve known you. 

Chalmers: Why change the habit of a lifetime? So I wonder whether Adam and the others would like to… What would their message be to the people from the Justice Department, and the Treasury, and the banks I just talked to? Because it was really freaky.

Greenfield: I would love to pick your brain over a beer as to what that meet­ing looked like, to the degree that you’re com­fort­able shar­ing it. You said they were skep­ti­cal, and that’s fas­ci­nat­ing to me. Like, I assume… My default assump­tion is that those peo­ple are not stu­pid. And they have a cer­tain abil­i­ty to know when they’re being pushed into a cor­ner. But they don’t always have the tools to resist that. And so my ques­tion to them would be what is it that peo­ple are ask­ing of them? Why is it that dis­trib­uted ledger… Which is not iden­ti­cal with blockchain—we need to be very care­ful with the ter­mi­nol­o­gy here. But what is it that they hope to achieve with a dis­trib­uted ledger? And are there not pos­si­bly oth­er ways of achiev­ing those ends that don’t involve the tran­si­tion to an entire­ly new and unproven tech­nol­o­gy? That would be— I mean, yeah seri­ous­ly. I mean like, I’m jeal­ous that you got to be in that room; I’m grate­ful that it was you in that room. 

Chalmers: I was­n’t the only one.

Greenfield: I’m sure. But I think that… You know, dare I hope that— I’m knock­ing, you can see knock­ing the chair here instead of knock­ing on— Here’s wood; knock­ing on wood. Dare I hope that we have been burned enough at this point and we have plen­ty of case stud­ies to point to where some multi-billion or tens of bil­lions of pounds of invest­ment was made in the tech­nol­o­gy, and the tech­nol­o­gy ven­dor turned out to not have the best inter­ests of the pub­lic entire­ly… Dare I hope? I don’t know. It’s an amaz­ing cir­cum­stance to think of and I would love to catch up with you more after­wards and find out what that con­ver­sa­tion went like.

Mason: Any oth­er questions?

Greenfield: Say your name, please. 

Mason: Also, if anyone wants to earn themselves a beer, I really need someone to run that mic. So if anybody can help, that would be great. Sorry.

Audience 2: My name is Jaya and I am writing a PhD on blockchain technology. And I would also love to hear more about what happened in that meeting. My question is not about blockchain, though.

Greenfield: Thank you.

Audience 2: I’m more curi­ous about… The con­ver­sa­tion that the two of you were hav­ing was very much kind of focused on acci­dents and poten­tial secu­ri­ty prob­lems with dig­i­tal tech­nolo­gies. And usu­al­ly when that fram­ing hap­pens, it kind of turns the prob­lem into just anoth­er prob­lem for tech­nol­o­gy to solve as in okay, there’s a secu­ri­ty prob­lem. Let’s get some cryp­tog­ra­phers involved, let’s get some— You know, it’s anoth­er prob­lem to be solved by more technology. 

So I was won­der­ing if there’s a dif­fer­ent kind of angle or some oth­er kind of aspects of the cri­tique. I mean, you men­tioned a kind of gen­er­al cri­tique of cap­i­tal­ism, which sounds fantastic—

Greenfield: Pretty broad.

Audience 2: —and makes sense. But I was won­der­ing like, some of the more spe­cif­ic angles that you cov­er in the book. 

Greenfield: I do wonder, you know… In the 1960s, and I'm going to forget and not be able to cite this appropriately. But there was a body of thought in what was then called human factors research about "normal accidents." And you can look this up right now and find the canonical paper on normal accidents. But the idea was that in any of the complex processes—and I think that the canonical example here was a nuclear power plant—that any of the complex processes that we've installed at the heart of the ways in which we do life under late capitalism at this point in time…accidents aren't accidents. We can expect that our processes are inherently braided enough and complicated enough and thorny enough and counterintuitive enough that errors will arise at predictable intervals, or at least, you know, predictably.

And I thought that in the seed of that was something profound, and not merely amenable to technical resolution. Because as I understood it, the point of that argument was not to slap a quick technical fix on a system that you know is going to throw errors at intervals, but in a sense to redefine processes around what we understand about who we are and what we do and how we approach problems. It isn't simply to build backups and cascading redundancies into complicated systems; it's to accept that we make mistakes.

And I think it’s that accep­tance of human frailty that I found par­tic­u­lar­ly rad­i­cal and par­tic­u­lar­ly refresh­ing. That ulti­mate­ly any of our insti­tu­tions are going to be marked by…you know, it’s no longer done to say human nature” so I won’t say human nature. But any­thing that we invent, any­thing that we devise, any­thing that we come up with, is going to be marked by our human­ness. And instead of run­ning from that, it might be best to try and wrap our heads around what that implies for our­selves, and to cut our­selves a god­damn break, you know, and to not ask that we be these sort of high-performance machines that are sim­ply made out of flesh and blood but that are slot­ted into oth­er net­works of machines that don’t hap­pen to be made out of flesh and blood. 

I thought that there was a hopeful moment in there that could have been retrieved and developed. And I think, frankly, that there still could be. I think that most of what gives me hope at this point are processes which are not at all sexily high-technology, but are precisely about understanding how people arrive at decisions under situations of limited information and pressure. And I think that's why I got involved in what was then called human factors in the first place: because the world is complicated, and it is heterogeneous, and there's not going to be any critical path to a golden-key solution to any of this. We have to work at it together, and it's a process that is painstaking and involved and frustrating—oh my God, is it frustrating. And to my mind, the more that we understand that, and the more our technologies inscribe that lesson for us in ways that we can't possibly miss, the better off we are. Is that…a reasonable answer? Groovy.

Mason: My flip side to that is that we can't prepare for it, and we need the catastrophe to occur. So philosophers have been arguing about the trolley problem with regards to self-driving cars for God knows how long. We won't give two shits until a car actually kills someone, and blood is actually spilled. And my critique of the Extropy folks is that they thought they were going to get their living-forever futures without anybody dying. If you're going to experiment with certain types of medical technology on individuals to help them live longer, then you're going to have to experiment on human individuals eventually, and there will be mistakes. The history of science shows us that.

Now Professor Steve Fuller, who we've had here a lot at Virtual Futures, has argued that maybe the only way to actually make some of these crazy visions possible is that we sign up our humanity. In the same way that in the 1930s you signed up for queen and country to go to war, you'd sign up your humanity and you'd go and get your weird biotech experiment to see if it made you live longer, because if it did you would be a pioneer for the future of humanity. And if you died, well, you died in the service of the future of the human race…whether we'd ever get there or not.

Greenfield: I think you make a real­ly good point, though, which is that when the Extropians did have, lit­er­al­ly, their heads cut off and frozen in liq­uid nitro­gen and they entrust­ed their heads to these repos­i­to­ries that they thought were going to last for 10,000 years, the hold­ing com­pa­ny went bank­rupt and default­ed on their elec­tric bill. The elec­tric bill on the cool­ers was­n’t paid. The cool­ers were shut off by the elec­tric com­pa­ny. The facil­i­ty reached room tem­per­a­ture. The coolant leaked out of the ves­sels and the heads rotted. 

Mason: You know their solu­tion for that? 

Greenfield: No pun intend­ed, go ahead.

Mason: Yeah. They want to send them to space. 

Greenfield: [laughs] Course they do.

Mason: They need more space to bury dead peo­ple, so the cold­est vac­u­um is space, so why don’t you just have them orbiting—

Greenfield: So, but…

Mason: —ad infini­tum? I’m fuck­ing—I’m seri­ous.

Greenfield: I believe you. I completely believe you. But the point is that human institutions, you know, they're not transhuman, they're not posthuman, we're all too human, right. We go bankrupt and we don't pay the power bill. And then the power company cuts off the po—this is what happens. The space launch system, you know: somebody transposes something that was in metric to Imperial, and the capsule that was supposed to orbit in a comfortably tolerable environment and keep your head frozen for ten million years is launched into the sun. Who knows?

The peo­ple who believe these things believe in the per­fectibil­i­ty of things which have nev­er in our his­to­ry ever once been per­fect before, and they’re bet­ting every­thing on that per­fec­tion. And I find it touch­ing­ly naïve and child­like. But as a polit­i­cal pro­gram cul­pa­bly naïve and to be fought with every fiber of my being.

Mason: Is the other piece of your book the thing that unites those technologies: a drive for optimization? Whether it's the city, the human, or anything else in between.

Greenfield: I hope again I’m not insult­ing any of you. None of us in this room are opti­mal. Like I’m not optimal—I’ll nev­er be opti­mal. I’ll nev­er be any­thing close to opti­mal and I’m not sure I would want to be opti­mal. You may have dif­fer­ent ambi­tions and I wish you the best of luck. But I think it’s going to be a rough road. 

Mason: I found some­one who’s kind enough to run this mic, thank you ever so much. 

Greenfield: You’re not your­self ask­ing a question?

Mason: Thank you.

Audience 3: Well…

Greenfield: Say your name.

Audience 3: My name’s Tara. Hello. On that note with opti­miza­tion, could you not say that it’s some­how linked to cap­i­tal­ism? That you’re always chas­ing this goal that you can nev­er achieve and we’re now bring­ing that to our­selves with, phys­i­cal­ly you’re meant to fol­low and you know, you could say the same thing with the sort of gym craze that every­one seems to be going through as a cure for find­ing this opti­mal being.

Greenfield: Yeah. I think that capitalism is almost too easy a bugbear, though. Because the desire to optimize or to perfect is older than capitalism. And it's almost as if it has vampirized capitalism to extend itself. That logic of wanting to perfect ourselves, to measure ourselves against the gods, you know, it's not new. And it's not shallow, either. I understand why it exists.

But the fact of the matter is that when we go to the gym—I go to the gym. You know, I will spend ninety minutes tomorrow on an elliptical machine. Why will I spend ninety minutes on an elliptical machine? Well, because I want to be fitter. Why do I want to be fitter? I want to look better in my clothes. I want people to think that I'm more attractive. I want people to think that I'm more attractive so that they are more likely to want to invite me to things, because my financial future depends on me being invited to things.

I mean, all of these… You know, these things are not innocent. And the motivations and the desires that we recognize in ourselves aren't there by accident. And I'm not going to say that they're always 100% there because of, you know, capitalism—that's kinda shallow. But they're invidious. And what I would ask is that we each have the courage to ask of ourselves why it is that we feel that we need to be like some gung-ho NASA astronaut of the 1960s, "kick the tires and light the fires." Why is it that we feel called upon to operate in these high-performance regimes when we're, after all, simply human?

Mason: I think [alternative?] failure of the Extropians… So the morphological freedom thesis was: we're going to be stronger, better, faster, more optimized. The thing they forgot is that in actual fact that doesn't make us better as an entire species. The thing that we should do is embrace difference.

Greenfield: Yes.

Mason: It was­n’t sur­vival of the fittest, it was sur­vival of the mutant, the indi­vid­ual, and the ani­mal that actu­al­ly sur­vived. The weird ways in which the envi­ron­ment would manip­u­late them. And it was­n’t the fittest ones that sur­vived. And I won­der if we embrace dif­fer­ence instead of dri­ving towards opti­miza­tion that we’d have a more inter­est­ing expe­ri­ence. Or, will it go ful­ly the oth­er way and we sub­spe­ci­ate and we will have those guys who go off-planet and the rest of us will be left here.

Greenfield: Yeah, I think you're hitting on something true and real and interesting. Before Boing Boing was a website it was a fanzine. And I think its tagline was something like "Happy facts for happy mutants," or something like that. And the happy mutants part was important, right. It was the idea that we weren't going to be constrained by the human body plan. And that we were going to invent or discover or explore new spaces. Like, not merely new expressions of self: new genders, new identities, new personas, new ways of being human, new ways of being alive.

And that was startlingly liberatory. It really was, um… You know, in 1985 or so that felt like something worth investing in, and something worth betting on. And I think it is sort of a failure of the collective imagination that we now interpret freedom to mean essentially the freedom to oppress and exploit other human beings, and the nonhuman population of this planet. Because it did at one point mean… Every single time I see somebody who's still, like, body hacking, or putting a chip into their wrist or something like that, I have mixed feelings. Because on the one hand I see the last surviving note somebody's hitting of something that was much bigger and more hopeful, and I also see the totality of the ways in which that's been captured and turned against the original ambition. That's a melancholy and a complicated feeling. But you're right. I mean, maybe there's something in that to be retrieved and brought forward to the present moment.

Mason: We can only hope. Another question. 

Audience 4: Hey, my name is Henri. I’m French but I’m sure you’ve already heard that. Anyway—

Greenfield: [laughs]

Mason: What a won­der­ful opening.

Greenfield: Well done, yeah.

Audience 4: So we know robots are tak­ing more and more jobs every­where. And there’s a belief that cre­ativ­i­ty’s one of the only sec­tors that won’t be touched by automa­tion. But do you think that robots can be cre­ative? And if yes does that mean we’ve reached a kind of singularity?

Greenfield: So I don’t believe in sin­gu­lar­i­ties, right. Bang. So let’s dis­pense with that.

Weirdly enough, though, there’s some ten­sion between the two parts of my answer. I think the Singularity is a human ide­ol­o­gy. I think it does­n’t cor­re­spond to the nature of non­hu­man intel­li­gence. I do think non­hu­man intel­li­gences are capa­ble of being creative. 

And let me, for the second, not talk specifically about machinic intelligences. I think that we know, by analogy to other forms of nonhuman intelligence that are capable of creating…of using the world as an expressive medium, that you cannot tell me that the informational content of whalesong is all that it's about. You cannot tell me that birdsong is simply about conveying information. It is a presentation of self. It is an embroidery on the available communication channel, and there is pleasure that is taken in that act. So I would interpret that—birdsong, whalesong, the communications of animals in general—as expressive and creative acts. Right here, right now, without even having to think about machinic intelligence.

So, do I believe that we will—relatively soon—arrive at a place in which algo­rith­mic sys­tems are gen­er­at­ing seman­tic struc­tures, com­mu­nica­tive struc­tures, expres­sive struc­tures for their own plea­sure? Or some­thing indis­tin­guish­able from pleasure—yeah, I do. I absolute­ly do. I do not think that cre­ativ­i­ty is the last refuge of the human. I think for all that I am in many ways a human­ist in the old-fashioned way, it’s very dif­fi­cult for me to draw any line at any point and say this is the unique thing about humans that noth­ing else in the uni­verse is capa­ble of. 

And as a matter of fact, what converted me to this position was in fact an attempt to do just that: the attempt to find something uniquely and distinctively human. And you know, if you have any intellectual integrity at all, if you go down this path you find pretty quickly that there's nothing that we do that other species don't do. There's nothing that we do that other complex systems in the universe don't do. Very, very, very little, it turns out, is distinctively human.

So yes I do believe in rel­a­tive­ly short order we will be con­front— If in fact they don’t already exist and we’re just sim­ply not per­ceiv­ing them, in the way that an ant does­n’t per­ceive a super­high­way that’s rush­ing past its anthill, right. It is pos­si­ble that these expres­sive and com­mu­nica­tive struc­tures are already in exis­tence at a scale or at a lev­el of real­i­ty that we do not perceive. 

But even putting that pos­si­bil­i­ty to the side, yes I think that we will invent and cre­ate machinic sys­tems which will to all intents and pur­pos­es real­ize things which we can only under­stand as art or as cre­ative or as expres­sive. And then the ques­tion becomes what rights does our law pro­vide for those sen­tient beings—because they will be sen­tient. What space do we make for them that is any­thing but slav­ery? And how do we treat them that is in any way dif­fer­ent than the way that we treat peo­ple at present?

You know, Norbert Wiener, in, I'm going to say, 1949—and somebody will Google this and tell me that I'm wrong. But one of his first works of thought in cybernetic theory was called The Human Use of Human Beings. And I come back to that framing a lot. It is about the use of things that are regarded as objects, and not things which are accorded their own subjectivity, their own interiority, their own perso—their own being. And I think that we're going to have to confront that in our law, in our culture, and in our ways of interacting with one another, sooner rather than later.

Mason: Any oth­er questions?

Audience 5: Hey, I’m Matt from Scotland.

Greenfield: Hey. What’s up?

Audience 5: You men­tioned you want to see the end of the cap­i­tal­ism, and I’m all for it. I actu­al­ly want to work on that. Do you have any ideas for me?

Greenfield: Yeah. I do. Öcalan, the founder of the PKK in Turkey, wrote a book called Democratic Confederalism. Go read it. Great book.

Audience 6: Hi, I’m Simon from Brighton.

Greenfield: Hi, Simon.

Audience 6: What’s your view on how employ­men­t’s going to be affect­ed over the next twen­ty years by all of these changes we’ve been talk­ing about?

Greenfield: Yeah, oh God.

Audience 6: Sort of like how the Extropians were going to have their futures without having to die: will we get our futures and still get to keep our jobs?

Greenfield: I think we need to accept that our lan­guage around this stuff is braid­ed and inter­wo­ven with assump­tions which are no longer ten­able. So, what is a job? A job is a thing that we do dur­ing the hours of our days that is remu­ner­a­tive to us and that gen­er­ates val­ue for the econ­o­my. And that some­how most of us are expect­ed to have as a con­se­quence of being adults, in a cul­ture that expects full employ­ment or some­thing close to full employ­ment. And in which a met­ric of the healthy func­tion­ing of the econ­o­my is that there is some­thing close to full employ­ment of human beings. 

And I think that all of those assump­tions are becom­ing sub­ject to chal­lenge if they haven’t been chal­lenged already. So the notion that a job is a thing that you go to is already you know, it’s already been explod­ed and dis­as­sem­bled by the past thir­ty or forty years of expe­ri­ence. Like, we have tasks now rather than jobs. We no longer— 

Audience 6: [inaudi­ble]

Greenfield: The gig economy, absolutely. That was the first assault on these ideas. But then comes the idea that there are tasks which automated systems can perform at much lower cost than human beings. And particularly if we accept the thesis that I've just argued to the gentleman who asked the question before last: that there are very few tasks in the economy that cannot ultimately be performed by machinic systems, right.

Like, I used to make this argu­ment to like ad agency peo­ple. And they would say, Oh you know, a guy who puts togeth­er cars on an assem­bly line yeah, that can be auto­mat­ed away. And a nurse. Well, the job of a nurse can be auto­mat­ed away. We’ll find peo­ple to wipe the butts of peo­ple in nurs­ing homes and robots will do that and algo­rithms will do the rest. But I’m the cre­ative direc­tor of an ad agency, and you’ll nev­er auto­mate away the things that I do. The spark of cre­ative fire that I bring.”

And I’m like dude, do you under­stand what a Markov chain is? And do you under­stand how I could take the whole cor­pus of 20th cen­tu­ry adver­tis­ing and gen­er­ate entire­ly new cam­paigns out of what worked in the past. So there’s very lit­tle that I see, again, as being beyond the abil­i­ty to be auto­mat­ed. And I think that when that hap­pens, we real­ly real­ly have to wres­tle with the idea that the assump­tions upon which the econo­met­rics that a healthy econ­o­my is assessed on are mis­guid­ed. That the whole notion of eco­nom­ic growth, the whole notion of the wise stew­ard­ship of a nation-state being one that’s coex­ten­sive with eco­nom­ic growth, that is expressed in some­thing close to full employ­ment, we need to devise sys­tems that replace all of that because it’s all on its way out. 

At this point most people talk about UBI. They say the universal basic income is going to save us all. And I say, well that's great. I love the UBI. But surely you're talking also about the UBI in a context of universal healthcare, and the right to housing, and you know, the right to shelter, aren't you? Because if you're not, the UBI will wind up getting siphoned back off of people in the form of user fees for services which used to be provided by the public and are now suddenly privatized. If we simply have the UBI in the usual neoliberal context, we haven't really gotten anything at all.

So, jobs, the economy, employment…hobbies. You know…craft. I mean, all of these terms have been defined in a context in which all of the assumptions that govern that context are no longer tenable. How do we begin to be human in a time when none of these things are any longer true? I have some ideas but I don't have any answers. All I have are my own instincts and the things that I've learned. And all you have are your own instincts and the things that you've learned. And together, all that we have is our collective sense of what we've seen happen when automation happens. As I say elsewhere—not in the book—we're entering a post-human economy, and a post-human economy implies and requires a post-human politics. And we now have to discover what a post-human politics looks like.

Mason: I want to quickly return to UBI. So there have been two key ideas that I've heard which are quite attractive with regards to how UBI would actually work. One is the ability to sell our own data. So…I hate to return to blockchain, but the idea is that you're about to produce a whole bunch of new data that shouldn't be taken by the stacks. So you can produce genetic data and neuro data. And what do platforms want? Well, they want attention data, and that's neuro data. So if you can store that data locally and then sell it or micro-sell it back to the platforms that make the money off us in the first place, that's a way to basically bring in a small amount of income every time you're sitting there searching through Facebook, i.e. the advertisers pay us to watch their thirty-second bits of rubbish.

Or the second one is we just need to come to terms with the fact that the employees of the sorts of companies who actually advocate the UBI, such as Google. You see the very young employees going, "Yes, UBI's a great idea!" But they forget they work for a company that's not paying its taxes in this country, and the tax money would be what makes UBI happen in the first place. So shouldn't they be more accountable, actually turning around to Amazon or Google or wherever the hell they work and going, "Jesus, I'm gonna need this UBI. Fucking pay your taxes."

Greenfield: If corporations paid their taxes we wouldn't be talking about UBI, period, end of sentence.

Mason: Yeah.

Greenfield: Yes, yes. Absolutely. 

Mason: So the second one's more tenable.

Greenfield: Yeah. I mean, let's dispense with that first, dystopian vision.

Mason: Right.

Greenfield: Let's simply say that in the United States at least, if corporations paid their fair share of the taxes, you could afford the welfare state and a whole lot more. You could afford basic infrastructure. You could afford a decent quality of life for every single human in the country and a whole lot more besides, and that's just the United States. Corporations should pay their damn taxes.

Mason: That didn't get a round of applause. I'm slightly concerned now. Any other questions?

Greenfield: Let's have this be the last question, if that's okay.

Audience 7: A lot of pressure. Thank you.

Greenfield: A lot of pressure on my bladder, particularly.

Audience 7: Oh, okay. Then I'll make it short. I want to talk a little bit (I'm Pete, by the way) about the market, and maybe the taste of people who use technology. I'm thinking particularly about augmented reality. Let's take Pokémon GO as an example, which everyone's a little embarrassed about now. Candy Crush has outlived Pokémon GO. And maybe, in terms of the market, there isn't the taste for the future that transhumanists want. People just want to play Candy Crush and wait for death. And because of that, we're actually defended against certain types of dystopia.

Greenfield: Bless you. I'm so glad somebody used the word taste. So one of the things that happened when we did successfully democratize access to these tools, services, networks, ways of being in the world, was that we lost control of taste, right. I mean, when you had a concentrated decision nexus in the 60s, you could essentially impose high corporate modernism on the world, because there was a very, very concentrated number of people who were making decisions that governed the ways in which everyday life was to be designed.

And I gotta tell you, me personally, I think high corporate modernism was the high point of human aspiration. Like, Helvetica to me is the most beautiful thing that's ever been created. And the International Style, and monochrome, you know…everything is to me the epitome of taste. But it turns out that 99.8% of the people on the planet disagree with me. And that they would rather reality be brightly colorful, animated, kawaii, happy, fun, you know…literally animated pieces of shit talking. And that they express themselves to one another by sending each other animated images of pieces of shit with eyes stuck into them. This is just a neutral and uninflected description of 2017, right.

Audience 7: [inaudible]

Greenfield: Well, okay. But you know, the thing is that um…

Audience 7: It makes people happy.

Greenfield: It makes people happy, and who am I— There's not a damn thing wrong with that. That's ultimately where I'm going: it turns out that if what I'm arguing for is a radical acceptance of what it is to be human, it turns out that we like Dan Brown novels. And it turns out that we like anime porn. And it turns out that we spend a lot of time in spreadsheets, right. This is what humanity is. It's not…what I would like to believe that we are, but that is what humanity is. And if I'm arguing for radical acceptance of that and a radical democratization of things, I have no…I have no choice but to accept that.

Now, what I can do is ask, and I think it's fair to ask, why people want those things. And why people think that these things are funny. And why people think that these are expressions of their own personality. How is it that we got there—this is the cultural studies student in me. How is it that these things became hegemonic? Why is it that we internalized those desires? Why is it that we interpret this as some kind of um…why do we think that these are expressions of our individuality when literally seven billion other people are doing the same thing? And for that matter, why do I think of the way that I'm dressed as an expression of my individuality when there are a million people who are doing the same thing?

These are deep questions. But I think that the only ethically tenable thing is to accept that taste is a production of cultural capital, and that the taste that I particularly appreciate and enjoy was never anything but an infliction of a kind of elitism on people who neither wanted nor needed it.

I love brutalism. We see what happens when brutalism is the law of the land. I love Helvetica. You know, I love that stuff. I do. It makes me…it makes my heart sing. But it's not what humanity wanted. I guess…

I am literally bursting. So do you think we could end it there? Will you forgive me if we do? Nobody does this.

Mason: Before I return this audience to their spreadsheets of anime porn and put you out of your misery, I was going to ask you, if you can do it really, really quickly…really quickly: how do we build a desirable future? Or should we just wait for the imminent collapse?

Greenfield: No, no. I think we get into the streets. I think we do. I think we get political. I think we get involved in a way that it's no— I would say up until about three, four years ago, I would have said that it was no longer fashionable. Thankfully it's becoming fashionable again to be involved in this way.

I really do think that with the emergence in liberated Kurdistan of the YPJ, the YPG, these people are the most realized human beings of our time. They are doing things which are amazing, and they're doing so on the basis of feminism and democratic confederalism. It's fucking awesome and inspires me every day of my life, and if they can do that under the insane pressures that they operate under, we can do that in the Global North in the comfort of our own homes.

Mason: Great. So, Adam doesn't have an optimized bladder, so we're gonna finish here. Radical Technologies is now available through everything apart from Amazon. Well, it's available through Amazon, but I recommend you buy it from Verso.

Greenfield: If you buy it from Verso, literally it's 90% off today. They're having a promotion. Ninety percent. Go pay like 10p for it, today. It's awesome. It's a good book. Buy it from Verso.

Mason: So, very quickly, I want to thank the Library Club for hosting us. To Graeme, to the gentleman here…I don't even know him. To Dan on our audio, and everybody who makes Virtual Futures possible.

Greenfield: And Sophia with the microphone.

Mason: Sophia, thank you for the microphone, for actually… We're a skeletal team. And if you like what we do, we don't make any money. So please support us on Patreon and find out about us at "virtualfutures" pretty much anywhere.

And I want to end with this, and it's with a warning, and it's the same warning I end every single Virtual Futures with—and it is short, don't worry. And it's this: the future is always virtual, and some things that may seem imminent or inevitable never actually happen. Fortunately, our ability to survive the future is not predicated on our capacity for prediction. Although, on those much rarer occasions, something remarkable comes of staring the future deep in the eyes and challenging everything that it seems to promise. I hope you feel you've done that this evening. The bar is now open. Please join me in thanking Adam Greenfield.

Greenfield: Thanks, Luke. That was awesome. Thank you. It was awesome. Cheers.

Further Reference

Event page