Sarah Newman: As Daniel said, thank you for being here. I'm very excited today to introduce Ian Bogost. He is the Ivan Allen College Distinguished Chair in Media Studies and a Professor of Interactive Computing at Georgia Institute of Technology. He also holds an appointment in the Scheller College of Business.

He will be in dialogue with Jeffrey Schnapp, who is the faculty director of metaLAB and a co-director of the Berkman Klein Center for Internet & Society. And he is the Pescosolido Chair in Romance and Comparative Literatures here at Harvard.

Today, Ian will be talking about A Pessimist's Guide to the Future of Technology. As we were just discussing, pessimism is clearly popular based on the turnout today, and rightfully so. [audience laughs]

Jeffrey Schnapp: A vote of confidence.

Newman: And rightfully so. In this age of celebration of technology it's really important to have the critical in dialogue with the celebration and the embrace of technology that we have here at the Berkman Klein Center and our extended community.

Ian is also the author or coauthor of ten books. He is the cofounder of Persuasive Games. He is a game designer and scholar. He is also a contributing writer to The Atlantic, and is a coeditor of the Platform Studies series published by the MIT Press, and the Object Lessons series published by Bloomsbury and The Atlantic.

Jeffrey is a cultural historian whose interests range from antiquity to the present. He is also a pioneer in the digital humanities, and his works range from books to curatorial practice and beyond.

The emphasis of Ian's talk today will be on autonomous vehicles as a test case, and the dialogue should be really interesting because Jeffrey just completed teaching a course on robots in the built environment here at Harvard.

Ian Bogost: [Aside to Schnapp:] It's almost like I was thinking about this.

Newman: I also learned recently that Ian has an unusual perspective on what constitutes a sandwich, which might include a head of lettuce, and this might be interesting to those in the Berkman community because we've been talking about sandwiches recently.

Bogost: Yeah. Just, everything's a sandwich.

Newman: Exactly.

Bogost: Let’s just get it over with.

Newman: So with that, thank you for being here. And we’ll make sure we save some time at the end for questions.


Bogost: Great. Okay. I am so happy to be here. I just flew in. So you welcome me with this nice rainy weather, which I'll forgive you for.

When I was a first-year undergraduate philosophy student, I had this very stern and severe Scottish instructor. And he was kind of against everything, it seemed. And we were talking about Kant, Kant's moral philosophy, of course the categorical imperative. And the idea of the categorical imperative is that according to Kant one should act on a maxim only if one can imagine it becoming a universal law. This is sort of the one-liner on Kant's moral philosophy.

And you know, philosophers are kind of trolls—kind of the original academic trolls, right. Nothing makes them happy, they're very grouchy about everything, it's all about sort of sneers and barbs and daggers in the side.

And so this instructor had this counterpoint, a sort of reductio ad absurdum to Kant's whole moral philosophy, which apparently I've not forgotten and will never forget, which was: okay, well, if you should act on every maxim as if it should become a universal law, then this should work for every maxim, according to Kant. Anything that you can suppose should be testable for its moral quality against this premise.

So what about: I will play—in order to exercise and keep myself physically fit, I will play tennis in the mornings at ten o'clock AM. So if you run that scenario and you say well, imagine if everyone played tennis at ten o'clock in the morning in order to keep physically fit. Then it breaks down, because everyone would crowd the tennis courts, no one could play tennis at all, therefore you must not play tennis at ten o'clock AM, which of course seems preposterous. So this is, you know, one way of thinking about these ideas that doesn't seem quite right but is insightful in the sense that it shows that there are these reversals, these kind of edge cases; at their negative ends or at their extremes, things change. They change form. And so something that's sort of thinkable in a reasonable way at the center, once it moves to the edges and then once the center moves to that edge, then it alters somewhat, it changes.

Now, Kant’s maybe not the best tool to think about this. But Marshall McLuhan and his son Eric in this book that no one reads unfor­tu­nate­ly, called Laws of Media, have this inter­est­ing media phi­los­o­phy of these four laws which you see here: enhance­ment, retrieval, rever­sal, and obso­les­cence. And we don’t have time to go into all of this but what’s inter­est­ing about it for the present con­ver­sa­tion is that for the McLuhans, this idea of rever­sal is kind of like a prop­er­ty of media. It’s not some­thing that hap­pens lat­er when things go wrong. It’s an intrin­sic prop­er­ty that all four of these laws are kind of active on media objects. And of course for McLuhan every­thing is kind of a media object—that’s the elec­tric light­bulb, famous­ly, and so forth. 

And they run these sce­nar­ios in this book of these tetrads. This is the cig­a­rette, which enhances calm, and retrieves group security—you can all go togeth­er and smoke. And the rever­sal, the thing that hap­pens when the cig­a­rette is pushed to its extremes or its lim­its is it becomes this addic­tion, right. Your ner­vous­ness. You’re no longer calm because now you want the cig­a­rette that you can’t have.

There are dozens of these in this book. This is the Xerox machine. I guess the inter­est­ing thing about the Xerox is the rever­sal sce­nario from the McLuhans is that every­body becomes a pub­lish­er, which is some­thing we used to talk about in a very pos­i­tive way and now we’re not quite so sure about any longer.

Or maybe this example. This is the car. There's all sorts of interesting and bizarre things going on here, and we again won't take the time to unpack them all. But the knight in shining armor is the retrieved medium, this idea that there's something from the past that comes to the surface in the present. So I guess this— You know, you can now get out of any situation—certainly this is what car share services are like now.

But it's obvious that the car, when pushed to its limits, when everyone has a car, everyone is in their car at once, then you get traffic, which is the opposite of mobility and free— It's interesting that freedom and mobility are not the thing that the McLuhans identify but privacy. But it's okay, this is just a tool.

And it makes perfect sense. And so I give you this example in particular not only because we're going to talk about autonomous vehicles a little bit, or I'm going to riff on that a little bit. But also because you can see how it just makes sense. That it's not that there's something wrong when you get traffic jams. There's something intrinsic to the design object that is the automobile in its urban context that is traffic. That's part of what the car is. And that's the insight that the McLuhans had with this tool.
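
To make the tetrad concrete, here is a minimal sketch, in Python, of how one might encode the four laws as a simple record and pose the universal-scale question to a medium. The structure and names are invented for illustration; the values for the car are paraphrased from the example above, except the obsolesced medium, which is a standard McLuhan example not mentioned in the talk.

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """One McLuhan tetrad: the four laws of media applied to a single medium."""
    medium: str
    enhances: str       # what the medium amplifies or intensifies
    retrieves: str      # what older form it brings back to the surface
    obsolesces: str     # what it pushes out of use
    reverses_into: str  # what it flips into when pushed to its extreme

# The car, paraphrased from the talk. The 'obsolesces' value is a common
# McLuhan example, not something stated above.
car = Tetrad(
    medium="the automobile",
    enhances="privacy",
    retrieves="the knight in shining armor",
    obsolesces="the horse and buggy",
    reverses_into="traffic jams",
)

def run_scenario(t: Tetrad) -> str:
    # The pessimist's move: ask what the medium becomes at universal scale.
    return f"At universal scale, {t.medium} reverses into {t.reverses_into}."

print(run_scenario(car))
# At universal scale, the automobile reverses into traffic jams.
```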

Now, the pessimism business was a little bit of a… I don't know. I mean, I am, I think, a natural pessimist. But in this moment that we're in today with technology, where we're I think shifting finally into a mode where it's possible to be critical without getting sneered at, if we kind of look back at the…I don't know, the optimistic aspirationalism that we've been using to encounter technology in the broadest sense, and we look back on those moments of the recent past or even the distant past, we can see how we knew how things were going to turn out, actually. We just weren't paying them heed.

So you know, we kind of knew twenty-five years ago that the notion of identity and anonymity online was troubled. We knew that. And we just made jokes about ha ha, that's funny, let's move on, right. But that turned out to be an intrinsic part—for good and for ill, right.

Or this is from about— Well, it's 2006. You can see back when we were celebrating how blogging was going to do away with these gatekeepers that were keeping people out of sharing and spreading ideas, and that was just terrific. And you know, okay well, what happens? Just think about it for like ten seconds. What happens when any idea can be shared and can't be distinguished from any other idea? Well, you have no quality control and no ability to discern which ideas are sort of even not just desirable but even true.

We knew this and we just kind of went headlong into it, thinking well, this is great. Nothing can go wrong.

[Slide: various mobile phone and messaging devices]

Or with these devices, which I've previously called the cigarette of the 21st century, these gadgets.

The relationship— I feel mine buzzing in my pocket literally right now as I'm talking. The relationship we have with our smartphones universally, we knew that it was— We had these pagers and then the Blackberry, which was this evolution of the pager into email and so forth. And that role that the important person, the doctor or then the executive or the governmental worker, would have: you had to be connected to what was happening. That when that universalized, we would all be working, essentially laboring, all the time, which is what we're doing. We're not always laboring for our workplaces—often it's for wealthy technology companies or for our own personal brands or whatnot. Or what have you.

And like, now we're kinda going, "Oh shit." We know now that some of the ways— Even those who were involved in creating these infrastructures are kind of admitting, "Yeah, we weren't thinking, even a little bit, about the implications of what we were making." And that's a nice conclusion to come to after you've made a boatload of money and it might not matter so much anymore.

So I’ve been think­ing about this whole ques­tion, this whole sort of set of ideas in the con­text of autonomous vehi­cles. And I pick them part­ly because I’m legit­i­mate­ly inter­est­ed, and part­ly because they are so new that we’ve not yet made these errors with them. We haven’t com­mit­ted in any— Either at the design lev­el, on an urban plan­ning lev­el, at a per­son­al use lev­el. We have some run­way, some blue sky to work with.

But when you look at the way even now that we’re kind of talk­ing about this future, it’s either as this sort of like wonder—like, final­ly we’ll be able to rid our­selves of these awful machines that we despise. And you know, it’ll just be easy to get any­where you want to go. You won’t have to main­tain and pay for a car. That’s one of those scenarios.

And anoth­er one of these sort of— I’m real­ly look­ing for­ward to self-loathing cars. This is such a good com­ic. Or the last step in the mile­stones is cars capa­ble of argu­ing about the trol­ley prob­lem on Facebook. Like, this is fun­ny. But it’s also a sig­nal that— It’s a kind of cur­so­ry take on what these futures of autonomous cars might look like.

So I’ve been think­ing about this for a lit­tle while. We don’t have to just talk about autonomous vehi­cles but I think it’s an inter­est­ing test case. And I’ve been run­ning through a num­ber of sort of like­ly sce­nar­ios in this kind of McLuhanite way. If we take this thing and we imag­ine push­ing it to its extreme so that it real­ly is uni­ver­sal, what hap­pens? What takes place? And I’m just going to run through some notes of some of the things that seems like­ly to me, at uni­ver­sal scale, sort of when ful­ly rolled out.

One of those is that all of the trolley problem business, like is the car going to run down pedestrians, is a very very temporary problem that has to do with this transition mode between human-driven cars and pedestrians and bicycles and so forth, and fully autonomous cars. Once you get vehicles that are fully autonomous and want to roll them out completely, then one of the likely things that will happen is that the way that roads operate will also change.

So, you can pack autonomous vehicles much closer together. You have that sort of like Minority Report vision, if you remember, the tracks of Lexus-branded vehicles that are sort of swap—very high speeds, sort of swapping places with one another. These cars can coordinate with one another.

And so it's not so much that it will be undesirable, or no longer a technical problem, that people or other humans driving traditional vehicles might be at risk, but it will no longer be feasible for them to even participate in that mode of conveyance. To the point that it strikes me as likely, not just possible but likely, that especially major arterials will become sort of new freeways that will be inaccessible not just to human drivers but as rights of way whatsoever, for pedestrians, for cyclists. And they'll do that because it will no longer be safe to interact with the way that the autonomous cars are behaving.

Another thing that will change is if you— I was recently in Tempe, where Uber is running one of their test markets for these autonomous cars. And they have people in them still. But even so, you realize that even if you were on the road with them, your relationship with the driver in the vehicle, as a pedestrian, as a cyclist, as another driver, is very important. You kind of know okay, I kind of have a sense of what you might do, even if I can't see your eyes, because I know the possibility space of things that people do.

But we don't understand what computers do anymore, most of the time. And the programmers of these systems often don't understand how they work, and we get into these deep AI/ML kinds of systems, and that's exactly what autonomous cars are. So even if you are in the same space as one of these vehicles, you'll have no idea what its capacities are and what it might do. So you can no longer read those apparatuses—they're no longer legible.

So you know, if you kind of run that scenario out, maybe it would just be better to take this thing that we've had since we've maintained public roads, which is the idea of the public right of way. All of us can go out to the street and use the street, and it's maintained and owned by a governmental entity, by a municipality, by a county, what have you—by a state. And they're responsible for it, and as a result everyone has the capability of using it. Maybe that doesn't make so much sense anymore.

And in fact it's become very expensive to maintain American infrastructure, as we all know, and it's kind of falling apart. And so when you have wealthy technology companies that are absolutely going to roll out autonomous vehicles as kind of Uber-style car services, not as conveyances that you would buy and own and garage yourself, then just in the same way that Amazon is essentially getting these unbelievable bribes out of municipalities that want to host its new headquarters, you know, maybe it's best just to lease off those spaces to Google, to Tesla, to Uber, to whoever those players are, in order that they can kind of manage them and upgrade them to smart roads so that they can make them even more efficient.

And this might mean that— I mean, it'll start with the larger streets but then it'll certainly bleed into smaller ones. Maybe there'll be times when you can't use your own road—like imagine you kind of walk out of your house and you no longer have access, or at least not direct public access, to that space.

You can even imagine a sort of blockchain-driven smart contracts kind of system where you've got your phone in your pocket, right, and you want to cross the street. This is like Philip K. Dick stuff, right. You just want to cross the street, and it's fine for you to cross the street as long as there's no vehicles in the area. You'll just be charged a small fee invisibly when you enter into that, because it's private property now. Or at least it's leased off in such a way that it's construed as private property.
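
As a thought experiment, here is a minimal sketch of the logic such a street-crossing toll might run. Everything in it is hypothetical and invented for illustration: the fee, the wallet, and the idea that your phone checks for nearby fleet vehicles before debiting you on entry.

```python
from dataclasses import dataclass

CROSSING_FEE = 0.25  # hypothetical micro-toll, in dollars

@dataclass
class Pedestrian:
    wallet_id: str
    balance: float

def request_crossing(ped: Pedestrian, vehicles_nearby: int) -> bool:
    """Sketch of the invisible toll on entering a privatized right of way.

    Crossing is permitted only when no fleet vehicles are in the segment;
    entering debits the pedestrian's wallet automatically.
    """
    if vehicles_nearby > 0:
        return False  # the street is in use by the fleet; wait
    if ped.balance < CROSSING_FEE:
        return False  # no funds, no access to what used to be public space
    ped.balance -= CROSSING_FEE  # charged invisibly on entry
    return True

# Usage: a pedestrian with $1.00 crosses an empty street and is debited.
alice = Pedestrian(wallet_id="0xABC", balance=1.00)
assert request_crossing(alice, vehicles_nearby=0)
print(f"Remaining balance: ${alice.balance:.2f}")  # $0.75
```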

Once that takes place, you know, you don't need things like traffic lights, for example. Because those are managing human-driven vehicles, and these autonomous fleets are much more efficient. Just yank out the traffic lights and they will invisibly coordinate their behavior with one another. Well, when you take those sorts of things out, one of the things that comes along with them are the wayfinding devices. Street signs, street names, those are all put up for our benefit as human drivers, cyclists, pedestrians, and so forth.

And they're quite unsightly when you think about it. Who likes to look at traffic lights or street signs? So maybe let's just remove those as a kind of urban renewal program that could be underwritten by a company like Google. And you know, they would have a secondary interest in doing so because, as it happens, as you might remember, Google provides mapping services to all of us now. We don't use paper maps anymore. And in fact there's kind of a long history of obfuscating public space with maps that are false or slightly inaccurate, or through which you can control what people know. The Soviet Union has a number of examples of this that I don't have time to go into right now.

But if that's the case, then you know, the idea that we have public access or sort of general access to maps of those spaces might also begin to dissipate. You know, maybe there's a service-level kind of subscription to a kind of radius from yourself that you can see. Or maybe it's just in the interest of these new public/private partnerships between municipalities and Uber and Google and so forth to just eliminate citizen use of maps, because all it does is cause trouble, people go places they shouldn't… I'm not even talking about, like, the obvious amplification of kind of the history of redlining and other sorts of geographic disparities that have— We're already seeing impacts in the way that ordinary car services work in terms of access.

We could kind of go on down this road. And there's a bunch of other interesting scenarios. Parking. A lot of folks have started talking about the delight that will come from the removal of parking lots, which are their own blight, paved-over spaces. And you know, that's certainly likely, but it's already happening with flat surface-level parking lots, especially in dense urban centers. Those are being bought up and turned into tall luxury office and condo towers, mostly, right. They're not mixed-use spaces, really. Now you have sort of expensive condos and office spaces for companies that want to move back into the centers of cities after having spent decades on their edges.

The parking lots, though, that exist infrastructurally, that are sort of at the bottom level, sort of underground beneath large buildings—it's not like those are going to go away. What might happen to those? They might become staging areas for these fleet cars. That's one thing that's been proposed. But another thing that strikes me is, I think there's so much more space there than you would ever need to stage for autonomous car service, so how might you repurpose parking decks and underground parking structures? They could sort of become new housing. Because housing is very expensive. And we have all of these workers who want to live in the center of cities now, if there are still jobs for them after automation.

But even if there aren't, we have some signals that this is already happening. Just on the way up, I read this article. In San Francisco there are about a thousand new apartments that are being generated out of old boiler rooms and basements. So it's about 200 square feet, which is perfectly acceptable (a hotel room isn't much bigger than that) at a bargain price of only twenty-four hundred dollars a month, in San Francisco markets. You can imagine sort of creating these sort of underground slums for workers. And this would be a benefit, really. You wouldn't have to go out to the suburbs, because the suburbs are likely to become completely inaccessible, I think. As we see more folks move in and densify urban cores, the cars won't even be necessary anymore. We'll have new pedestrian and bike corridors. Think about the way that Amazon has redeveloped Seattle, the sort of South Lake Union area of Seattle, and that sort of thing seems increasingly likely.

So if you're wealthy enough to live in the city center, you probably won't be bothered by autonomous cars at all, and maybe we'll see a shift into these sort of autonomous buses that ship people back out to the suburbs and the exurbs. And then once you're out there, if you don't have a car anymore, you're completely screwed. What're you going to do? So you'll be kind of under house arrest in those spaces, or maybe we'll develop these kind of, like, illegal human-driven taxi services that will crop up. And that's not even to mention what happens to folks in rural areas once they can't get access to electric or internal combustion engine vehicles if they're taken offline.

I also thought about garages, about all of the in-town, not suburban but kind of urban, single-family garages that exist all over America, which would no longer be necessary, really. You're not going to own these vehicles. And so if you're lucky enough already to own a property like that, then you've got a kind of built-in Airbnb. So there's some kind of mass conversion. And there will obviously be a kind of classist relationship with people who are renting out these converted garage spaces. Not to mention the fact that it only amplifies kind of existing wealth inequality, as we've built so much of our wealth in America for the everyperson around property ownership.

Anyway, there’s like you know, dozens of these kinds of sce­nar­ios that we could spin out. I may be right or wrong. It does­n’t real­ly mat­ter in some ways. It’s rather that if we sort of shift from think­ing about the tech­nol­o­gy and the near-term prob­lems to these kind of medi­um to long-term sce­nar­ios that assume adop­tion at a uni­ver­sal scale—just to ask ques­tions about them—then that sort of sce­nario, which you know, bears some rela­tion­ship to sci­ence fic­tion, some rela­tion­ship to like, RAND-style sce­nario plan­ning, some rela­tion­ship to oth­er kinds of futur­ism but I think is still dis­tinct from them. Because it’s ask­ing ques­tions about like, what is the cur­rent tech­nol­o­gy going to be when it flips its bit, when it revers­es. Because that could still be changed as we’re work­ing in the present. 

So those are some thoughts. That’s sort of what I came with. 

Schnapp: Yeah, no. I mean, I think that’s a great launch­pad for this con­ver­sa­tion. And I guess I’d be inter­est­ed in start­ing out the dis­cus­sion part of this. And Ian and I agreed ear­ly on that we very much want to involve the whole audi­ence here as part of this con­ver­sa­tion. But I thought Ian maybe as a prompt since you intro­duced Kant into the con­ver­sa­tion right from the get-go—

Bogost: Sorry for that.

Schnapp: —and your own training has this really rich and interesting sort of crossover between philosophy, theory, critique, and practice, and [indistinct]—

Bogost: Yeah.

Schnapp: —that if maybe we could talk a little bit about pessimism as a stance. Because of course in these various tetrads that you wonderfully brought up from the McLuhan era of media theory…you know, pessimism itself is often not pessimistic.

Bogost: Right.

Schnapp: Rather it is an inter­ven­tion in an emerg­ing set of debates, of con­cerns, of forces that run in dif­fer­ent direc­tions. I mean, cer­tain­ly for Nietzsche, pessimism…and for a whole strand of philo­soph­i­cal cri­tique, pes­simism is the cor­rec­tive. And in the case of auton­o­my, and I think you won­der­ful­ly spun out some of the poten­tial ram­i­fi­ca­tions of an auton­o­miza­tion of the world. 

But of course the ques­tion in the word auton­o­my that…coming from a philo­soph­i­cal back­ground would imme­di­ate­ly intro­duce is of course auton­o­my for whom? Like, who gets to be autonomous? In the ser­vice of what val­ues? I mean, all of these var­i­ous sce­nar­ios that you described, from these work­er colonies, or maybe encamp­ments in the exurbs, of dis­en­fran­chised a pop­u­la­tions. They all raise this ques­tion. Who gets to be the dri­ver in the place of the dri­ver, so to speak. Whether it’s the lev­el of social forces, or the archi­tects of cities, or who owns the pub­lic spaces—what is a pub­lic space? 

So I guess I’m just curi­ous giv­en the extra­or­di­nary range of your own work, how you see this kind of crit­i­cal inter­ven­tion in shap­ing that future con­ver­sa­tion about the design of cities. Because we’re just at the begin­ning of that. I mean, I think as you sug­gest­ed, this is a lit­tle bit dif­fer­ent than some of the oth­er cas­es that we start­ed with.

Bogost: Right, right, right. So one thought I have about that is that when you bring… When we think about the inter­ac­tion between tech­nol­o­gists and philoso­phers, it’s a sort of smarmy con­ver­sa­tion we have about that inter­ac­tion, right. Like, [mock­ing­ly] The human­i­ties are still impor­tant. We’ll bring in these philoso­phers who’ll help lead us—” 

It’s like um, real­ly? That’s all we can muster, is this sort smarmy appeal to like, ethics and you know. Which isn’t to say that it’s a bad thing to think about the moral impli­ca­tions. But I actu­al­ly think it’s a mis­take, it’s like almost a cat­e­go­ry error to take these kinds of sce­nar­ios as just moral impli­ca­tions. They are in some ways meta­phys­i­cal impli­ca­tions, right. Like, onto­log­i­cal impli­ca­tions. We made this thing—it was blogs, it was the Internet, it was smart­phones, it was autonomous vehi­cles. Whatever it is. And you know, every­one has the best inten­tions, or at least some­thing that’s not the worst inten­tions. They had some good inten­tions. And then things got away from us, right. It took on a life of its own. And the best con­clu­sion we seem to come to, once those out­comes are unex­pect­ed is, Oh well. This just once again proves that tech­nol­o­gy is nei­ther pos­i­tive nor neg­a­tive nor neu­tral,” right?

Okay, great. And then we’re just left with the results. And then we just kind of like, move on to the next thing as thought noth­ing hap­pened, Oh…let’s kin­da wash our hands of it.” So to me, one way of get­ting at that answer is that… You know, design is the space that sits between tech­nol­o­gy and phi­los­o­phy? And unfor­tu­nate­ly design has also sort of been trou­bled in recent years as the sort of design think­ing non­sense has tak­en over all con­ver­sa­tions. You know, what does it mean? Well it means…just basically…speculative finance, right. Like every­thing, right. Design think­ing is kind of spec­u­la­tive finance, like tech­nol­o­gy is kind of spec­u­la­tive finance. The philoso­phers haven’t yet got­ten around to cast­ing their work as spec­u­la­tive finance, so…

Schnapp: That’ll come, though.

Bogost: Maybe, well…probably won't come.

Anyway, so design is this space where we muster abstraction and make it concrete. And then it gets pushed out into the world through implementation. So I'm interested in that community, or that mode of thinking, as one where you could begin asking questions, instead of about use, or about outcomes, or about these sort of moral or social implications, all of these kind of smarmy frames that we draw around things, and that frankly the folks who are making these technologies aren't that interested in hearing, and transform that into questions about the essence of these products or services or objects.

Schnapp: So just to jump in, though, in the case of your work on game design, for example, the use of interactive game platforms as spaces of critique or critical engagement of some form. Would you see an analogous extension out into the sphere of, sort of, getting in under the hood and making or tweaking or [crosstalk] hacking technologies that involve—

Bogost: Yeah, right. I mean. Well there was this— When I started working in games, you know, and I did all this work with kind of games and politics and education, and you know, I had this whole argument, this sort of like 150-thousand-word-long [indistinct]. I built a whole game studio around the idea that we could take the way that things behave…you know, these systems of behavior, these complex systems of behavior in the world. And because we have the capacity with software systems like games to depict those systems, systemically, in representational form, that we would be able to understand them, critique them, maybe make alterations or claims about them more easily. And that works in theory, on paper. But what I didn't think about at the time, you know, ten-plus years ago, fifteen years ago when I was working on this, is that those media objects and that whole design philosophy exist in a media ecosystem with everything else.

So, if you zoom back and you kind of imagine, okay, it's not just that we're kind of making these representations of how things work rather than depicting and describing them, but then we're also trying to alter the media landscape such that people are looking for understanding and conversing about those kinds of systems, that's what would be necessary. And of course that's not what happened at all. We don't talk about, like, software model… You don't wake up in the morning and open your phone and look at the latest software-model depiction of the current state of climate or politics, right. You read text, you look at images, you watch videos, you listen to audio. It's just the 20th century. 20th century forever.

And so the two lessons I would draw in answer to your question are that, on the one hand, it's the same interest, right, this idea that there's sort of deep structure in things. Essence is very unpopular, right. No one likes to talk about it. They like to talk about transformation and change and becoming. But no, there's something about essence, about deep structure, that seems endemic to grasping something. Which is one of the reasons why the McLuhans are of interest to me.

But then also that I made that very category error that I'm talking about here today, which is that I thought that this was a design problem that was unrelated to other design problems in the media eco—or not even design problems, just sort of trends and flows. And now, I mean, I really do believe that that opportunity is…like, that timeline has been snipped. We don't know what it would be like to go down it any longer.

Schnapp: Great. Well, I'm gonna open up the floor here for just people to jump in. Daniel, are you going around with mics? Yeah. So if you would like to join the conversation, just raise your hand and Daniel will come over.


Audience 1: Great talk, by the way. I have a couple of questions but I'm going to ask one of them and then you can tell me where it goes. First, I mean, is it pessimism or is it creative destruction? I mean, that's an economic term I guess, as against a philosophical one, right? Economic philosophy.

Bogost: Yeah. Well I mean, so the economic position is that it doesn't matter what happens so long as there continues to be musterable productivity. So long as there can be an economic machine that continues running. Whereas pessimism says things are bad and they're getting worse.

Audience 1: Right.

Bogost: So if the way that you measure goodness is through economic value, then so long as economic value continues to increase, and so long as it increases for the agents for whom you think it's important that it increases, then you're fine. It's all good. There's only optimism.

And you know, it's arguable that this position is the strongest one, right. That even the optimism/pessimism dyad is just a foil for a true interest in continued economic productivity for a selectively smaller and smaller group. And you know, I don't think we can just dismiss that idea and say well, obviously we don't want to go down that road. Because in fact that's the road we've been on for, you know, a long time.

But if you take—the interesting thing about the pessimist as, like, a sort of figure to embody, right, is that it's this…you have a—it's like putting on a hat that says "I'm just gonna ask what's the worst case," you know, what's the worst possible scenario. Not because you're some sort of a masochist. Or really a pessimist in the pessimist's sense, everything is going to hell. But rather that posing that question, even from the vantage point of economic development, right, would allow you to see possible scenarios that you would otherwise miss.

The interesting thing about the way that technology has been proceeding, even on the economic register, is that without asking any questions whatsoever, it seems to be working out, right. Through accident rather than a sort of creative destruction kind of sub—through mostly dumb luck. And then a kind of amplification of those scenarios, especially as Internet-based services have globalized, and they'll sort of reamplify the difficulty of finding new answers, of intervening in these systems.

So, I’ll give you one exam­ple which is prob­a­bly on peo­ple’s minds late­ly, which is this net neu­tral­i­ty con­ver­sa­tion, right. So…I mean…I don’t even know if I want to touch this sub­ject in this room.

Schnapp: It’s dangerous.

Bogost: Well look, common carriage makes sen— It makes sense for broadband and wireless data to be treated as common carriage. But at the same time, the Internet is kind of garbage, and something that might change it in any way is worth at least talking about. At least talking about, right. But you can't even really do that. You kinda say, "Well let's just, like, step back—" and you get yelled at on Twitter or whatever by the throngs of… I don't even know what their position is. Like, there's lefties, there's sort of this centrist libertarianism that has the same—it's kind of all over the map.

So we've worked ourselves into a corner with a lot of these questions, where we can't even really pose interesting questions about them. And one of the reasons we can't is because we're stuck within these systems that we're supposedly preserving, by means of taking on an obvious position, like: we wanna make sure at all costs that net neutrality doesn't disturb the sanctity of the Internet—which we also believe is garbage, and which we now have convincing evidence has had real negative implications on civic life and so forth.

So yeah, I mean, I think that's the interesting thing about wearing the pessimist hat. It's a license to sort of say okay, what is awful, or what might go wrong, and let me at least think about that for five minutes, even if then I'm going to sort of shed it, take it off and come back to reality.

Audience 1: And just one follow-up thought experiment, right. So, if we were sitting—I'm in Buffalo, New York, where I work—and we had an analogous event, right, 150 years ago, we tried to build the Erie Canal. Which was kind of like a huge evolutionary leap. We were trying to build a canal that would make boats go from Buffalo to New York City. But then as they started developing this high-tech infrastructure project, trains came in.

Bogost: Yeah.

Audience 1: Killed the canal. Just when the trains were done, highways came in. Killed the trains. We can see the same, you know—

Bogost: Now it happens a lot faster.

Audience 1: It happened with cable TV and network TV, right. We see that evolutionary process, and I wonder, if we were sitting in a room back then, would we be pessimists fighting that process with that same view? Is it just history repeating itself with a new set of technologies? That's the only other question.

Audience 2: Hello. Sarah had mentioned coming and that I would really enjoy this talk. And she…was right. This is all I think about all the time. Something that really stuck out to me was earlier when you were talking about the blog example, where there was so much promise and, you know, you talk about how much excitement there was around it. And I'm a technologist. I work at large tech companies. And when I think of every product launch, it's just like, people are on stage talking about how cool it is that at any point in time you can tell someone it takes fifteen minutes to get home. And you don't really think about the kind of data that takes, or what that means for someone's privacy, etc.

And you mentioned also that when people build stuff, they have good intentions. And if anything, when I'm around here I actually often hear the narrative that the Bay Area, Silicon Valley, is only profit-hungry and everyone cares about money, and that they actually don't have good intentions and they're evil. And so my question is, as I think about how to really shift the tides of my field: is it that people are profit-hungry and evil, and that's, like, the real narrative? Or is everyone actually just, like, way too optimistic and only wants good things, and that's the blind side? And which side do you think it really lands on…?

Bogost: That’s a great point.

Audience 2: And how do we, like, change it?

Bogost: The short answer I will give you is I think the vast majority of people are blind optimists. They're not power- or wealth-hungry extractionists or something. There are some of those. And one of the interesting features of the tech elite is that it's a particularly odious kind of power and wealth hunger, not because it's different from other kinds of business or from finance, which is I think the ultimate sort of reference point. But because it's dishonest about the power and wealth hunger, right. You talk to a hedgie and they're not going to be like, "I'm trying to change the world," right. They're just, like, straight up about it, right. Whereas you talk to a tech VC or CEO and they will feed you that line, whether it's true or false. And you look at the behavior of a company like Uber and it's pretty cons—it's not all companies, right, but it's kind of clear where they're—

But then the folks who are the line workers, basically, they really are… Well first, they're just trying to make a living. And these are basically middle-class jobs at this point in the sectors where tech is flourishing. And also they have the best intentions. They do.

They're also on the ground, you know. And there's a certain amount of power that they have. But also I think that that sort of whole modality of optimism, whether it's truthful or false, right…has so…we're just drunk on it, you know. Like, nothing can go wrong. And now that we have evidence that actually things kinda can go wrong, that we weren't just kidding ourselves about that, there's an opening to say okay, like: if we can stop, in our kind of daily…on a pragmatic level, the daily, weekly processes of building these products and services, and start asking, okay, so what happens… We're going to roll out this little test of this product. What happens when everyone in the world is using it? What does that look like? And then, you know, do we want to sort of backtrack from it at the design level?

You could also introduce… People talk about regulation and other forms of external control as being important—and they are, I think that's another missing bit to this. But we've also kinda gone off the rails with regulatory management of everything. So it's a pipe dream to think that will suddenly, like, come online. Although it is interesting that the one thing that seems to have revitalized itself in the Trump era is corporate antitrust, which during the eight years of Obama, the kind of cool-dad social media president, you know, none of that was happening.

So you know, in other words, I think a purely regulatory answer is probably not going to come about. And so unless we get inside the ordinary everyday worker, we have no hope of averting the disasters of the future.

Schnapp: Just to jump in on your question: isn't one of the expressions of pessimism also better design practice? A better, a more widely informed…uh…

Bogost: Yeah. You know, and…also, like, a slower design practice. I mean, this business of speed is not just a matter of the increasing speed of change in business and culture. It's also the speed of product and service development, and deployment? And we've celebrated that for a long time. And it allows us to do these experiments and make these changes, and we feel like we're not hurting anyone in so doing, but it's clear that actually no, we are hurting people in so doing. And you know, how do you dampen that? One of my hobby horses that's a little bit orthogonal to this talk but is still relevant is the— So, folks in computing call themselves "software engineers," but they've never adopted the orientation of civil service that the engineering professions did, through professional engineering certification but also through just a kind of professional ethos, which is not that different from the way, like, journalists think about their work. And so it doesn't all have to come from outside or from sort of, like, tight regulatory control. But yeah, slowing things down might also help.

Schnapp: Mm. Interesting.

Audience 3: Hi. Thank you for the glorious talk, and I really do think it was glorious. But I want to challenge its label as pessimism, because what I hear is an optimism that there will still be a civilization that will be making progress for at least somebody, or some small group. And you know, you mentioned Philip K. Dick, and you know, when I think autonomous vehicles in the current structure, I go full Philip Dick and think of, you know, fleets of abandon—or packs of abandoned autonomous vehicles wandering abandoned hulks of cities—

Bogost: Right.

Audience 3: —as the rest of us are going all Mad Max or The Walking Dead, trying to reinvent how you make bullets or something. So I guess my question is, why are you such an optimist? [laughter]

Bogost: No, you're totally right. The pessimism sales pitch was just a lie to get you to come. Yeah.

Audience 4: Actually I had a similar question, which was, you know, your net neutrality example made me think that you could frame being pro net neutrality as a lack of pessimism about what net neutrality means, or a lack of optimism about what deregulation could lead to. So I'm wondering why you choose to frame it as pessimism rather than skepticism, just challenging your beliefs, whatever they are. And I wonder, is that because you think that in the realm of technology we have an inherent bias towards being more willing to believe our positive self-deception than our negative self-deception?

Bogost: That's a good question. I don't know…I have to think about that, and I will think about it many times in the near future. I think my gut reaction is that for many years, pessimism was off the table. The moment you started making critical comments about contemporary technology you were either a Luddite, you were just an obstructionist, you were blinkered, you didn't understand. And maybe the only good thing that's happened in the last year or two is that that preconception has been stripped away, and now, okay, no, maybe we ought to be more critical. But I don't think, like, being critical or skepticism…it's too modulated, it's too modest and moderate. And we need a counterpoint to that extreme optimism that we've suffered under for so long. So maybe if we go full pessimism for a while, knowing that it's extreme, that it's too much, then we can sort of find some reasonable space in the middle.

And this is maybe not that different from any sort of polarity that we might be experiencing today, in politics and in social issues, where you know, the moment that you try to modulate in the middle you actually end up just being pulled to whatever extreme is acting in the most extreme way. So like it or not, we have to respond to that, maybe excessively.

Audience 5: So, you mentioned at one point snipping off timelines. It feels to me like we're right now living in an edge-effect time? And how do you get out of that?

Bogost: Yeah, well, one of the amazing things about the arrow of time is that we don't know what the alternatives might be. And so you know, traditionally science fiction, speculative fiction, or these sort of speculative design concepts that borrow from that premise but for built objects or the built environment—one of the interesting things about those traditions is that they ask questions about what could be, but typically it's allegorical. It's actually about the present. Whereas it could also be about loss—there's sort of historical fiction, or other ways of thinking about lost presents from alternate futures of our actual past, right.

And then there are the alternate futures of our actual present, which is not what traditionally science fiction does. And so if you muster those objects or those traditions or trends, whatever, modes, as tools, deliberately, in a way that doesn't, like, I don't know, throw them into the cultural abyss of sci-fi. Which is a problem. Or simply kind of turn them back into these allegories of the present couched as the future. I think that's one possible tactic. It's certainly not the only one and it probably is insufficient, but it's one that I think about a lot. That if we can just sort of open our eyes to this string-theoretical impossibility of all of the possible futures that we right now sit at the intersection of, and we can think of them as possible actual futures, then we could design toward them rather than just, like, "I dunno, we'll just do whatever," you know. Like, whatever happens, fine, because we did it. And then we meant to, and then you kind of tell the story of how you really meant to. Then that kind of planning…it will look like planning at that point, right.

Audience 5: Just a follow-up. So I read alternate history a lot. I look at that. I think Kim Stanley Robinson has some great climate future histories. I can think that way, but how do we get the whole electorate to think that way?

Bogost: Yeah, when you’re not—

Audience 5: And that’s like our big prob­lem right now. It does­n’t.

Bogost: Yeah. I mean you know, like the whole elec­torate is prob­a­bly not a good tar­get mar­ket for much of anything? 

Audience 5: A majority.

Bogost: Well, I mean, think about where change happens. It doesn't happen from the will of the people, even though they often get to vote—at least in theory—on these things. It happens at nodes of power and influence. And so if we can change those, then we might actually have more influence on that collective, rather than going to them at the grassroots.

Sara Watson: So, following up on that a little bit—Sara Watson—I've been thinking a lot about the kind of trajectory of how these pessimistic or critical conversations have been happening, but also how they've changed over time. And I'm wondering…you know, even over the last two years, thinking about the worst-case scenario, plenty of people have talked about, like, the worst case of Facebook, or you know, kind of that… And yet, nothing happened, or nothing was possible to happen, until a real worst-case scenario actually happened, Russian interference being one of these—

Bogost: Yeah, we were talking about exactly this thing in the last election cycle.

Watson: Right. But like, it took the worst-case thing to actually happen for anything… For a scale of people to actually care, and for people to actually respond and change things. So, to that end, like, where is that line, and what is the effect, or how do you think about influence when it takes that kind of worst-case example?

Bogost: Yeah. I mean— So either we're… You know, what are the possibilities? We're idiots. We were unpersuasive. It was not important. It was too seductive and no one could see the alternatives, because they couldn't feel them—they were abstract. It's possible that people are just very bad at future planning, and so even though it seemed plausible, that plausibility didn't seem near enough in time to be actionable. And I'm sure there are dozens of other possible cases that we could run.

But now we've ma—like…that's done. And you know, these sort of small Trending tab adjustments that Facebook for example is making are probably not that important. So giving up on that and moving on to something else is one possible answer. I guess what I'm trying to say is that we have to start acting incredibly tactically. And that is not something that, for example, the political left in America, or the sort of technology-friendly, counter, cyberlibertarian community, is very good at doing. It's just all idealism, you know. And so moving back into tactics, the sort of very pragmatic realpolitik of this, might be one answer.

Watson: Well, and that gets to the question of audience, right. Like, to which audiences are you actually forming these interventions or reframings or whatever.

Bogost: Yeah. Right. Right. Like, let's say you were to embrace just buying the solutions, right. So we need a sort of Koch Brothers for the left or something. Like, you know, there are plenty of billionaires who are sympathetic to this. But they are not going about using their money for influence in the same way, right. It's not as aggressive. I don't know how you convince folks like that to do so. But instead they, like, buy media companies and have, like, hobby newspapers or magazines or something like that.

Schnapp: I'm curious, Ian, in the context of answering Sara's question you brought up the notion of persuasion as a kind of core problem. Like, where and how…and of course persuasion is the object of rhetoric, which is the most venerable theory of communication in the…certainly Western cultural tradition. And in your gaming work, persuasion has also been a key issue. I guess I'm wondering, with respect to the question that Sara was asking, also the prior question: where and how does persuasion happen and become efficacious in a kind of media space, a media ecology like the one we inhabit today?

Bogost: Right. I mean, so the rhetorician—the good rhetorician, whether it's a Burkean or an Aristotelian, has some understanding and respect for their audience, and acknowledges that audience. And that may be the biggest missing bit, if I had to pick one.

Schnapp: Yeah.

Bogost: And you know, there's reasons for it. But without sort of understanding and coming to…you know, it's not meeting them half-way, it's like meeting that audience almost all the way, maybe even more than all the way, in order to then make an appeal of some kind. And in some ways these systems sort of reinforce the bad habits that draw us further and further away. Like, one of the things we talk about a lot at The Atlantic, and in media in general these days, is the sort of problem of the "coastal media elites," you know, the "fake news media environment" that Trump and others have successfully antagonized. Which was and remains an actual problem. You know, you live in New York, or DC, or San Francisco or Los Angeles or wherever it is that media gets made, and then occasionally you drop-ship a couple folks into Ohio to do some sort of…it's almost like this sort of colonialist affair, right. Oh, look at the strange behavior of middle America.

Schnapp: Natives.

Bogost: So you know, people are reacting negatively to that for a reason. It's continuing. You know, The New York Times is particularly expert at this, even in light of everything.

So that's just one example, but I think maybe that's the most important bit. And I don't feel like I'm good at this yet. And so I feel sensitive about calling it out as a bad habit. But maybe that's the big one. Like, what are people encountering and experiencing? One reason we missed the Facebook stuff is people love Facebook. They love it. People love Google too. It allows them to do things that feel magical and that give them immediate and enormous value.

Watson: But what was, like, the best-case version of that? Like, influencing engineers to change things, or…

Bogost: Yeah. I mean, if we run these scenarios on the immediate past, we could probably in hindsight come up with some likely scenarios that might have averted certain kinds of effects that we might construe as negative and that others might not. But I don't know if that's where we want to spend our time. It's an interesting affair. Maybe someone should be involved in thinking through that as a way to move us into the present. I'm not trying to be ahistorical here, even though it's hard to even call this…we're talking about, like, two years ago, you know.

Schnapp: Yeah, exactly. What counts as history today.

Bogost: But because of this speed business, the urgency of the near future suggests that maybe we don't need to answer that question.

Moira Weigel: Moira Weigel. Thanks so much for your talk. I wanted to ask, following up on the stuff about politics and tactics: what is the significance of that split you alluded to between VCs and management and engineers, and sort of rank-and-file tech workers? Because it seems to me that that's become…I think both because, as a result of the election and the sort of tech CEOs meeting with Trump after the election, and the increased material pressures of living in a place like the Bay Area, the reality that for the most part tech workers are labor and not capital. Or whatever, you know, the idea that they're not…that most tech workers will never be VCs or—

Bogost: Yeah. Well they’re not cap­i­tal own­ers.

Weigel: Right. That’s the—

Bogost: Even though they might appear to be because they have stock options or whatever. 

Weigel: And so I want­ed to ask in terms of pol­i­tics and tac­tics and strat­e­gy what do you see as you know, in terms of think­ing about civic respon­si­bil­i­ty or resist­ing the Philip K. Dick future. Like what’s the sig­nif­i­cance of that split.

Bogost: Yeah. It’s… [sighs] I guess the obser­va­tions I have is that when out­siders make this claim, whether it’s jour­nal­ists or schol­ars or even folks who’re sort of…you know…inside of the tech world but not nec­es­sar­i­ly in a…kind of what I’m call­ing a line work­er capac­i­ty, to kind of empha­size that, they don’t seem cred­i­ble for some rea­son, right. And also that crop of labor is very inac­ces­si­ble. They make them­selves inac­ces­si­ble and they’re very tight­ly con­trolled by their orga­ni­za­tions. So actu­al­ly it would be real­ly hard to do a sort of on-the-ground inves­tiga­tive report of work­er life at Google. 

And there’s there’s all sorts of rea­sons for it. So when we do see lit­tle bits and pieces of it, it’s usu­al­ly through finan­cial or busi­ness news. Just just this week in fact, there was an inter­est­ing lit­tle kind of exhaust emit­ted of the world that you’re draw­ing atten­tion to, where Uber work­ers who hold options are try­ing to unload them in order that they can turn them liq­uid. But unless there’s a cer­tain amount, because it’s all going through like SoftBank or some­thing, then there’s this option to do a cer­tain amount of—but then you have to also be reg­is­tered in the right way to be able to trans­act. And there’s all these kind of strange sec­ondary mar­kets for finan­cial instru­ments these days. 

So that’s where it reaches the surface. It’s all about money. And no one, like… Like, how do you sell that to— You know, okay, you’re living on, you know, $250,000 a year in San Francisco and you have all these stock options, and you want me to…empathize with that? And this is back to the business of audience, actually. For the tech laborer to appear as a laborer, the way any other kind of laborer does, something will have to bring them together as a group and create a kind of equivalence between their plight and the plight of “ordinary people,” right. I don’t know how I would go about doing that. But again, it’s a tactical question rather than a question of ideals. I hear these stories all the time. People talk to me privately about them constantly.

Weigel: And there is a group, at least in San Francisco, that has a chapter here, that like, helped with the Facebook cafeteria workers unionizing and stuff—

Bogost: Yeah, but it’s always those kinda workers, right?

Weigel: I know, and when they go to the contract workers, they all get fired.

Bogost: People know about, you know, the Filipinos scrubbing data, and people know about the cafeteria workers. We’ve seen— And that’s because of this sort of New York Times effect, too. Like, that’s the story that appears to be bringing the plight of the downtrodden to light. But it’s just unseemly to say, well yeah, there are these six-figure-earning knowledge workers who are also the downtrodden. That story is just not gonna fly. How do we tell that story in a way that will?

Audience 8: This may go back a little bit before you were born, but in ’67 a Harvard economist, John Kenneth Galbraith, published a book called The New Industrial State. And it’s a little simplification, but his main thesis was that perhaps the markets aren’t such good allocators of capital, and you should have some kind of local deity that says, basically, where capital will go, what will be developed. And Galbraith was around six-six, very tall. He looked around and couldn’t find any local deity but himself, so he— [laughter] I’m an MIT economist, so I’m not gonna say that. It’s true.

So my question, then, is: given your observations, and thinking even of plowshares, what we talked about when nuclear power was invented and how it would turn the swords into plowshares, something for good, how do we proceed to get the good out of technology and stop the evils?

Bogost: I did this piece for The Atlantic…I don’t know, a few months back, about this kind of unassuming New Zealander guy who lives mostly on a boat. And he’s in and out of network access, right. He’s disconnected from the normal infrastruc—not to mention that down in that part of the world they’re already disconnec—it’s already difficult to get data of any kind in Australia and New Zealand. One of the great observations he made to me is that there’s a reason rsync was invented in Australia. The infrastructure of connectivity, even fiber, just wasn’t reliable enough.
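A quick gloss on that rsync point, since it’s a nice example of designing for a bad link: rather than re-send a whole file, rsync hashes fixed-size blocks of the copy the receiver already has, then slides a one-byte-at-a-time window over the new version using a cheap rolling checksum, transmitting only the blocks the other side is missing. Below is a minimal sketch of that rolling checksum in Python; the constants and function names are illustrative, not rsync’s actual implementation.

```python
# Minimal sketch of an rsync-style weak "rolling" checksum (Adler-32 style).
# Constants and names are illustrative, not rsync's actual source.

MOD = 1 << 16  # each half of the checksum is kept to 16 bits

def weak_checksum(block: bytes) -> tuple[int, int]:
    """Checksum of one block: a plain byte sum, plus a position-weighted sum."""
    a = sum(block) % MOD
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % MOD
    return a, b

def roll(a: int, b: int, out_byte: int, in_byte: int, block_len: int) -> tuple[int, int]:
    """Slide the window one byte to the right in O(1) instead of re-hashing."""
    a = (a - out_byte + in_byte) % MOD
    b = (b - block_len * out_byte + a) % MOD
    return a, b

# Sanity check: rolling the window gives the same result as recomputing.
data = b"the quick brown fox jumps over the lazy dog"
BLOCK = 8
a, b = weak_checksum(data[:BLOCK])
for i in range(1, len(data) - BLOCK + 1):
    a, b = roll(a, b, data[i - 1], data[i + BLOCK - 1], BLOCK)
    assert (a, b) == weak_checksum(data[i:i + BLOCK])
```

The point isn’t the arithmetic; it’s that the design assumes the link is the scarce resource, which is precisely the assumption global-scale services stopped making.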

So anyway, he has been sort of reconstructing the same kinds of social media tools, the same kind of consumer-facing tools that we use at a global scale, at this distributed, local scale. It was a really interesting set of examples, because it was like, well, maybe the problem isn’t in the product design itself but in the idea that all information should be globalized and all access should be globalized. And if you stop and think for a second about the encounters you have on a day-to-day basis that are bad, where these atrocities start to sort of bubble up, it’s often because you want to be working at a…maybe not a local level, maybe a local level, but at least not at a global level, and you just cannot anymore. Everything is immediately globalized.
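The pattern he’s describing works roughly like this: each person keeps an append-only log of their own posts, and devices that happen to meet exchange only the log entries the other is missing, so the “network” operates at whatever scale connectivity actually allows. Here’s a hypothetical, minimal Python sketch of that replication idea; the Peer class and its methods are invented for illustration and aren’t any particular project’s API.

```python
# Hypothetical sketch of local-first, gossip-style replication:
# append-only logs exchanged peer to peer, no global server required.

from dataclasses import dataclass, field

@dataclass
class Peer:
    name: str
    log: list[str] = field(default_factory=list)                # my own posts, in order
    known: dict[str, list[str]] = field(default_factory=dict)   # others' logs, as replicated

    def post(self, text: str) -> None:
        self.log.append(text)

    def sync(self, other: "Peer") -> None:
        """Gossip: copy whatever entries of other's log (and of the logs it
        carries) we haven't seen yet. Because logs are append-only, 'what's
        new' is just a length comparison."""
        for author, entries in [(other.name, other.log), *other.known.items()]:
            if author == self.name:
                continue
            mine = self.known.setdefault(author, [])
            mine.extend(entries[len(mine):])  # just the suffix we lack

# Two boats meet in a harbor and exchange logs; no data center involved.
alice, bob, carol = Peer("alice"), Peer("bob"), Peer("carol")
alice.post("spotted whales off the point")
bob.sync(alice)    # bob now carries alice's log
carol.sync(bob)    # carol gets alice's post secondhand, via bob
assert carol.known["alice"] == ["spotted whales off the point"]
```

Nothing here needs a data center: the network is just whoever you’ve recently synced with, which is exactly the locality being pointed at.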

So this isn’t, like, a sufficient answer to your question. But it’s one example of an intervention that, you know, okay, it’s still experimental. It’s hardly widespread. But I can kind of imagine folks adopting it; you start to see it happen. And there are places where there are examples at scale that are good and bad. I mean, there’s this lovely/awful social media service called Nextdoor? Which is, you know, mostly people complaining—mostly people demonstrating that they are in fact racists. Or, like, complaining about dog poo. All the stuff that, you know— But as someone who’s—I’m, like, very involved in local land-use politics in my community. And that’s a hard sell to anyone. But actually, through an example like that, which is globalized (you sign up for this service, but then it gets localized down to your kind of neighborhood or the nearby area), you start to see more kind of productive, positive…or at least functional outcomes take place. Even though the story people like to tell is, you know, how racist everyone on Nextdoor is, or how they just comp— There’s a parody account on Twitter that’s hilarious, collecting all the ridiculous things people say.

But now you can in fact borrow a planer from someone down the block, or kind of talk about whatever local school issue is going on. And that sort of small-scale intervention…it seems like people have almost given up on that. Like, well, why even bother. But the “all politics are local” aphorism is aphoristic for a reason.

So I think, you know, in summary: if we had a bunch of these experiments that were not aspiring toward singular answers for everyone, that were not all billion-plus-dollar companies taking over some entire sector, that would be a good start.

Schnapp: What about, just since you were focusing on a kind of new model of localism, what about the pace issue?

Bogost: Yeah.

Schnapp: Are there…you know, ways of slowing down, or creating a framework that allows for a different way of modeling, of building platforms of…yeah.

Bogost: I mean, I think locality is one way into slowness, actually. Just the quantity of stuff you have to deal with. Like, think about all the things that happen in the world every day. And now think about all the things that happen within your sort of extended global community every day. And then zoom all the way down to your block, or your floor, or whatever. There are just far fewer things that happen at the local scale. And one of the reasons that many ordinary people appreciate and enjoy a platform like Facebook is that unlike us, unlike people who are in a room like this, they are mostly using it as a local conversation tool with a small group of people. And then they’re extending that to a kind of global phenomenon, but that’s not the global phenomenon everyone encounters.

So, you know, the idea that we will slow down is not gonna… It’s not, oh, let’s just kinda pull the plug on this. And you can imagine these sort of experimental…you know, like…Oulipian sorts of constraints applied to these services that we use. But that’s just an indulgence of the elites to even ponder, right. So we have to get at them sideways, through other means.

But at the same time, you know, regulation is one of the things that slows companies down. There was a great piece recently about how Uber is in essence a regulatory arbitrage company. And so, you know, if your core business is regulatory arbitrage, then the more confusion you can throw at the apparatus while you’re trying to work out the rest, the better things work out. Of course, enforcing regulation at the local and national level would be another way of going about it.

But I think those answers, again, are all just super weird, very tactical, very boring…like, they’re not the kinds of things that ring of this kind of “I found the answer” that we’re used to hearing from tech and that we give an audience to.

Schnapp: Alright, well thank you very much, everybody. Thank you, Ian, especially.
