danah boyd: The energy here is absolutely spectacular. It is a total delight to be here with you today. You know, it wasn't just that I've been wanting to come here for many years. We joked that they've been asking for five years and every time I'd be like, "I'm pregnant," or, "I'm having a baby." And it was a really awkward thing, so I promised that as soon as I stopped having babies I would come and join you. So there have been three babies, but now I'm here, and I'm really delighted to be here.

So, the talk that I've prepared for you today is sort of a strange collage of issues, all meant to challenge this question of algorithms. But it's done in these different pieces, so work with me and tell me how this works.

Part One: Agenda Setting

By 2008, paranoia about online sexual predators had reached an all-time high. It was this constant refrain: people were saying that the Internet was dangerous for kids. You heard it in the US, you heard it in Europe. And the concern was originally about MySpace and then Facebook; these were seen as sites of serious danger. The US Congress had started putting together laws like the Stopping Online Predators Act, and had put together a task force on Internet safety.

And as a researcher it was a really frustrating moment. Because I actually knew all of the data about risks to young people. I could tell you in gory detail what was actually going on, what was actually making young people unsafe. But that didn't matter. Because it wasn't about facts or evidence. That should sound familiar, right? It was about whether or not people could get upset about the Internet over something, and at that moment sexual crimes on the Internet were the issue.

And so it was an interesting moment for me when I was asked to put together a collection of all of the data on online sexual predation for the regulators in America. And one of the leading regulators came to me and said, "Go find different data. I don't like what you found."

Okay. Teenagers, on the other hand, were split on the topic. Some were absolutely convinced that the Internet was unsafe. They had heard the rumors. They had seen the TV show. They'd seen To Catch a Predator, and they were convinced that somewhere kids were being abducted, so they needed to be safe online.

Of course, most young people sort of looked at this and went, "Yet another way in which adults want to ruin all the fun," right. And so they sat there and said, "Okay. Whatever. We're going to ignore the parents and try very hard not to be seen using the Internet, when in fact that's actually what we're doing the whole time."

But at the same time, there was a beautiful site called 4chan. If you don't know what 4chan is, I recommend the Wikipedia entry only. At the time it was seen as the underbelly of the Internet. Since then we've produced more and more underbellies, so we can sort of think about those layers. But at the time it was a site that was primarily about anime and pornography, the two major interests of 15-year-old boys. And they were having fun producing memes. So, they saw this conversation around online sexual predation and decided it was time to have a little bit of fun.

Along comes Oprah Winfrey. She decides to talk about how the Internet is a really dangerous, dangerous place. But she pulls a lot of her stories from online fora. And indeed, some of her producers were looking at this, and some of the 4chan folks were having fun. So what happened is that they managed to get Oprah to say, live on national TV, a wonderful statement:

Let me read something which was posted on our message boards from someone who claims to be a member of a known pedophile network. It said this: he doesn't forgive. He doesn't forget. His group has over 9,000 penises, and they're raping children. So I want you to know they're organized, and they have a systematic way of hurting children. And they use the Internet to do it.

Now, if you know anything about 4chan, this is hysterical, right. Not only are you making beautiful references to all sorts of memes ("over 9,000," the references to Anonymous, etc.), you've managed to get Oprah Winfrey to say something ridiculous on live national television. And this of course is the ability to hack the attention economy.

Part Two: Algorithmic Influence

So. What happens as we think about trolling? Let's shift focus for a second, because we're going to go back and forth. A decade later, most of the systems that we're talking about are shaped by algorithms. Search engines sift through content, basically using machine learning to determine what might be most relevant to any given query. Recommendation systems, ranking algorithms; they underpin huge chunks of our online experience.

But meanwhile, outside of the core Internet practices, we see the use of data-driven technologies affecting criminal justice, affecting credit scoring, affecting housing, the ability to get employment. And we're starting to see startup after startup think algorithms are the solution to everything. AI has become hot. Every company out there is positioning itself as the AI solution to…something. No one can quite tell what they're actually arguing, but something will be solved through AI.

Now, one of the things I was fascinated by is: what do we even mean by AI in that conversation? And lots of people have different theories, but what I realized quickly is that AI had just become the new "big data." It was the mythology that if we just do something more with data, we can solve all of these otherwise intractable problems.

But a senior executive at a tech industry company actually explained it to me even more precisely. He was like, "Of course all of these technology companies are investing in AI. Because what's the alternative? Natural stupidity? We don't want to invest in natural stupidity. Artificial intelligence sounds great."

Well, if that's what we're using as our basis, we've got a problem, because then we have this question about what accountability looks like. How do we think about challenging these systems? You know, of course here in Germany you are at the forefront of trying to challenge these systems, questioning how these technologies can be used, building up different forms of resistance. Really asking hard and important questions about where this is all going. And I love artists and journalists, critics and scholars who push at edges, and so thank you, all of you doing that hard work. Because that's really important.

But alongside what you're doing there are also people with darker agendas, who are finding other ways of making these systems accountable. They're looking at ways of manipulating these systems at scale. It's not just about manipulating Oprah into saying things on national TV; it's about the use of dirty political campaigns, the idea of black-ops advertising, the ability to use any tool available to you to manipulate for a particular agenda regardless of the social costs. And this is where we're starting to see a whole new ecosystem unfold, one that has become more and more sophisticated over the last decade.

And it raises huge questions about what it means to hold accountable not just nation-states or corporations, but large networks of people who are actually working across different technologies. And so a lot of what I'm going to do today is try to unpack that and build that through.

Part Three: Manipulating the Media

But let's start with some examples. In 2009 there was a pastor in Florida, in the United States. He wanted to spread the message that Islam is of the devil. And so, as a pastor of a small church of around fifty people, he put up a sign in front of his church in order to invite people to share his views.

Now, generally speaking, there are not that many people right around Gainesville, Florida, in the scheme of things. And so it reached different folks, who grumbled. But then he got a little smarter. He said, "I can actually spread this message louder through media." And so he threatened to burn the Koran. And in threatening to burn the Koran, all sorts of anti-prejudice organizations stepped up and released press releases telling him not to burn the Koran.

So the news media starts covering the press releases, originally in Florida, but it starts to scale and scale. Once it hits Yahoo! News, once it hits Google, it becomes something that people want to talk about. And so he ends up on CNN talking about all of this, with people putting pressure on him to not do this.

And so he backs down, because he's gotten a lot of attention, and this is really, fundamentally, what he wanted. But then the attention went away. People stopped paying attention to the issue. And so he revised it. And in 2011 he was part of a group of people who burned the Koran.

So what happened was that the news media said, "No. We will not cover this, because covering this is amplifying a message that's not important. We will not go and witness this act."

But a little-known blogger decided that they wanted to witness this act and write about it. And once that blogger wrote about it, it rippled its way up the chain. Back to CNN he went. It ends up on the front page of The New York Times. The New York Times, of course, is translated into many different languages around the world. And so what happens is that riots start breaking out in different parts of the world. And notably, one of those riots, in Afghanistan, results in at least twelve people dead, including seven United Nations workers.

Now why is this important? Journalists like to imagine that they are the neutral reporters of incidents, that they have a moral responsibility to report on anything that is newsworthy. But I would argue that failing to recognize that when they do so they can create harm is where we see the irresponsibility of this dynamic. Journalism has become paired with, and part of, the broader information and media landscape. We need to think about the moral responsibility of being a voice, of being an amplifier.

My colleague Joan Donovan and I have been putting a lot of time into thinking about what strategic silence looks like. And most of the time when we say that, people are like, "You don't want journalists to be silent, do you?" The head of The New York Times actually said to me that this is a terrible idea: "The US government asks us to be silent all the time, and we choose that we need to report."

And I said, "You're emphasizing the wrong word there." The phrase is "strategic silence." The idea is to be strategic. And if you can't be strategic, be silent. You need to be strategic not just about what it means to speak, but about how the processes of amplification can ripple and cause harm.

Now, this isn't something new. In fact, one of the reasons why Joan and I have been collaborating on this is that she has gone back through the history of strategic silence with regard to the Ku Klux Klan in the United States. I have looked back into the history of suicide. We know that when we report on suicide-related issues we amplify the costs and consequences. So journalists as a community actually decided to be strategically silent. The World Health Organization put out a report encouraging them to do so.

That has all become undone because of the Internet. To give you a sense of the amplitude of harm: when Robin Williams died by suicide, we saw a 10% increase in suicide over the next two months, with a 32% replication of his mechanism of suicide. These are people's lives that are at stake.

The chain of information flow is part of what gets exploited. It's no longer just about going to the gatekeepers as a journalist. It's about finding ways of moving things across the Internet, in through journalists and back around; that becomes part of the gift. And I think back to the work of Ryan Holiday, who was basically a person who had manipulated these systems for his advertising purposes. He figured out that the key to getting large amounts of attention was not to purchase advertisements; that wasn't possible anyway if you were little-known and not recognized. It was to create spectacle that little-known bloggers would feel the need to cover, because they're paid based on the number of articles they can write. And then to move up the chain. To get the bloggers to be amplified back into Twitter, back into Google News, up into mainstream journalism. If you can create spectacle and move it across the chain, you can control the information infrastructure.

And that of course is where the trolls come in. Trolls have figured out exactly these strategies. They've figured out how to get messages to move systematically across this chain, whether they are paid at large scale by foreign states, or whether they're a group of white nationalists looking to move a particular ideology.

And in moving across it, what they do is pretty systematic. They actually create all sorts of sockpuppet accounts on Twitter: fake accounts. And all they have to do is write to journalists and ask questions. They ask a journalist a question like, "What's going on with this thing?" And journalists, under pressure to find stories to report, go looking around. They immediately search something in Google. And that becomes the tool of exploitation. Because they come across a Wikipedia entry that has been conveniently modified to contain phrases that might be of interest to them. They end up getting to web sites or blogs that have been set up for them, and that's how narratives are moved. They're moved by mainstream media, because the media have become a tool of a variety of trolls.

And that's why we have to ask questions about accountability. Who is now to blame here? Is it Twitter, for letting you create pseudonymous accounts? Is it the journalist who's looking for a scoop under pressure? What about the news organization whose standards have declined because they're trying to appease their private equity masters? Should we blame the Internet for making it so easy to upload videos, edit Wikipedia, or create your own web site? What about the reporters who feel as though they have to cover something because it's being covered by somebody else? What you see is an accountability ecosystem, not one actor being in place.

Now, let's look at what that trolling environment looks like, and think through the responsibility of it. On November 5th, 2017, a 26-year-old walked into a church in Sutherland Springs, Texas, and began shooting. Most people had never heard of the town; it's a very small town. And now all of a sudden they got all sorts of news alerts, as we get in the United States, and as many of you get overseas, where you're looking at our country going, "What are you doing?" But we get these alerts basically saying, "Alert: there's an active shooting somewhere." And every major news outlet basically alerts us. You get this stream on your phone: active shooting; multiple dead; Sutherland Springs, Texas.

So I looked at this and was like, oh, I know what's about to happen. And I jumped online to watch it unfold. And it was a beautiful moment of seeing exactly what I expected. These were far-right groups who had coordinated online; they love live shootings in the United States, because they like to associate the shooter with some particular agenda. And at the time the goal was to associate live shooters, particularly white shooters, with antifa.

"Antifa" refers to anti-fascism, and there are people who are a part of this network, but it is actually not that large. Far-right communities have spent a lot of time propping antifa up to look bigger than it actually is, and to look more violent than it actually is, with the idea that if they can create a false equivalency, the news media will feel the need to report on both simultaneously. So, far-right communities produced hundreds of antifa accounts on Twitter, where they claim responsibility for different actions. The accounts look real. News organizations constantly cite fake antifa accounts as a result.

But it's not just about that, because what they really wanted to own was Google search. The ability to own Google search during a live shooting is extremely important, because most people who are trying to figure out what's going on will go and type something related to the incident into a search bar.

Now, the key to understanding Google is that Google doesn't know how to respond to a breaking news story. Prior to this moment, if you had looked up Sutherland Springs, you would have seen detailed demographic histories of the city, Zillow pages that told you about properties; there was nothing of substance about Sutherland Springs.

But what they did is they first targeted Twitter and Reddit. Why? Because Google pulls in Twitter and Reddit during breaking news events, to try to understand and contextualize as fast as possible. So by targeting Twitter and Reddit they can actually get things going. They move across a ton of different fake accounts. They pummel out different questions. They throw questions at journalists: "What's going on with this? Is this a story?" And they're just generally talking about it. And they start asking questions. Is it antifa? Is it antifa? Is it antifa?

And that question's really important, because eventually people start covering it in different ways. And what was priceless is that the first news organization that they got was Newsweek. Newsweek, thinking that they were being responsible, wrote an article titled "'Antifa' Responsible for Sutherland Springs Murders, According to Far-Right Media" that detailed how this was completely inaccurate.

But the joke was on Newsweek. Because what these groups cared about was the fact that Google News sits at the top of a Google search. So if there's breaking news, getting top-tier news media coverage is important. And the thing about top-tier news coverage? Google can't give you the whole headline, because there's not space for it. There's only a character limit. And that character limit turned out to cut the headline to its first six words: "Antifa Responsible for Sutherland Springs Murders."
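That truncation mechanic can be sketched in a few lines of Python. The 52-character limit and the exact trimming rules here are illustrative assumptions, not Google's actual (unpublished) display logic:

```python
def truncated_headline(headline: str, limit: int = 52) -> str:
    """Cut a headline to a display character limit, the way an
    aggregator with fixed space might: trim any half-finished word,
    then drop trailing punctuation. (Illustrative rules only.)"""
    if len(headline) <= limit:
        return headline
    cut = headline[:limit]
    if not headline[limit].isspace():   # avoid ending mid-word
        cut = cut.rsplit(" ", 1)[0]
    return cut.rstrip(" ,")

full = ("'Antifa' Responsible for Sutherland Springs Murders, "
        "According to Far-Right Media")
print(truncated_headline(full))
# → 'Antifa' Responsible for Sutherland Springs Murders
```

The qualifier "According to Far-Right Media" is exactly what falls past the limit, which is the effect the trolls were counting on.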

So the result, of course, is that everybody is looking at that as though that is the story. We see this time and time again. We saw it, for example, in the shootings at Parkland: the idea of moving the claim that a survivor is a "crisis actor" through the system.

Exploiting the information ecosystem like this requires a tremendous amount of digital literacy. You need to know where the weaknesses are, and have the skills, tools, and time to experiment. So who is invested in all that? There's a variety of people who've been building those skills for the better part of a decade. It's not just researchers moonlighting at for-profit strategic communications companies. It's also Western teenagers who have been curtailed by their parents and find this to be something fun, regardless of the ideology. It's frustrated twenty-somethings who've labeled themselves as NEETs: Not currently in Education, Employment, or Training. Or "betas," part of a male community that doesn't believe they have the power of alpha males; you may have heard the notion of an "incel" recently.

It's about finding different people who are unemployed or disenfranchised, who feel as though they can game a system just like they've gamed systems in video games. The idea that they can come together and be something bigger than they were. It's the gray side of what we once called call centers, because in the Philippines and India, folks who've been trained to address content moderation have figured out all the boundaries of the major technical systems, and as a result have found ways to make a lot more money exploiting those systems for different actors. It's activists who're trying to change the world, who are also playing in that space. But of course, not all of those activists are progressive.

So creating memes to hack the attention economy really began as a youth culture experiment. But it has become the formal basis of what we think of as disinformation today.

Part Four: Epistemological Warfare

So let's move back into the media to look at how this has begun. In 2010, Russia Today began disseminating highly controversial advertisements to ground its motto, "Question More." It's a beautiful motto. It fits into every media literacy narrative we've ever had. And the resultant posters ended up triggering all sorts of anxiety. For example, in London, posters from this campaign were put up and immediately taken down as propaganda. And then there was all sorts of coverage about whether or not they were propaganda, because they were just asking you to question things.

So let's see what that actually looks like. "Is climate change more science fiction than science fact?" And in the really small print: "Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn't always clear-cut. And it's only possible to make a balanced judgment if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn't normally see. Because we believe that the more you question, the more you know."

Now, if you didn't already believe in climate change, that architecture of a question would be really powerful to you. In fact, for those of you thinking within critical perspectives, just start to substitute the actual topic: just how reliable is the evidence that suggests that there's a correlation between exercise and weight loss? That sounds like a research agenda, right?

This is the beauty of a campaign like this. It's seeding doubt into the very fabric of how we produce knowledge. It's destabilizing knowledge in a systematic way, by creating false equivalencies that the media will pick up on. This is the project if you're looking to actually engage in epistemological war.

[Various tabloid covers and stories regarding aliens and UFOs; presentation slide]

"Fake news" isn't the right frame. Sure, there are financially-motivated actors producing patently inaccurate content to make a buck. But have you seen tabloid media? There's a long history of this. I mean, I love the UFO history. It's really wonderful. But what people refer to when they talk about fake news is actually a feeling of distrust towards news that is produced outside of their worldview. News that's produced by people they don't trust. News that seems stilted from the vantage point in which they operate. And it's why we see political content like RT be perceived as fake news from this context. And why a huge chunk of Americans believe that CNN and The New York Times are actually fake news. Because here's where epistemology kicks in.

Epistemology concerns how we construct knowledge. People construct knowledge through different ways of making sense of the world. Some emphasize the scientific method. Others focus on experience. Still others ground what they think through religious doctrine. And that's all fine and well when those epistemologies align to result in the same narrative. But if you are trying to engage in a culture war, the best thing you can do is try to split those epistemologies. Try to make it seem as though they have nothing in common and are at odds with one another. And so I love this quote by Cory Doctorow, which I'll read in case you can't from afar: "We're not living through a crisis about what is true, we're living through a crisis about how we know whether something is true."

We're not disagreeing about facts, we're disagreeing about epistemology. The "establishment" version of epistemology is, "We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it's all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking)."

The "alternative facts" epistemological method goes like this: "The 'independent' experts who were supposed to be verifying the 'evidence-based' truth were actually in bed with the people they were supposed to be fact-checking. In the end, it's all a matter of faith, then: you either have faith that 'their' experts are being truthful, or you have faith that we are. Ask your gut: what version feels more truthful?"
Cory Doctorow, "Three kinds of propaganda, and what to do about them," Boing Boing, 2/25/2017 [presentation slide; bolding boyd's]

That narrative is so powerful in splitting people. And it's so confusing to people when they don't see a worldview outside their own, when they're not engaging with epistemologies that are different from their own. Because what's beautiful and troubling about this dynamic is that it's a way of actually asking who gets to control knowledge. And it's something that we struggle with, even those of us who are producing scientific evidence.

So RT's campaign was brilliant not because it convinced people to actually change their worldview, but because it slowly introduced a wedge into a system.

Part Five: Bias Everywhere

Now, part of what this introduces us to is a system that has bias baked all the way through it. And here's where we need to look at this question from another angle: what are the fundamental biases that underpin a huge amount of our technical systems? How do we then see those biases amplified through these disinformation campaigns? And where do they come to converge in undermining all of our data infrastructure?

So let me start with an example. Latanya Sweeney is a Harvard professor. And she was trying to figure out where she could get a paper she had written before. For any academic in the room, you know the drill: you wrote a paper a while ago, so you search your name and the paper title in a search bar, hoping you'll find it. And that's what she did.

But she was surprised, because up came a whole set of ads. "Wanna see Latanya Sweeney's arrest record?" And she was sitting there going, "I've never been arrested. What's this about?"

And so, she was with a journalist at the time, and they started asking questions, trying to figure out what was going on. And Latanya had a hunch. She figured that what was going on had more to do with the fact that she was searching for a name than with her name in particular. And she guessed, accurately, that if she threw a variety of different names into the search engine she would get different ads from this company.

It seems as though the company had been producing six different ads with six different approaches. So she took the most popular baby names in the United States that are associated with African American culture, and the most popular baby names that are classically and traditionally associated with Caucasian culture, and she threw them at Google to see what ads would pop up.

I'm sure it doesn't take a PhD to figure out very quickly that the white baby names were not likely to get criminal justice-related products; they tended to get ads for "background checks." But the ads associated with black names tended to be far more likely to be for criminal justice-related products.

So what's going on here? Google doesn't actually let you pivot ads off of race, and Latanya knew that; she was the Federal Trade Commission's former chief technologist. So she knew something else was happening. The way that AdWords works is that you can pivot off of something like "name," and then which names perform better depends on who clicks on things.

So what happened is that people, average people, you, searching for different names, were more likely to click on criminal justice-related products when searching for black names than when searching for white names. And so the result is that Google learned the racist dynamics of American society and served it back out to every single person, right. That amplification power takes a localized cultural bias and amplifies it towards everyone.
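The feedback loop here can be illustrated with a toy simulation. Everything in it (the name groups, the ad templates, the click probabilities) is invented for illustration; this is not AdWords' actual mechanism, just a greedy click-rate learner absorbing a simulated user bias:

```python
import random

random.seed(0)  # deterministic toy run

# A toy ad server that shows whichever ad template has the higher
# observed click-through rate for a group of names.
templates = ["background check", "arrest record"]
groups = ["black-coded name", "white-coded name"]

# Simulated searcher bias (invented numbers): users click "arrest
# record" ads much more often when the query is a black-coded name.
click_prob = {
    ("black-coded name", "arrest record"): 0.30,
    ("black-coded name", "background check"): 0.10,
    ("white-coded name", "arrest record"): 0.05,
    ("white-coded name", "background check"): 0.10,
}

shown = {k: 1 for k in click_prob}    # add-one smoothing
clicked = {k: 1 for k in click_prob}

def best_template(group):
    """Template with the highest observed click-through rate."""
    return max(templates,
               key=lambda t: clicked[(group, t)] / shown[(group, t)])

for _ in range(20000):
    group = random.choice(groups)
    # Mostly exploit the current best guess; explore 10% of the time.
    t = random.choice(templates) if random.random() < 0.1 else best_template(group)
    shown[(group, t)] += 1
    if random.random() < click_prob[(group, t)]:
        clicked[(group, t)] += 1

# The server has now "learned" the users' bias and serves it back.
print(best_template("black-coded name"))
print(best_template("white-coded name"))
```

No one told the system about race; it only ever saw clicks, which is exactly why the bias comes back out.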

And this of course is the cornerstone of a huge set of problems when we look at search engines. Now, take a search like "baby" or "CEO." I'm sure you've figured out by now that when you search for "baby" you get this strange collage of white babies in perfectly posed, beautiful sensibilities. "CEO" gives you a bunch of men wearing suits, usually white.

But what's going on here? Are these search engines trying to give you these results? No. Here's how this happens. Why do people use image search? They use image search to produce PowerPoint slides like this one, right. And so what happens is that they search a generic term like "baby" not because they're looking to see what categories of babies exist, but because they want a picture of a baby for their PowerPoint deck.

And so what they're looking for is stock photos. Stock photo companies have figured out that this is a really good way of making money. Put small stock photos into the process, get people to click, and they'll buy the big high-res one for their PowerPoint. And so they've mastered SEO. They've optimized by labeling pictures with search terms that are likely for PowerPoint decks, and throwing that into the search process. And as a result, for a query like "baby" you don't get natural babies. Because most people don't label photos that they've taken of their own children with "baby," right. You get stock photos.

Now, what's problematic here is that stock photos are extremely prejudicial in the way they're constructed. Not only are they perfectly posed in weird ways (who puts headphones like that on a baby, right? Who does that?), not only are they done in this particular constructed manner, but they are highly normative towards different ideas of who is "valuable," "attractive," etc. within society.

And so you have a problem, because the photos coming through the stock photo companies for a term like "baby" are primarily white. And so then what happens for a search company, right? They see all of this stuff. It looks like it's highly ranked, because the SEO has been very much manipulated in different ways. They can't actually see the race of an image like this, so they don't know what's going on. But all of a sudden there's a media blowup telling them that they've done something bad, so they start to look.

The problem is that they can't actually see what's going on here. And so then they rely on their users. They start trying to randomize in other pictures and see what the users will click on. Well, we're back to really racist users. Because the users don't click on pictures that are not photogenically perfect. They don't click on pictures of more diverse babies. They click on pictures of white babies under the age of one. And so how, then, do you deal with a society that's extremely prejudicial at multiple angles feeding the entire data system?

And here’s where we have a fundamental flaw of any kind of machine learning system. Machine learning systems are set up to discriminate. And I don’t mean that in the cultural sense that you’re used to. When we think of discrimination, we think of the ability to assert power over someone through cultural discrimination. What these systems are set up to do is to segment data in different ways, to cluster data. That is the fundamental act of data analysis: creating clusters. But when those clusters are laden with cultural prejudice and bias, that ends up reinforcing the discrimination we were just discussing.
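That statistical sense of “discriminate” can be made concrete with a tiny sketch. The dataset below is fabricated: a 2-means clustering runs on a single feature (income) that, in this made-up data, happens to correlate with a group label the algorithm never sees.

```python
# Hypothetical sketch: clustering as statistical "discrimination".
# The (income, group) pairs are invented; the group label is NOT
# given to the algorithm, only the income feature is.
data = [
    (20, "B"), (22, "B"), (25, "B"), (30, "B"),
    (60, "A"), (62, "A"), (65, "A"), (70, "A"),
]

def two_means(xs, iters=10):
    # Classic Lloyd's algorithm in one dimension, with two centers.
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        lo = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        hi = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return c0, c1

incomes = [x for x, _ in data]
c0, c1 = two_means(incomes)
assign = {x: ("low" if abs(x - c0) <= abs(x - c1) else "high") for x in incomes}

# The clusters recover the hidden group almost perfectly, even though
# the algorithm never saw it -- the bias rides in on the feature.
for income, group in data:
    print(group, assign[income])
```

The point of the sketch: nothing in the algorithm is culturally malicious; the segmentation simply inherits whatever structure, including prejudicial structure, is baked into the data.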

Now, grappling with cultural prejudices and biases is extremely important, but patching them isn’t as easy as one might think. Because figuring out how to deal with people who are spending tons of money and energy gaming systems, coupled with the prejudicial search activity of average people, makes it a really hard place to try to figure out what’s going on.

Part Six: Hateful Red Pills

It’s also where things get gamed. Now let’s talk about some of the gaming problems here. In 2012 in the United States, it was really hard to avoid the names of Trayvon Martin and George Zimmerman. Zimmerman had murdered the teenaged Martin in Florida, and he claimed self-defense under our ridiculous laws called stand-your-ground. I’m not going to defend US gun laws. I know the feelings here. The feelings are shared, but in the United States many people believe that these laws are important.

And so what happened was that there was a whole set of dynamics going on around gun laws, around brutal murders of young black men in our country. And so this hit a level of fervor in the United States. Every media outlet was covering it.

Well, not all young people pay attention to media outlets. And in South Carolina, a white teenager by the name of Dylann Roof wasn’t paying much attention to the media, but he kept hearing this in the ether somewhere. And finally he decided to type the name “Trayvon Martin” into Google to figure out what was going on. And he decided to read the Wikipedia entry. And the Wikipedia entry was written, as Wikipedia entries are, in a very neutral voice detailing the dynamics. Coming from the position he was coming from, he concluded by the end of the Wikipedia article that George Zimmerman was clearly in the right and that Trayvon Martin was at fault.

But more importantly, there was a phrase inside that Wikipedia entry for him to stumble upon. That phrase was “black on white crime.” He took that phrase and he threw it into Google. We know this because he detailed it in his manifesto. He took that phrase into Google, and by running a search on “black on white crime” he came to the site of the Council of Conservative Citizens, which is a white supremacist, white nationalist site.

The result is that he spent the next two years going into forums on white nationalism, exploring these different issues, coming to his own formative ideas of white pride in ways that are deeply disturbing and deeply, deeply racist. And on June 17th, 2015 he sat down with a group of African American churchgoers during a Bible study in South Carolina. He sat there for an hour before he opened fire on them, killing nine and injuring one. And he made it very clear, both in his manifesto and at his subsequent trial, that he wanted to start a race war.

So what’s going on there? There are two things you need to unpack to understand the manipulation of these systems. The first is the notion of the red pill. A red pill is a phrase like “black on white crime” that is meant to entice you into learning more. That phrase doesn’t actually mean something until it means something, right. Which is to say that no process of Wikipedia’s would have excluded the term “black on white crime” from an entry before this case had unfolded.

Now, of course the red pill comes from The Matrix, and it’s the moment where Morpheus says to Neo, “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want. You take the red pill, you stay in Wonderland and I show you how deep the rabbit hole goes.” And from a white nationalist perspective, the idea is to set red pills all over the place that invite people to come and explore ideas of white nationalism. It’s a large invitation. Most people don’t trip over them. Some do, and the moment they trip over them they’ve got an invitation.

And that’s a really interesting moment for radicalizing people. Because the group is trying to stage radicalization through serendipitous encounters, right. And huge amounts of the Internet are flush with all sorts of these terms meant to invite you into these frames. It’s also true for talk radio and other environments.

And then you go back to search, right. And the problem with search is that there’s a whole set of places where search queries are crap. Michael Golebiewski likes to talk about these as data voids. Data voids are the spaces where normal people don’t construct websites to actually give counternarratives to white nationalist ideologies. People do not begin a search query with “Did the Holocaust happen?” They don’t create websites about “white pride” when they’re trying to push back against radicalization. Until, of course, this becomes visible.

Once you actually have these terms, you have an ability to roll them out and to constantly use search engine optimization to master them. And one of the problems is we don’t fill these data voids until we find out about them. And then we fill them with very serious content. The Southern Poverty Law Center, factcheck.org, Snopes, all tell you that this is white supremacist, white nationalist propaganda. But the problem is that if you’re already down these paths, if you’ve already entered the funnel, you don’t care about those sites; you want to learn more. So there’s a question of how you stop those funnels in the first place.

Now, one of the things we start to see in this process of data voids is that even when one starts to get filled (“black on white crime” started to get filled after the Roof case, as people fleshed it out), all you see is a shift. And so white nationalist talk radio now talks about “white victims of black crime,” which is a new data void out there.

Alright. We’ve gone back into darkness; let’s come back out and try to figure out where this fits into broader contexts. The more power a technical system has, the more intent people are on abusing it. And the way you actually remedy this is not just by patchworking across that, but by trying to understand the dynamics of power and the alignments of context.

And let’s do this by moving away from the search and social media space, because it’s important to see this in other environments, and to look at how incentives stack up. So consider a domain like precision medicine. My guess is that there’s no one in this room who’s like, “gung ho, we should not solve cancer,” right. Generally speaking, we believe in addressing medical challenges. We may not want to pay for them, we may not want to deal with the consequences of trying to do that work, we may not want to invest in it in different ways. But we generally believe that medicine is a good thing to pursue, a scientific process that’s important. And what we then care about is how we do it ethically, how we do it responsibly. We look back to the history of abuses in medicine, things like Tuskegee or the Nazi experiments, and we say we’ll never do that again. But if we can bound the process of collecting and managing data in an ethical manner, we generally believe in it.

Well, where does that data come from? I argue that there are roughly three different ways most of the data ends up in the system. The first is what you would think of as data by choice. Data by choice is the idea that you’ve given over that data by consenting, with full knowledge of everything that data will be used for. This in many ways is how a lot of the tech industry hopes it collects data. It’s how a lot of scientists hope they collect data. And in an ideal world it’s how we would get data in the first place. The difficulty is that even those consensual moments start to get corrupted in different ways.

Now, most data is not collected in consensual form. But most data is not actually collected at the extreme opposite either. Still, let’s look to the extremes in order to understand this.

At the opposite end of the spectrum is data by coercion. Data by coercion is best exemplified by a really morbid American policy. In the United States, there was a ruling by the Supreme Court that said that collecting DNA at the point of arrest was equivalent to collecting a photograph or a fingerprint. Our justices clearly have not taken basic biology. And so they missed the idea that there’s maybe a lot more in DNA than a recognizable identifier.

And so the police department in Orange County, California decided to take this to a logical extreme. They created a program called “Spit and Acquit”: at the moment somebody might be arrested, they ask them to provide genetic material, and if they provide genetic material they are not arrested. They are acquitted.

And so they built out huge databases of genetic material. And in building out those huge databases of genetic material, we get situations like what we saw last week (that one was actually a voluntary process), where the data is then used for law enforcement. One of the things that’s happened, for example, with the Spit and Acquit program is that law enforcement officers show up asking to meet your brother, only you didn’t know you had a brother. That’s the coercive end, because it’s not just coercive for you; it’s coercive for your entire social network.

Now, the vast majority of data that we deal with in this ecosystem is data by circumstance. This is you posting things on Instagram because you’re hanging out with your friends and you hope everything will be okay. And the reality is that most of that mess of data is in that circumstantial environment. So when we get outside of precision medicine, we’re not necessarily dealing with data that’s been collected with consent.

Now, we look at precision medicine as an opportunity, a place where we can actually pursue things, because generally we believe in the pursuit. There are other realms where we don’t necessarily agree on what the pursuit is. Consider the realm of policing. Not everybody has the same view on police. To give you two extreme versions of this: some people believe that policing is entirely about weeding out bad actors and punishing them, a punitive model of the system. Other people believe that it’s about creating community trust and support in different ways. Obviously what you want is somewhere in the middle, but exactly where is really complicated.

So the difficulty is that depending on what your goal is, you’ll do radically different things with a data analytics process like predictive policing. If your goal is to arrest people, if that is your target, you will send police in a very different set of directions. You will also discourage them from spending time in the community, because that way they will be less biased. And that means that the goal of your system actually affects your data analytics process.
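The way an arrest-maximizing goal warps the analytics can be sketched as a feedback loop. This is a toy model with invented numbers: two districts with the identical true crime rate, where crime is only recorded in the district the patrol is sent to.

```python
import random

random.seed(1)

# Hypothetical sketch of a runaway feedback loop in predictive policing.
# Both districts have the SAME true crime rate; district 0 merely starts
# with slightly more recorded arrests (historical bias).
TRUE_CRIME_RATE = 0.3      # identical in both districts, by construction
arrests = [12, 10]         # biased historical record (made-up numbers)

for day in range(1000):
    # "Goal: arrests" -- send today's patrol wherever past arrests were.
    patrol = 0 if arrests[0] >= arrests[1] else 1
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrol] += 1   # crime is only recorded where police are

print(arrests)  # the initial 2-arrest gap has ballooned
```

District 1’s count never moves again, not because it has less crime, but because the system stopped looking there; the objective shaped the data, which then justified the objective.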

Now, criminal justice is a fraught area in general. And let me twist it in a slightly different way, because there’s a whole ecosystem called “risk assessment scores.” Risk assessment scores are the idea of taking information and determining whether or not somebody is a risk. The difficulty with risk assessment scores is that we end up with biased data, which should not surprise us.

But we also end up with this challenge about how people take that information on board. Judges, in theory, are supposed to receive those scores and be able to override the information, to make a human judgment on top of whatever that score is. And that’s part of what gets the scoring process off the hook, right. Human judgment steps in.

But there’s something funny that happens. Judges are elected, or they’re employed in one way or another. And as a result, when those risk assessment scores are put into motion and judges look at them, they’re incentivized to go along with them, to basically work within whatever those scores are. Because if they override one, they could take big political heat for it.

And that’s where humans in the loop aren’t all that we think they might be. Part of what fascinates me about humans in the loop is that we often use it as a solution, as though human judgment is the best thing ever and it would actually be helpful to get humans in there.

Well, my colleague Madeleine Elish went through the history of the Federal Aviation Administration in the United States. This is the administration that oversees things like autopilot. And in the 1970s there were a series of hearings about whether or not we should have autopilot in place. Autopilot was unquestionably safer for fliers.

To this day, any of you who fly on a regular basis, you’ve heard that person come on the loudspeaker telling you that they’re your pilot and they’re there to make it a safe and wonderful flight. And that person most likely hasn’t flown a plane in a long time, alright. Because that person is there to babysit an algorithmic system. And their only responsibility is to step in when that algorithmic system fails, to successfully absorb all of the pain and respond fast.

Most people can’t do that. They’ve been deskilled on the job. They haven’t been flying planes, and certainly not in a high-risk situation. They don’t know why the failure occurred. And as a result, most of those planes crash because of human error.

Now, that’s the funny thing. Because what Madeleine argues is that really, the whole point of that pilot isn’t to help out in an emergency. It’s to be an organizational liability sponge, to absorb all the pain and responsibility for the organization, to be the one accountable for the technology, in some sense. She describes this as a “moral crumple zone,” the crumple zone being the part of the car that receives the impact. And we put humans in that position.

If you want humans in the loop, a lot of it is about understanding, again, that moment of alignment and that moment of power. I’m on the board of an organization called Crisis Text Line, where people can write to trained counselors when they’re in a moment of crisis in order to get support. And we actually use a ton of data analytics to try to make certain that our counselors are more sophisticated in everything that we do. We try to figure out ways of training the counselors based on the things that everybody else has learned. And we can do this, first because we’re a nonprofit, and second because we’re trying to actually make things better for those in crisis at any given point. And it’s better because everybody agrees on that mission. That alignment is beautiful. It’s also very rare.

Power is what’s at stake. Whenever you look at these systems, it’s not just about how the technology is structured, but about how we actually understand it holistically within the broader environment.

Take scheduling software. This is the software that determines who works when, at any given point. The reality is that you can actually design scheduling software to optimize the experience for workers. I can promise you that’s not how it’s being deployed in major retail outlets. It’s being deployed in order to break unions by guaranteeing people don’t actually work with one another. In the United States it’s being deployed to make certain that people work no more than thirty-two hours so they don’t get health benefits, etc. It’s not about the algorithm; it’s about the process.
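The thirty-two-hour point can be sketched as one scheduler run with two different objectives. The workers, shift counts, and greedy assignment rule below are all invented for illustration; real workforce-management systems are far more elaborate, but the lever is the same.

```python
# Hypothetical sketch: the same scheduling engine, two objectives.
# Names and numbers are made up; shifts come in 8-hour blocks.
WORKERS = ["ana", "ben", "chris"]
TOTAL_SHIFTS = 15  # fifteen 8-hour shifts to hand out this week

def schedule(cap_hours):
    """Greedy round-robin assignment under a per-worker weekly hour cap."""
    hours = {w: 0 for w in WORKERS}
    for _ in range(TOTAL_SHIFTS):
        # only workers who can take a shift without exceeding the cap
        eligible = [w for w in WORKERS if hours[w] + 8 <= cap_hours]
        if not eligible:
            break  # leftover shifts go unfilled (or to new part-timers)
        hours[min(eligible, key=hours.get)] += 8
    return hours

# Objective A: optimize for workers -- let everyone reach full time.
full = schedule(cap_hours=40)
# Objective B: optimize for cost -- cap below the benefits threshold.
capped = schedule(cap_hours=32)
print(full)    # each worker gets 40 hours
print(capped)  # nobody crosses 32 hours
```

The algorithm is identical in both runs; a single parameter, chosen by management, decides whether anyone qualifies for benefits. That is the sense in which it’s the process, not the algorithm.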

It’s also where we see new forms of manipulation. Because we talk about all the ways in which this data can get biased or things can go wrong. But it can also be exploited. This is right now in the world of experimental research, but it’s fascinating for all of you who are following autonomous vehicles. Nicolas Papernot and his colleagues have found ways of modifying stop signs so that autonomous vehicles believe them to be yield signs. Imagine this as a way of intervening massively.
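The flavor of that research can be sketched with a deliberately tiny model. Nothing below resembles a real vision system: the weights, features, and epsilon are invented, and the attack is just the generic fast-gradient-sign idea applied to a linear scorer.

```python
# Hypothetical sketch of an adversarial perturbation.
# score > 0 -> "stop sign", score <= 0 -> "yield sign".
w = [2.0, -1.0, 0.5, 1.5]   # made-up model weights
x = [0.8, 0.2, 0.5, 0.9]    # made-up pixel-ish features of a stop sign

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm(w, x, eps):
    # Step each feature AGAINST the gradient of the "stop" score;
    # for a linear model the gradient with respect to x is just w.
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

assert score(w, x) > 0          # correctly read as a stop sign
x_adv = fgsm(w, x, eps=0.7)     # a small, sticker-like perturbation
print(score(w, x_adv))          # now on the "yield" side of the boundary
```

The perturbation is small per feature, but because it is aimed precisely along the model’s own gradient, it flips the classification, which is roughly what physical stickers on a sign accomplish against a real classifier.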

Part Eight: The New Bureaucracy

Okay. So we’re coming to the end. I’ve given you this crazy tour through issues of bias and discrimination, the ways in which context matters, the ways in which things get exploited. But how do we then think about accountability here? How do we think about responsibility? What matters most, of course, has to do with power. Who is setting the agenda, under what terms, with what level of oversight? And who is hacking the system to achieve what agendas?

On both sides of the Atlantic there’s growing concern about the power of tech companies in shaping the information landscape. And I think this is pretty amazing to watch, especially here in Germany. There are reasons we’re watching this: there’s a massive reconfiguration of the social contract and of power dynamics in society, and a lot of that is being instantiated through technology. But it’s not simply about the platforms. It’s about a set of new cultural logics, which I like to think about through the frame of bureaucracy.

Because I would argue that today’s algorithmic systems are actually an extension of what we understand bureaucracy to be. Part of the decisionmaking process is about the ways we can shift responsibility, moral responsibility, across a very complex system. And if we look at it as the making of cultural infrastructure, we start to see the way data is configured as a means of controlling a larger system. It also means that the mechanisms of accountability, the mechanisms of regulation, can’t look simply at an individual technology; they need to look at it within a broader ecosystem. We’ve spent the last hundred years obsessing about how to create accountability procedures around bureaucracy, but we haven’t figured out how to do this well with technology.

Let’s also acknowledge that the ability to regulate bureaucracy has not actually worked so well throughout history. And bureaucracy is something that has been systematically abused at different points. In 1963, Hannah Arendt controversially published her analysis of the Israeli trial of Adolf Eichmann. She described in excruciating detail how this mid-rank SS officer played a significant role in facilitating the Holocaust through his logistics processes.

But not because he was actually smart. In fact, he was really dumb. It was the way in which he became part of a military bureaucracy and believed himself to be following orders. And the way in which the distribution of responsibility could be exploited by a variety of people with negative agendas.

She of course famously subtitled her book “the banality of evil,” which is something I think we should meditate on, especially over the next three days as we think about these technologies. How is it that it’s not necessarily the intentions but the structure and configuration that causes the pain?

And so from Kafka’s The Trial, written in 1914, to Arendt’s ideas of Nazi infrastructure, we’ve seen the dynamics by which bureaucracy can range from mundanely awful to horribly abusive. And I would argue that algorithmic systems are introducing that same wide range of challenges for us right now.

It also means that the roots of today’s problems are much deeper than how an algorithm prioritizes content or maximizes toward a preset goal. Just as adversarial actors are learning to construct and twist data voids, the new technologies we’re seeing are actually becoming a logical extension of neoliberal capitalism. The result is that we give people what they want, with little consideration of how individual interests, how financial interests, may end up undermining social structures. Because what’s at stake is not within the bounds of that technology but in the broader social configuration.

So I’m a big believer in Larry Lessig’s argument that systems are regulated by four forces: the market, the law, social norms, and architecture. And this is where this question of regulation comes into being. Because I’m confident you all will basically be pushing in the direction of regulating major tech platforms. But to what end? Are you purely looking to regulate the architecture of these systems, without disturbing the financial infrastructure that makes them possible? Or are you willing to dismantle the surveillance infrastructure that props up targeted advertising, even if it means that your politicians and other capitalist structures are going to lose their ability to meaningfully advertise to people?

Are you willing to dismantle the financialized capitalist infrastructure that pushes companies toward hockey-stick profits quarter over quarter? And what will you do to grapple with the cultural factors that manipulate these systems? What are you willing to do to make certain that prejudicial data doesn’t get amplified through them? Because what’s at stake is not just what we can set up legally around a company, but how we can actually structure the right kinds of norms and cultural structures around it.

Fear and insecurity are on the rise both here and in the US. And technology is not the cause; technology is the amplifier. It’s mirroring and magnifying the good, the bad, and the ugly of every aspect of this. And just as bureaucracy was manipulated for malicious purposes, we’re watching technology be a tool for both good and evil. What we’re fundamentally seeing is a new form of social vulnerability, security vulnerability, sociotechnical vulnerability.

“Power is in tearing human minds to pieces and putting them together again in new shapes of your own choosing.” —George Orwell, 1984

Fundamentally, what comes back over and over again is the question of power. And this is where I want to leave you for the next couple of days, to really think about things. Because developing the ability to really grapple with an information war, an epistemological war, goes far beyond thinking about how to control or make sense of a particular technology. It requires us to think about all of the interfaces between technology and the news, technology and financial instruments, technology and optimizing for social interest, technology and our public good.

And frankly, I’m not actually sure how we meaningfully regulate these systems. Because what I know for sure is that regulating the tech companies alone isn’t going to get us where we want to be. Thank you.
