Zeynep Tufekci: Thank you so much for that kind introduction. I'm thrilled to be here. Part of writing a book is that a lot of the things I wanted to say are in the book. So I want to talk a little bit about some of the things I've been worried about and thinking about over the past year. My book, Twitter and Tear Gas (it got its title from a Twitter poll; that's how bad I am at titles), is about social movements and how they ebb and flow, and how we seem to have so many tools empowering us, and yet authoritarianism seems to be on the rise all around us. So that's the book. I was puzzling over that, and now I'm thinking more about how the technologies we're building are compatible with and aiding the slide into authoritarianism, and also about why the social movements we have aren't able to counter all this, since we are indeed empowered.

So a little bit about me. My name is Zeynep. It is really Tufekci. (Yes, the K and C—the C comes after the K.) And I started out as a programmer. I was a kid really into math and science. I loved physics. I loved math. And I thought, "I am going to be a physicist when I grow up!" Because that's what a lot of kids think when they're young. And I grew up in Turkey in the shadow of the 1980 military coup. So I grew up in a very censored environment where there was one TV station—and we didn't really get to see much on that.

And as I was kind of making my way through thinking about my own future as a little kid, imagining myself as a physicist, I found out (as happens to a lot of kids who are into math and science) about the atom bomb and nuclear weapons and nuclear war. And I thought whoa, this isn't good. I could become a physicist, I could maybe be good at it, but then would I be doing something really morally questionable? Would I have these huge questions to answer?

And because of family circumstances I also needed a job as soon as I could. And I was a practical kid that way, too. So I thought instead of physics, let me pick a profession that I enjoy, that's got something to do with math and science, and that doesn't have ethical issues. So I picked computers. [laughs]

So that’s…I think I just told every­body that my pre­dic­tive abil­i­ty is not that good. But you know, right now my friends in physics, they’re in CERN debat­ing who’s going to get the Nobel Prize and all of that for the Higgs boson research, while my com­put­er sci­en­tist friends are build­ing oh, killer robots, manip­u­lat­ing algo­rithms, all of that. So here we are. I did pick a top­ic with a lot of eth­i­cal impli­ca­tions.

Also, I started working as a programmer pretty early on. One of the first companies I worked for was IBM. And it had this amazing global intranet. Back then there wasn't even Internet in Turkey, right. So all of a sudden I could just go on a forum and email people around the world. And I had this mainframe that had to do something something to localize a [MIDI?] machine. It was this multiplatform, complex thing. And I could find the person who'd written the original program and he'd be like, "Oh, here you go." Someone in Japan would answer me. I thought, "This is amazing. This is gonna change the world. This is great!"

So I felt really empowered and hopeful with the idea, because again, Turkey was a very censored environment. And then the Internet came to Turkey. And then I came to the United States. And I'd been studying this in a mostly very hopeful way— I'm personally pretty… I'm an optimist by personality, too. But I'm getting more and more worried about this transition we're at, one you see a lot in the history of technology, where the early technology starts with, you know, the rebels, the pirates—it's great, it's amazing, it's gonna change the world, we think all these great things.

And it has that potential; very often it has that potential. You know, radio was a two-way thing where people around the world talked to each other early on. And then World War I came and it became a tool of war. So I started thinking more about whether we are at that inflection point. And increasingly I think we are. I don't think it's too late, but I think we have a lot of things we should worry about. So that's what I'm going to talk about in my slides.

So. I want to talk a little bit about how we normalize and socialize ourselves in emergencies. Because I think it's important, because I think the tech world is not panicking enough. And my first experience with how we normalize emergencies comes from a pretty awful one: the 1999 earthquake in Turkey, in my childhood hometown. By coincidence I was in Istanbul during the earthquake. And after hearing about it—feeling it; it was a very strong one—I rushed to the area with some rescue teams.

See, when you have an earthquake, rescue teams from around the world rush to it, because it kind of makes sense, right. No one country can have all the rescue teams you need. Earthquakes are rare events. They're not simultaneous. So every country has a few of those teams, and when something happens, everyone rushes. So, so far so good.

Unfortunately Turkey was really underprepared for this quake. This is sort of the iconic picture of it. And you can see it was very capricious: a building would be standing, and the next one would be down. I rushed back with a team from the US that had just arrived. It was really chaotic, because the country wasn't prepared even though it sits on a fault line. So everyone's in shock and the infrastructure isn't there. For example, teams needed transportation, because when earthquake teams arrive they bring their earthquake-specific equipment, but they expect transportation, fuel, and lights to be provided locally—because why would you carry all that from around the world? It makes sense.

Even transportation wasn't arranged. What happened is a friend of mine just flagged down a city bus—a regular bus. She said, "I've got an earthquake team that doesn't have transportation." And the bus driver was like, "Oh, okay," and just told all the passengers, get out, please, thank you. Took the team in. Drove three hours to the area. And like a week later I saw him sleeping on a little bench, all stubbled, his bus still transporting people. He made a reasoned choice in an emergency. And I hope he never got punished for it.

One of the things we needed was lights. So I spent a lot of time that week acquiring lights and fuel. And by acquiring I mean breaking into houses that were still standing. I teamed up with a policeman because he seemed to know how to break into houses—I don't know that story, how that happened, but whatever. We broke into houses and we stole, especially, torchiere halogen lights. They were great. And we were in a rush, because with an earthquake you have this golden period. Every hour counts: if you can pull people out from under the rubble before they perish, that's amazing, that's great. But every hour they're facing danger, they may die.

So it was really interesting to sort of step back and think, "Well, I'm breaking into houses, with a policeman, and nobody's batting an eye." Why should they, right? And in fact if anything they were just cheering us on and saying, "Oh, just be quick," because there were still aftershocks.

Three days into it, especially among people who hadn't lost immediate family, people were just completely… Not completely—they were still traumatized. Let me say this correctly. People had become normalized into routines. Three days, four days. It was just really eye-opening for me to see. They'd go to the rubble and pull up a chair and sit down and have a cup of tea. We'd do all these things—laundry, dishes, kids playing, chatting. That experience got me to look into: how do people react in emergencies? How do people react to crises? How do people react to moments where there's danger?

[Image: Political cartoon of a man trying to control a horse labeled "panic" on its side, which has thrown another man to the ground.]

And your image of it might be this: people panicking, and there's this crisis, and let's just try to calm people down. If you actually talk to crisis-response people, they tell you again and again: we do not panic correctly. We don't panic in time. We panic very late, and then all at once. And this is true for acute emergencies when there's a crisis moment. It's true for an earthquake. It's also true for things that happen slowly over a few years, five years, ten years. People don't panic in time and organize in time.

Why am I talking about this to a bunch of technology people? Because I think we are not panicking properly at this historic inflection point, which has many scary downsides. But we are not there yet. There's a lot to be done, and this is why I'm giving these talks, which are more like: how do we deal with this emergency situation as it is? We kind of let things get to this point—how do we deal with it?

[Image: A dog sitting in a body of water.]

Right. This is what I don’t want us to do, all the calm.

So, the authoritarian slide—let's talk about this. It happens slow, then fast. People normalize and socially reassure each other as it happens. "He won't win the nomination." "He won't win the election." "He won't do what he says." "Turkey will be fine." "Le Pen won't win." "The EU will stand." You know? We tell each other these things won't happen.

But they happen. And you can read history and see how that happens. And the tech world is turning from rebellious pirates to compliant CEOs, increasingly. It is going to happen. History says it's going to happen. I have a lot of friends in the tech world, as a former programmer, as somebody in this world. I trust a lot of them. I trust their values. And they keep coming to me and saying, "Trust me." And I'm like, you shouldn't trust yourself, because you're not in charge. There are big, powerful forces at work here. Those companies may be run by well-meaning people right now. They will comply. They will be coerced. They will be purchased. They will be made to. World War I came and the Navy was like, "Radio? It's ours. Everybody off." And that was that.

The idea that the workers in this space tend to be more progressive, more liberatory—that this is some sort of guarantee—history tells us is illusory. And yet I hear this again and again from people who tell me, "I work here. I trust the people. We have internal dialogue," etc. I don't want to rest on that.

All the signs are here. We've talked about them—recent elections, polarization, elite failure. And tech is involved in every bit of this. I talk about misinformation, polarization, the loss of news. But it's also the economy. And we'll talk a little bit about this. The technology economy is lopsided and has created a lot of tensions. A lot of you in this room are on the beneficiary side of this, more or less. But outside the tech world, it is a very scary moment. "What will my kids do for a living?" is a very scary question for people.

So the conversation is about anything but the technology itself. The tech world wants to talk about anything but how technology may be compatible with, and aiding, this kind of authoritarian slide.

Now, everything is multi-causal. So you go to tech people and they say it's polarization, it's this, it's that. It's true. Anything you want to talk about is multi-causal. But this is a big part of it. Imagine if, in the early 20th century, we had talked about cars as an ecology. What will they do? What if we had imagined different kinds of cities, thought about the social consequences of suburbanization and maybe not gone that way, thought about climate change—which was talked about even in the 19th century—and talked about other things, like public transportation. What if cars hadn't gone down the path they did? We'd probably be in a much better world, in lots of ways.

So, the warnings we have. These are my things—the tech elite, where right now a lot of people tell me, "I'm here. We're really good. They can't replace us." That's going to change. You're going to have machine learning off the shelf that's going to be easier and easier and more commodified. Toolmakers' ideals do not rule their tools—especially something like computing. And most of history tells us we'll either succumb, or be coerced, or become compliant. It just happens, historically.

And the moonshots tell you a lot, right. The tech world is very focused on otherworldly things: colonizing Mars, living forever, uploading… I mean, these are fun things to chat about, maybe at 2am in a dorm room. But they're not really the moonshots. We have complicated problems here.

So let’s talk about what is this— Why am I talk­ing about tech­nol­o­gy as com­pat­i­ble with author­i­tar­i­an­ism? Let’s talk about what I mean. So the rest of the talk I’m going to go into these things. The case I’m going to make for how we’ve got­ten to this par­tic­u­lar point and what are the fea­tures of tech­nol­o­gy that are com­pat­i­ble, endorse, already aid­ing, our slide into emer­gent author­i­tar­i­an­ism around the world.

One: surveillance is baked into everything we're doing. It's just baked into everything. Ads… I mean, think about it—when was the last time you bought something from an ad? Not very many times. Ads on the Internet are not worth a lot unless you deeply profile someone and/or silently manipulate them. Or nudge them. That means that anything that's ad-supported is necessarily driven towards this.

And even when they’re not dri­ven towards this, they are very very tempt­ed to sell the data they have, because that’s where every­thing is going. We’ve built an econ­o­my in the tech­nol­o­gy world that is sur­veil­lant by nature. And it’s going to become worse with sen­sors and IoT. We’re such a sur­veil­lant world right now that you can turn off your phone you can do every­thing, and there are so many cam­eras around and very good face recog­ni­tion. The peo­ple you’re with will post about you. It is not pos­si­ble at the moment to escape the sur­veil­lant econ­o­my of the Internet and be a par­tic­i­pant in the civic sphere.

I work with refugees in North Carolina. Almost everything happens on Facebook, or WhatsApp, or a few things like that. If I want to be part of that work, I can't avoid the surveillant platforms. I'm not going to get people to communicate with me outside of those.

Now, the business model of these ad-driven companies is increasingly selling your attention. This is really significant, because for most of human history the problem wasn't too much information. The problem was too little information. It's a little like food, right. For most of human history we didn't have enough to eat. So if your great-great-great-great-great-great-grandparents liked eating, knew how to eat, and gained weight really well, it worked out great for you. It was a survival skill. But right now we live in a world where there's too much food and not enough moving around. We're sedentary. So we face an issue: how do we deal with this? How do we deal with the crisis of an environment that's completely different from the one we evolved for, for so long?

We have the same problem now, in that we're still the people who do the purchasing. We're still the people who do the voting. The day is twenty-four hours. And getting our attention has become this crucial bottleneck, this gatekeeper to so many things. So we don't just have surveillance; we have these structures that are getting better and better at capturing our attention in all sorts of efficacious ways. Social media, games, apps, content, politics, anything you want to talk about. Getting our attention not just by doing something random, but by profiling us and manipulating us and understanding us in ways that are asymmetric—that we don't understand but they do—has become important.

And these aren’t— Like I’m just sort of lay­ing out some of the dynam­ics. They’re all going to come togeth­er.

So we have increasingly smart, surveillant persuasion architectures: architectures aimed at persuading us to do something. At the moment it's clicking on an ad. And that seems like a waste—we're just clicking on an ad. You know, it's kind of a waste of our energy. But increasingly it is going to be persuading us to support something, to think something, to imagine something.

I’d give you two exam­ples. A cou­ple of years ago, I think 2012, four years ago, I wrote an op ed about the big data prac­tices of the Obama smart cam­paign. A lot of peo­ple said this is one of the smartest cam­paigns, used dig­i­tal data, micro­tar­get­ing, iden­ti­fied, gave every­body a score, knew who was per­suad­able, who was mobi­liz­able. A lot of A/B test­ing. All the sort of top of the line stuff.

And I thought, you know what? It doesn't matter whether you support this candidate or not. These things are dangerous for democracy. Not because persuading people is bad, but because of doing it in an environment of information asymmetry, where you've got all this stuff about them and then you can talk to them privately, without it being public, right. If I target a Facebook ad just at you? That's just you. There's no public counter to it. If you saw it on TV, so did the other side, and maybe they could try to reach you and counter it. But here you can't do that.

And I started giving examples. For example, we know that when people are fearful, they tend to vote for authoritarians. When they're scared, they tend to vote for strongmen. And look at how much, for example, content about terrorism occupies both our social media and our mass media. Now, I'm from Turkey, right. This is a problem, a real problem, for the Middle East. This is a horrible problem. There have been so many acts of terrorism, so many mass casualties. In the Western world, it's not even a rounding error next to a weekend's traffic fatalities.

I’m not say­ing I’m not hor­ri­fied. It’s horrible—I hate every sin­gle inci­dent. It’s a cri­sis in that every act of ter­ror­ism is hor­rif­ic. But the dis­pro­por­tion­ate amount of atten­tion it gets is a way that peo­ple get fear­ful. And we know that as peo­ple get fear­ful, a lot of them vote for author­i­tar­i­ans. But that’s not true for every­one. Some peo­ple get pissed off at being manip­u­lat­ed and being scare­mon­gered. So in the past, you kin­da had to do it to every­one.

So in 2012 I said: what if you could find just the people who could, psychologically, personality-wise, be motivated by fear to vote for an authoritarian, and target them silently? They can't even counter this. Back then a lot of my friends who worked on the Obama team and other political campaigns said, "No! This won't happen. We're just persuading people. This is fine." And I said look, I'm not talking about your candidate. I'm talking about the long-term health of our democracy when public communication has become privately tailored potential manipulation.

So fast-forward to [2016]. There's already talk that the Trump campaign data team says they did this. That they targeted silently on Facebook, so you didn't see it; only the people they targeted saw it. They targeted young black men, especially in places like Philadelphia, and some Haitians in Florida, and a few other key districts, to scare them about Hillary Clinton specifically. They weren't trying to persuade; they were just trying to demobilize.

Now, they may be exaggerating how good they were at this. But that's where things are. So I can't vouch for how much they did it, although I have some independent confirmation: I heard from young black men in Philly that they did get targeted like this. And we have an election that was decided by fewer than 100,000 votes. How many people were demobilized in ways that we didn't see publicly? What were they told? We don't know. Only Facebook knows.

It’s just four years lat­er and you’re already see­ing this idea. And they also tried—this Trump data team tried—to do per­son­al­i­ty analy­sis. Just using Facebook Likes you can ana­lyze people’s big five personality—openness, extro­ver­sion, intro­ver­sion, all of that. So they already tried to use that. Again, some­body might say well they weren’t that good at it. But I’m telling you it’s get­ting bet­ter and bet­ter. This is where things are going. 

So having smart, surveillant persuasion architectures controlling the whole environment—experimental, A/B-tested, social-science-aware and -driven—these sound like great tools. They are not only great tools; they're also tools for manipulating the public. And since they're so centralized and so non-public, we don't know how much more widespread this is going to get.

So I’ll give you an exam­ple from yes­ter­day that I was rant­i­ng a lot on Twitter. Did you see Amazon has this Echo Look that it’s a lit­tle cam­era that you’re sup­posed to put in your bed­room? And it’s going to use machine learn­ing stuff to tell you which of your out­fits is bet­ter. Like you don’t know enough judg­ment in your life, right? You need to have some machine learn­ing algo­rithm…

So there’s going to be all these sort of— Because it’s going to work on this train­ing data, so I will just await the first scan­dal about sort of the bias­es about race, bias­es about weight, and all of that. So that will…almost pre­dictable. But there’s some­thing else hap­pen­ing. If you upload a pic­ture of your­self to Amazon every day, cur­rent machine learning-y algo­rithms (that’s usu­al­ly the best way) can iden­ti­fy the onset of some­thing like depres­sion, like­ly months before any clin­i­cal sign. If I have a pic­ture of you every day, smile, the sub­tle things, machine learn­ing algo­rithms can pick up on this. They can already do this.

Even just your social media data… which isn't very rich, especially on Twitter—it's just short… I have a friend who's done it. She can predict the onset of depression with high probability, months before any clinical symptoms. She's thinking postpartum depression intervention, right. She's thinking great things.

I’m think­ing about the ad agency copy I read about the adver­tis­ers who were open­ly pon­der­ing how do we best sell make­up to women. And I’m quot­ing. They said, We know it works best,” they’ve test­ed it, when women feel fat, lone­ly, or depressed.” That’s when they’re ready for their beau­ty inter­ven­tion. I love read­ing trade mag­a­zines where peo­ple are kin­da hon­est. It’s real­ly the best place. 

So, Amazon’s going to know when you’re feel­ing some­what depressed or like­ly they’re going to know whether you’re los­ing or gain­ing weight. They’re going to know how’s your pos­ture. They’re going to know a lot of things. If it’s a video they can prob­a­bly ana­lyze your vital signs, how much blood flow to your face. You can mea­sure heart­beat. It’s kind of amaz­ing with high enough res­o­lu­tion.

Where will this data go? What will Amazon do with it? And who else will eventually, maybe, get access to it? You know, Amazon people will say, "We're committed to privacy and we'll do this and we'll do that." And I'm like, once you develop a tool, you don't get to control everything that goes with it. So, how long before these persuasion architectures we're building are used for more than selling us "beauty interventions" or whatever else they want to sell us? How long before they're also working on our minds, on politics? They're already here.

So the algorithmic attention manipulation, which is engagement- and pageview-driven, has a lot of consequences. If you go on YouTube— See, I watch a lot of stuff on YouTube for work. I keep opening new Gmail accounts or going incognito, because otherwise it pollutes my own recommendations. This is something I've noticed about it, and I've talked to lots of people, and we've noticed the same thing.

If you watch something about vegetarianism, YouTube says, "Would you like to watch something about veganism?" Not good enough. If you watch Trump, it's like, "Would you like to watch some white supremacists?" If you watch somewhat radical but not violent Islamic content—somebody who's kind of a dissident in some way, maybe—you get suggested ISIS-y videos. It's constantly pushing you to the edge of wherever you are, and it's doing this algorithmically.

So I kept pondering why it's doing this, because this isn't YouTube people sitting down and saying, let's push people. And if you watch something about the Democrats, you get the conspiracy left. You get these suggestions. Why are they doing this? I think this is what's happening—we know it from social science research. If you're in a kind of polarized moment, and you feel like you took the red pill and your eyes have been opened, like you got some deep truth, you go down that rabbit hole. So if I can get you obsessed with, or somewhat more interested in, a more extreme version of what you are sort of [meh?] interested in—if I can pull you to the edge—you're probably going to spend a lot of time clicking on video after video.

So our algorithmic attention manipulation architectures do two things. They highlight things that polarize, because that drives engagement. Or they highlight things that are really saccharine, syrupy, spirit-of-human-soaring kind of stuff that is unrealistic. Also cat videos, but that's fine. No objections. Puppies are fine, too. So what you have here is this realization by the algorithms that if we get a little upset about something, we go down the rabbit hole, and that's good for engagement and pageviews. Or if we feel really warm and sweet about something, we kind of go "awww," and that's good too.
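
To make the incentive concrete, here is a toy ranking sketch—with invented scores and a made-up scoring rule, not any platform's actual code—showing how optimizing purely for predicted engagement surfaces the two extremes and buries the mundane middle:

```python
# A toy feed-ranking model. The scores and the scoring rule are
# invented for illustration; this is not any platform's real system.
posts = [
    {"title": "Furious political quarrel thread", "polarizing": 0.9, "sweet": 0.1},
    {"title": "Heartwarming puppy rescue video",  "polarizing": 0.0, "sweet": 0.9},
    {"title": "City council budget report",       "polarizing": 0.1, "sweet": 0.1},
]

def predicted_engagement(post):
    # If clicks and time-on-site are the only reward, content scoring
    # high on *either* extreme wins; the mundane middle loses.
    return max(post["polarizing"], post["sweet"])

# Rank the feed by predicted engagement, highest first.
for post in sorted(posts, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.1f}  {post['title']}")
# The budget report -- the civically useful, mundane middle -- ranks last.
```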

So we’re hav­ing this weirdo thing where my Facebook is either peo­ple quar­rel­ing about the crazy stuff, or real­ly sweet stuff. And like, is there noth­ing in the mid­dle? Can we have some sort of middle—well there is of course. But the mun­dane stuff isn’t as engag­ing. And if this is what’s dri­ving my engage­ment and out of this what I see, you have the swing. And it is not healthy for our pub­lic life or per­son­al lives to be on this con­stant swing. It pulls to the edges.

So all of this encourages filter bubbles and polarization. Not because we're not already prone to it. The current thing I hear is, "Well, everything encourages that—because that's kind of how humans are." Well, yes. It is our tendency. You know what that's a little bit like? We have a sweet tooth for a very good reason. Your ancestors who didn't like sugar and salt? I don't know, you probably wouldn't be here. It was a very good thing to like sugar and salt when you had to hunt and gather and had no fridges.

So we have a tendency to like sugar and salt. That doesn't mean we should serve breakfast, lunch, and dinner composed only of sugary and salty food. So we have a tendency, and these persuasion architectures and algorithmic attention manipulation are feeding our sweet tooth. And that's not good for us.

We’re also, at the same time, dis­man­tling struc­tures of account­abil­i­ty through our dis­rup­tive tech. Right now I think like 89% of all and when he goes to Facebook, Google. So we’ve got all the ad mon­ey going to these algo­rith­mic engagement-driven, pageview-driven, ad-driven pro­fil­ing, sur­veil­lant sys­tems. Now, I use them both, right. I’ve writ­ten about how good they were for many things. So I’m not unaware of all the good things that come out of hav­ing this con­nect­ed­ness.

But having connectedness be driven by algorithms that prioritize profiling, surveillance, and serving ads is not the only way to get connected. We could have many other ways to use our technology to connect to one another in deep ways. Yet here we are, and all the money's going there, and everything from local newspapers to national newspapers is being hollowed out.

And this is really crucial, especially at the local level, because if you don't have local newspapers (I'm watching this happen all over the country), local corruption starts going unchecked. And then it starts filtering upwards from there. Local corruption goes unchecked and you get corrupt local politicians. And then you have state-level corruption. And then you have the national thing.

And I see all this in the tech world: "Oh, let's spend $10 million to create a research institute…" And look, I'm a professor. Research grants sound great to me. But here, let me save you the money. Take that money, divide it among every local newspaper, however many there are, and just give it to them. Right now, we don't really need a lot of research. We need funding for the work itself, and we know what that work is. I mean, again, more research sounds great to me as a professional research person. But I'm like, I have little to add. The problem is that the ad money that—by a historical accident—used to finance information surrounded by journalistic ethics now fuels misinformation on social platforms.

I don’t mean to say news­pa­pers were per­fect, okay. I spend a lot of time crit­i­ciz­ing how hor­ri­ble they are in so many ways. But they were a cru­cial part of a lib­er­al democ­ra­cy. We need­ed to make them bet­ter, instead we knocked out what­ev­er was good for them.

We also have a lot of technology that's explicitly knocking out structures of accountability. Now, I think it's a very good idea to call a taxi from your smartphone. I have zero problem with the idea of an Uber, of sorts. And if you want to complain about taxis and what a monopoly they were and good riddance to them, you know, I'm with you. But the problem is that the current disruptive model—Uber's and the rest of them, the way they're structured—is not just creating the convenience; it is trying to escape any kind of accountability and oversight we had over these things.

So there are worse things than having crappy taxis. One of them is having a system that escapes the institutional accountability structures we built. For example, you have some duties to your [employees]. And if you can just make them all contractors and pretend they're not your [employees], you pretend you don't have those duties… That's a lack of accountability structures. And the way Facebook says, "Oh, we're not a media company, it's just the users"—you constantly push accountability away from you. That is not healthy.

This is even true for things like Bitcoin, which is innovative and disruptive and interesting. But you know what? There's a reason we use the money we use: it's tied to—yes, imperfect; yes, not great— You know, I've been a movement person my whole life. I understand the objections to the nation-state. But you know what's worse than a nation-state? A bunch of people with zero accountability saying, "Let's just do this."

This lack of accountability is a core problem, even if the unaccountable people in the beginning are kinda good people, and they're our friends, and we're like, "Oh, it's okay. It's in their hands. They'll do good." That is not how history works.

So, the labor realities of the new economy are not compatible with a middle class-supported democracy. This is a crucial, huge political problem. It's not a problem that can be solved within technology, but it is a crucial problem. Facebook right now employs what, ten, twenty thousand people—maybe a couple thousand are engineers. It's very top-heavy. General Motors at its height, when it was the dominant company, employed maybe half a million people, in pretty good jobs, and with a supply chain with a lot of good jobs.

Right now you have a couple thousand engineers at a company like Facebook making really good money—really smart people, a lot of good people. And the rest is basically minimum wage. Think of Amazon, right. It's a bunch of great engineers who can create something like Echo Look (to give you fashion judgment), and warehouse jobs. This is not an economic structure that can support a mass-based democracy. It just won't. People will vote for whoever's going to burn things down, or promises to burn things down, on their behalf.

And then the skilled labor. The other thing is that this is distributing the labor around the world so that you have a race to the bottom rather than a race to the [top] for many, many jobs. The sort of outsourcing— I'm for jobs being distributed, in some sense. We don't want them all here; we want the rest of the world to have them— But it has to be lifting everybody up more.

So we also have increasing centralization as part of this. One part of it is network effects. The other is data hunger. I'm on Facebook because so many of my friends and family are on Facebook. It's also a great tool in many ways. The product keeps getting better. The alternatives just aren't feasible for me, because I can't get my friends to email me. I can't—it just doesn't work. I've tried. They can use Facebook; they can use Messenger. There are a lot of parts of the world where Facebook—and Messenger—is the de facto communication mechanism.

And for something like Google to work so well, it needs all that data. That means once you've got all the data, you're in this really dominant place where the people without the data can't compete with you. Their algorithms won't work as well. Their profiling won't work as well.

And also security. I work on movement stuff a lot, as I said. And I work with a lot of people in precarious situations and repressive regimes. I tell them: if your threat model isn't the US government or Google, use Gmail. I'm like, don't use anything that doesn't have a 5,200-person security team, because it's not safe.

Long story—I don't have to explain to this crowd how the Internet started, how TCP/IP was built as a trusted network, and all of that. But that is also driving centralization, because you don't feel secure. How many major platforms have not yet been hacked? You can count them on one hand, maybe. So you gravitate towards them. This is also why you get Facebook, Google, Amazon, eBay, and a couple more. They're just not easy to knock down. Which means that when they do what they do, you don't have a means to use the market to punish them, because there isn't meaningful consumer choice.

There’s asym­me­try in infor­ma­tion. They know so much about us, how many of us have any clue how much data’s about us and how it’s being used? We have no access to it. When Facebook CEO Mark Zuckerberg, he bought a house, he bought the hous­es around him. Why? He want­ed pri­va­cy. I don’t blame him. I don’t begrudge him his pri­va­cy, right. But we don’t have it. We have a sort of com­plete asym­me­try in what we get to see ver­sus what these cen­tral­ized plat­forms and increas­ing­ly gov­ern­ments get to see about us. This asym­me­try is deeply dis­em­pow­er­ing.

Machine intelligence deployed against us. So, algorithms, computer programs… "Algorithm" has this second meaning now; it just means complex computer programs, so here I can just say that—especially machine learning programs, right. What's happening is that we've got this really interesting, powerful tool that can chew through all that data, do some linear algebra and some regression, and spit out pretty powerful classifications.

Who should we hire? Give it some training data, divided into people who were high performers and low performers. Churn churn churn, train. And then you give it a new batch. It says: hire those, don't hire those.
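
As a concrete illustration, here is a minimal sketch of that exact workflow—with invented features and data, standing in for whatever a real vendor ships—using an off-the-shelf classifier:

```python
# An illustrative sketch of the hiring workflow described above, using
# invented features and data -- not any vendor's actual product.
from sklearn.linear_model import LogisticRegression

# Training data: past employees reduced to numeric features (say,
# years of experience, number of job changes, review score), labeled
# 1 for "high performer" and 0 for "low performer".
X_train = [
    [5, 2, 0.9],
    [1, 4, 0.4],
    [3, 1, 0.8],
    [0, 5, 0.3],
]
y_train = [1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)  # "churn churn churn, train"

# A new batch of applicants: the model says hire these, don't hire
# those -- but nothing here tells you *what* it keyed on. It may be
# leaning on proxies (for health, race, anything correlated) that
# nobody chose and nobody audited.
new_batch = [[4, 2, 0.7], [2, 3, 0.5]]
print(model.predict(new_batch))        # hard hire / no-hire decisions
print(model.predict_proba(new_batch))  # the probabilities behind them
```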

The problem is you don't understand what it's doing. It's powerful, but it's kind of an alien intelligence. We tend to think of it as a smart human? It's not, okay. It's a completely different type of intelligence. It's kinda alien. And it's also powerful. So for all we know, it's churning through that social media data and figuring out who's likely to be clinically depressed in the next six months, probabilistically. You have no idea if it's doing that or not, or what else it's picking up on. We're putting an alien intelligence that we don't fully understand, that has pretty good predictive probabilities, in charge of decision-making in a lot of gatekeeping situations, without understanding what on earth it's doing.

And already in the tech world, you talk to people at high-tech companies and they're like, "No, we don't understand our machine learning algorithms. We're trying to dive into it a little bit." It's now spreading to the ordinary corporate world, where they don't understand it at all, and they don't even care. It's cheap, it works, let's use it to classify.

But what is it picking up on? What is the decision-making? Do we really want a world in which a hiring algorithm weeds out everybody prone to clinical depression, with some good probabilistic ability—90% of them? It's a problem if it's right. It's a problem when it's wrong.

And these things can infer non-disclosed patterns. Even if you never told Facebook your sexual orientation, it can guess it with 90%+ probability. It can guess your race if you never told it. It can guess—well, forget the pictures—it can guess just from your personality type. There are so many things that can be computationally inferred about you, with predictive power, that you've never disclosed. And people aren't used to thinking like this: I didn't disclose it, but it can be inferred about me.
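
To show what "computationally inferred" can look like mechanically, here is a hedged toy sketch, in the spirit of the published Facebook-Likes studies, with entirely invented data: train on users who did disclose a trait, then predict it for a user who never did.

```python
# A toy sketch of inferring an undisclosed trait from Likes.
# All data here is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are users; columns are binary "did this user Like page 0..4".
likes = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])
disclosed_trait = [1, 0, 1, 0]  # the trait, known for these users only

model = LogisticRegression().fit(likes, disclosed_trait)

# This user never disclosed the trait, but their correlated Likes
# leak it anyway: the model outputs a probability regardless.
undisclosed_user = np.array([[1, 0, 1, 0, 1]])
print(model.predict_proba(undisclosed_user))
```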

And bias laundering. A lot of the training data has human biases built into it, and once it goes through the machine learning algorithm, you're like, "Oh, the machine did it." So this is the opacity, and the error patterns we don't understand—how are they going to fail? This isn't a human intelligence. It's going to fail in non-human ways. These are all big challenges with biases.

Now, why is this compatible with authoritarianism? Think Orwell—ah! Think Huxley, not Orwell. That was my subconscious error. Orwell imagined a totalitarian state where they dragged you off and tortured you. That is not really modern authoritarianism. Modern authoritarianism is increasingly going to be about nudging you, in this balance between fear and complacency. And manipulation, right. If they can profile each one of us, understand your desires… What are you vulnerable about? What do you like? How do we keep you quiet? How do we keep you from moving politically? The more they understand this, one by one, the more they can manipulate it. Because again, it's asymmetric. It's an immersive environment, and they control it.

Effects of surveillance. In a lot of cases—and I know this from working on movements for so long—once people become really aware of surveillance, it's not necessarily this sudden thing where everybody shuts up. It's self-censorship. People stop posting stuff. They stop talking politics. They stop even thinking it to themselves, because you're not going to really feel comfortable sharing it. Surveillance coupled with repression is a very effective tool for the spiral of silence—and for control, manipulation, and persuasion in the hands of authoritarians.

I want to give just one example from history before I conclude. Films, movies. I like watching a good movie; I like documentaries; I like the craft. It's really interesting. But early 20th-century filmmaking craft was developed by people who ended up in the service of fascism and of violent, virulent racism, and there are two very striking examples.

One of them is The Birth of a Nation, from the early 20th century. It's this horrible, racist, just murderous movie. It is the reason the KKK got restarted. It spread like wildfire—it went viral. It also used the tools of the craft very well. The way we think of A/B testing, dynamic architecture, experimental stuff, social science, engagement—all the things we think of as tools of our craft—for the craft of moviemaking, it was a leap forward. And it ended up lighting the fire that recreated the KKK. Of course that was on a bedrock of racism. But you have to light those fires for them to go someplace.

The second one is a scene from Triumph of the Will, which was shot by Leni Riefenstahl. There's a great documentary I recommend to everybody in tech, The Wonderful, Horrible Life of Leni Riefenstahl. She lived to be 99. She was this filmmaker, artist, actress, gorgeous woman. And she really developed the craft of filmmaking. She was really into it. So when Hitler asked her, when the Nazis asked her to film, she was like, "Great! I can practice my craft." And you got Triumph of the Will, and the propaganda regime that was so consequential and efficacious in helping the rise of the Third Reich. And for years afterward she said, "I was just practicing my craft." But craft is not a tool that you can control. It's not something that's neutral.

So! Why am I so depressive this morning? I'm not, because I don't think we're there. I just see this inflection point, and I feel like we don't have to do things this way. It's going that way, but we don't have to do things this way. And one of the things that I really feel hopeful about is that there is this big divergence at the moment between where the world is going and what the tech community in general thinks—and that technology workers are still a very privileged, unique group, in that they can most often walk out of one job and into another.

I talk to people at very large tech companies; they say all sorts of things. They have no fear. I'm like, "Aren't you afraid?" They're like, "No, I'll just walk into another job." They make a lot of money. So there's demand. There's still huge demand. And what I'm saying is that this will change. Ten, twenty years, fifteen years—I don't know when. Like with other technologies, it will start requiring less and less specialization. The deskilling of the work will move up the chain. You'll become more and more of a minority.

But right now there’s a large num­ber of tech­nol­o­gy peo­ple who are the ones cre­at­ing these tools. Who are the ones that have the abil­i­ty for the most part to walk into a job that pays a rea­son­able wage. And these com­pa­nies can’t do this with­out them. 

So this is what I’m think­ing. I’m not going to end up with sort of an answer, but how do we do this so that this great tool, the one that I had this hope for when I dis­cov­ered it in Turkey, how do we do this so that we take the ini­tia­tive and not just cre­ate anoth­er algo­rithm to get peo­ple to click on more ads, anoth­er sur­veil­lant archi­tec­ture, anoth­er thing that will be used poten­tial­ly in the future by author­i­tar­i­ans?

I just want to think: how do we become like that bus driver? The one who said, "Oh wait. An earthquake team needs transportation. Alright, everybody get out." I want that initiative. And I think we can do it, because once again, we have leverage as people in this sector. We have leverage that a lot of other people don't have.

So this is where I’m going to end. And I’m easy to find. Thank you for lis­ten­ing. And I just hope to have this con­ver­sa­tion with more and more peo­ple. Thank you.


Discussion

Presider: Wow. Uh.

Zeynep Tufekci: I'm an optimist. I promise you. If I didn't have a lot of hope, I just wouldn't bother with all of this. I'm projecting into the future, and partly… I'm from Turkey, so you know, I'm the person who's seen this horror movie before and is yelling, "Basement! Don't go into the basement. You with the red shirt, do not go into the basement." That's how I feel. I'm also from North Carolina, so I kind of feel like, let's not go into that basement. So I think there's hope and time.

Presider: Wow. So I took so many notes my pen has run dry. But I will focus on a couple of questions here. You presented a lot of these things in a light where they're harmful to many people. And I think many of us in the audience, and certainly myself, have heard a lot of these technologies described as revolutions in marketing technology. As great new ways to unlock customer value. As these paths forward to evolve your business, right. That "digital transformation." How do you reconcile, as a technology worker, being told one narrative and then living this other reality?

Tufekci: Well, see, the thing is when you're building something, you're just thinking oh, I'm building value, right. I'm just building value. I'm just helping somebody…some customer value. You know, finding consumers. But every time you collect data about someone, there are these ethical questions. What's the use policy? How are you keeping it secure? Do you need to retain it? How much retention are you going to do?

So there are all these ethical questions that I feel like we're skipping over. We're kind of not even thinking about it. We're just saying let's do it this way. And what I want to encourage people to think is, every time you measure something about someone and write it down or record it and they don't even know that you did it, you create this situation that's, once again, compatible with a lot of other things.

So yes, you may also be creating value for the consumer. That may well be true. Those things can both be true. But you're also building the infrastructure for all these other things.

Presider: There are a lot of folks here who are individual developers, individual engineers, that might be asking themselves what role do I play in this? I'm just— And I don't want to say I'm just taking orders—

Tufekci: We are.

Presider: I'm just filing an issue. I'm just completing this task, you know. I don't get a say in how my tool gets used; I just build it.

Tufekci: We do get a say in how our tools get used, though, right. I mean, this is the thing. And I think we've got to do a couple of things. I think we've got to assert more interest in how our tools get used. If you're working in a large company, there are a lot of ways to assert that. But even if you're working as an individual person, you can think about how this tool is going to be used. And maybe more importantly, can we build alternative tools? Can we build systems that create the convenience and value that we want but don't have all the overhead that comes with them?

I mean, I've been sort of…asking for this. It's not going to happen, apparently. Look at even Facebook, right—one of the largest companies in the world. What is it, $400 billion or something like that? If you read its filings, it's making like ten, twenty dollars a year, per person. So all this data surveillance and persuasion architecture and manipulation and misinformation and fake news, all the stuff that we deal with and all its harmful effects? Give me a subscription version of this. Make me the customer, to have your connected world. You know, even if it meant you were worth, I don't know, ten billion instead of four hundred billion—I think people can survive on that?

There are all these alternative paths that we haven't taken, that we haven't explored. And I think everyone from the individual user, to someone who's building a tool, to people working in big tech companies can say, "Is our imagination really going to be so limited that we're going to build all this stuff that is so compatible with authoritarianism just to get people to click on a…shoe ad?" I mean, we're selling ourselves really cheap. We're building authoritarianism for such a cheap, cheap thing. That's what I'm thinking—that maybe we can build alternatives.

Presider: One tension that I struggled with in this talk is this notion of authoritarianism, and maybe a comical reduction would be to think of it as this Big Bad, in Joss Whedon terms. There's this big, potentially malevolent force that wants to manipulate us. But it's hard to reconcile that with some of the public imagery of, let's say, the buffoonery that surrounds some of the authoritarian forces on the global stage. These people can barely finish a speech sometimes, but we're also trying to think of them as ones who would actively manipulate us. How do we reconcile these possibilities?

Tufekci: Well, if you read a— I mean, I don't want to say we're in the Weimar Republic and we've got a new Hitler—we don't. Okay, so it's kinda clear I don't want to trip over Godwin's Law here. But if you read Hitler's biographies, he was a buffoon. He was an idiot in so many ways. But he was very good at one thing, and one thing only, and that was firing up people in speeches. If you read his biography, he's a failed painter…he's a failed everything. And then he gets in a beer hall and starts spitting out conspiracy craziness. And holds people's attention. (Hubris is a really good book on this.) And then all of a sudden there's one thing he's good at, and he's really good at it.

So I'm not going to say our authoritarians are these super smart, clever, omnipotent, omniscient things. But they could at the same time be very good at tapping into a fear, a sense of instability, a lot of complicated things—the legacy of racism, a lot of things that kind of exist, kind of converging. They could tap into that very effectively. And while you would be saying—and I have said this so many times in the past few years—how could people believe this? They can. It's complicated, and we can both sympathize and not sympathize; we can do whatever we want to. But it is quite possible for authoritarianism to be led by not the smartest people—and very often they will find a Leni Riefenstahl to hire.

So you know, Hitler may not have been very smart about a lot of things. But he was good at the beer hall speech. And that regime did hire one of the most talented filmmakers of the day. So I'm like, we're the talented people of our day. How do we not go work for them? Because some people will work for them directly, and some people will work for them indirectly, in that the tool you built will then be used to further authoritarianism. And we saw this. Facebook's algorithm is so happily prone to misinformation… It monetizes misinformation. You can go viral with something crazy; the UX is flat, so you can't tell the Denver Guardian—the fake one—from a reputable newspaper. If it starts an argument in your mentions, it's engagement. Facebook's going to push it to more people.

So you built a tool that's supposed to connect people—well, it did connect people. But around this. It was very compatible with it. So it doesn't really matter what you thought the tool was going to be used for; if it has these features, this is where it's going to go. And people have warned them about this. Before the election they were going ehhh. But here we are.

And I'm not saying— Everything's multi-causal. In a close election, it wasn't one thing. But I follow social media pretty closely on this. Misinformation had a spike—a huge spike—in this election. And it was partly spammers just monetizing Facebook's algorithm and Google's ad networks. And it was partly this polarizing environment feeding into it. And it was partly dark targeted ads. It was all of this combined. And technology was at the center of it, enabling new things and furthering existing difficult things.

Presider: So… And this'll be my last question. Give me a second to formulate this here. As technology workers who have quite a bit of privilege already, and whose tools are being potentially used to strike at those who are less privileged than us, what then… Oh man, how do I even put this?

For those who are being targeted, for those who are manipulated by information easily, for those who are less savvy about technology than we are, what is our responsibility to them, as technology workers, given that we have access to this ecosystem that they may not?

Tufekci: Right. So this is where I get into my "wide diversity is important" spiel. And I don't mean token diversity, where you just bring more Stanford CS women into the room. I mean, I identify as a geek person. I love my tribe. But we kinda have our own sort of tendencies and specialties?

What we really need is more people in the room where design is being done. More designers. And just people we talk to from all sorts of… from all levels of society, who can help us think about what's happening. Because it's not always easy to imagine how something's going to play out. But very often, the people from those communities are going to be the canaries in the mine. Either you can sit with them making the technology—if we had more of them—or, until then, we can at least be listening to more of them.

I'll give you an example from Facebook. I give Facebook examples because it's more familiar to people. It's not that Facebook's the only problem, not at all. But a couple of years ago, do you remember when Facebook had a year-end thing where it said "It's been a great year!"? It algorithmically picked a picture and put a party theme around it.

So, it came to people's attention, especially Facebook's attention, because someone from our community—a CSS creator, a very well-loved technologist—had a very unfortunate, horrific tragedy that year. He lost his six-year-old daughter to cancer. And a lot of us knew about it because he was blogging—he was already a blogger, a good writer, a technologist. And you know, everybody was heartbroken.

So what does Facebook's algorithm do at the end of the year? It picks up a picture of his daughter, puts a party theme around it, and says it's been a great year. Right during the holidays, you know. It had been six months; they were struggling. He wrote this heartbroken post about algorithmic cruelty. What he wrote was very kind, in fact.

So how could this happen? Well, the head of Facebook product at that point was a 26 year-old Stanford graduate, whose stock options might've been great. He's surrounded by other Stanford graduates whose stock options are great. They work at Facebook. They live in San Francisco. It's been a great year! Right?

It's a failure of imagination: that it might not have been a great year for a billion and a half people. This is so elementary. I mean, you do not need a Stanford CS degree. I am telling you, literally walk out, pull three people off the street, and ask them, "Do you think it's been a great year for a billion and a half people?" You will get the correct answer.

And this is what I mean. We can be so narrow. Some of the smartest people in the world can make such obvious, horrible mistakes, right. And this should call for humility. We need to be talking not just among ourselves. We need to broaden who gets to be a technologist. And there are a lot of issues with doing that as fast as we can. But even if we can't do it right now and it'll take some time, I am literally saying: pull people off the street and ask, "Is this a good idea?" and you will get insights that we as a geeky community—a well-paid community, more male than female, missing some minorities—won't have. I mean, we won't have those life experiences to envision all the potentials. And I think that's a really healthy way to go, to try to think it through.

As I said, a lot of times I think through things partly because I'm from Turkey, and that kind of informs my understanding. Making those intersections is really important, and that's why broadening—really bringing more life experiences into how we think about technology—would help us do it better, and would also help avoid the kind of horrible mistakes we keep seeing again and again and again. And that's kinda my little pitch about that.

Presider: Thank you.

Tufekci: Thank you.

Presider: Thank you so much.


