Tarleton Gillespie: I'm really excited to be here. I'm very excited to hear some of the work that you're all doing. And I'm really proud to be kind of just a piece of what CivilServant is doing and can do.

What I'd like to do just with the few minutes that I'm up here is to set the stage. This is a huge set of questions, and I think a set of questions that are exploding into public view in a way that they hadn't even just a few years ago. So I want to sort of set the broad place where some of these questions kind of live.

So social media platforms arose out of the exquisite chaos of the Web. And many were designed by people who were inspired by, or at least hoping to profit from, the freedom that the Web promised: to host and maybe extend all that participation, expression, and social connection. But as these platforms grew, all that chaos and contention quickly found their way back to them as well.

And as I said, in the past few years there's been growing attention and public debate about how and why platforms moderate. But as many of the people in the room know very well, these problems are not new, and the challenge of managing a spontaneous, heterogeneous, and unruly community is not new. Community management has been a central concern since the Web began. As far back as Usenet, moderators, webmasters, and the managers of online forums all knew that healthy communities and lively discussion could sometimes devolve.

Those who championed online communities quickly discovered that communities need care. They have to address the challenges of harm and offense, but they also have to develop forms of governance that protect their community while embodying the democratic procedures that match the values of the managers and the values of their users.

Now, the fantasy of a completely open platform is a powerful one. It resonates with deep and utopian notions of community and democracy. But it is just that: a fantasy. There's no platform that doesn't impose rules to some degree—that would simply be untenable. And this audience knows that, although I think it's still not widely apparent to many users.

And, while we as a public sometimes decry the intrusion of content moderation on platforms, at other moments we decry its absence, right. So we're asking for moderation too, and asking for it in varying forms.

So the challenge for platforms is exactly when, how, and why to intervene. Where to draw the line between the acceptable and the prohibited. These questions rehearse centuries-old debates about the proper boundaries of public expression while also introducing new ones. And it means recognizing that the particular ways in which you police and enforce these rules and guidelines have their own consequences for the shape of a community, for what's possible, and for the kinds of missteps a platform can take.

I want to make a quick distinction, just because I think it's useful and people often blur these together: a distinction between the governance of platforms and governance by platforms. By governance of platforms, I mean the policies that have emerged in the last decade or two specifying the liabilities, or the lack thereof, that platforms may have for the content and activity their users engage in. Those would be policies imposed by law, by regulators, by standards organizations.

And when I say governance by platforms, what I mean is the way in which social media platforms have increasingly taken on this responsibility for curating content and policing the activity of their users.

These are related, but they're not the same. Sometimes the governance of platforms produces governance by platforms. So, when the law obligates platforms to do something on the law's behalf (an example might be removing child pornography), then the law imposed on the platform creates a law from the platform.

But US law in particular made an early decision to impose very little obligation on social media platforms, and in fact to protect them from liability. And I think it did so in a way that not only allowed platforms to build up very complicated and often opaque governance systems, but also to do so with almost zero obligation or oversight.

So for those of you who know the law, I'm talking about the safe harbor protections that are built into Section 230 of US telecom regulation. This law offers the broadest safe harbor in the world, a kind of immunity from liability for platforms, as well as Internet service providers and search engines, for what their users circulate and do, right. So, classic questions like defamation and obscenity: the users may do it, but the platforms are not held liable for that.

But Milton Mueller points out a really interesting aspect of this rule that often gets forgotten: the law has two parts. The first part says that if users are engaged in problematic behavior, the platform will not be held liable. That's the classic safe harbor. That's the part we think about.

The second part says that if platforms intervene, if they are policing content, if they are making choices, that won't then make them any more liable, right. The worry was that if they started to pick and choose, then that would create a heightened obligation. They would look like publishers, and they would then be held accountable. So the law says you don't have to police, and you won't be held liable; and if you do police, that doesn't make you any more liable. Those are the two parts.

And it made a lot of sense at the time. It creates…there's a phrase that often shows up in terms of service, "the right but not the responsibility." Platforms will say, "We have the right to police but not the responsibility to police." That's a very luxurious position to take, right? This is a very different legal position than other forms of media and communication that have a public footprint. And it lets platforms moderate in whatever way they see fit, without independent oversight or public responsibility.

In the history of US media and telecom law, by and large when an industry is offered a sort of generous opportunity like this (you could think about broadcast spectrum, you could think about managed monopolies in telecommunications), it often comes with something. It comes with some kind of obligation to the public interest, right; universal service. You're gonna get this privilege and it's going to benefit you economically, industry, but with that comes a certain set of obligations. And lots of people have argued that those obligations are often thin, or they fall away—that's true. But at least the idea is: we're going to grant you a right or a privilege, but we're also going to create a sense of obligation.

Section 230 basically passed on that opportunity. And we could imagine all sorts of things, right. Some kind of public interest obligation. Some kind of minimum standards. Some kind of best practices. Some kind of public input, right. At the time it was very hard to see past what seemed extremely important: protecting intermediaries from liability, to sort of avoid squelching innovation.

Now we can see that, just like the grant of a careful monopoly for the telecom companies or cable, or the grant of spectrum space for broadcasting, this was a very powerful offer. And it allowed an industry to build up, and to build a huge apparatus that manages content moderation on its own terms, with none of that kind of framework of obligation that it might have come with.

Platforms are eager to keep and enjoy those safe harbor protections, but they all take advantage of that second half. They all police in good faith, which is what the law asks.

Nearly all platforms impose their own rules and police their sites for offending content and behavior. And more importantly, they've cobbled together a content moderation apparatus: rules and guidelines and the animating principles behind them; complaint processes; appeals processes; complex logistics for review and judgment.

And those logistics draw on labor. Company employees. Temporary crowd workers. Outsourced review teams. Legal and expert consultants. Community managers. Flaggers, admins, mods, superflaggers, nonprofits, activist organizations, and sometimes the entire user population. As well as algorithmic techniques: software for detection, filtering, queuing, and reporting.
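To make the software side of those "complex logistics for review and judgment" slightly more concrete, here is a minimal sketch of a flag-driven review queue of the kind such an apparatus might include. It is purely illustrative: the class names, the threshold, and the priority rule are assumptions for the example, not any platform's actual pipeline.

```python
import heapq
from dataclasses import dataclass
from itertools import count

REVIEW_THRESHOLD = 3   # hypothetical: escalate once an item has this many user flags

@dataclass
class Item:
    item_id: str
    flags: int = 0
    queued: bool = False   # so an item is only enqueued for review once

class ReviewQueue:
    """Toy priority queue: the most-flagged items are reviewed first."""
    def __init__(self):
        self._heap = []
        self._tiebreak = count()   # stable ordering when flag counts are equal

    def flag(self, item: Item):
        item.flags += 1
        if item.flags >= REVIEW_THRESHOLD and not item.queued:
            item.queued = True
            # negative flag count so the most-flagged item pops first
            heapq.heappush(self._heap, (-item.flags, next(self._tiebreak), item))

    def next_for_review(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

# Usage: users flag a post; once it crosses the threshold, a reviewer pulls it.
queue = ReviewQueue()
post = Item("post-123")
for _ in range(3):
    queue.flag(post)
print(queue.next_for_review())   # -> Item(item_id='post-123', flags=3, queued=True)
```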

Not all platforms depend on all of these, and no two do it exactly the same way. But across the prominent social media platforms these rules, procedures, labor, and logistics have coalesced into a functioning technical and institutional system, sometimes fading into the background, sometimes becoming a vexing point of contention between users and platforms.

Users, whether they know it or not, are swarming within, around, and sometimes against this moderation apparatus. Maybe some of the concerns that are emerging in the public debate are not just about which rules platforms set, right. Is this the right rule; is this the right line to be drawn in the sand? But also what does it mean to approach public discourse in this way and at this scale? What are the nature and the implications of the systems that they're putting in place? Not just the decisions but the work and the logistics and the arrangement that they require.

The very fact of moderation is shaping social media platforms as tools, as institutions, as companies, and as cultural phenomena. And many of the problems that we are asking questions about may lie in the uncertainties of this distributed and complex system of work. And they often breed in the shadow of an apparatus that remains distinctly opaque to public scrutiny.

That apparatus is being tested. And it's being tested in a number of ways. First, the classic problems, like pornography, harassment, bullying, self-harm, and illegal activity, are growing more robust, more tactically sophisticated, and more unbearable.

More than that, outside the United States, questions are arising from places that don't hold that same notion of safe harbor and that would like to hold platforms partly responsible for the circulation of hate speech and the circulation of terrorist content and terrorist recruiting. As well as pressure from countries that are more interested in restricting things like political speech under the guise of regulating intermediaries.

And here I think the most pressing challenge is that we're moving from a concern about individual harms to a concern about public ones. So, we could talk about the growing concern about misogynistic harassment, or the growing concern about white supremacy on these platforms, not only as a harm that can affect an individual user (which it does), but also for the kind of corrosive effect it has on the public as a whole. Both of those problems are emerging.

We could talk about nonconsensual pornography—revenge porn—as having implications not only for the user who might look at it or might be the receiver of it, but for someone else who's in a photo and isn't even a user of that platform. But now they're being affected.

And then certainly the concerns that we've been hearing in the last year or two about fake news and political manipulation raise a new set of questions. I may never see a fraudulent headline. I may never have forwarded a fraudulent headline. But I may be troubled by my participation in a system that is allowing it to circulate. That's having a public effect even if it didn't have an individual effect. And that's a much harder question to grapple with.

According to John Dewey, this is the very nature of a public. He says the public consists of "all those who are affected by the indirect consequences of transactions to such an extent that it's deemed necessary to have those consequences systematically cared for." That's the challenge of a public, and that's what platforms and moderators are facing.

Platforms tend to disavow content moderation—they don't want to talk about it too much. And they hide it behind the mythos of open participation. That is still their key selling point. But far from being occasional, or ancillary, or secondary, or background, I want to argue that moderation is essential, it's constant, and it's a definitional part of what platforms do.

In a couple of ways. It's a surprisingly large part of what platforms do, in a day-to-day sense. In terms of time, in terms of resources, in terms of people. If you just wanted to ask, "What are most of the people working for Facebook doing?", a very large portion of them are handling moderation.

Second, moderation shapes how platforms think about users. And I don't just mean the people who are violating rules and the people who might be victims of that. If you hand over part of the labor of moderation to people, asking them to flag content, then you begin to think of your users not just as participants, or consumers, or…sellable data, but also as part of the labor force. And that changes the role that users get to play.

But most importantly, I would say that content moderation constitutes the platform. Thom Malaby says that platforms hinge on the value of unexpected contributions. That's exactly what makes them valuable, right. But if your value is based on unexpected contributions, then your job, your commodity, is the taming of those, the tuning of those, into something that can be delivered and can be sold. Moderation is the commodity that platforms offer. Though platforms are part of the Web, they offer to rise above it. And they promise a better experience of all this information and sociality.

In fact, if we want to expand the definition of moderation just a little bit, we could say that policing is just a component of the ongoing calibration that social media platforms engage in. Part of a three-pronged tactic. Moderation: the removal, filtering, suspension, banning. Recommendation: newsfeeds, trending lists, personalized suggestions. And curation: featured content, front-page offerings. Platforms are constantly using these three levers to tune the participation of users, to produce the right feed for each user, the right social exchanges, and the right kind of community. And "right" here can mean a lot of things: ethical, legal, healthy; promoting engagement, increasing ad revenue; facilitating data collection. Not only can platforms not survive without moderation, they aren't platforms without it.
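As a purely illustrative aside, the three levers can be sketched as three small functions composed into a feed. Everything here is a hypothetical toy: the names, the scoring, and the policy rules are invented for the example and are not how any actual platform works; it only shows the sense in which moderation, recommendation, and curation together "tune" what users see.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    author: str
    text: str
    engagement: float        # stand-in for however a platform counts likes/comments
    flags: int = 0           # number of user reports
    featured: bool = False   # editorially curated

PROHIBITED_TERMS = {"spamword"}   # toy stand-in for a real content policy

def moderate(posts: List[Post]) -> List[Post]:
    """Lever 1: removal and filtering. Drop posts that break the toy rules."""
    return [p for p in posts
            if p.flags < 3 and not PROHIBITED_TERMS & set(p.text.lower().split())]

def recommend(posts: List[Post]) -> List[Post]:
    """Lever 2: ranking. Order what remains, here simply by engagement."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def curate(posts: List[Post]) -> List[Post]:
    """Lever 3: curation. Surface featured content at the top of the feed."""
    return [p for p in posts if p.featured] + [p for p in posts if not p.featured]

def build_feed(posts: List[Post]) -> List[Post]:
    # The "tuning" is the composition of all three levers.
    return curate(recommend(moderate(posts)))
```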

So. The hard questions being asked now: freedom of expression, virulent misogyny, trolling, breastfeeding photos, pro-anorexia content, terrorism, fake news. I see these as part of a fundamental reconsideration of social media platforms. A moment of challenge to what they've been thus far. And if content moderation is the commodity, if it's the essence of what platforms do, it doesn't make sense for us to treat it like a bandage that gets applied or a mess that gets swept up. Which is still how platforms talk about it. Rethinking content moderation might begin with the recognition that it's a key part of how platforms tune the public discourse they purport to merely host.

Moderators, whether it's community moderators or the teams that are behind the scenes at commercial platforms, are attempting to answer the hardest question of modern society: how can the competing concerns of a public be fairly attended to? And many platforms are failing at this task. Doing it thoughtfully is essential. And that means an eye for the public consequences of different choices. An ear for the different voices, including the ones that often go unheard. A recognition that there is no neutral position; that every arrangement carries with it an implicit idea of sociality, democracy, fairness. And (this is where CivilServant comes in) a deliberate commitment to scientifically testing these arrangements and pursuing better ones through that process.

I'm extremely excited to hear all the work that you all are doing. Thank you.