Tarleton Gillespie: I’m really excited to be here. I’m very excited to hear some of the work that you’re all doing. And I’m really proud to be kind of just a piece of what CivilServant is doing and can do.
What I’d like to do just with the few minutes that I’m up here is to set the stage. This is a huge set of questions, and I think a set of questions that are exploding into public view in a way that they hadn’t even just a few years ago. So I want to sketch the broad place where some of these questions live.
So social media platforms arose out of the exquisite chaos of the Web. And many were designed by people who were inspired by, or at least hoping to profit from, the freedom that the Web promised: to host and maybe extend all that participation, expression, and social connection. But as these platforms grew, all that chaos and contention quickly found their way back to them as well.
And as I said, in the past few years there’s been growing attention and public debate about how and why platforms moderate. But as many of the people in this room know very well, these problems are not new, and the challenge of managing a spontaneous, heterogeneous, and unruly community is not new. Community management has been a central concern since the Web began. As far back as Usenet, moderators, webmasters, and the managers of online forums all knew that healthy communities and lively discussion could sometimes devolve.
Those who championed online communities quickly discovered that communities need care. They have to address the challenges of harm and offense, and they have to develop forms of governance that protect their community while also embodying democratic procedures that match the values of the managers and the values of their users.
Now, the fantasy of a completely open platform is a powerful one. It resonates with deep and utopian notions of community and democracy. But it is just that: a fantasy. There’s no platform that doesn’t impose rules to some degree—that would simply be untenable. And this audience knows that. Although I think it’s still not widely apparent to many users.
And, while we as a public sometimes decry the intrusion of content moderation on platforms, at other moments we decry its absence, right. So we’re asking for moderation too, and asking for it in varying forms.
So the challenge for platforms is exactly when, how, and why to intervene. Where to draw the line between the acceptable and the prohibited. These questions rehearse centuries-old debates about the proper boundaries of public expression while also introducing new ones. And platforms also have to recognize that the particular ways in which they police and enforce these rules and guidelines have their own consequences for the shape of a community, for what’s possible, and for the kinds of missteps a platform can take.
I want to make a quick distinction, just because I think it’s useful and people often blur these together: a distinction between the governance of platforms and governance by platforms. By the governance of platforms, I mean the policies that have emerged in the last decade or two specifying the liabilities, or the lack thereof, that platforms may have for the content and activity of their users. Those are policies imposed by law, by regulators, by standards organizations.
And when I say governance by platforms, what I mean is the way in which social media platforms have increasingly taken on this responsibility for curating content and policing the activity of their users.
These are related but they’re not the same. Sometimes the governance of platforms produces governance by platforms. So when the law obligates platforms to do something on its behalf—an example might be removing child pornography—then the law imposed on the platform produces a rule enforced by the platform.
But US law in particular made an early decision to impose very little obligation on social media platforms, and in fact to protect them from liability. And I think it did so in a way that not only allowed them to build up very complicated and often opaque governance systems, but also to do so with almost zero obligation or oversight.
For those of you who know the law, I’m talking about the safe harbor protections built into Section 230 of US telecommunications law. This law offers the broadest safe harbor in the world, a kind of immunity from liability for platforms, as well as Internet service providers and search engines, for what their users circulate and do, right. So, classic questions like defamation and obscenity: the users may do it; the platforms are not held liable for it.
But Milton Mueller points out a really interesting aspect of this rule that often gets forgotten: the law has two parts. The first part says that if users are engaged in problematic behavior, the platform will not be held liable. That’s the classic safe harbor. That’s the part we think about.
The second part says that if platforms intervene, if they are policing content, if they are making choices, that won’t then make them any more liable, right. The worry was that if they started to pick and choose, that would create a heightened obligation. They would look like publishers, and they would then be held accountable. So the law says: you don’t have to police, and you won’t be held liable; and if you do police, that doesn’t make you any more liable. Those are the two parts.
And it made a lot of sense at the time. It creates…there’s a phrase that often shows up in terms of service, “the right but not the responsibility.” Platforms will say “we have the right to police but not the responsibility to police.” That’s a very luxurious position to take, right? This is a very different legal position than other forms of media and communication that have a public footprint. And it lets platforms moderate in whatever way they see fit, without independent oversight or public responsibility.
In the history of US media and telecom law, by and large when an industry is offered a sort of generous opportunity like this…you could think about broadcast spectrum, you could think about managed monopolies in telecommunications, it often comes with something. It comes with some kind of obligation to the public interest, right; universal service. You’re gonna get this privilege and it’s going to benefit you economically, industry, but with that comes a certain set of obligations. And lots of people have argued that those obligations are often thin, or they fall away—that’s true. But at least the idea is we’re going to grant you a right or a privilege but we’re also going to create a sense of obligation.
Section 230 basically passed on that opportunity. And we could imagine all sorts of things, right. Some kind of public interest obligation. Some kind of minimum standards. Some kind of best practices. Some kind of public input, right. At the time that was very hard to see, because what seemed extremely important was to protect intermediaries from liability, to avoid squelching innovation.
Now we can see that, just like the grant of a managed monopoly for the telecom companies or cable, or the grant of spectrum space for broadcasting, this was a very powerful offer. And it allowed an industry to build up, and to build a huge apparatus that manages content moderation on its own terms, with none of the framework of obligation that it might have come with.
Platforms are eager to keep and enjoy those safe harbor protections, and they all take advantage of that second half: they all police, in good faith, which is what the law asks.
Nearly all platforms impose their own rules and police their sites for offending content and behavior. And more importantly, they’ve cobbled together a content moderation apparatus: rules and guidelines and the animating principles behind them; complaint processes; appeals processes; complex logistics for review and judgment.
And those logistics draw on labor: company employees, temporary crowd workers, outsourced review teams, legal and expert consultants, community managers, flaggers, admins, mods, superflaggers, nonprofits, activist organizations, and sometimes the entire user population. As well as algorithmic techniques: software for detection, filtering, queuing, and reporting.
Not all platforms depend on all of these, and no two do it exactly the same way. But across the prominent social media platforms these rules, procedures, labor, and logistics have coalesced into a functioning technical and institutional system, sometimes fading into the background, sometimes becoming a vexing point of contention between users and platform.
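To make that apparatus a little more concrete, here is a minimal sketch, in Python, of the kind of flag-and-review queue this description implies. It is purely illustrative: the class names, fields, and the one-line rule are invented for this example, not drawn from any platform’s actual system.

```python
# Illustrative sketch only: a toy flag-and-review queue.
# All names here (Flag, ReviewQueue, etc.) are invented for illustration.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "harassment", "spam"

class ReviewQueue:
    """Collects user flags and hands them to reviewers for a judgment."""
    def __init__(self):
        self.pending = deque()   # flags awaiting review
        self.decisions = {}      # content_id -> "remove" or "keep"

    def flag(self, flag: Flag) -> None:
        # Detection and filtering software might pre-sort or prioritize here.
        self.pending.append(flag)

    def review_next(self, decide) -> Optional[str]:
        # A reviewer (employee, crowd worker, or community mod) applies the rules.
        if not self.pending:
            return None
        flag = self.pending.popleft()
        self.decisions[flag.content_id] = decide(flag)
        return flag.content_id

# Usage: one flag, one review, one decision based on a simple rule.
queue = ReviewQueue()
queue.flag(Flag("post-123", "user-9", "harassment"))
queue.review_next(lambda f: "remove" if f.reason == "harassment" else "keep")
print(queue.decisions)  # {'post-123': 'remove'}
```

Even a toy like this makes the point that rules, labor, and software have to be stitched together before a single decision gets made.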
Users, whether they know it or not, are swarming within, around, and sometimes against this moderation apparatus. Maybe some of the concerns that are emerging in the public debate are not just about which rules platforms set, right. Is this the right rule; is this the right line to be drawn in the sand? But also what does it mean to approach public discourse in this way and at this scale? What are the nature and the implications of the systems that they’re putting in place? Not just the decisions but the work and the logistics and the arrangement that they require.
The very fact of moderation is shaping social media platforms as tools, as institutions, as companies, and as cultural phenomena. And many of the problems that we are asking questions about may lie in the uncertainties of this distributed and complex system of work. And, they often breed in the shadow of an apparatus that remains distinctly opaque to public scrutiny.
That apparatus is being tested, and it’s being tested in a number of ways. First, the classic problems of pornography, harassment, bullying, self-harm, and illegal activity are growing more robust, more tactically sophisticated, and more unbearable.
More than that, outside the United States, questions are arising from governments that don’t hold that same notion of safe harbor and that would like to hold platforms partly responsible for the circulation of hate speech and of terrorist content and recruiting. As well as pressure from countries that are more interested in restricting things like political speech under the guise of regulating intermediaries.
And here I think the most pressing challenge is that we’re moving from a concern about individual harms to a concern about public ones. So, we could talk about the growing concern about misogynistic harassment, or about white supremacy on these platforms, not only as a harm that can affect an individual user—which it does—but also in terms of the corrosive effect it has on the public as a whole. Both of those problems are emerging.
We could talk about nonconsensual pornography—revenge porn—as having implications not only for the user who might look at it or might be the recipient of it, but for someone who’s in the photo and isn’t even a user of that platform, yet is now being affected.
And then certainly the issues that we’ve been hearing about in the last year or two, fake news and political manipulation, raise a new set of questions. I may never see a fraudulent headline. I may never have forwarded a fraudulent headline. But I may be troubled by my participation in a system that is allowing it to circulate. That’s having a public effect even if it didn’t have an individual effect. And that’s a much harder question to grapple with.
According to John Dewey this is the very nature of a public. He says “the public consists of all those who are affected by the indirect consequences of transactions to such an extent that it’s deemed necessary to have those consequences systematically cared for.” That’s the challenge of a public, and that’s what platforms and moderators are facing.
Platforms tend to disavow content moderation—they don’t want to talk about it too much. And they hide it behind the mythos of open participation. That is still their key selling point. But far from being occasional, or ancillary, or secondary, or background, I want to argue that moderation is essential, it’s constant, and it’s a definitional part of what platforms do.
In a couple of ways. It’s a surprisingly large part of what platforms do in a day-to-day sense: in terms of time, in terms of resources, in terms of people. If you just wanted to ask, “What are most of the people working for Facebook doing?”, a very large part of them are handling moderation.
Second, moderation shapes how platforms think about users. And I don’t just mean the people who are violating rules and the people who might be victims of that. If you hand over part of the labor of moderation to users by asking them to flag content, then you begin to think of your users not just as participants, or consumers, or sellable data, but also as part of the labor force. And that changes the role that users get to play.
But most importantly I would say that content moderation constitutes the platform. Thom Malaby says that platforms are hinged on the value of unexpected contributions. That’s exactly what makes them valuable, right. But if your value is based on unexpected contributions then your job, your commodity, is the taming of those, the tuning of those, into something that can be delivered and can be sold. Moderation is the commodity that platforms offer. Though platforms are part of the Web, they offer to rise above it. And they promise a better experience of all this information and sociality.
In fact, if we want to expand the definition of moderation just a little bit, we could say that policing is just a component of the ongoing calibration that social media platforms engage in. Part of a three-pronged tactic. Moderation: the removal, filtering, suspension, banning. Recommendation: newsfeeds, trending lists, personalized suggestions. And curation: featured content, front page offerings. Platforms are constantly using these three levers to tune the participation of users, to produce the right feed for each user, the right social exchanges, and the right kind of community. And “right” here can mean a lot of things: ethical, legal, healthy; promoting engagement, increasing ad revenue; facilitating data collection. Not only can platforms not survive without moderation, they aren’t platforms without it.
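As a rough illustration of those three levers, here is a toy sketch of a feed being tuned. The posts, scores, and rules are invented assumptions; this is not any platform’s actual ranking or curation logic.

```python
# Illustrative sketch only: the three "levers" applied in sequence to a toy feed.
posts = [
    {"id": 1, "text": "hello world", "flagged": False, "score": 0.4, "featured": False},
    {"id": 2, "text": "spam spam",   "flagged": True,  "score": 0.9, "featured": False},
    {"id": 3, "text": "big news",    "flagged": False, "score": 0.7, "featured": True},
]

def moderate(posts):
    # Removal/filtering: drop anything flagged as violating the rules.
    return [p for p in posts if not p["flagged"]]

def recommend(posts):
    # Ranking: order what's left by an engagement score.
    return sorted(posts, key=lambda p: p["score"], reverse=True)

def curate(posts):
    # Featuring: put editorially chosen content up front.
    return sorted(posts, key=lambda p: not p["featured"])

feed = curate(recommend(moderate(posts)))
print([p["id"] for p in feed])  # [3, 1]
```

The point of the toy is simply that what a user finally sees is the product of all three levers working together, not of hosting alone.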
So. The hard questions being asked now: freedom of expression, virulent misogyny, trolling, breastfeeding photos, pro-anorexia, terrorism, fake news. I see these as part of a fundamental reconsideration of social media platforms. A moment of challenge to what they’ve been thus far. And if content moderation is the commodity, if it’s the essence of what platforms do, it doesn’t make sense for us to treat it like a bandage that gets applied or a mess that gets swept up. Which is still how platforms talk about it. Rethinking content moderation might begin with this recognition that it’s a key part of how they tune public discourse that they purport to merely host.
Moderators, whether it’s community moderators or the teams that are behind the scenes at commercial platforms, are attempting to answer the hardest question of modern society: how can the competing concerns of a public be fairly attended to? And many platforms are failing at this task. Doing it thoughtfully is essential. And that means an eye for the public consequences of different choices. An ear for the different voices, including the ones that often go unheard. A recognition that there is no neutral position; that every arrangement carries with it an implicit idea of sociality, democracy, fairness. And (this is where CivilServant comes in) a deliberate commitment to scientifically testing these arrangements and pursuing better ones through that process.
I’m extremely excited to hear all the work that you all are doing. Thank you.
Further Reference
Gathering the Custodians of the Internet: Lessons from the First CivilServant Summit at CivilServant