Hi. So, what I'm talking about here is not what we need to do culturally or politically, it's not the roots of online harassment. It's the design tools that we can use to shape the environments that people interact in to reduce the impact.

This is mitigation. It's not going to solve anything. That's a larger cultural change. That's a bigger question, but having technical tools that let us actually help people's lives makes the tools that we're building more useful. The freedom from abuse is a core part of efficacy for a platform. If you can't use a platform without getting massively attacked when you go on it, then it's not actually very useful for you, now is it?

So if you are a design team, you have a responsibility, in the same way that you have a responsibility to make sure that you're not shipping broken features. You can think of it as a bug report that says, "50% of the time when I click on this button, the database connection breaks. 50% of the time when I click on this button, there's a massive outpouring of hate." This is a bug that needs to be fixed, and it can be fixed at the design level, or at least mitigated.

When we talk about reducing the impact of online harassment, a lot of what we're talking about is helping participants help themselves. If you've got a limited forum and you can afford heavily-engaged moderators, that's one approach. MetaFilter is a great example of this. It's an amazingly well-moderated community and always has been. However, that has taken probably hundreds of thousands of professional hours over the life of the site from some incredibly skilled, empathic moderators. That doesn't scale, and it also isn't really appropriate everywhere: if you look at something like Twitter, you can't have that same kind of moderation. That's fundamentally a different structure. So instead we look at tools to give people agency and control over their own environment.

This means things like access control lists: being able to say who gets access to my content, and to do so in a flexible way. There's a problem here where you trade complexity for capability. But for instance, on Facebook you can set up private lists of people and control, "Okay, these posts go here, and these posts go to everyone, and these posts are visible only to this really small set." The problem is that they keep pushing people to be more and more public, and we'll get back to this a little bit later. But if you're going to build tools like access control lists, if you're going to give people these promises that they get to control these sorts of things, you need to do so in a way that's actually usable by them, and you need to respect people's decisions there.
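As a rough illustration of that idea, here's a minimal sketch of a per-post audience check in the spirit of Facebook-style lists. Everything in it (the Post and User shapes, the audience names) is hypothetical, not any platform's real data model.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    audience: str = "public"   # "public", "subscribers", or the name of a private list

@dataclass
class User:
    name: str
    lists: dict = field(default_factory=dict)       # list name -> set of usernames
    subscribers: set = field(default_factory=set)

def can_view(viewer: str, post: Post, author: User) -> bool:
    """Honor the author's audience choice for this post."""
    if viewer == author.name or post.audience == "public":
        return True
    if post.audience == "subscribers":
        return viewer in author.subscribers
    # Otherwise the audience is one of the author's private lists.
    return viewer in author.lists.get(post.audience, set())

alice = User("alice", lists={"close-friends": {"bob"}}, subscribers={"bob", "carol"})
print(can_view("carol", Post("alice", "hi", audience="close-friends"), alice))  # False
```

The respect-the-decision part is the important bit: once a post is scoped to a list, nothing else in the system should quietly widen that audience.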

Access control lists are kind of the most blunt tool, though. There's a lot of more subtle stuff we can do, for instance giving flexible tools: say on Twitter, "I don't want to see @-replies from accounts that are less than a month old." Having that button now all of a sudden means all of your sockpuppet accounts that are created quickly are much less useful. And you're giving people that kind of flexible filtering.
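A sketch of what that account-age filter might look like; the reply structure and the thirty-day threshold are assumptions for illustration, not Twitter's actual implementation.

```python
from datetime import datetime, timedelta, timezone

MIN_ACCOUNT_AGE = timedelta(days=30)

def visible_replies(replies, now=None):
    """Keep only replies whose author's account is at least a month old."""
    now = now or datetime.now(timezone.utc)
    return [r for r in replies
            if now - r["author_created_at"] >= MIN_ACCOUNT_AGE]

replies = [
    {"text": "sockpuppet noise",
     "author_created_at": datetime.now(timezone.utc) - timedelta(days=2)},
    {"text": "long-time follower",
     "author_created_at": datetime.now(timezone.utc) - timedelta(days=400)},
]
print([r["text"] for r in visible_replies(replies)])  # ['long-time follower']
```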

Any of these kinds of tools can help. The Bayesian filters that we train for spam can be trained on anything. If you give them examples of a type of message, they will learn to recognize that type of message over time. You can do this on abusive messages just as well. The filters aren't as well-tuned for that; there are a lot of hacks that have been added to Bayesian filters over the years to make them better and more effective specifically at spam. But there's no reason why we can't build that same sort of smart tool that says, "Well, I get a lot of @-replies, especially if I'm a public figure. Show me the ones that look relevant, show me the ones that look like they're not spam, but let me train it." And again, any time you can give the user agency instead of taking it on your site, you're going to end up with empowered users instead of a site that's controlling what they see.
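For the Bayesian piece, the underlying mechanism is the same naive Bayes idea spam filters use, just trained on whatever examples the user labels. A toy, unoptimized sketch (no stemming, minimal smoothing), not a production filter:

```python
import math
from collections import Counter

class BayesFilter:
    def __init__(self):
        self.counts = {"abusive": Counter(), "ok": Counter()}
        self.totals = {"abusive": 0, "ok": 0}

    def train(self, text, label):
        # The user supplies the labels; the filter just learns word frequencies.
        self.counts[label].update(text.lower().split())
        self.totals[label] += 1

    def score(self, text):
        """Log-odds that `text` is abusive; positive means 'probably abusive'."""
        vocab = set(self.counts["abusive"]) | set(self.counts["ok"])
        logodds = math.log((self.totals["abusive"] + 1) / (self.totals["ok"] + 1))
        for w in text.lower().split():
            p_abuse = (self.counts["abusive"][w] + 1) / (sum(self.counts["abusive"].values()) + len(vocab) + 1)
            p_ok = (self.counts["ok"][w] + 1) / (sum(self.counts["ok"].values()) + len(vocab) + 1)
            logodds += math.log(p_abuse / p_ok)
        return logodds

f = BayesFilter()
f.train("you are garbage get off the internet", "abusive")
f.train("great talk thanks for sharing", "ok")
print(f.score("you are garbage") > 0)   # True: resembles the user's abusive examples
```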

Let people monitor abusers. For instance, on Facebook when you block someone, that's it: you can't see anything they post. If you have someone who's been an ongoing stalker, who has been abusive in the past, who has been making threats, maybe you actually need to be able to see what they're posting as a user, even if you'd rather not and you don't want to interact with them. That's a very important safety consideration for people in a lot of situations. So give people that kind of trade-off and maintain user agency.
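One way to express that trade-off is to treat "block" as more than a single all-or-nothing switch. This is a hypothetical model for illustration, not how Facebook or any other platform actually works:

```python
from enum import Enum

class Shield(Enum):
    NONE = "none"        # normal interaction
    MONITOR = "monitor"  # they can't contact you, but you can still view their public posts
    BLOCK = "block"      # full two-way invisibility, the all-or-nothing block

def they_can_reach_victim(level: Shield) -> bool:
    return level == Shield.NONE

def victim_can_still_see_them(level: Shield) -> bool:
    # The safety-relevant difference: MONITOR preserves the victim's view.
    return level in (Shield.NONE, Shield.MONITOR)
```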

Privacy tools like Tor are another really important thing here. If you are being seriously stalked, one of the recommendations is: use Tor, maintain your anonymity, don't give people your home IP address, because that can often be linked to a physical address, especially if you've had to leave your apartment and go somewhere else. You then don't want to leak that out again. This means dealing with anonymous accounts, and this means shifting the landscape of abuse again. As we heard in the first talk, anonymity doesn't generate abuse in and of itself. But if you want to discourage people from using quick throwaway identities (which is one of the dynamics that does cause problems), what you can do is make account creation more heavyweight.

So instead of it being "I'm going to generate a hundred different accounts, and when each of them gets blocked serially…" (someone has to block a hundred accounts), if I can restrict that abuser to a single anonymous account, or two or three, that has a massive impact on the victim's ability to use the tools that they've been given. Tor isn't the problem, anonymity isn't the problem; lightweight accounts that are throwaway can be a problem.
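One crude way to make account creation heavier is to cap how many accounts a single signup signal can spin up in a window. Choosing that signal (a verified contact point, a proof-of-work token, whatever fits the platform) is the hard design problem; this sketch just assumes one exists and every name in it is hypothetical:

```python
import time
from collections import defaultdict, deque

WINDOW = 24 * 3600            # seconds
MAX_ACCOUNTS_PER_KEY = 3      # how many signups one key gets per window

_recent = defaultdict(deque)  # signup key -> timestamps of recent signups

def may_create_account(signup_key: str, now: float = None) -> bool:
    now = now if now is not None else time.time()
    q = _recent[signup_key]
    while q and now - q[0] > WINDOW:   # drop signups outside the window
        q.popleft()
    if len(q) >= MAX_ACCOUNTS_PER_KEY:
        return False                   # the hundred-sockpuppet pattern gets expensive
    q.append(now)
    return True
```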

One of the other things you want to look at is doing less, but doing it more often. It's one thing when you're in a forum that has a publisher and a specific moderation policy and that kind of thing. If you have a general-purpose social media site, you need to tread lightly around abuse. There is a real public value in keeping that content as open as possible, and you're going to have very different communities there, some of which may have political opinions significantly different from the development team's.

If you default to simply taking down content or blocking accounts, you have to have a very high bar for how bad it has to get before you can use those tools. Instead, say you've got different access control lists. You've got an "only people who subscribe to you specifically will see this" mode; when you get a complaint on content, you just drop it into that bucket. Or you have to click through to see it, or any of these kinds of things where you haven't deleted the content entirely. Then that means you can have a much lower bar for that interaction, because you're not completely preventing people from seeing it, you're just shaping the conversation.
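That ladder of softer responses might look something like this; the visibility levels are made up for illustration:

```python
# A complaint demotes content to a more restricted bucket instead of deleting it.
VISIBILITY_LADDER = ["public", "subscribers_only", "click_through_warning", "hidden"]

def handle_complaint(post):
    """Step the post one rung down the ladder; never silently destroy it."""
    current = VISIBILITY_LADDER.index(post["visibility"])
    if current < len(VISIBILITY_LADDER) - 1:
        post["visibility"] = VISIBILITY_LADDER[current + 1]
    return post

post = {"id": 42, "visibility": "public"}
handle_complaint(post)
print(post["visibility"])   # 'subscribers_only': softer than a takedown, so the bar can be lower
```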

One of the things which is really important here is that as soon as you have tools for preventing abuse and preventing harassment, they're also going to be used as weapons against the victims. So if you have "report this account for spam," guess what: you're going to get 10,000 spam reports because somebody wants to knock somebody offline on an account that's totally non-abusive. So it's important to have transparent and clear processes, and it's important to understand and carefully design those tools to minimize the harm that they can do. For instance, Instagram is often very aggressive about blocking accounts and taking down content if they think there's nudity in it, because they've decided that's not acceptable for adults. (It's their Internet, we just live there.) One of the problems is that when they do that, if they decide that your account is bad, all the data's gone, and getting it restored is non-trivial even when they can do it.

So instead you want to understand, "Hey, we're going to get false reports, or we're going to get questionable reports. Maybe let's not irrevocably delete massive amounts of user data right away." It's about minimizing the damage that you do, so that you can act more easily. This then means that you can make the victims jump through fewer hoops. You don't have to have the really heavyweight process.
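In code terms, that mostly means soft deletion with a grace period instead of an immediate purge. The field names and the thirty-day window here are assumptions, not any site's actual policy:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)   # appeal window before anything is truly destroyed

def take_down(account):
    account["state"] = "hidden"
    account["taken_down_at"] = datetime.now(timezone.utc)

def restore(account):
    # A false or weaponized report stays reversible.
    if account["state"] == "hidden":
        account["state"] = "active"
        account["taken_down_at"] = None

def purge_if_expired(account, now=None):
    """Only destroy data once the appeal window has genuinely passed."""
    now = now or datetime.now(timezone.utc)
    if account["state"] == "hidden" and now - account["taken_down_at"] > RETENTION:
        account["state"] = "purged"
        account["content"] = None
```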

The next thing is: help communities help themselves. Abuse occurs in communities. There may be one victim, like Anita, who's kind of the person who's getting everything thrown at them, but she exists in a community. She has friends. People in communities that are receiving abuse can take different roles. For instance, you may have someone who has the time to compile, "Okay, I'm going to spend every morning going through and checking these are all the new accounts," and then distribute a collective block list. There are tools that you can build that let people work together. And it is very different for people to be working together and building those tools for their community, rather than the site doing that for everyone. These have very different implications. They have very different legal implications, among other things. But building tools that let communities help each other is an incredibly important tactic in effectively resisting abuse at scale.

Letting them do their own moderation also reduces the moderator load and the moderator cost for the site, and this kind of engineering trade-off matters for scalability. When you design these use cases for abuse prevention, you have to treat them as seriously as all of the other use cases that you build. For instance, Twitter now has something which was sort of a vague gesture at allowing people to have more control of their block lists: you can manually export a CSV file of blocked users, and then manually import it again. This is not very useful. They also have lists that you can just click on and subscribe to, but those are for getting more content. Having a block list that you could just click on and subscribe to and say, "Yes, I trust this user. I want to delegate that," well, that would be too easy. But that's the kind of ease that you need if you're going to make these kinds of tools for communities actually functional.
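The delegation itself is not complicated. A sketch of a subscribable block list, with hypothetical structures and a made-up list name:

```python
# A trusted community member curates one shared list; subscribers' effective
# blocks update automatically, with no CSV export/import round-trip.
shared_lists = {
    "community-blocklist": {"sock1", "sock2", "sock3"},
}

class Account:
    def __init__(self, name):
        self.name = name
        self.personal_blocks = set()
        self.subscriptions = set()   # names of shared lists this user trusts

    def effective_blocks(self):
        blocked = set(self.personal_blocks)
        for list_name in self.subscriptions:
            blocked |= shared_lists.get(list_name, set())
        return blocked

u = Account("victim")
u.subscriptions.add("community-blocklist")   # one click: "I trust this curator"
print("sock2" in u.effective_blocks())       # True
```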

One of the places where abuse occurs more specifically is when you get what's called context collapse. When you have a gaming community that has one set of norms and a feminist community that has another set of norms, one of the places where abuse will be generated is where they both think they're in their own living room and suddenly there are all these weirdos in it, and they have very different cultures. So context collapse is one of the things that makes these systems very useful to us, but it's also one of the things that drives abuse. So letting communities mark their borders, letting communities enforce their borders, building tools that let there be a "Hey, you know, here we play by these rules" kind of standard setup, will make a big difference for the level of abuse that's generated.

This isn't quite enforcing a filter bubble. People can still choose to go and walk into somebody else's living room, but there's a marker that says, "Hey, I walked through a door. I'm somewhere else now," and that kind of structure makes a big difference for shaping community.

Lastly, stop getting rich off abuse. Abuse leads to engagement, because real harm is happening and people have to spend more time on the site or with the app to deal with and fight off that abuse. That translates to ad views and money and revenue, and it's one of the reasons why a lot of sites are so bad at designing for abuse: because they're making money off of it. When you design for engagement (for instance, the more time you spend on Facebook, the more content it shows you), you are specifically rewarding engagement, but that can also reward abuse. So the algorithms that you use to shape participation also have a real impact on the abuse that gets generated.

Over-collection of data just helps stalking and doxxing. If you collect a bunch of information from your users, you're also creating a big target to be hacked, and then that data gets used to harm your users. So don't gather information that you don't need to gather. Ad networks are being actively used to spread malware, sometimes targeted malware. If you can run without ads, run without ads, because ads are one of the most evil things on the Internet right now as far as real security risks go.

And lastly, kill your VCs. All of the VC funding and the structures that they enforce around continual rapid growth are the things that are driving people towards these kinds of evil tactics, or ignoring or kind of papering over the damage that happens. If you can build an investment model that doesn't require you to abusively drive growth, then you're going to have less abuse on your platform, too. These things are kind of inseparable.

Hopefully some of these were useful. Thank you very much.

Further Reference

Description at The Conference's site of the session this talk was part of.

The original video for this presentation can be found at The Conference's site.