Jillian C. York: We all know that hate speech is a huge problem online but we don’t really necessarily agree on what “hate speech” is. Same goes for harassment, same goes for violent extremism. A lot of the topics that we’re trying to “tackle” or trying to deal with on the Internet, we’re not actually defining ahead of time. And so what we’ve ended up with is a system whereby both companies and governments alike are working sometimes separately, sometimes together, to rid the Internet of these topics, of these discussions, without actually delving into what they are.

So, I work on a project called onlinecensorship.org, and what we do is look at the ways in which social media companies are restricting speech. Now, you could argue that not all of this is censorship, and I might agree. We look at everything from, as you may have seen last year, the censorship of nudity, which I firmly believe is censorship, to takedowns around harassment, hate speech, and violent extremism, some of which border the line between incitement and potentially lawful speech in certain jurisdictions.

So I looked at the different definitions of hate speech that the major social media platforms give. Twitter for example says that you may not promote violence against, or directly attack or threaten, other people on the basis of race, ethnicity, and so on, all of the different categories that you might imagine. They’re very similar categories to the ones before.

So, we know that these companies, these places and spaces online where most of our speech takes place…we know that they’re already committed, or at least they say so, to taking down or regulating that kind of content. But governments haven’t necessarily agreed, and I think Kirstin spoke about this a bit earlier today. Governments have felt that what the companies [recording skips] not enough. And so at the end of last year we saw the German government form an agreement with the companies to take down hate speech by getting them to remove it within twenty-four hours of being reported.

So essentially how this works is you already have these flagging mechanisms that exist on Facebook. When I report content, there are a number of different options I can choose from, and it’s actually quite granular. I just looked at how it handles hate speech.

But then the German government wanted them to go a step further and make sure that they’re doing it within twenty-four hours. We’re talking about 1.6 billion users, and content moderators of…some number. We don’t actually know how many people are employed by these companies to look at this content, and hopefully we’ll have more transparency about that soon. Fingers crossed.

But essentially they’re asking for a twenty-four-hour turnaround across all of these 1.6 billion users. This is almost impossible. And it’s not just Germany. We saw the European Commission of course, which I think was discussed earlier, as well as more recently a potential agreement between the Israeli government and Facebook to deal with incitement. Now, that one’s less clear. There’s some denial as to whether or not there is in fact an agreement, and it remains to be seen what kind of content’s taken down. So far, however, I would note that within the two weeks since this went public, we’ve seen two editors of popular Palestinian publications censored by Facebook for reasons unknown. Facebook apologized, said it was a mistake, but nevertheless, whenever there’s that additional scrutiny put on a certain category of people or a certain geographic location, you’re bound to see more erroneous takedowns, as I like to call them.

And so then what do we do about this? Because if the governments feel that the companies aren’t doing enough, and we as a society have no input into that, then essentially what we’re seeing is this quick, reactionary attempt to, you know, like I said before, get rid of all of the content without actually assessing what we’re looking at. What is it that we’re talking about? And I think that’s the first step: we haven’t agreed on a definition of hate speech. My vision of it might be different from yours. As we’ve seen, different governments have different visions. And I’m not saying that we shouldn’t tackle hate speech; we should absolutely tackle hateful speech. But if we want a free and open Internet where we all have equal access and equal opportunity, then this additional fragmentation that we’re seeing through these different privatized agreements is not the way forward.

And so first we need to find a definition that actually works for us, before we even talk about what to do about the speech itself. And then of course there are these voluntary backdoor agreements with governments that we’ve elected democratically, in all of the examples that I’ve given so far. We’ve also seen some…less-democratic governments try to strike deals with companies, and that’s another precedent that we might be setting with these. But regardless, they deny us input. And by us I mean citizens, citizens of both our countries and of the Internet, “citizens” of these platforms insofar as you could make that argument.

But we have no input into this. We’re not part of these conversations. Not only have civil society groups, NGOs, been excluded from the actual table where these agreements are being decided, but the average user has no actual say in how these spaces are governed. And so I’m not going to talk about nudity this year; I know I’ve talked about it the past two years. But I will throw it in there as an example, because I think it’s really interesting that companies have just decided for us that this is an unacceptable thing. Now, their reasons might be valid. It might be really difficult for them to tell the difference between pornography and nudity. There are all sorts of technical reasons why that might be a really hard question, and I respect that. But they’ve already made the decision to go beyond the law there; how do we know they’re not doing that in this case, too?

And then I would also go a step further and say that censorship alone doesn’t actually solve the problem of hateful speech. It doesn’t. And I’ll give you a couple of examples.

I was in Budapest a couple of years ago, just walking around in the summer. And this was in the middle of the debate in the United States around the Confederate flag. For those who might not be as familiar, the Confederate flag was of course the flag of the separatist South. At least where I’m from, it’s now largely known as a hateful, racist symbol, while in a lot of the south of the country it’s a symbol of pride for the Confederacy. But nevertheless it’s pretty widely known for what it is.

But when I see it internationally, in another context, my reaction is oh, maybe they’re just, you know, trying for some Americana. And so I saw this military shop in Budapest and I saw the Confederate flag, and I thought oh, well maybe it’s just like an Army/Navy store. So I posted it on Twitter and I asked some friends in the country, and they were like, “No no no, that’s a Nazi symbol.”

I wouldn’t have known that. Because what happens when you censor some symbols is that other ones crop up in their place. And we’re starting to see that on Twitter and Facebook now, with secret codes to avoid censorship. I’m not going to get into the actual definitions, but if you look at this article, essentially you’ve got really far-right communities online that are using innocent words like “Google,” “Skittle,” and “Skype” as substitutions for racist words.

And this happens in China too, where people use similar codes to get around censorship there, in more positive contexts. But if companies are building in algorithmic methods to filter or censor speech, it’s only a matter of time before people just come up with new substitutes. That’s how people have always gotten around censorship. I don’t see how that will not continue.
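[That dynamic, coded substitutes slipping past automated filters, can be sketched in a few lines. This is a purely illustrative toy, assuming a simple exact-match keyword blocklist; the blocklist terms are hypothetical placeholders, and real platforms’ moderation systems are far more complex and not public.

```python
# Toy sketch of why keyword-based filtering invites code words.
# The blocklist entries below are hypothetical placeholders, not
# any platform's actual rules.

BLOCKLIST = {"bannedword1", "bannedword2"}

def is_flagged(post: str) -> bool:
    """Flag a post if any whitespace-separated token is on the blocklist."""
    return any(token in BLOCKLIST for token in post.lower().split())

# The filter catches the listed term...
print(is_flagged("some post containing bannedword1"))  # True

# ...but an agreed-upon innocent substitute sails through.
print(is_flagged("some post containing skittle"))      # False
```

As soon as a community agrees that an innocent word stands in for a banned one, the filter has nothing left to match, which is why each round of takedowns tends to produce a new vocabulary rather than less hateful speech. —Ed.]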

But then lastly I would also say that censorship isn’t the solution to hateful speech. It might be a solution; it may be a component of the solution. I don’t know. That’s something for democratic processes to decide. But it doesn’t solve the problem. To solve the problem we have to get at the root causes of it. And this is why I find the title for this talk really challenging. Because I’m not the expert on how we deal with hateful communities and hateful speech and all of the right-wing groups that are cropping up in my country and yours.

But I do know that we should be looking somewhere else, and I think we’re asking some of the wrong questions as to the origins of this. And I’ll just flip through these real quick to note that all of these are people who are verified on Twitter and who engage in hateful speech on a regular basis. These are our leaders. These are the people about whom we should be asking the question: how do we get rid of hateful speech? It’s not necessarily “let’s just strike it from the record,” it’s let’s get to the root cause, and then we can talk about what we do with it in our online communities. So thank you, and thank you for having me.
