Jonathon Penney: I’m Jon Penney. I’m a legal academic and social scientist. I teach law in Canada at Dalhousie University. I’m also a research fellow at the University of Toronto’s Citizen Lab and a research affiliate at Princeton’s Center for Information Technology Policy.

Merry Mou: Hi, I’m Merry, and I worked with Nathan over the last year on CivilServant while a master’s student at MIT, and now I’m a network security engineer at a network security startup.

Penney: So before I begin I just wanted to first say thanks to Nathan for giving us the chance to come here and talk about our research, and for being part of this incredible community that he brought together. So thank you, Nathan.

Identifying & Reducing Side-Effects of Automated Legal Enforcement of Copyright on Twitter

So this is the title of our project, and it sounds a bit complicated, a bit academic. But actually, underlying this project is a pretty simple and, we think, powerful idea that provides a solution to a complex challenge facing online communities like Twitter and Reddit within the CivilServant universe. That challenge is the increasing automation of the enforcement of legal rules and norms online.

A small robot reading a book labeled "Law Journal"

Our solution… No, it’s not sending robots to law school. To borrow Ethan’s line, we’re not that evil and inhumane. But it does involve building our own automated processes, even bots, that can provide a means of protecting against and reducing some of the negative effects associated with this automation of legal norms.

So that’s sort of the idea behind the project, and we intend to carry it out through the model of citizen science associated with CivilServant. That is, we hope the results and findings we gain by having our bots essentially study copyright bots will, through those insights, empower online communities to build their own solutions and deal with some of these negative effects.

So that’s the general idea. Let me say a little bit more about some of the specifics of this particular study.

So, a lot of people talk about artificial intelligence and the social revolution it’s going to foster. But when you think about legal rules and norms, that revolution is already happening. Everywhere we look around us, laws and legal norms are increasingly being enforced and applied through technical, technological, and automated processes: from very rudimentary police bots, to red light cameras and speed cameras, to DRM. And in the case of our study, copyright, which is the focus of what we’re looking at.

Now, as many of you are aware—or at least I hope you’ve heard of it—the Digital Millennium Copyright Act, also known as the DMCA, is essentially the statute or regulatory regime by which copyright law in the United States is enforced online. It is enforced primarily through DMCA copyright takedown notices sent to users, which are effectively personally received legal threats concerning content that users post online, along with a demand to have that content removed.

What’s happened today, and this was certainly not contemplated by the people who drafted the DMCA in 1998, is that enforcement has become increasingly automated. Essentially you have private entities that own and operate bots and automated programs that send out millions, and I mean literally millions, of these DMCA notices to Internet users all around the world on a daily basis.

So why is this a challenge? Well, because prior research, including research that I and other social scientists have done, has shown that this kind of legal threat, or the knowledge that someone is watching or monitoring you in order to deliver this kind of notice, has a significant chilling effect on what people say and do online. That is, it promotes a kind of self-censorship.

This is actually a graph from a paper that I published in 2016, where I used Wikipedia data to show that awareness of online surveillance has a chilling effect on what people are willing to read on Wikipedia.

So, that’s a sense of some of the challenges associated with this research. This notion of self-censorship that we’re dealing with in this study is actually based on foundational behavioral theories concerning surveillance and social norms. But the point here is that with millions of these threats being sent out on a day-to-day basis, we predict this is promoting a significant climate of self-censorship in online communities.

Mou: So with that background of quantitative research and behavioral theories, we wanted to use CivilServant to answer two questions. The first is, does receiving a DMCA copyright takedown notice on Twitter cause that user to tweet less often in the future? And we’re hoping that this will provide additional quantitative evidence for the aforementioned negative chilling effects that might occur.
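One minimal way to operationalize this first question is to compare a user’s tweet rate in a fixed window before and after the notice. The sketch below is purely illustrative, not CivilServant’s actual analysis code; the function names, the 28-day window, and the input format (a list of tweet timestamps plus a notice date) are all assumptions.

```python
from datetime import datetime, timedelta

def tweet_rate(timestamps, start, end):
    """Tweets per day within the half-open interval [start, end)."""
    days = (end - start).days
    count = sum(1 for t in timestamps if start <= t < end)
    return count / days

def chilling_effect_estimate(timestamps, notice_date, window_days=28):
    """Compare a user's tweet rate in equal windows before and after a notice.

    A drop from `before` to `after` would be consistent with (though not
    proof of) a chilling effect; a real study would use a control group.
    """
    window = timedelta(days=window_days)
    before = tweet_rate(timestamps, notice_date - window, notice_date)
    after = tweet_rate(timestamps, notice_date, notice_date + window)
    return before, after
```

A causal claim would of course require the randomized comparison the study is built around, not just this raw before/after difference.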

The second question we want to answer with CivilServant is, can we design interventions on Twitter, basically Twitter bots, that are based on these behavioral theories and might change how people react to these takedown requests?

So right now, if you’re a user on Twitter and you make a tweet, there are a lot of companies and bots interested in taking down potential violations. They’ll send a notice to Twitter, and Twitter will often just take down these tweets without contest, due to the safe harbor provisions of the DMCA. And at this point the user makes a decision, both consciously and subconsciously, about how to react to this notice.

With CivilServant, we wanted to extend the software to detect when a user receives a notice by using Lumen, which is a public database of takedown notices across the Web, and use CivilServant to send a tweet back to that user with information about DMCA takedown notices. This content is designed based on behavioral theories, in the hope that it will change how that user responds to that takedown request.
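The core of that pipeline, matching Lumen notices to a Twitter user and composing an informational reply, could look roughly like the sketch below. Lumen is a real database, but its actual API and JSON schema differ from this; the simplified record shape, field names, and message wording here are all illustrative assumptions, not CivilServant’s or Lumen’s real interfaces.

```python
def find_notices_for_user(notices, username):
    """From a list of Lumen-style notice records, pick those whose
    infringing URLs point at the given Twitter user's tweets.

    Each record is assumed to look like
    {"id": 101, "infringing_urls": ["https://twitter.com/.../status/..."]},
    a simplification of Lumen's actual schema.
    """
    prefix = f"https://twitter.com/{username}/status/"
    return [n for n in notices
            if any(u.startswith(prefix) for u in n.get("infringing_urls", []))]

def compose_intervention(username, notice):
    """Draft the informational reply tweet (wording is illustrative only)."""
    return (f"@{username} A DMCA takedown notice (Lumen record {notice['id']}) "
            f"was filed against one of your tweets. You may have the right to "
            f"file a counter-notice: https://lumendatabase.org/")
```

The interesting design question is in `compose_intervention`: the experiment varies this text according to different behavioral theories to see which framing changes how users respond.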

So this study is still in progress, and we’re excited to share the results with you in the future. And we’re excited to be working with Nathan and with CivilServant on this idea that in an ecosystem of increasingly automated legal enforcement and pervasive Web infrastructure, we as citizens can still build our own bots that can protect each other from these potential threats to our legal rights. Thank you.

