https://www.youtube.com/watch?v=FbQKQDyxi28

Oumou Ly: Welcome to The Breakdown by the Berkman Klein Center. My name is Oumou. I'm a staff fellow on the Center's Assembly: Disinformation Program. Today on the topic of doctored media and whether they warrant takedown as a general rule, I'm joined by evelyn douek. evelyn is a lecturer on law at Harvard Law School, an affiliate at the Berkman Klein Center, and her research focuses on online speech governance and the various private, national, and global proposals for regulating content moderation. Thank you for being with us today, evelyn.

evelyn douek: Oh, it's a pleasure. Thank you for having me.

Ly: Yeah. So an initial question that was the impetus behind this topic was, when it comes to doctored videos, images, and other manipulated media, what is so sticky about the question of takedowns, and particularly when the media in question is political in nature? I think this raises questions about…you know, the general obligations of tech companies to society and the level of responsibility we should expect companies to assume for the real-world impact of false content that remains on their sites.

So my first question for you, evelyn, is can you describe the impact of manipulated media both on the online information environment and in the real world? Just generally, what are your thoughts on the harm that this kind of content stands to pose? Just as examples, I mean the sorts of things like the slurred-speech video of Nancy Pelosi that circulated heavily last year. There was also a more recent incident from the Bloomberg campaign during the Democratic primary, where there was a doctored video of crickets playing after Bloomberg posed a question to all of his fellow Democrats on the debate stage. And then of course the video of the Speaker of the House tearing up the State of the Union speech. So if you could provide any insights into that, I'd really appreciate it.

douek: There are really two categories of harm, I think, two buckets of harm when we're talking about manipulated media. And I don't want to lose sight of the first category, which is sort of the personal harms that can be done to, like, privacy or dignitary interests, through the cooptation of someone's personal image or voice. And that's something that Danielle Citron has written really powerfully about, you know. Upwards of 95% of deepfakes and manipulated media are still, like, porn. And so I don't want to lose sight of that kind of harm.

But obviously what we're talking about today is more sort of the societal impacts. And that's still a really, really important thing, you know. Could a fake video of a candidate swing an election? Could a doctored video of foreign officials or military create national security risks? You know, these are really live issues and I think it's something that we definitely need to be thinking about.

But the question also does come up, you know, is there anything really new here, with these new technologies? Disinformation is as old as information. Manipulated media is as old as media. Is there something particularly harmful about this new information environment and these new technologies, these hyperrealistic false depictions, that we need to be especially worried about?

There's some suggestion that there is, that we're particularly predisposed to believe audio or video. And that it might be harder to disprove something fake that's been created from whole cloth rather than something that's been just manipulated. You know, it's hard to prove a negative, that something didn't happen, when, like, you don't have anything to compare it to. But you know, on the other hand, this kind of concern has been the same with every new technology, you know, that there's something particularly pernicious about it, from television to radio to computer games. So I think the jury's still out on that one. But those are the kinds of things that we need to be thinking about, and the potential societal harms that can come from this kind of manipulated media.

Ly: More than that, what responsibility do platforms have to mitigate the real-world harm and not just the harm to the online information environment?

douek: That's really sort of the big question at the moment and sort of the societal conversation that we're having. It's nice and simple. I'm glad that I can give you a soundbite answer that will get me into trouble with one particular camp [indistinct] I'm sure.

I think that obviously we're in a place where there's sort of a developing consensus that platforms need to take more responsibility for the way they design their products, and the effects that that has on society. Now, that's an easy statement to make. What does that look like? That's where I think it gets more difficult. I think we do need to be careful in this moment of sort of techlash (which I believe is still ongoing; some people have called it off during the pandemic, but I think it's still going) that we don't overreact to sort of the perception of harm and create a cure that's worse than the disease, because there are important speech interests here. So, I'm not a free speech absolutist by any means. I am very much up for living in that messy world where we acknowledge that speech can cause harm and we need to sort of engage in that project. But I do think we also need to not lose sight of the free speech interests that are at play and the good that can come from social media platforms as well.

Ly: Definitely. So what you just said kind of reminds me of something that has emerged over the last couple of years, certainly since the 2016 election. And it's the idea that a platform can be an "arbiter of truth," and I think it was Mark Zuckerberg himself who coined that term. And I think at the root of it is this idea that making a decision on whether or not a piece of content, whether it's false or not, should stay on the site in a way makes that decisionmaker the decider of what's true or not. And I wonder… Well first, how would you respond to that, just that notion? And what do you think about that as a justification for allowing false content to remain online in some cases?

douek: Yeah. So I mean, I do have some sympathy with the idea that these platforms shouldn't be and don't want to be arbiters of truth. It's not a fun job. And…you know, it's a good line and I think that's why they trot it out so often. Like, of course we don't want Mark Zuckerberg or Jack Dorsey being the arbiters of truth. I mean…come on, right? Like, their truth is not my truth, so…we could start from that proposition. And you know, it would be like…a terrible job; you're only ever going to upset a whole bunch of people.

But, that's not the end of the conversation. It's still… I mean, it's obviously an oversimplification and a distraction from a lot of the issues that remain at play. So, we don't need them to be arbiters of truth, but platforms are not and have never been neutral conduits. And they are making decisions about what content to leave up, take down, prioritize, amplify, all the time. And so to pretend that they're just sort of sitting there, hands-off, not being arbiters of truth is a massive sort of…complete oversimplification of the issue. And it's really only the beginning of the conversation to say, "Okay, you don't need to be arbiters of truth, but you do need to do something. You can't just be completely hands-off."

And so like I said, they need to acknowledge the impact of their design choices. And they need to be much more transparent about when and how they stack the deck in favor of certain kinds of content, and how they manipulate or distort the information ecosystem, which is definitely happening. And we are getting to know more and more about that, but there's still nowhere near enough information about exactly how those ecosystems work.

Ly: So you mentioned there are a range of other tools that platforms have at their disposal aside from leaving up or taking down. Would you mind just describing what that slate of actions might be, might look like?

douek: Yeah, so I really think we need to get out of this leave up/take down paradigm, because platforms have so many more tools available at their disposal. They can do things like label content as having been fact-checked or manipulated, in the context of manipulated media. They can reduce the amount of circulation that a piece of content is getting, or how easy it is to share it, or sort of downrank it in the news feed or the algorithmic feed.
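To make that slate of intermediate tools concrete, here is a minimal Python sketch of how a moderation pipeline might choose among labeling, downranking, limiting sharing, and removal rather than treating moderation as a binary. The action names, the harm score, and the thresholds are all hypothetical illustrations, not any platform's actual policy.

```python
from enum import Enum

class Action(Enum):
    LEAVE_UP = "leave up"
    LABEL = "attach a fact-check / manipulated-media label"
    DOWNRANK = "reduce distribution in ranked feeds"
    LIMIT_SHARING = "add friction to resharing"
    REMOVE = "take down"

def choose_action(is_manipulated: bool, harm_score: float) -> Action:
    """Pick a response based on estimated harm, not just truth vs. falsity.

    harm_score is a hypothetical 0-1 estimate of likely real-world harm.
    """
    if not is_manipulated:
        return Action.LEAVE_UP
    if harm_score > 0.9:
        return Action.REMOVE         # reserved for the most dangerous content
    if harm_score > 0.6:
        return Action.LIMIT_SHARING  # slow virality without removing speech
    if harm_score > 0.3:
        return Action.DOWNRANK       # keep it up, just show it to fewer people
    return Action.LABEL              # inform viewers and leave the post alone

if __name__ == "__main__":
    print(choose_action(is_manipulated=True, harm_score=0.5))  # Action.DOWNRANK
```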

They can also make sort of architectural and structural design choices that can have huge impacts on the information ecosystem. So an example here is WhatsApp, in the context of the pandemic, has reduced how easy it is to forward messages. So instead of being able to forward a message to multiple chats at a time, you can only forward it to one chat at a time. And this has reduced the circulation of certain kinds of content by 70%, which is an absolutely huge impact. And that doesn't involve being the arbiter of truth of the content in question, but it does drastically change the information environment. So those are the kinds of initiatives and tools that platforms have that I think we need to be talking about a lot more.
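As a rough sketch of that kind of structural choice (not WhatsApp's actual implementation; the field names, thresholds, and limits here are assumptions for illustration), a forwarding limit can be expressed as a rule that never looks at what the message says:

```python
from dataclasses import dataclass

# Hypothetical policy: once a message has been forwarded many times,
# it can only be passed on to one chat at a time. The rule is purely
# structural; it does not judge whether the message is true or false.
FREQUENTLY_FORWARDED_THRESHOLD = 5
MAX_CHATS_WHEN_VIRAL = 1
MAX_CHATS_NORMALLY = 5

@dataclass
class Message:
    text: str
    forward_count: int  # how many times this message has been forwarded so far

def allowed_forward_targets(msg: Message) -> int:
    """Return how many chats this message may be forwarded to in one action."""
    if msg.forward_count >= FREQUENTLY_FORWARDED_THRESHOLD:
        return MAX_CHATS_WHEN_VIRAL
    return MAX_CHATS_NORMALLY

if __name__ == "__main__":
    viral = Message(text="Forwarded many times", forward_count=12)
    fresh = Message(text="Just wrote this", forward_count=0)
    print(allowed_forward_targets(viral))  # 1 -- friction for highly forwarded content
    print(allowed_forward_targets(fresh))  # 5
```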

Ly: Do you think there's any use in platforms developing a unified protocol on takedowns at all?

douek: So I think this is one of the most fascinating questions. I love this question. And I'm obsessed with it and I don't know the answer to it.

Ly: Okay.

douek: So, when do we want uniform standards online? And you know, when do we want, like, different marketplaces of ideas, so to speak? So I think you can see arguments for either. On the one hand, if you want standards, you want standards, and you want them uniformly across the information ecosystem. So, developing the tools to detect and identify manipulated media is potentially extremely expensive and might be something that only the largest platforms have the resources to be able to do. And if they do do that, why shouldn't small platforms also have the benefits of that technology and use the same tools?

But on the other hand, free speech scholars get nervous when you start talking about sort of compelled uniformity in speech standards, and maybe if we don't know where to draw the line, why not have lots of people try drawing it in different places and see what works out best? This is something that I've been calling the "laboratories of online governance" approach to this problem.

So, ultimately I actually hope that we can find a middle ground. Like a good lawyer I'm, you know, somewhere in between, where we can sort of have the resources and some common standards, but some flexibility for platforms to adapt those to their unique affordances and their unique environment.

Ly: Thank you so much for joining us today, evelyn. I really enjoyed our conversation.

douek: Thanks very much for having me.

Ly: Thanks.


