Oumou Ly: Welcome to The Breakdown by the Berkman Klein Center. My name is Oumou. I’m a staff fellow on the Center’s Assembly: Disinformation Program. Today on the topic of doctored media and whether they warrant takedown as a general rule, I’m joined by evelyn douek. evelyn is a lecturer on law at the Harvard Law School, an affiliate at the Berkman Klein Center, and her research focuses on online speech governance and the various private, national, and global proposals for regulating content moderation. Thank you for being with us today, evelyn.
evelyn douek: Oh, it’s a pleasure. Thank you for having me.
Ly: Yeah. So an initial question that was the impetus behind this topic was, when it comes to doctored videos, images, and other manipulated media, what is so sticky about the question of takedowns, particularly when the media in question is political in nature? I think this raises questions about…you know, the general obligations of tech companies to society and the level of responsibility we should expect companies to assume for the real-world impact of false content that remains on their sites.
So my first question for you, evelyn, is can you describe the impact of manipulated media both on the online information environment and in the real world? Just generally, what are your thoughts on the harm that this kind of content stands to cause? And just as examples, I mean the sorts of things like the slurred-speech video of Nancy Pelosi that circulated heavily last year. There was also a more recent incident from the Bloomberg campaign during the Democratic primary, where there was a doctored video of crickets playing after Bloomberg posed a question to all of his fellow Democrats on the debate stage. And then of course the video of the Speaker of the House tearing up the State of the Union speech. So if you could provide any insights into that, I'd really appreciate it.
douek: There’s really two categories of harm, I think, two buckets of harm when we’re talking about manipulated media. And I don’t want to lose sight of the first category, which is sort of the personal harms that can be created to like privacy or dignitary interests, through the cooptation of someone’s personal image or voice. And that’s something that Danielle Citron has written really powerfully about, you know. Upwards of 95% of deepfakes and manipulated media are still like, porn. And so I don’t want to lose sight of that kind of harm.
But obviously what we’re talking about today is more sort of the societal impacts. And that’s still a really really important thing, you know. Could a fake video of a candidate swing an election? Could a doctored video of foreign officials or military create national security risks? You know, these are really live issues and I think it’s something that we definitely need to be thinking about.
But the question also does come up, you know, is there anything really new here, with these new technologies? Disinformation is as old as information. Manipulated media is as old as media. Is there something particularly harmful about this new information environment and these new technologies, these hyperrealistic false depictions, that we need to be especially worried about?
There’s some suggestion that there is, that we’re particularly predisposed to believe audio or video. And that it might be harder to disprove something fake that’s been created from whole cloth rather than something that’s been just manipulated. You know, it’s hard to prove a negative, that something didn’t happen when like, you don’t have anything to compare it to. But you know, on the other hand this kind of thing, this concern has been the same with every new technology, you know, that there’s something particularly pernicious about it, from television to radio to computer games. So, I think the jury’s still out on that one. But those are the kinds of things that we need to be thinking about, and the potential societal harms that can come from this kind of manipulated media.
Ly: More than that, what responsibility do platforms have to mitigate the real-world harm and not just the harm to the online information environment?
douek: That’s really sort of the big question at the moment and sort of the societal conversation that we’re having. It’s nice and simple. I’m glad that I can give you a soundbite answer that will get me into trouble with one particular camp [indistinct] I’m sure.
I think that, obviously, we’re in a place where there’s sort of a developing consensus that platforms need to take more responsibility for the way they design their products, and the effects that that has on society. Now, that’s an easy statement to make. What does that look like? That’s where I think it gets more difficult. I think we do need to be careful in this moment of sort of techlash—which I believe is still ongoing, some people have called it off during the pandemic but I think it’s still going—that we don’t overreact to sort of the perception of harm and create a cure that’s worse than the disease, because there are important speech interests here. So, I’m not a free speech absolutist by any means. I am very much up for living in that messy world where we acknowledge that speech can cause harm and we need to sort of engage in that project. But I do think we also need to not lose sight of the free speech interests that are at play and the good that can come from social media platforms as well.
Ly: Definitely. So what you just said kind of reminds me of something that has emerged over the last couple of years, certainly since the 2016 election. And it’s the idea that a platform can be an “arbiter of truth,” and I think it was Mark Zuckerberg himself who coined that term. And I think at the root of it is this idea that making a decision on whether or not a piece of content, whether it’s false or not, should stay on the site, in a way makes that decisionmaker the decider of what’s true or not. And I wonder… Well first, how would you respond to that, just that notion? And what do you think about that as a justification for allowing false content to remain online in some cases?
douek: Yeah. So I mean, I do have some sympathy with the idea that these platforms shouldn’t be and don’t want to be arbiters of truth. It’s not a fun job. And…you know, it’s a good line and I think that’s why they trot it out so often. Like of course we don’t want Mark Zuckerberg or Jack Dorsey being the arbiters of truth. I mean…come on, right? Like, their truth is not my truth, so…we could start from that proposition. And you know, it would be like…a terrible job; you’re only ever going to upset a whole bunch of people.
But that’s not the end of the conversation. It’s still… I mean, it’s obviously an oversimplification and a distraction from a lot of the issues that remain at play. So, we don’t need them to be arbiters of truth, but platforms are not and have never been neutral conduits. And they are making decisions about what content to leave up, take down, prioritize, amplify, all the time. And so to pretend that they’re just sort of sitting there, hands-off, not being arbiters of truth is a massive sort of…complete oversimplification of the issue. And it’s really only the beginning of the conversation to say, “Okay, you don’t need to be arbiters of truth, but you do need to do something. You can’t just be completely hands-off.”
And so like I said, they need to acknowledge the impact of their design choices. And they need to be much more transparent about when and how they stack the deck in favor of certain kinds of content, and how they manipulate or distort the information ecosystem, which is definitely happening. And we are getting to know more and more about that, but there’s still nowhere near enough information about exactly how those ecosystems work.
Ly: So you mentioned there’s a range of other tools that platforms have at their disposal aside from leaving up or taking down. Would you mind just describing what that slate of actions might look like?
douek: Yeah, so I really think we need to get out of this leave up/take down paradigm, because platforms have so many more tools available at their disposal. They can do things like label things as having been fact-checked or manipulated, in the context of manipulated media. They can reduce the amount of circulation that a piece of content’s getting, or how easy it is to share it, or sort of downrank it in the news feed or the algorithmic feed.
They can also make sort of architectural and structural design choices that can have huge impacts on the information ecosystem. So an example here is WhatsApp, which in the context of the pandemic has reduced how easy it is to forward messages. So instead of being able to forward a message to multiple chats at a time, you can only forward it to one chat at a time. And this has reduced the circulation of certain kinds of content by 70%, which is like, an absolutely huge impact. And that doesn’t involve being the arbiter of truth of the content in question, but it does drastically change the information environment. So those are the kinds of initiatives and tools that platforms have that I think we need to be talking about a lot more.
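[Editor’s aside: for readers who want to picture the kind of mechanism douek describes, below is a minimal, hypothetical sketch of a forwarding cap in Python. The threshold, the cap values, and the Message/forward names are illustrative assumptions, not WhatsApp’s actual implementation; the point is only that a small structural rule, with no judgment about the truth of the content, can throttle viral spread.]

```python
from dataclasses import dataclass

# Hypothetical parameters for illustration only.
HIGHLY_FORWARDED_THRESHOLD = 5   # hops before a message counts as "highly forwarded"
NORMAL_FORWARD_CAP = 5           # chats per forward action for ordinary messages
VIRAL_FORWARD_CAP = 1            # once highly forwarded, only one chat at a time

@dataclass
class Message:
    text: str
    forward_count: int = 0       # how many forwarding hops this copy has accumulated

def forward_cap(msg: Message) -> int:
    """How many chats this message may be forwarded to in a single action."""
    if msg.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return VIRAL_FORWARD_CAP
    return NORMAL_FORWARD_CAP

def forward(msg: Message, requested_chats: int) -> Message:
    """Forward to at most the capped number of chats; the new copy carries an incremented hop count."""
    n = min(requested_chats, forward_cap(msg))
    # ...delivery to n chats omitted; the cap, not the transport, is the point...
    return Message(text=msg.text, forward_count=msg.forward_count + 1)
```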
Ly: Do you think there’s any use in platforms developing a unified protocol on takedowns at all?
douek: So I think this is one of the most fascinating questions. I love this question. And I’m obsessed with it and I don’t know the answer to it.
Ly: Okay.
douek: So, when do we want uniform standards online? And you know, when do we want like, different marketplaces of ideas, so to speak? So I think you can see arguments for either. On the one hand if you want standards, you want standards, and you want them uniformly across the information ecosystem. So, developing the tools to detect and identify manipulated media is potentially extremely expensive and might be something that only the largest platforms have the resources to be able to do. And if they do do that, why shouldn’t small platforms also have the benefits of that technology and use the same tools?
But on the other hand, free speech scholars get nervous when you start talking about sort of compelled uniformity in speech standards, and maybe if we don’t know where to draw the line, why not have lots of people try drawing it in different places and see what works out best? This is something that I’ve been calling the “laboratories of online governance” approach to this problem.
So, ultimately I actually hope that we can find a middle ground. Like a good lawyer, I’m, you know, somewhere in between, where we can have the resources and some sort of common standards, but some flexibility for platforms to adapt those to their unique affordances and their unique environment.
Ly: Thank you so much for joining us today, evelyn. I really enjoyed our conversation.
douek: Thanks very much for having me.
Ly: Thanks.
Further Reference
Medium post for this episode, with introduction and edited text