Oumou Ly: Welcome to The Breakdown. My name is Oumou. I’m a fellow in the Assembly Disinformation program at the Berkman Klein Center for Internet and Society at Harvard. Our topic today continues our discussion of the election, and in this particular episode we wanted to talk about domestic actors, some of their patterns of manipulation, the methods they use, and what their objectives are.
I am joined today, and thrilled to be joined today, by Joan Donovan, who is the Research Director of the Shorenstein Center on Media, Politics, and Public Policy. Dr. Donovan leads the field in examining Internet and technology studies, online extremism, media manipulation, and disinformation campaigns. Thank you Joan for joining us.
Joan Donovan: I’m really excited to talk about this stuff today.
Ly: So our discussion today centers on domestic actors and their goals in purveying disinformation, and we would be remiss not to mention that at the time of this recording, just last night, the President fired Chris Krebs, who is the head of CISA at DHS, the agency within the federal government that takes the lead on countering, mitigating, and responding to disinformation, particularly as it relates to democratic processes like elections. Joan, what do you make of this late-night firing, this last-minute development?
Donovan: You know, if you study disinformation long enough, you feel like you’re looking through a crystal ball in some instances. So we all…we all knew it was comin’. Even Krebs had said as much. And that’s because countering disinformation is…you know, it’s a really thankless job. It wasn’t just the fact that Krebs had built an agency that over the course of the last few years really had flown under the radar in terms of any kind of partisan divides, had done a lot of work to ensure election security, and cared about the question of disinformation, misinformation, as it applied to questions about election integrity, right. So CISA and Krebs weren’t trying to dispel all of the crazy myths and conspiracies out there, but they were doing their part within their remit to make sure that any kind of theory about voter fraud was something that they took seriously and took the time to debunk.
And so it wasn’t necessarily just the kinds of tweets that were coming out of CISA but it was really about this website that they had put together that was really a kind of low-budget version of Snopes. The website’s called Rumor Control, and the idea was very simple, which was provide very rapid analysis of any conspiracy or allegation of election fraud that was starting to reach a tipping point—not everything, but things that started to get covered in different areas, covered by journalists—and to give people an anchor that says, “This is what we know to be true at this moment.”
Of course, as the President has come to refute the election results rather forcefully online, Krebs’ role became much more important as a vocal critic, with the truth on his side. And over the last few weeks, especially the last week, we’ve seen Trump move anybody out of his way that would either contradict him in public or would seriously imperil his desire to stay in the White House.
Ly: That makes me think of something I’ve been thinking about a lot recently, particularly over the last four years but especially in 2020, which is this use of disinformation as political strategy by the GOP. It seems like one pillar of that strategy is, one, to spread disinformation. The second is to leverage our institutions to legitimize the information that they’re spreading. And the third is just to accelerate truth decay in a manner that’s advantageous to the GOP’s particular political aims. How do you respond to that, and how do you think the information ecosystem should be organizing around the problem that we have a major political party in the United States for whom this is a strategy?
Donovan: They’re really just leveraging the communication opportunities in our current media ecosystem to get their messaging across. And in this instance, when we know where the propaganda’s coming from—that is it’s coming from the White House, it’s coming from Giuliani, it’s coming from Bannon, it’s coming from Roger Stone…how then do we reckon with it, because we actually know what it is. So the concept of white propaganda’s really important here because when we know what the source is, we can treat it differently.
However, the difference between something like what went down in 2016 and what happened in 2020 is an evolution of these strategies to use some automated techniques in order to increase engagement on certain posts so that more people see them, coupled with serious, serious money and influence in order to make disinformation travel further and faster.
The third thing about this communication strategy in this moment is that the problem really transcends social media at this point, where we do have our more legitimate institutions starting to bow out and say, “You know what, we’re not even going to try to tackle this, where like for us it’s not even an issue. Because we’re not gonna play into allegations that there’s voter fraud. We’re not gonna play into any of these pet theories that’ve emerged about Hammer and Scorecard and Dominion,” and if you’ve heard any of those keywords then you’ve encountered disinformation.
But it does go to show that we are immersed in a hyperpartisan media ecosystem where the future of journalism is at stake, the future of social media is at stake. And right now I’m really worried that US democracy might not survive this moment.
Ly: I completely agree with you. And that is a really scary thing to think. Can you talk a little bit about sites like Parler, Discord, Telegram, Gab. Just recently after the election, Facebook disbanded a group called “Stop the Steal,” and then many of those followers found a new home on Parler. Why are sites like this so attractive to people who have a history of creating affinity around conspiracy theories?
Donovan: So I think about Gab, for instance…me and Brian Friedberg and Becca Lewis wrote about Gab after the Unite the Right rally. Because Gab really put a lot of energy into recruiting white supremacists who were being removed from platforms for terms of service violations. And they were basically saying, “We’re the free speech platform, and we don’t care what you say.” And for Gab, that…you know, went ass over head pretty fast, where they did have to start banning white supremacists because unfortunately what you get when you make a platform that emphasizes lack of moderation is some of the worst kind of pornography you can ever imagine. No style, no grace, nothin’ sexy about it, just…—
Ly: The worst.
Donovan: —here’s a buncha people in diapers, right. Like it’s just not good. And so right now, these minor apps that’re saying, “We’re unmoderated, come one come all,” are actually facing a pretty strong content moderation problem where trolls are now showing up pretending to be celebrities. There’s lots and lots of screenshots out there where people think they heard from some celebrity on one of these apps and it’s really just a troll with a fake account.
But this moment is an opportunity for these apps to grow. And they will say and do anything in order to capture that market segment. If we think about infrastructure as all three things: the technology; the people that bring the technology together, including the audiences; and the policies, right now we’re having a crisis of stability in terms of content moderation policies. And so people are seeking out other platforms that increase that kind of stability in their messaging. Because they want to know why they’re seeing what they’re seeing, and they want for those rules to be really clear.
Ly: Picking up on that content moderation thread to talk about larger and more legacy tech platforms more broadly, what is your sense of how well content moderation, and maybe even more specifically labeling efforts, work? We saw Twitter and some of the other platforms do a pretty…I think comparatively good job, when you compare it with the past, of slapping labels on the President’s tweets. But that’s because there was such an expectation that there would be premature claims of victory. What’s your sense of how well labeling minimizes virality?
Donovan: Um…so, we don’t really know or have any data to conclude that the labeling is really doing anything other than aggravating people.
Ly: Yeah.
Donovan: Which is to say that you know, we thought that the labeling was gonna result in massive reduction in virality. In some instances you see influencers taking photos or just screenshots of the labels on their tweets on Twitter— [some dropped audio] —saying like, “Look, it’s happening to me,” as a kind of badge of honor.
But, at the same time they do…when done well they convey the right kind of message. Unfortunately I don’t think any of us anticipated the amount of labels that were gonna be needed on key public figures, right. And I imagine that you know, okay they’re going to do these labels for folks that have over 100,000 followers on Twitter. Or you know, they’re gonna show up on YouTube in ways that deal with both the claims of voter fraud but also the virality. But it’s hard to say if anybody’s clicking through on these labels. I’ve clicked through some of them and the information on the other side of the label is totally irrelevant. That is, it’s just not about the tweet or any—it’s not specific enough?
Ly: Yeah.
Donovan: Which is to say that you know, in watching the tech hearing this week, Dorsey seemed to not really be committed to a content moderation policy that deals with misinformation at scale. And as a result, what you get is these half measures that…we don’t really know what their effect is going to be. And for the partners in the fact-checking world that partnered with Facebook, they’re now under a deluge of allegations that they’re somehow partisan and they’ve been weaponized in a bunch of different ways. And so I don’t even know what the broad payout is to risk your reputation as a news organization to do that kind of fact-checking on Facebook, where Facebook isn’t really committed to removing certain kinds of misinformation.
Ly: Joan, why is medical mis- and disinformation different than other types of information we see circulating maybe related to elections or other democratic processes?
Donovan: So, when we think about medical misinformation we’re really thinking about well how quickly are people going to change their behavior, right. If you hear that coronavirus is in the water, you’re gonna stop drinking water, right. If you hear that it’s in the air, you’re gonna put a mask on.
Ly: Some of us.
Donovan: And so the way in which people receive medical advice, really it can stop them on a dime and move them in a different direction. And unfortunately we’ve entered into this situation where medical advice has been polarized in our hyperpartisan media environment. And there’s been some recent studies that can even show the degree to which that polarization is happening that is really leading people to downplay the risks of COVID-19, and this has a lot to do with them encountering misinformation from what they might consider even trusted sources.
And so, when we think about the design of social media in this moment we actually have to think about a curation strategy for the truth. We need access to information that is timely, local, relevant, and accurate. And if we don’t get that kind of information today, people are going to continue to die because they don’t understand what the real risk is, they don’t understand how they can protect themselves, and especially as we enter into this holiday season where a lot of people are starting to relax their vigilance and are hoping that it won’t happen to them, that’s the exact moment where we need to crank up the health messaging and make sure that people understand the risks and have seen some form of true and correct information about COVID-19. Because I’ll tell you right now if you go on social media and you start pokin’ around, sure there’s a couple of interstitials or there’s a couple of banners here and there, but we can do a lot better to make sure that people know what COVID-19 is, what the symptoms are, how to get tested, how to keep yourself safe, and how to keep your loved ones safe as well.
Ly: I’m just curious what are the sorts of data points you’ve seen that would explain why some people don’t necessarily subscribe to…you know, believe information from authoritative sources about the spread of COVID-19, mitigations you can take, not hanging out with family members, and such and such and this. Why do people… Why are some people inclined not to believe that authoritative information?
Donovan: It’s a good question. And you know, part of it has to do with the echo chambers that they’ve been getting information in for years. We’ve started to see certain Facebook groups that maybe it’s a local Facebook group, and you’ve been in it a long time, and it is about exchanging…like the free list, you know, exchanging things in your neighborhood.
And then people slowly start to talk about these really important issues, and misinformation is introduced through a blog post, or an article. Or you know, “I saw this on the ‘news’ ” and you find out that they’ve been watching one of these hyperpartisan news sources that is downplaying what’s happening. And so you kind of see it in the ephemera. But in our journal the Harvard Kennedy Misinfo Review we’ve published research around, even within the right-wing media ecosystem, the degree to which someone watches a lot of let’s say Hannity versus Tucker, they’re gonna have different associations with the risk of COVID-19 because it’s covered differently by these folks that are at the same outlet.
And so it’s really important to understand that this has to do with the communication environment that is designed, and the fact that people are really trying when they’re sharing things that are novel, or outrageous, or things that might be medically incorrect, they’re doing it in some cases out of love. They’re doing it just in case. Maybe you didn’t see this. And it’s an unfortunate situation that we’ve gotten ourselves into where the more outrageous the conspiracy theory, the more outlandish the claim, the more viral it tends to be. And that’s an unfortunate consequence of the design of these systems.
Ly: Yeah. Thank you so much for joining me today Joan. I really enjoyed our conversation.
Donovan: Great. Thank you so much. I really appreciate you doing this series.