Oumou Ly: Welcome to The Breakdown. My name is Oumou. I’m a fellow in the Assembly Disinformation program at the Berkman Klein Center for Internet and Society at Harvard. Our topic today continues the discussion of the election, and in this particular episode we wanted to talk about domestic actors, some of their patterns of manipulation, the methods that they use, and what their objectives are.

I am joined today, and thrilled to be joined today, by Joan Donovan, who is the Research Director of the Shorenstein Center on Media, Politics and Public Policy. Dr. Donovan leads the field in examining Internet and technology studies, online extremism, media manipulation, and disinformation campaigns. Thank you, Joan, for joining us.

Joan Donovan: I’m really excited to talk about this stuff today.

Ly: So our discussion today centers on domestic actors and their goals in purveying disinformation, and we would be remiss not to mention that at the time of this recording, just last night, the President fired Chris Krebs, who is the head of CISA at DHS, the agency within the federal government that takes the lead on countering, mitigating, and responding to disinformation, particularly as it relates to democratic processes like elections. Joan, what do you make of this late-night firing, this last-minute development?

Donovan: You know, if you study disinformation long enough, you feel like you’re looking through a crystal ball in some instances. So we all…we all knew it was comin’. Even Krebs had said as much. And that’s because countering disinformation is…you know, it’s a really thankless job. In the sense that, you know, it wasn’t just the fact that Krebs had built an agency that over the course of the last few years had really flown under the radar in terms of any kind of partisan divides, had done a lot of work to ensure election security, and cared about the question of disinformation and misinformation as it applied to questions about election integrity, right. So CISA and Krebs weren’t trying to dispel all of the crazy myths and conspiracies out there, but they were doing their part within their remit to make sure that any kind of theory about voter fraud was something that they took seriously and took the time to debunk.

And so it wasn’t necessarily just the kinds of tweets that were coming out of CISA, but it was really about this website that they had put together that was really a kind of low-budget version of Snopes, in the sense that…the website’s called Rumor Control, and the idea was very simple, which was to provide very rapid analysis of any conspiracy or allegation of election fraud that was starting to reach a tipping point—not everything, but things that started to get covered in different areas, covered by journalists—and to give people an anchor that says, “This is what we know to be true at this moment.”

Of course, as the President has come to dispute the election results rather forcefully online, Krebs’ role became much more important as a vocal critic with the truth on his side. And over the last few weeks, especially the last week, we’ve seen Trump move anybody out of his way who would either contradict him in public or seriously imperil his desire to stay in the White House.

Ly: That makes me think of something I’ve been thinking about a lot recently, particularly over the last four years but especially in 2020: this use of disinformation as political strategy by the GOP. It seems like, you know, one pillar of that strategy is, one, to spread disinformation. The second is to sort of leverage our institutions to legitimize the information that they’re spreading. And the third is to accelerate truth decay in a manner that’s advantageous to the GOP’s particular political aims. How do you respond to that, and how do you think the information ecosystem should be organizing around the problem that we have a major political party in the United States for whom this is a strategy?

Donovan: They’re really just leveraging the communication opportunities in our current media ecosystem to get their messaging across. And in this instance, we know where the propaganda’s coming from—that is, it’s coming from the White House, it’s coming from Giuliani, it’s coming from Bannon, it’s coming from Roger Stone. So how then do we reckon with it, given that we actually know what it is? The concept of white propaganda is really important here, because when we know what the source is, we can treat it differently.

However, the difference between something like what went down in 2016 and what happened in 2020 is an evolution of these strategies to use some automated techniques in order to increase engagement on certain posts so that more people see them, coupled with serious, serious money and influence in order to make disinformation travel further and faster.

The third thing about this communication strategy in this moment is that the problem really transcends social media at this point, where we do have our more legitimate institutions starting to bow out and say, “You know what, we’re not even going to try to tackle this; for us it’s not even an issue. Because we’re not gonna play into allegations that there’s voter fraud. We’re not gonna play into any of these pet theories that’ve emerged about Hammer and Scorecard and Dominion.” And if you’ve heard any of those keywords, then you’ve encountered disinformation.

But it does go to show that we are immersed in a hyperpartisan media ecosystem where the future of journalism is at stake, the future of social media is at stake. And right now I’m really worried that US democracy might not survive this moment.

Ly: I completely agree with you. And that is a really scary thing to think. Can you talk a little bit about sites like Parler, Discord, Telegram, Gab? Just recently, after the election, Facebook disbanded a group called “Stop the Steal,” and then many of those followers found a new home on Parler. Why are sites like this so attractive to people who have a history of creating affinity around conspiracy theories?

Donovan: So I think about Gab, for instance…me and Brian Friedberg and Becca Lewis wrote about Gab after the Unite the Right rally. Because Gab really put a lot of energy into recruiting white supremacists who were being removed from platforms for terms of service violations. And they were basically saying, “We’re the free speech platform, and we don’t care what you say.” And for Gab, that…you know, went ass over head pretty fast, where they did have to start banning white supremacists, because unfortunately what you get when you make a platform that emphasizes lack of moderation is some of the worst kind of pornography you can ever imagine. No style, no grace, nothin’ sexy about it, just…—

Ly: The worst.

Donovan: —here’s a buncha people in diapers, right. Like it’s just not good. And so right now, these minor apps that’re saying, “We’re unmoderated, come one come all,” are actually facing a pretty strong content moderation problem where trolls are now showing up pretending to be celebrities. There’s lots and lots of screenshots out there where people think they heard from some celebrity on one of these apps and it’s really just a troll with a fake account.

But this moment is an opportunity for these apps to grow. And they will say and do anything in order to capture that market segment. If we think about infrastructure as three things (the technology; the people that bring the technology together, including the audiences; and the policies), right now we’re having a crisis of stability in terms of content moderation policies. And so people are seeking out other platforms that increase that kind of stability in their messaging. Because they want to know why they’re seeing what they’re seeing, and they want those rules to be really clear.

Ly: Picking up on that content moderation thread to talk about larger, more legacy tech platforms more broadly, what is your sense of how well content moderation, and maybe even more specifically labeling efforts, work? We saw Twitter and some of the other platforms do a pretty good job, I think, compared with the past, of slapping labels on the President’s tweets. But that’s because there was such an expectation that there would be premature claims of victory. What’s your sense of how well labeling minimizes virality?

Donovan: Um…so, we don’t really know or have any data to conclude that the labeling is really doing anything other than aggravating people.

Ly: Yeah.

Donovan: Which is to say that, you know, we thought that the labeling was gonna result in a massive reduction in virality. In some instances you see influencers taking photos or just screenshots of the labels on their tweets on Twitter— [some dropped audio] —saying like, “Look, it’s happening to me,” as a kind of badge of honor.

But at the same time, they do…when done well, they convey the right kind of message. Unfortunately, I don’t think any of us anticipated the number of labels that were gonna be needed on key public figures, right. And I imagine that, you know, okay, they’re going to do these labels for folks that have over 100,000 followers on Twitter. Or, you know, they’re gonna show up on YouTube in ways that deal with both the claims of voter fraud but also the virality. But it’s hard to say if anybody’s clicking through on these labels. I’ve clicked through some of them, and the information on the other side of the label is totally irrelevant. That is, it’s just not about the tweet or any—it’s not specific enough?

Ly: Yeah.

Donovan: Which is to say that, you know, in watching the tech hearing this week, Dorsey seemed to not really be committed to a content moderation policy that deals with misinformation at scale. And as a result, what you get is these half measures that…we don’t really know what their effect is going to be. And the partners in the fact-checking world that partnered with Facebook are now under a deluge of allegations that they’re somehow partisan and have been weaponized in a bunch of different ways. And so I don’t even know what the broad payoff is to risk your reputation as a news organization to do that kind of fact-checking on Facebook, when Facebook isn’t really committed to removing certain kinds of misinformation.

Ly: Joan, why is medical mis- and disinformation different from other types of information we see circulating, maybe related to elections or other democratic processes?

Donovan: So, when we think about medical misinformation, we’re really thinking about how quickly people are going to change their behavior, right. If you hear that coronavirus is in the water, you’re gonna stop drinking water, right. If you hear that it’s in the air, you’re gonna put a mask on.

Ly: Some of us.

Donovan: And so the way in which people receive medical advice really can stop them on a dime and move them in a different direction. And unfortunately we’ve entered into this situation where medical advice has been polarized in our hyperpartisan media environment. And there have been some recent studies showing the degree to which that polarization is happening, and that it is really leading people to downplay the risks of COVID-19; and this has a lot to do with them encountering misinformation from what they might even consider trusted sources.

And so, when we think about the design of social media in this moment, we actually have to think about a curation strategy for the truth. We need access to information that is timely, local, relevant, and accurate. And if we don’t get that kind of information today, people are going to continue to die, because they don’t understand what the real risk is and they don’t understand how they can protect themselves. Especially as we enter this holiday season, when a lot of people are starting to relax their vigilance and are hoping that it won’t happen to them, that’s the exact moment when we need to crank up the health messaging and make sure that people understand the risks and have seen some form of true and correct information about COVID-19. Because I’ll tell you right now, if you go on social media and you start pokin’ around, sure, there’s a couple of interstitials or a couple of banners here and there, but we can do a lot better to make sure that people know what COVID-19 is, what the symptoms are, how to get tested, how to keep yourself safe, and how to keep your loved ones safe as well.

Ly: I’m just curious what sorts of data points you’ve seen that would explain why some people don’t necessarily subscribe to…you know, believe information from authoritative sources about the spread of COVID-19, mitigations you can take, not hanging out with family members, and such. Why do people… Why are some people inclined not to believe that authoritative information?

Donovan: It’s a good question. And you know, part of it has to do with the echo chambers that they’ve been getting information in for years. We’ve started to see certain Facebook groups where maybe it’s a local Facebook group, and you’ve been in it a long time, and it is about exchanging…like the free list, you know, exchanging things in your neighborhood.

And then people slowly start to talk about these really important issues, and misinformation is introduced through a blog post, or an article. Or, you know, “I saw this on the news,” and you find out that they’ve been watching one of these hyperpartisan news sources that is downplaying what’s happening. And so you kind of see it in the ephemera. But in our journal, the Harvard Kennedy School Misinformation Review, we’ve published research showing that even within the right-wing media ecosystem, depending on whether someone watches a lot of, let’s say, Hannity versus Tucker, they’re gonna have different associations with the risk of COVID-19, because it’s covered differently by these folks who are at the same outlet.

And so it’s really important to understand that this has to do with how the communication environment is designed, and the fact that people are really trying to help: when they’re sharing things that are novel, or outrageous, or things that might be medically incorrect, they’re doing it in some cases out of love. They’re doing it just in case. Maybe you didn’t see this. And it’s an unfortunate situation that we’ve gotten ourselves into where the more outrageous the conspiracy theory, the more outlandish the claim, the more viral it tends to be. And that’s an unfortunate consequence of the design of these systems.

Ly: Yeah. Thank you so much for joining me today, Joan. I really enjoyed our conversation.

Donovan: Great. Thank you so much. I really appreciate you doing this series.

Further Reference

Medium post for this episode, with introduction and edited text

