Oumou Ly: Welcome to the Berkman Klein Center’s new interview series, The Breakdown. I’m Oumou. I’m a staff fellow on the Berkman Klein Center’s Assembly Disinformation program. Today we’re interviewing Renée DiResta. She is a technical research manager at the Stanford Internet Observatory. She studies the spread of false narratives across social networks and helps policymakers devise responses to the disinformation problem.

Thanks, Renée. So today we are gonna talk about COVID-19 and the way dis- and misinformation related to COVID has percolated across the Internet. In so many ways it has created new problems in disinformation that I think policymakers, and those of us focused on the issue, weren’t necessarily tracking before, and it gives us a whole new lens for studying issues that aren’t unique to this problem but tell us a lot about disinformation as a whole.

So one of the first questions I had for you, Renée, is just whether there’s anything new about what we’re learning about disinformation from COVID?

Renée DiResta: Yeah. It’s been really interesting to see the entire world pay attention to one topic, right. This is something somewhat unprecedented. We have had outbreaks in the era of social media misinformation before. Zika in 2015, Ebola in 2018, right. So there have been a range of moments in which diseases have captivated public attention. But usually they tend to stay at least somewhat geographically confined in terms of attention.

What’s interesting with the coronavirus pandemic is of course that the entire world has been affected by the virus. And so the other interesting thing about it has been that very little in the way—you know, even the institutions don’t really have a very strong sense of what is happening, unfortunately. There are a lot of unknowns: disease manifestation, treatment, and a lot of the mechanics of the outbreak of the pandemic itself are poorly understood, so the information around them is similarly fuzzy. And so one of the challenges that we really see here is the challenge of how do we even know what is authoritative information? How do you help the public make sense of something when the authorities are still trying to make sense of it themselves, and researchers are still trying to make sense of it themselves.

But what we see with COVID is a lot of real, sustained attention. And I think what that’s shown us is that there’s this demand for information, and it’s revealed gaps in platform curation: they don’t have enough to surface, and they’re struggling with what an authority is, what authoritative information looks like. I think that’s been one of the really interesting dynamics that’s come out of this.

Ly: Thanks for that. So one question I have about the bit on authoritative sources is, what makes it so difficult for so many of the platforms to prioritize authoritative sources of information and deprioritize false content and other sources? Do you think that the political or partisan attacks on traditionally authoritative sources of information like the CDC and WHO complicate the task of platforms to prioritize what we call “good” information?

DiResta: So, for platforms to surface information when people are searching for a particular keyword or topic, they have recognized that surfacing the thing that is most popular is not the right answer either, that popularity can be quite easily gamed on these systems. But the question becomes, what do you give to people? Is an authoritative source only an institutionally authoritative source? I think the answer is quite clearly no. But how do we decide what an authoritative source is?

So you saw Twitter beginning to try to verify and give blue checks to doctors and virologists and epidemiologists and others who were out there doing the work of real-time science communication, who were reputable. And so the question became, for the platforms, how do you find these sources that are accurate and that are authoritative, that are not necessarily just the two institutions that have been deemed kind of purveyors of good information in the past. And per your point, unfortunately, attacks on credibility do have the effect of eroding trust and confidence in the long term.

The platforms did begin to take steps to deal with health misinformation last year, actually, and so a lot of the policies that are in place now, the reason health is treated differently than political content, is that there has been a sense that there are right answers in health. There are things that are quite clearly true or not true. And those truths can have quite a material impact on your life.

So Google’s name for that policy was Your Money or Your Life. It was the idea that Google search results shouldn’t show you the most popular result, because again popularity can be gamed, but should in fact show you something authoritative for questions related to health or finance, because those could have a material impact on your life. And that was a framework that Google used for search beginning back I think in 2013, definitely by 2015. But interestingly it wasn’t rolled out to things like YouTube and other places that were seen more as entertainment platforms. So the other social network companies began to incorporate that in 2019, in large part actually in response to the measles outbreaks.

Ly: Do you think that there are any new insights that this has offered us into maybe the definition, the nature, or just general character of disinformation?

DiResta: Um, one of the things that we’ve been looking at at Stanford Internet Observatory is actually the reach of broadcast media. This is something that—the idea of networked propaganda, right. Of course the title came out of some Harvard professors, right, Rob Faris and Yochai Benkler.

So, the idea of broadcast media and the interesting intersection between…you know, broadcast is no longer distinct from the Internet, right, and they all have Facebook pages. So there’s…for some reason I think people still have this mental model where the media is this thing over here and the Internet is this other thing, but I don’t see it that way.

So when you look at something like state media properties on Facebook, you do see this really interesting dynamic where overt, attributable actors (meaning this is quite clearly Chinese state media, Iranian state media, Russian state media) are not concealing who they are. This is not like a troll factory or a troll farm amplifying something subversively; they’re quite overtly putting out things that are…uh…(a nice way to say it) conspiratorial at best? And so the challenge there is that this is no longer just being done surreptitiously, this is actually being done on channels with phenomenal reach. And so, again, it’s an interesting question of the intersection between quality of sources, dissemination on social platforms, and dissemination if you go directly to the source, meaning to their web site or their program, and just really thinking about the information environment as a system, not as distinct silos in which what is happening on broadcast and what is happening on the Internet are two different things.

Ly: Yeah. Sort of related to that, one of the things we’ve talked about I know…even in our conversations amongst our groups at Harvard is how difficult it is to come up with answers to questions of impact. How do we know, for example, that after exposure to a piece of false content someone went out and changed their behavior in any substantial way? And that’s of course difficult, given the fact that we don’t know how people were going to behave to begin with. So, do you think that this has offered us any new insights into how we might study questions of impact? Do you think maybe for instance, pushes of cures and treatments for COVID might be illustrative of the potential for answers to those questions here?

DiResta: Yeah. I think people are doing a lot of looking at search query results. You know, the very real—you know—

Ly: Yeah.

DiResta: When what we’ll call “blue check disinformation,” or maybe charitably “blue check misinformation,” comes out, does that change people’s search behaviors? Do they go look for information in response to that prompt? One of the things that platforms have some visibility into that unfortunately those of us on the outside still don’t is actually the connection pathways from joining one group to joining the next group, right. And that is the thing that—you know, I would love to have visibility into that. That is like the question for me, which is: when you join a group related to reopening, and a lot of the people in the reopen groups are anti-vaxxers, are you then more likely to go join…you know, how does that influence pathway play out? Do you then kind of find yourself joining groups related to conspiracies that’ve been incorporated by other members of the group?

I think there are a lot of interesting dynamics there that we just don’t have visibility into. But per your point, one of the things we can see, unfortunately, is stuff like stories of people taking hydroxychloroquine and other drugs that are dangerous for healthy people to take. Again, one of the challenges to understanding that is you don’t want the media to report on, like, the one guy who did it as if that’s part of a national trend, because then that is also—

Ly: Right.

DiResta: —harmful.

Ly: Right.

DiResta: So really, appropriately contextualizing what people do in response, I think, is a big part of our gaps in understanding.

Ly: Yeah, definitely. Okay. If you could change one thing about how the platforms are responding to COVID-19 disinformation, what would it be and why?

DiResta: I really wish that we could expand our ideas of authoritative sources and have a broader base of trusted institutions, like local pediatric hospitals and other entities that still occupy a higher degree of trust, versus major, behemoth, politicized organizations. That’s my kind of personal wishlist.

I think the other thing, that I really don’t want to see us screw up, is that everybody who works on manufacturing treatments and vaccines for this disease as we move forward is going to become a target. And there is absolutely no doubt that that is going to happen. Happens every single time. Somebody like Bill Gates could become the focus of conspiracy theories and people showing up at his house and all these other things, you know. He’s a public figure with security and resources. That is not going to be true for a lot of the people who are doing some of the frontline development work, who’re going to become inadvertently “famous,” or inadvertently public figures, unfortunately, just by virtue of trying to do life-saving work. We see doctors getting targeted already.

Ly: Yeah.

DiResta: And I think that the platforms really have to do a better job of understanding that there will be personal smears put out about these people. There will be disinform—videos made, web sites made, Facebook pages made, designed to erode the public’s confidence in the work that they’re doing by attacking them personally. And I think we absolutely have to do a better job of knowing that is coming, and having the appropriate provisions in place to prevent it.

Ly: What do you think those appropriate provisions are?

DiResta: If you believe that good information is the best counter to bad information, or that more voices—you know, Zuckerberg has said repeatedly that good speech is the antidote to bad speech, and that authentic communication counters conspiracies and these other things—then you have to understand that harassment is a tool by which those voices are pushed out of the conversation. And so that is where the dynamic comes into play: you want to ensure that the cost of participating in vaccine research or health communication to the public is not that people stalk your kids, right. I mean, that’s an unreasonable cost to ask someone to bear. And so I think that is of course the real challenge here. If you want to have that counterspeech, then there has to be recognition of the dynamics at play, to ensure that people still feel comfortable taking on that role and doing that work.

Further Reference

Medium post for this episode, with introduction and edited text
