Oumou Ly: Welcome to the Berkman Klein Center’s new interview series The Breakdown. I’m Oumou. I’m a staff fellow on the Berkman Klein Center’s Assembly Disinformation program. Today we’re interviewing Renée DiResta. She is a technical research manager at the Stanford Internet Observatory. She studies the spread of false narratives across social networks and helps policymakers devise responses to the disinformation problem.
Thanks, Renée. So today we are gonna talk about COVID-19 and the way dis- and misinformation related to COVID has percolated across the Internet. In so many ways it has created new problems in disinformation that I think policymakers, and those of us focused on the issue, weren’t necessarily tracking before. And it gives us a whole new lens through which to study issues that aren’t unique to this problem but tell us a lot about disinformation as a whole.
So one of the first questions I had for you, Renée, is just whether there’s anything new about what we’re learning about disinformation from COVID?
Renée DiResta: Yeah. It’s been really interesting to see the entire world pay attention to one topic, right. This is somewhat unprecedented. We have had outbreaks in the era of social media misinformation before: Zika in 2015, Ebola in 2018, right. So there have been a range of moments in which diseases have captivated public attention. But usually attention to them tends to stay at least somewhat geographically confined.
What’s interesting with the coronavirus pandemic is of course that the entire world has been affected by the virus. And the other interesting thing has been that even the institutions don’t really have a very strong sense of what is happening, unfortunately. There are a lot of unknowns: the disease’s manifestation, its treatment, and a lot of the mechanics of the pandemic itself are poorly understood, so the information around them is similarly fuzzy. And so one of the challenges we really see here is: how do we even know what is authoritative information? How do you help the public make sense of something when the authorities are still trying to make sense of it themselves, and researchers are still trying to make sense of it themselves?
But what we see with COVID is a lot of real, sustained attention. And I think what that’s shown us is that there’s this demand for information, and it’s revealed gaps in platform curation: the platforms don’t have enough authoritative content to surface. They’re struggling with what an authority is, what authoritative information looks like. I think that’s been one of the really interesting dynamics that’s come out of this.
Ly: Thanks for that. So one question I have about authoritative sources is, what makes it so difficult for so many of the platforms to prioritize authoritative sources of information and deprioritize false content and other sources? Do you think that the political or partisan attacks on traditionally authoritative sources of information, like the CDC and WHO, complicate the platforms’ task of prioritizing what we call “good” information?
DiResta: So, when platforms surface information for people searching for a particular keyword or topic, they have recognized that surfacing the thing that is most popular is not the right answer, because popularity can be quite easily gamed on these systems. But the question becomes, what do you give to people? Is an authoritative source only an institutionally authoritative source? I think the answer is quite clearly no. But how do we decide what an authoritative source is?
So you saw Twitter beginning to verify and give blue checks to doctors, virologists, epidemiologists, and others who were reputable and out there doing the work of real-time science communication. And so the question became, for the platforms, how do you find sources that are accurate and authoritative but are not necessarily just the two institutions that have been deemed purveyors of good information in the past? And per your point, unfortunately, attacks on credibility do have the effect of eroding trust and confidence in the long term.
The platforms actually did begin to take steps to deal with health misinformation last year. A lot of the policies that are in place now, and the reason health is treated differently than political content, stem from a sense that there are right answers in health. There are things that are quite clearly true or not true, and those truths can have quite a material impact on your life.
So Google’s name for that policy was Your Money or Your Life. It was the idea that for questions related to health or finance, Google search results shouldn’t show you the most popular result, because again popularity can be gamed, but should in fact show you something authoritative, because those answers could have a material impact on your life. That was a framework Google used for search beginning back, I think, in 2013, definitely in 2015. But interestingly it wasn’t rolled out to things like YouTube and other places that were seen more as entertainment platforms. So the other social network companies began to incorporate that in 2019, in large part actually in response to the measles outbreaks.
Ly: Do you think that there are any new insights that this has offered us into maybe the definition, the nature, or just general character of disinformation?
DiResta: One of the things that we’ve been looking at at the Stanford Internet Observatory is actually the reach of broadcast media. This is the idea of networked propaganda, right; of course, that framing came from Harvard professors Rob Faris and Yochai Benkler.
So, there’s the idea of broadcast media and its interesting intersection with the Internet: broadcast is no longer distinct from the Internet, right? Broadcast outlets all have Facebook pages. For some reason I think people still have this mental model where the media is this thing over here and the Internet is this other thing, but I don’t see it that way.
So when you look at something like state media properties on Facebook, you do see this really interesting dynamic where overt, attributable actors (meaning this is quite clearly Chinese state media, Iranian state media, Russian state media) are not concealing who they are. This is not like a troll factory or a troll farm amplifying something subversively; they’re quite overtly putting out things that are, to put it nicely, conspiratorial at best. And so the challenge there is that this is no longer just being done surreptitiously; this is actually being done on channels with phenomenal reach. So again, it’s an interesting question of the intersection between quality of sources, dissemination on social platforms, and dissemination when you go directly to the source, meaning to their web site or their program. It’s about really thinking of the information environment as a system, not as distinct silos in which what is happening on broadcast and what is happening on the Internet are two different things.
Ly: Yeah. Sort of related to that, one of the things we’ve talked about, even in our conversations among our groups at Harvard, is how difficult it is to come up with answers to questions of impact. How do we know, for example, that after exposure to a piece of false content someone went out and changed their behavior in any substantial way? That’s of course difficult, given that we don’t know how people were going to behave to begin with. So, do you think that this has offered us any new insights into how we might study questions of impact? Do you think, for instance, that pushes of cures and treatments for COVID might be illustrative of the potential for answering those questions here?
DiResta: Yeah. I think people are doing a lot of looking at search query results. You know, the very real—you know—
Ly: Yeah.
DiResta: When what we’ll call “blue check disinformation,” or “blue check misinformation” maybe, charitably, comes out, does that change people’s search behaviors? Do they go look for information in response to that prompt? One of the things that platforms have some visibility into, that unfortunately those of us on the outside still don’t, is the connection pathways from joining one group to joining the next group, right. I would love to have visibility into that. That is the question for me: when you join a group related to reopening, and a lot of the people in the reopen groups are anti-vaxxers, are you then more likely to go join…you know, how does that influence pathway play out? Do you then find yourself joining groups related to conspiracies that’ve been incorporated by other members of the group?
I think there’s a lot of interesting dynamics there that we just don’t have visibility into. But per your point, one of the things we can see, unfortunately, is stuff like stories of people taking hydroxychloroquine and other drugs that are dangerous for healthy people to take. Again, one of the challenges to understanding that is you don’t want the media to report on, like, the one guy who did it as if that’s part of a national trend, because then that is also—
Ly: Right.
DiResta: —harmful.
Ly: Right.
DiResta: So appropriately contextualizing what people do in response, I think, is a big part of our gap in understanding.
Ly: Yeah, definitely. Okay. If you could change one thing about how the platforms are responding to COVID-19 disinformation, what would it be, and why?
DiResta: I really wish that we could expand our ideas of authoritative sources and have a broader base of trusted institutions like local pediatric hospitals and other entities that still occupy a higher degree of trust, versus major, behemoth, politicized organizations. That’s my kind of personal wishlist.
I think the other thing, that I really want to see us not screw up, is this: everybody who works on manufacturing treatments and vaccines for this disease as we move forward is going to become a target. And there is absolutely no doubt that that is going to happen. It happens every single time. Somebody like Bill Gates could become the focus of conspiracy theories, with people showing up at his house and all these other things, you know. But he’s a public figure with security and resources. That is not going to be true for a lot of the people doing some of the frontline development work, who’re going to become inadvertently “famous,” inadvertently public figures, unfortunately, just by virtue of trying to do life-saving work. We see doctors getting targeted already.
Ly: Yeah.
DiResta: And I think that the platforms really have to do a better job of understanding that there will be personal smears put out about these people. There will be disinformation: videos made, web sites made, Facebook pages made, designed to erode the public’s confidence in the work that they’re doing by attacking them personally. And I think we absolutely have to do a better job of knowing that is coming, and of having the appropriate provisions in place to prevent it.
Ly: What do you think are those appropriate provisions?
DiResta: If you believe that good information is the best counter to bad information, or, as Zuckerberg has said repeatedly, that good speech is the antidote to bad speech and that more voices and authentic communication counter conspiracies and these other things, then you have to understand that harassment is a tool by which those voices are pushed out of the conversation. And so that is where the dynamic comes into play: you want to ensure that the cost of participating in vaccine research or health communication to the public is not that people stalk your kids, right? I mean, that’s an unreasonable cost to ask someone to bear. And so that is of course the real challenge here. If you want to have that counterspeech, then there has to be recognition of the dynamics at play, to ensure that people still feel comfortable taking on that role and doing that work.
Further Reference
Medium post for this episode, with introduction and edited text