Oumou Ly: Welcome to The Breakdown. My name is Oumou, I’m a fellow at the Berkman Klein Center on the Assembly Disinformation Program. I am really excited to be joined today by Naima Green-Riley. Naima’s a PhD candidate in the Department of Government at Harvard University, with a particular focus on public diplomacy and the global information space. She is also a former foreign service officer and a Pickering fellow. Welcome, Naima. Thanks so much for joining.
Naima Green-Riley: Well thank you so much for having me.
Ly: Thank you. So, our conversation today centers on foreign interference in the upcoming election, which is drawing really close. At the time of this recording we’re about two weeks out from November 3rd. And one of the big topics on my mind today, Naima, is the big threat actors this time around. We know that 2016 was a watershed moment in terms of foreign interference in American democratic processes. In terms of social media manipulation in particular, how do foreign influence efforts in 2020 look in contrast to the active measures we saw in 2016? Have the primary threat actors changed, optimized their methods a little bit, or adopted new approaches to influencing public opinion?
Green: Well, you’re definitely right that 2016 marked the first time that the US started to really pay attention to this type of online foreign influence activity. And during that election year we saw a series of coordinated social media campaigns targeting various groups of individuals in the United States and seeking to influence their political thoughts and behavior.
The campaigns were mainly focused on sowing discord in US politics by driving a wedge between people on very polarizing topics. So they usually involved either creating or amplifying content on social media that would encourage people to take more extreme viewpoints. For example, veterans were often targeted. There was this one meme that was run by Russian trolls, basically, that showed a picture of a US soldier, and then it had the text “Hillary Clinton has a 69% disapproval rate amongst all veterans” on it. It was clearly intended to have an impact on how those people were thinking.
Ly: Right.
Green: They might also give misleading information about the elections. They might tell people that the election date was several days after the actual election date, and thereby try to ruin people’s chances of exercising their right to vote. Some disinformation campaigns told people that they could tweet or text their vote in so they didn’t have to leave their homes. And there was also exploitation of real political sentiment in the US, often encouraging division, and particularly divisions around race. So there were YouTube channels with names like “Don’t Shoot” or “Black to Live” that shared content about police violence and Black Lives Matter. And some racialized campaigns that were linked to those types of sites would then promote ideas like: the Black community can’t rely on the government; it’s not worth voting anyway.
So that’s the type of stuff that we started to see in 2016, and many of those efforts were linked either to the GRU, which is part of the General Staff of the Armed Forces of Russia, or to Russia’s Internet Research Agency, the IRA. Many characterize the IRA as a troll farm, an organization that particularly focuses on spreading false information online.
So since 2016, unfortunately online influence campaigns have only become more rampant and more complicated. We’ve seen a more diverse range of people being targeted in the United States, so not just veterans and African Americans but also different political groups from the far right to the far left. We’ve seen immigrant communities be targeted, religious groups. People who care about specific issues like gun rights or the Confederate flag. So basically the most controversial topics are the topics that foreign actors tend to drill deep on to try and influence Americans. It’s just gotten more and more complex.
Ly: I want to pick up on this point, because so often racial issues in particular form the basis of disinformation and influence campaigns, because like you said they are the most divisive, contentious issues. In what ways have you seen foreign actors work to weaponize social issues in the United States just this year, maybe since the death of George Floyd?
Green: Well you know, it’s interesting because we focus a lot on disinformation as targeted towards the elections, but a number of different types of behaviors and activities have been targeted through disinformation. So we’ve seen people try to manipulate things like census participation or certain types of civic involvement. And the range of ways that actors are actually using different platforms is changing too. So we’re seeing text messages and WhatsApp messages being used to impact people in addition to social media.
But after George Floyd was killed, as you might expect, because it’s a controversial issue that affects Americans, absolutely there was sort of this onslaught of misinformation and disinformation that showed up online. So, there were claims that George Floyd didn’t die. There were claims that were stoking conspiracy theories about protests that happened after his death.
And I have to say, not all dis- and misinformation is foreign, and that’s why this is such a large problem, because there are many domestic actors that engage in disinformation campaigns as well. So, the narratives that we’ve seen across the space come from so many different people that sometimes it can be hard to trace the problem to one particular actor or one particular motive.
Ly: So in 2016, the Russian government undertook really sophisticated methods of influence, certainly for that particular time and for that election, including mobilizing inauthentic narratives via inauthentic users and leveraging witting and unwitting Americans and social media users. How would you contrast the threat posed by Russia’s efforts with other countries known to be involved in ongoing influence efforts?
Green: Well, I have to say that Russia continues to be a country of major concern. Just this week we saw the FBI announce that Russia has obtained some voter registration information in the United States. Russian disinformation campaigns have definitely reemerged in the 2020 election cycle. But those campaigns only make up a small amount of the overall activities that Russia is engaging in today, all with the goal of undermining democracy and eroding democratic institutions around the world.
That being said, we’ve seen other actors emerging in this space. Within the first few months of the COVID-19 pandemic, Chinese agents were shown to be pushing false narratives within the US saying that President Trump was going to put the entire country on lockdown. Iran has increasingly been involved in these types of campaigns as well. Recently they used mass emails to affect US public opinion about the elections.
And one more thing I want to mention is that this is really a global phenomenon. So you know, these actors, these state actors often outsource their activity through sort of operations in different countries. So for instance, there are stories of a Russian troll farm that was set up in Ghana to push racial narratives about the United States. And you know, there’ve also been troll farms that are set up by state actors in places like Nigeria, Albania, the Philippines. So what’s interesting here is that the individuals who’re actually sending those messages are either economically motivated—they’re getting paid—or they might be ideologically motivated. But they’re acting on behalf of these state actors. And that makes this not just a state-to-state issue but a real global problem that involves many people in different parts of the world.
Ly: So turning to the platforms for a second, what are your thoughts on some of the interventions platforms have announced so far? Maybe like limiting retweets and shares via private message, labeling posts and accounts associated with state-run media organizations. You know, the list of interventions sort of goes on.
Green: Yeah. All of the things that you mentioned are a good start, I would say. At the end of the day I think there has to be a major focus on how we can inform social media users of the potential threats in the information environment, and how we can best equip them to really understand what they’re consuming. So I think that part of the answer is for these tech companies, of their own accord, to continue to create policies that will address this issue. But we also need better legislation, and that legislation has to focus on privacy rights, it has to focus on online advertising, political advertising, tech sector regulation. And then we need policies that will enforce this type of thing moving forward. So it can’t all be on the tech companies without that guidance, because I don’t know that they necessarily have the total will to do all that’s necessary to really get at this problem.
Social media companies have already started to label content. They’re also searching for inauthentic behavior, especially coordinated inauthentic behavior online. But I think that there is particular work to be done in terms of the way that we think about content labeling. So, when platforms are labeling content, they are usually labeling content from some sort of state-run media. And much of the state-run media that they’re looking at is not a completely covert operation; it’s not a situation where the media source just doesn’t want anyone to know that it’s associated with the state.
But it might be pretty difficult for the audience to actually determine that the outlet is state-run. An example would be RT, formerly known as Russia Today. There’s a reason, I think, that it went from Russia Today to RT. If you go to the RT website, you will see a big banner that says “question more; RT,” and then there’s lots of information about how RT works all over the world in order to help people uncover the truth. And then if you scroll all the way to the bottom of the website, you’ll see that RT has the support of Moscow or the Russian government, something to that effect.
Ly: Yeah.
Green: So, it’s difficult for people to actually know where this content is coming from. And this summer, Facebook made good on a policy that they had said for some time they were going to enact, where they now label certain types of content. Basically, they say that they’ll label any content that seems to be wholly or fully under editorial control influenced by the state, by some state government. Lots of Chinese and Russian sites or outlets are included in this policy so far, and according to Facebook they’re going to increase the number of outlets that get this label. And basically what you see on the post is “Chinese state-controlled media” or “Russian state-controlled media,” something to that effect.
That’s helpful because now a person doesn’t have to click, and then go to the web site, and scroll to the bottom of the page to find out that this outlet comes from Russia.
Ly: Surprise!
Green: But, at the same time, I still think we need to do more in terms of helping Americans to understand why state actors are trying to reach them, little old me who lives in some small city or some small town in the middle of America, and how narratives can be manipulated. Only if that’s done, in connection with labeling more of these types of outlets on social media, do I think you get more impact.
YouTube does something else. In 2018 they started to label their content. But the way they label their content is that they basically label anything that is government-sponsored. So, if an outlet is funded in whole or in part by a government, there’s a banner that comes up at the bottom of the video that tells people that. And so you’ll see RT labeled as Russian content, but you’ll also see the BBC labeled as British content; it doesn’t have to do with the editorial control of the outlet.
One final thing on this, because I think this is really important. So I have heard stories of people who let’s say for whatever reason have stumbled upon some sort of content from a foreign actor.
Ly: Yeah.
Green: And so, this content might come up because somebody shared something and they watched the video, right. So they watch a video, let’s say they watch an RT video. Maybe they weren’t trying to find the RT video and maybe they also aren’t the type of person who would watch a lot of content from RT. But they watched that one video. They continue to scroll on their news feed. And then they get a suggestion. “You might enjoy this.”
Now, the next thing that they get comes from Sputnik. It comes from RT again. So now they’re getting fed information about the US political system that is being portrayed by a foreign actor, and they weren’t even looking for it. I think that that’s another thing that we’ve got to tackle, is the algorithms that are used in order to uphold tech companies’ business models. Because in some cases, those algorithms will be harmful to people because they’ll actually feed them information from foreign actors that might have malicious intent.
Ly: Naima, this week the FBI confirmed that Iran was responsible for an influence effort giving the appearance of election interference. And in this particular episode, US voters in Florida and I think a number of other states received threatening emails from a domain appearing to belong to a white supremacist group. Can you talk a little bit about what in particular the FBI revealed and what its significance is for the election?
Green: Right. So, there was a press conference on October 21st in which the FBI announced that they had uncovered an email campaign that was orchestrated by Iran. The emails purported to come from the Proud Boys, which as you mentioned is a far-right group with ties to white supremacy, and also a group that had recently been referenced in US politics in the first presidential debate. But now we know that these emails actually came from Iran. Some of the individuals who received the emails posted their contents online. They were addressed to the email users by name, and they said, “We are in possession of all of your information: email, address, telephone, everything.” And then they said they knew that the individual was registered as a Democrat because they had gained access to the US voting infrastructure. And they said, “You will vote for Trump on election day or we will come after you.”
So first of all, they included a huge amount of intimidation. Second of all, they were purporting to be a group that they were not. And third of all, they absolutely were attempting to contribute to discord in the run-up to the election. It’s dangerous activity. It is alarming activity. It’s something that I think will have multiple impacts for some time to come. Because even though the FBI was able to identify that this happened, that goal of shaking voter confidence of course may have been a little bit successful in this instance. And so, one of the things that is good here is that the FBI was able to identify this very quickly, to make an announcement to the US public that it had happened, and to be clear about what happened.
Unfortunately, what they announced was not just that Gmail users were receiving this email and that there was false information in it. They also said that they had information that both Russia and Iran have actually obtained voter registration information from the United States. And that’s concerning as well. There appears to be good coordination between the private sector and the government on this issue. Google announced the number of Gmail users that are estimated to have been targeted through the Iranian campaign. Unfortunately, the number is about 25,000 email users, which is no small amount. And so this is just another instance of how not just social media but the broader Internet realm, in this case email, can be used as a way to target American public opinion.
Ly: Thank you so much for joining, Naima. I really enjoyed our conversation and I know our viewers will too.
Green: Excellent! Well, I really enjoyed this so thanks for having me.
Ly: Thank you.