Oumou Ly: Welcome to The Breakdown. My name is Oumou, I’m a fellow at the Berkman Klein Center on the Assembly Disinformation Program. I am really excited to be joined today by Naima Green-Riley. Naima’s a PhD candidate in the Department of Government at Harvard University, with a particular focus on public diplomacy and the global information space. She was also formerly a foreign service officer and a Pickering fellow. Welcome, Naima. Thanks so much for joining.

Naima Green-Riley: Well thank you so much for having me.

Ly: Thank you. So, our conversation today centers on foreign interference in the upcoming election, which is drawing really, really close. At the time of this recording we’re about two weeks out from November 3rd. And a few of the big topics on my mind today, Naima, are, you know, one, the big threat actors this time around. We know that 2016 was sort of a watershed moment in terms of foreign interference in American democratic processes. In terms of social media manipulation in particular, how do foreign influence efforts in 2020 look in contrast to active measures we saw in 2016? Have the primary threat actors changed, optimized their methods a little bit, or adopted overall new approaches to influencing public opinion?

Green: Well, you’re definitely right that 2016 marked the first time that the US started to really pay attention to this type of online foreign influence activity. And during that election year we saw a series of coordinated social media campaigns targeting various groups of individuals in the United States and seeking to influence their political thoughts and behavior.

The campaigns were mainly focused on sowing discord in US politics by driving a wedge between people on very polarizing topics. So they usually involved either creating, or amplifying, content on social media that would encourage people to take more extreme viewpoints. So one example is that veterans were often targeted. There was this one meme that was run by Russian trolls, basically, that showed a picture of a US soldier, and then it had the text “Hillary Clinton has a 69% disapproval rate amongst all veterans” on it. Clearly intended to have an impact on how those people were thinking.

Ly: Right.

Green: They might also give misleading information about the elections. Like they might tell people that the election date was maybe several days after the actual election date, and thereby try to ruin people’s chances of exercising their right to vote. Some disinformation campaigns told people that they could tweet or text their vote in so they didn’t have to leave their homes. And there was also exploitation of real political sentiment in the US, often encouraging division, and particularly divisions around race. And so there were YouTube channels that would be called things like “Don’t Shoot” or “Black to Live” that shared content about police violence and Black Lives Matter. And some racialized campaigns that were linked to those types of sites would then promote ideas like: the Black community can’t rely on the government; it’s not worth voting anyway.

So that’s the type of stuff that we started to see in 2016, and many of those efforts were linked either to the GRU, which is part of the General Staff of the armed forces of Russia, or to the Internet Research Agency, the IRA, of Russia. And many characterize the IRA as a troll farm, an organization that particularly focuses on spreading false information online.

So since 2016, unfortunately, online influence campaigns have only become more rampant and more complicated. We’ve seen a more diverse range of people being targeted in the United States, so not just veterans and African Americans but also different political groups from the far right to the far left. We’ve seen immigrant communities be targeted, religious groups, people who care about specific issues like gun rights or the Confederate flag. So basically the most controversial topics are the topics that foreign actors tend to drill down on to try and influence Americans. It’s just gotten more and more complex.

Ly: I want to pick up on this point, because so often racial issues in particular form the basis of disinformation and influence campaigns, because like you said they are the most divisive, contentious issues. In what ways have you seen foreign actors work to weaponize social issues in the United States just this year, maybe since the death of George Floyd?

Green: Well you know, it’s interesting, because we focus a lot on disinformation as targeted towards the elections, but a number of different types of behaviors and activities have been targeted through disinformation. So we’ve seen people try to manipulate things like census participation or certain types of civic involvement. And the range of ways that actors are actually using different platforms is changing too. So we’re seeing text messages and WhatsApp messages being used to impact people in addition to social media.

But after George Floyd was killed, as you might expect, because it’s a controversial issue that affects Americans, absolutely there was sort of this onslaught of misinformation and disinformation that showed up online. So, there were claims that George Floyd didn’t die. There were claims stoking conspiracy theories about the protests that happened after his death.

And I have to say, not all dis- and misinformation is foreign, and that’s why this is such a large problem, because there are many domestic actors that engage in disinformation campaigns as well. So, the narratives that we’ve seen across the space come from so many different people that sometimes it can be hard to trace the problem to one particular actor or one particular motive.

Ly: So in 2016, the Russian government undertook really sophisticated methods of influence, certainly for that particular time and for that election, including mobilizing inauthentic narratives via inauthentic users, and leveraging witting and unwitting Americans and social media users. How would you contrast the threat posed by Russia’s efforts with other countries known to be involved in ongoing influence efforts?

Green: Well, I have to say that Russia continues to be a country of major concern. Just this week we saw the FBI announce that Russia has been shown to have some voter registration information from the United States. Russian disinformation campaigns have definitely reemerged in the 2020 election cycle. But those campaigns only make up a small amount of the overall activities that Russia’s engaging in today, all with the goal of undermining democracy and eroding democratic institutions around the world.

That being said, we’ve seen other actors emerging in this space. Within the first few months of the COVID-19 pandemic, Chinese agents were shown to be pushing false narratives within the US saying that President Trump was going to put the entire country on lockdown. Iran has increasingly been involved in these types of campaigns as well. Recently they used mass emails to affect US public opinion about the elections.

And one more thing I want to mention is that this is really a global phenomenon. So you know, these state actors often outsource their activity through operations in different countries. So for instance, there are stories of a Russian troll farm that was set up in Ghana to push racial narratives about the United States. And you know, there have also been troll farms set up by state actors in places like Nigeria, Albania, the Philippines. So what’s interesting here is that the individuals who are actually sending those messages are either economically motivated, meaning they’re getting paid, or they might be ideologically motivated. But they’re acting on behalf of these state actors. And that makes this not just a state-to-state issue but a real global problem that involves many people in different parts of the world.

Ly: So turning to the platforms for a second, what are your thoughts on some of the interventions platforms have announced so far? Maybe like limiting retweets and shares via private message, labeling posts and accounts associated with state-run media organizations. You know, the list of interventions sort of goes on.

Green: Yeah. All of the things that you mentioned are a good start, I would say. At the end of the day I think there’s got to be a major focus on how we can inform social media users of the potential threats in the information environment, and how we can best equip them to really understand what they’re consuming. So I think that part of the answer is for these tech companies to, of their own accord, continue to create policies that will address this issue. But we also need better legislation, and that legislation has to focus on privacy rights; it has to focus on online advertising, political advertising, tech sector regulation. And then we need policies that will enforce this type of thing moving forward. So it can’t all be on the tech companies without that guidance, because I don’t know that they necessarily have the total will to do all that’s necessary to really get at this problem.

Social media companies have already started to label content. They’re also searching for inauthentic behavior, especially coordinated inauthentic behavior online. But I think that there is particular work to be done in terms of the way that we think about content labeling. So, when platforms are labeling content, they are usually labeling content from some sort of state-run media. And much of the state-run media that they’re looking at is not a completely covert operation; it’s not a situation where this media source just doesn’t want anyone to know that it’s associated with the state.

But it might be pretty difficult for the audience to actually determine that that outlet is a state-run site. So an example would be RT, formerly known as Russia Today. There’s a reason, I think, that it went from Russia Today to RT. If you go to the RT web site, you will see a big banner that says “Question more; RT,” and then there’s lots of information about how RT works all over the world in order to help people to uncover the truth. And then if you scroll alllll the way to the bottom of the web site, you’ll see that RT has the support of Moscow or the Russian government, something to that effect.

Ly: Yeah.

Green: So, it’s difficult for people to actually know where this content is coming from. And this summer, Facebook made good on a policy that they had said for some time they were going to enact, where they now label certain types of content. And basically they say that they’ll label any content that seems to be wholly or fully under editorial control that’s influenced by the state, by some state government. And so lots of Chinese and Russian sites or outlets are included in this policy so far. And according to Facebook they’re going to increase the number of outlets that get this label. And basically what you see is, on the post, “Chinese state-controlled media” or “Russian state-controlled media,” something to that effect.

That’s helpful, because now a person doesn’t have to click, and then go to the web site, and scroll to the bottom of the page to find out that this outlet comes from Russia.

Ly: Surprise!

Green: But at the same time, I still think we need to do more in terms of helping Americans to understand why it’s an iss—why state actors are trying to reach them, little old me who lives in some small city or some small town in the middle of America. And how narratives can be manipulated. And only if that’s done, in connection with labeling more of these types of outlets on social media, do I think you get more impact.

YouTube does something else. In 2018 they started to label their content. But the way they label their content is that they basically label anything that is government-sponsored. So, if some outlet is funded in whole or in part by a government, there’s a banner that comes up at the bottom of the video that tells people that. And so you’ll see RT labeled as Russian content, but you’ll also see the BBC labeled as British content, so it doesn’t have to do with the editorial control of the outlet.

One final thing on this, because I think this is really important. So I have heard stories of people who, let’s say for whatever reason, have stumbled upon some sort of content from a foreign actor.

Ly: Yeah.

Green: And so, this content might come up because somebody shared something and they watched the video, right. So they watch a video, let’s say they watch an RT video. Maybe they weren’t trying to find the RT video, and maybe they also aren’t the type of person who would watch a lot of content from RT. But they watched that one video. They continue to scroll on their news feed. And then they get a suggestion. “You might enjoy this.”

Now, the next thing that they get comes from Sputnik. It comes from RT again. So now they’re getting fed information about the US political system that is being portrayed by a foreign actor, and they weren’t even looking for it. I think that’s another thing we’ve got to tackle: the algorithms that are used to uphold tech companies’ business models. Because in some cases, those algorithms will be harmful to people, because they’ll actually feed them information from foreign actors that might have malicious intent.

Ly: Naima, this week the FBI confirmed that Iran was responsible for an influence effort giving the appearance of election interference. And in this particular episode, US voters in Florida and I think a number of other states received threatening emails from a domain appearing to belong to a white supremacist group. Can you talk a little bit about what in particular the FBI revealed, and what its significance is for the election?

Green: Right. So, there was a press conference on October 21st in which the FBI announced that they had uncovered an email campaign that was orchestrated by Iran. The emails purported to come from the Proud Boys, which as you mentioned is a far-right group with ties to white supremacy. And it was also a group that had recently been referenced in US politics in the first presidential debate. But actually now we know that these emails came from Iran. And some of the individuals who received the emails posted their contents online. So they were addressed to the email users by name, and they said “we are in possession of all of your information; email, address, telephone, everything.” And then they said they knew that the individual was registered as a Democrat because they had gained access to the US voting infrastructure. And they said “You will vote for Trump on election day or we will come after you.”

So first of all, they included a huge amount of intimidation. Second of all, they were purporting to be a group that they were not. And third of all, they absolutely were attempting to contribute to discord in the run-up to the election. It’s dangerous activity. It is alarming activity. It’s something that I think will have multiple impacts for some time to come. Because even though the FBI was able to identify that this happened, that goal of shaking voter confidence of course may have been a little bit successful in that instance. And so, one of the things that is good about this is that the FBI was able to identify this very quickly; to make an announcement to the US public that it had happened; to be clear about what happened.

Unfortunately, what they announced was not just that Gmail users were receiving this email and that there was false information in it. They also said that they had information that both Russia and Iran have actually obtained voter registration information from the United States. And that’s concerning as well. There appears to be good coordination between the private sector and the government on this issue. Google announced the number of Gmail users that are estimated to have been targeted through the Iranian campaign. Unfortunately the number is about 25,000 email users, which is no small amount. And so this is just another instance of how not only social media but also email can be used as a way to target American public opinion.

Ly: Thank you so much for joining, Naima. I really enjoyed our conversation and I know our viewers will too.

Green: Excellent! Well, I really enjoyed this, so thanks for having me.

Ly: Thank you.

Further Reference

Medium post for this episode, with introduction and edited text
