Moderator: Good morning. So, for everyone who saw our first keynote, we have a keynote by Tom Perez. Tom is the chair of the Democratic Party. We are thrilled to be able to have him here at DEF CON—unfortunately he’s not actually at DEF CON. For those of you in the political know, he is unfortunately at the Iowa State Fair eating way too much funnel cake. But he has kindly agreed to call into DEF CON to do some remarks for us, which if you will bear with us one moment we will try and get the Skype up and working. It worked flawlessly this morning, that’s a guarantee that it will not work now.
[looks off to the side while a tech is working on something] See? Flawless.
Tom, can you hear us?
“Tom Perez”: —that I can’t come to Las Vegas. However I did want to share with you what we at the DNC are doing to increase awareness around the security threats from disinformation. We’re monitoring disinformation and developing a program to combat these online attacks. The basis for any such program is education. And here are the three tips we tell campaigns to help spot manipulated videos online.
Number one: know the source. Is it a reputable news organization? Do you know who posted it? Can you find instances of the clip or image from other reputable sources? If not, it may be fake.
Number two: be skeptical of video you find online. Are there gaps or unexplained transitions in the video? If so, this may be a sign of deception.
Number three: look for signs that the video has been manipulated. Does the speaker’s voice sound too low, or are they moving strangely? Is there limited or no blinking in the video? Is there inconsistent coloring or blurring?
These all may be signs of manipulation. We all have a part to play in stemming the problem of deceptive videos. Researchers can use their skills to work with industry experts to develop tech to quickly identify the signs of manipulation. Social media platforms can work to develop clear policies and technology that limit the prominence and damage deceptive videos can do. And the media can help teach the public about the threat these videos pose, while being careful not to feed the trolls and give bad actors the oxygen they crave.
It’s not going to be outrageous videos of Will Smith as Cardi B, or of Sylvester Stallone as the Terminator that trip us up. It’ll be something more subtle, like a slowed-down video or even a deepfake of Tom Perez talking about cybersecurity.
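Tip three above leans on physical tells like blink rate. As a rough illustration of how such a cue could be automated, here is a minimal sketch assuming OpenCV and its bundled Haar cascades; it treats frames where a face is visible but no eyes are detected as candidate blinks. This is a crude screening heuristic, not a real deepfake detector, and the file name is a placeholder.

```python
# Rough blink-rate screen: unusually low blink rates were an early
# deepfake tell. Illustrative heuristic only, not a real detector.
# Assumes OpenCV (pip install opencv-python) and its bundled cascades.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_rate(video_path):
    """Estimate blinks per minute by counting frames where a face is
    visible but no eyes are detected (eyes closed = candidate blink)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    face_frames, blinks, eyes_closed = 0, 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        face_frames += 1
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) == 0:
            blinks += not eyes_closed  # count the start of each closure
            eyes_closed = True
        else:
            eyes_closed = False
    cap.release()
    minutes = face_frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People on camera blink roughly 15-20 times a minute; a rate near
# zero is a reason to look closer, nothing more.
print(f"{blink_rate('clip.mp4'):.1f} blinks/minute")  # 'clip.mp4' is a placeholder
```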
Moderator: So obviously that was not Tom Perez. What? Shocking.
But more exciting, that was the Chief Security Officer of the Democratic Party, who agreed to let us video him for about an hour a couple of weeks ago in order to do that deepfake. So I would like to introduce now Bob Lord, Chief Security Officer of the Democratic Party.
Bob Lord: How many of you actually knew Tom Perez’s name before today? Oh, wow, that’s actually very good. Okay well then I won’t use my next line. But how many of you actually knew his voice? Ah, see—oh no, they work there. No. Cheating, cheating.
Cheating. Not good, not good. So you know, I think there’s a whole bunch of stuff that is probably on your mind like why is Bob up here? Why is he doing that? How long did it take? By the way, congratulations to all the people who had to watch hours of video of me. Like reading my emails. And you can probably imagine what a CSO’s face looks like just reading emails and things like that. Just doing—it’s [groans]. That’s the sort of thing that they had to sit there and watch. So they know my face better than I do.
So anyway, why are we doing this? Let me back up a little bit and talk a little bit about our larger program. So I joined the DNC about a year and a half ago, and I had worked at companies like Twitter and at Yahoo, and so was really quite new to the entire space of politics. And it really is very, very different. What people don’t really realize is that the DNC is only…I dunno, somewhere around 200 people, something like that, give or take. So it’s not really a huge organization. I’ve worked in organizations where the entire security team is larger than the entirety of the DNC. And we sometimes forget that because it’s on TV every night, so you can misjudge the overall scale.
So the impact is obviously very great, but the number of people there is very small. And what I found out when I joined is that Tom Perez—the actual real Tom Perez, not the fake Tom Perez—wanted me not only to work at the DNC to help improve cybersecurity, but to really expand that out to the state parties, because they’re separate legal entities with their own funding and their own staffing. And then also the campaigns for the mid-terms. So I really had to figure out what on Earth Bob was going to do to improve security.
So we did a number of things. We stood up some webinars, so we taught them basic cybersecurity. Which is difficult, because I can’t put agents on their machines to monitor what’s going on. And remember, they’re not remote offices and I’m not headquarters. So this is a real struggle for us to try to figure out. The way that we organize the party is actually very good for being nimble and making local decisions fast. But in terms of cybersecurity it kind of works against us, because organizations with just a dozen people are not likely to go out and hire another dozen cybersecurity and IT experts. So it’s a real challenge to try to figure out how to nudge them along the path of being more secure.
One of the things that we did in the last cycle was, like I said, webinars; we also sent out newsletters and email blasts when there was something that happened. You may have read about one of the email blasts that I sent out recently around a Russian app called FaceApp. Why some things become interesting in the press and others don’t I can only speculate, but we do those kinds of things. And we also ask for feedback from people to say, “Hey, if you see something really strange, please report it to us, because it may be more common than you think. There may be other state parties, there may be other campaigns, that are experiencing the same kinds of problems.”
So we did all of that, and we organized our activities into three main buckets. So basic cybersecurity, that makes sense; turn on two-factor. Great. Another one is not so intuitive, which is around counter-intelligence. The world is a very interesting and scary place these days, and so we’re concerned about people showing up to volunteer for campaigns who may not have the best of intentions. And even recruiting that would normally take place in person in the United States, face-to-face, at a bar. You’ve seen The Americans, that kind of thing. But we’re also concerned about relationships that get spun up via Facebook and LinkedIn, and things along those lines.
The third bucket is the one that you’re here to hear more about, which is disinformation. And so we started off last cycle by inviting the social media companies to come in and talk to the campaigns and to the state parties. But we really needed to supersize that for this election cycle. So that’s what we’re doing. And we brought on one new staff member, and we’re working on training. I’ll talk a bit more about that later.
So again, why would Bob bother to come out here and videotape himself reading his emails for a couple of hours? There were a few things that I wanted to do. One of the things I wanted to do is show you a different kind of deepfake. So you know, I saw people kind of nodding in agreement with what Tom—fake Tom—was saying, because these sound like good things that we should be telling state parties and campaigns. But this wasn’t especially funny, although it might be funny to have a senior executive try to Skype in. We had audio problems, like that’s a normal thing.
But really this was not funny or dramatic. So this wasn’t an impersonation of one Hollywood celebrity on top of another one. This wasn’t Jordan Peele being Obama. Those are funny. This was different. And this is the kind of thing that I’m much more worried about, which is not the big dramatic things of major candidates saying things, but other people whose voices you may not know, whose backgrounds you may not know, and where you may find it very difficult to really put together the historical context to know that person would not have said that thing. You don’t know Tom’s background. You might know a little bit. But you probably don’t know enough about him to be able to immediately judge the kinds of things that he would or would not do or say. So this is a problem.
So the other thing is I wanted to be able to put myself out here in this awkward way to meet some of you. So I need to be able to establish a link with the rest of the community that you all represent. So, I’m not a machine learning expert. I’m not an AI expert. I’m not a sociologist or an ethicist. But there are people who fulfill all of those functions here in this room. And part of what I think the real Tom wanted me to do is not just work to nudge people to turn on two-factor, but to build much richer bridges with the research and hacker communities. So, that’s a part of why I’m up here, and I hope that we can begin a dialogue that will help us take information that you have that I do not, take it back to campaigns and candidates, and try to keep our elections safe.
So, I’m pretty nervous about the 2020 elections. We’ve seen a lot of little deepfakes here and there. And I suspect it’s not going to surprise you to say that I’m worried that things are going to get far, far worse and far more nuanced. And here’s the other thing, you know. If you’re studying the world of ethics, it occurs to me that there are a lot of people doing a lot of really fun stuff with deepfakes. I was watching a whole bunch last night and some of them are genuinely very funny and clever, and disturbing. And you know, I wonder to what degree creating and distributing these fun videos actually creates…you know, a second-order effect which degrades the ability for people to tell what’s real? Or it may even cause them to not try to figure it out. It may also cause them to really start to distrust everything. And so when they start to see real media, if it doesn’t agree with their existing belief systems, they may decide to simply tag it as fake news, inappropriately. So, I’m certainly not going to be the guy to get up and say, “Never do deepfakes,” but it’s a question. It’s an open question to what degree each of us plays a role in creating an environment that can then be used by people with bad intentions. Or not.
I also want to take a moment just to talk a little bit about the larger context. You know, I’ve heard a lot of people ask me like, “Oh, tell me about how scared you are of deepfakes.” And you know, I talked to them a little bit about that and it’s sort of like when somebody comes to a security person and says, “Can you talk to me about security,” we’re so happy that they come to talk to us about security that we’ll just answer the question. Like yay, somebody cares. But I think that there’s a larger context here that we should really be thinking about, and deepfakes are just one of the things that we worry about.
So we also worry about the shallowfakes or the cheapfakes, which many people here are probably up on. We’ve already seen all sorts of examples in the wild that’re not deepfakes, but they’re very disturbing cheapfakes. So we’re talking about doctored political videos, some things you may have seen recently. The Sunrise Movement splicing of a conversation with Senator Feinstein—anybody see that? Probably everybody. Come on. You must’ve seen it, it was everywhere. There’s CRTV splicing a video of an interview with Representative Ocasio-Cortez. Representative Gaetz asserting that women and children receiving money in Guatemala were Honduran migrants being funded by George Soros. This actually got traction. Isolated clips of Representative Omar saying “some people did something” without the larger proper context. These are just editing tricks. There’s deceptively edited video of Representative Omar saying that she supported profiling of white males. And there’s another one, the doctored videos of Speaker Pelosi appearing to slur her words. Everyone—I mean, you must have seen that, right? Okay.
So, what sort of technology did that take? I mean, it didn’t take [indistinct] to do this, right. He was just… I mean it took somebody with some very light video editing skills. So, I do want people to be concerned about the deepfakes. I don’t want them to have a sense of fatalism, like there’s nothing that we can do about this. But I also want them to understand that there are a whole host of things that we actually are very concerned about, and we’re seeing far more of those take root today. So we can’t just focus on one without the other.
And those of you who’ve studied psych will know this far better than I do, but when we start to see something and believe it and attach a label of truth to it, it becomes incredibly hard to unseat that. And video’s especially implicated in this kind of thing. And there are some counterintuitive things, like the more that we try to convince people that it’s fake, the more they double down on their existing beliefs. And this has been studied widely and it can be replicated in university studies. So it’s very difficult for us to know how to attack it. If we simply tell somebody “this is a deepfake,” it may not…even if they kind of understand what we’re talking about, it may not actually change their minds in any meaningful way. So we’ve got some real burdens with regard to cognitive biases that we all have. And whether the people who’re playing these games are well aware of those biases by name, or whether they’re simply able to harness these powers to create this disruption, doesn’t really matter; that’s what they’re doing.
So you know, I think this stuff is kind of new, and so we get, like I said, fixated on the world of the deepfakes. But I was doing some research the other day and I saw a reference to active measures. Who knows what active measures are? Come on, you’ve all seen The Americans. Come on, raise your hands.
So this is…you know, the Soviet active measures programs, led by the KGB and other parts of the apparatus, were really quite effective. And these active measures were well-documented in the 80s. I was looking at a few things and I saw a footnote that referenced something from 1982, and I was like, there’s no way they were talking about active measures and disinformation and forgeries back in the 80s. Or were they? So I went and actually found the Senate testimony, or the House testimony, that was referenced, and it was a CIA deputy director who was literally laying out exactly what we’re seeing today. He was laying out the ways in which they do it. This was a high priority of the Politburo at the time. He talked about the funding models. He talked about the ways that they had prioritized various kinds of activities. And the terminology was exactly the same that we see today, and the strategy was exactly the same. You could literally take out the words KGB and put in FSB or GRU, take out the word Soviet and put in Russia, and the sentence would just hold up. So this for me was sort of remarkable. I’d sort of known of this, but then actually seeing page after page after page of testimony was really key.
And then I saw the second half of this huge document. There were dozens and dozens of pages that laid out real-life examples of Soviet forgeries. So these were documents like fake letters from President Reagan to some diplomat. The CIA had compiled all of these and put them into the record. So, forgeries are really nothing new. And I guess one of my concerns is not just the deepfakes, and it’s not just the cheapfakes, but the fact that this is part of a larger strategy that can be used against us, and it’s been going on for a very, very long time—longer than many of you have been alive. And so I think by focusing on the specific tactics, we’re doing ourselves a disservice, because we’re going to be victimized by these kinds of things again and again, because we don’t understand this as part of a long game. So, this is a long con and it has a long horizon. And people are willing to invest many millions of dollars and many great experts from many different fields to work against us. And it’s of course not just the Russians. There are all sorts of other intelligence agencies in various countries that are going to be doing the same thing, now that we know that these particular attacks can work.
So, what kind of goals do they have? Classic, age-old goals. They want to be able to reinforce people’s existing biases. So, if you can get the right news to the right people and convince them to double down on their existing beliefs rather than try to understand what other people are saying, you’re making a big impact. If you try to drive a wedge into naturally-occurring cracks in a society, then you’re going to be able to move the needle. So, anything around immigration or abortion, gun control, the environment, racism; any of these things are great fodder for somebody who wants to actively work against us.
And of course, when all else fails, just create some chaos, you know. Put up a disinformation campaign that tells people to vote from home by SMS. This was a real thing; we actually saw this in the mid-terms. There was another one called “No Men Midterms.” This was a campaign aimed at getting men to sit out the election and let the ladies take charge. This was a real thing. And so we actually had to work hard with the social media companies to find these and try to stamp them out—of course, they were already out there. So imagine using a deepfake for something like this, having somebody in a position of authority say something like this. Even when you go back and debunk it, people are still going to remember it. People remember the false story; they don’t remember the retraction.
So, what would we like to see? Well, one of the ways that we approach this is that we currently think of these things, disinformation campaigns, as cybersecurity problems. Yes, it’s a content problem too, so it’s a quality problem; there are elements of that. But at the end of the day, this isn’t about people being wrong or being misinformed. This is about an active attacker who’s trying to do something against us. And so this looks enough like the world of cybersecurity that we really want to find ways to work with people to come up with the right frameworks to understand these campaigns. We sometimes call these kill chains; they’re sometimes called attacker life cycles. But we need to find ways to define these so that we can work against them: find ways to prevent them, or to detect, respond, and recover.
So there’s one that we found which is called AMITT, the Adversarial Misinformation and Influence Tactics and Techniques framework. Boy, it just rolls off the tongue. Anyway, this is an example of a group that has sat down and tried to figure out what the major stages in a disinformation attack are. And that gives us, the defenders, an opportunity to figure out what it is that we can do against each of them.
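As a sketch of what working against such a framework might look like in practice, the snippet below encodes an illustrative disinformation kill chain and maps observed techniques to candidate countermeasures. The stage and technique names here are made up for illustration, loosely echoing AMITT’s public phases; they are not quoted from the framework itself.

```python
# Sketch: encoding a disinformation kill chain so observed incident
# behavior can be mapped to stages and countermeasures. Stage and
# technique names are illustrative, not the authoritative AMITT data.
from dataclasses import dataclass, field

@dataclass
class Technique:
    ident: str
    name: str
    counters: list = field(default_factory=list)  # candidate mitigations

@dataclass
class Stage:
    name: str
    techniques: list

KILL_CHAIN = [
    Stage("Planning", [
        Technique("T1", "Define target audiences",
                  ["monitor narrative trends"]),
    ]),
    Stage("Preparation", [
        Technique("T2", "Create inauthentic accounts",
                  ["platform account-integrity signals"]),
        Technique("T3", "Produce doctored or synthetic media",
                  ["media forensics", "provenance standards"]),
    ]),
    Stage("Execution", [
        Technique("T4", "Amplify via bot networks",
                  ["coordinated-behavior detection", "rate limits"]),
        Technique("T5", "Seed story in fringe outlets",
                  ["early fact-checks", "source labeling"]),
    ]),
]

def counters_for(observed_technique_ids):
    """Given technique IDs observed in an incident, yield the stages
    hit and the candidate countermeasures, kill-chain style."""
    for stage in KILL_CHAIN:
        for t in stage.techniques:
            if t.ident in observed_technique_ids:
                yield stage.name, t.name, t.counters

# Example: an incident that used synthetic media plus bot amplification.
for stage, tech, counters in counters_for({"T3", "T4"}):
    print(f"{stage}: {tech} -> {', '.join(counters)}")
```

The point of the structure is the same one the kill-chain idea gives cybersecurity defenders: once behavior is named and staged, you can ask at which stage you had a chance to prevent, detect, or respond.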
I think we need more guidelines on what is acceptable editing practice. You know, the media are very quick to put up things that they find online. But some of them are more clearly doctored than others, and so we think coming up with acceptable editing practices is going to be a useful thing.
People in this room are probably able to help a great deal with this next one, which is that we need better ways of detecting these deepfakes. But detecting them is not enough. I think we have to detect them quickly, so the detection can be part of the initial news story and not something that follows on the next day.
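One way to read “detect them quickly”: don’t score every frame of a video, sample them, and stop as soon as the running evidence is decisive. The sketch below shows only that aggregation logic; frame_score is a stub standing in for whatever real per-frame detector a researcher would plug in, and the thresholds and file name are assumptions for illustration.

```python
# Sketch of the "detect it fast" half of the problem: sample frames,
# score each with a per-frame detector, and exit early once the running
# evidence is decisive. frame_score is a stand-in; a real model trained
# on manipulated faces would replace it.
import cv2

def frame_score(frame) -> float:
    """Placeholder per-frame manipulation score in [0, 1].
    A trivial stand-in so the pipeline runs end to end."""
    return 0.5  # replace with a real detector's output

def quick_verdict(video_path, sample_every=15,
                  fake_thresh=0.9, real_thresh=0.1, min_frames=20):
    cap = cv2.VideoCapture(video_path)
    scores, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % sample_every == 0:          # sample, don't score every frame
            scores.append(frame_score(frame))
            if len(scores) >= min_frames:
                avg = sum(scores) / len(scores)
                if avg >= fake_thresh or avg <= real_thresh:
                    break                  # evidence is decisive: stop early
        i += 1
    cap.release()
    avg = sum(scores) / len(scores) if scores else 0.0
    return ("likely manipulated" if avg >= fake_thresh else
            "likely authentic" if avg <= real_thresh else
            "inconclusive"), avg

print(quick_verdict("clip.mp4"))  # 'clip.mp4' is a placeholder
```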
And the other thing is even harder, which is that we need to find ways to make sure that people believe and trust these results. I don’t know how we’re gonna do that, but I’m going to look to many of the people in this room to help figure that out.
And of course we need the social media companies to continue to find ways to slow the spread of disinformation, and to really recognize that they have a role in either uplifting or pushing down disinformation as they find it.
There’s of course a role for government in finding a way to really build out its programs. I think that’s going to be key. It’s not clear who’s really in charge from the disinformation standpoint these days. And so I think we need to figure out what government’s role is, too.
And then of course, with the media, we would like them to keep improving their ability to educate people. I’d like to see information that is known to be fake, or stolen, or manipulated be called out immediately, and to teach people to be more skeptical than they have been in the past.
And then finally, right now I don’t think there are a lot of deterrents. So, there’s really no penalty for somebody who builds some of these things maliciously and then spreads them around. So, why would they not do that, if we tolerate it? When there are other kinds of activities, especially when they’re driven by an actual government, we have techniques for holding them accountable, for calling them out publicly, for imposing economic sanctions—there are all sorts of tools that we have at our disposal. We haven’t really gotten to the point where we are able to hold people accountable and create those disincentives.
So those are some of the things that caused us to want to participate and to be here today to learn from many of you and to give you the opportunity to see us as a possible partner in a way that we can work together. So, those are my prepared remarks and then if—I don’t know if we have time, we can take a question or two.
Moderator: I think we have time for maybe two questions. Does anyone have a question they would like to ask Bob?
Audience 1: [inaudible]
Bob Lord: Yeah. So the question was around the ethics of some of these cheapfakes, and labeling in particular. So I think that there is somebody who’s an ethicist coming up later today? I think that’s right. So I’m certainly not going to be able to speak as eloquently, but yeah. Labeling is a key thing. So, labeling something as known disinformation seems like a key thing. I would also want to hear from psychologists as to whether or not that creates a backfire effect; whether that actually has these unintended consequences. So these are very good questions. I would definitely like to make sure that anytime there is a foreign government-sponsored message, that’s clearly labeled as coming from a foreign government. That sometimes happens, but not always. So I think labeling is probably very key, but I would want to hear from people who have actually studied not just the first-order effects but the second-order effects. We’re in new territory here, so I definitely want to hear from some of those folks.
Audience 2: [inaudible]
Lord: Right. So the question is really around some of the other mechanisms. So I represent the DNC and so when I was talking about “us” I was…somewhat talking about the Democratic ecosystem but I think largely as a sort of a proxy for the larger thing. So I personally don’t have a lot of contacts with the folks at the RNC. They may have similar kinds of programs. I just don’t really know.
But the one thing I would also mention is that what we saw in 2016 were state-sponsored attacks that had a certain flow to them. What we’re seeing now is that these playbooks are organically sprouting up in a lot of different places. And so there are Americans who are taking some of these playbooks and running with them too. So we’ve seen this transition. That’s not to say that we don’t worry about all of the other adversaries in cyberspace that we have—we’re definitely worried about them. They can definitely scale, they can definitely be funded, and they can definitely be patient. A lot of these activities take a long time to really germinate. One of the more impressive ones the KGB did in the 80s was one where they planted a story—I think it was in an Indian newspaper or research paper or something like that. And then they were able to wait, and maybe nudge things a little bit here and there. And then it started showing up in more mainstream newspapers, and then eventually—you can go find this online—Dan Rather is saying that there’s concern that the CIA may have been the originator of the AIDS virus with the goal of killing black people. So this got from that initial source all the way up. So we definitely worry about the large nation-states doing what nation-states do against us. But now we have the secondary problem where people are using the same playbook internally. So that’s another set of headaches that we have to worry about.
Moderator: Unfortunately, for time, we won’t be able to take another question. But Bob, thank you.
Lord: Okay. Thank you.