Moderator: Good morning. So, for everyone who saw our first keynote, we have a keynote by Tom Perez. Tom is the chair of the Democratic Party. We are thrilled to be able to have him here at DEF CON—unfortunately he’s not actually at DEF CON. For those of you in the political know, he is unfortunately at the Iowa State Fair eating way too much funnel cake. But he has kindly agreed to call into DEF CON to do some remarks for us, which if you will bear with us one moment we will try and get the Skype up and working. It worked flawlessly this morning; that’s a guarantee that it will not work now.

[looks off to the side while a tech is working on something] See? Flawless.

Tom, can you hear us?

Tom Perez: —that I can’t come to Las Vegas. However, I did want to share with you what we at the DNC are doing to increase awareness around the security threats from disinformation. We’re monitoring disinformation and developing a program to combat these online attacks. The basis for any such program is education. And here are the three tips we tell campaigns to help spot manipulated videos online.

Number one: know the source. Is it a reputable news organization? Do you know who posted it? Can you find instances of the clip or image from other reputable sources? If not, it may be fake.

Number two: be skeptical of video you find online. Are there gaps or unexplained transitions in the video? If so, this may be a sign of deception.

Number three: look for signs that the video has been manipulated. Does the speaker’s voice sound too low, or are they moving strangely? Is there limited or no blinking in the video? Is there inconsistent coloring or blurring?

These may all be signs of manipulation. We all have a part to play in stemming the problem of deceptive videos. Researchers can use their skills to work with industry experts to develop tech to quickly identify the signs of manipulation. Social media platforms can work to develop clear policies and technology that limit the prominence and damage deceptive videos can do. And the media can help teach the public about the threat these videos pose, while being careful not to feed the trolls and give bad actors the oxygen they crave.

It’s not going to be outrageous videos of Will Smith as Cardi B, or of Sylvester Stallone as the Terminator, that trip us up. It’ll be something more subtle, like a slowed-down video or even a deepfake of Tom Perez talking about cybersecurity.

Moderator: So obviously that was not Tom Perez. What? Shocking.

But more exciting, that was the Chief Security Officer of the Democratic Party, who agreed to let us videotape him for about an hour a couple of weeks ago in order to do that deepfake. So I would like to introduce now Bob Lord, Chief Security Officer of the Democratic Party.

Bob Lord: How many of you actually knew Tom Perez’s name before today? Oh, wow, that’s actually very good. Okay, well then I won’t use my next line. But how many of you actually knew his voice? Ah, see—oh no, they work there. No. Cheating, cheating.

Cheating. Not good, not good. So you know, I think there’s a whole bunch of stuff that is probably on your mind, like why is Bob up here? Why is he doing that? How long did it take? By the way, congratulations to all the people who had to watch hours of video of me. Like reading my emails. And you can probably imagine what a CSO’s face looks like just reading emails and things like that. Just doing—it’s [groans]. That’s the sort of thing that they had to sit there and watch. So they know my face better than I do.

So anyway, why are we doing this? Let me back up a little bit and talk a little bit about our larger program. So I joined the DNC about a year and a half ago, and I had worked at companies like Twitter, and at Yahoo, and so was really quite new to the entire space of politics. And it really is very, very different. What people don’t really realize is that the DNC is only…I dunno, somewhere around 200 people, something like that, give or take. So it’s not really a huge organization. I’ve worked in organizations where the entire security team is larger than the entirety of the DNC. And we sometimes forget that because it’s on TV every night, so you sometimes misunderstand the overall scale.

So the impact is obviously very great, but the number of people there is very small. And what I found out when I joined is that Tom Perez—the actual real Tom Perez, not the fake Tom Perez—wanted me to not only work on the DNC to help improve cybersecurity but to really expand that out to the state parties, because they’re separate legal entities with their own funding, their own staffing. And then also the campaigns for the midterms. So I had to really try to figure out what on Earth Bob was going to do to improve security.

So we did a number of things. We stood up some webinars. We taught them basic cybersecurity. Which is difficult, because I can’t put agents on their machines to monitor what’s going on. And remember, they’re not remote offices and I’m not headquarters. So this is a real struggle for us to try to figure out. The way that we organize the party is actually very good for being nimble and making local decisions fast. But in terms of cybersecurity it kind of works against us, because organizations with just a dozen people are not likely to go out and hire another dozen cybersecurity experts and IT experts. So it’s a real challenge to figure out how to nudge them along the path of being more secure.

One of the things that we did in the last cycle was, like I said, webinars; we sent out newsletters, email blasts when there was something that happened. You may have read about one of the email blasts that I sent out recently around a Russian app called FaceApp. Why some things become interesting in the press and others don’t I can only speculate, but we do those kinds of things. And we also ask for feedback from people to say, “Hey, if you see something really strange, please report it to us, because it may be more common than you think. There may be other state parties. There may be other campaigns that are experiencing the same kinds of problems.”

So we did all of that, and we organized our activities into three main buckets. So, basic cybersecurity, that makes sense; turn on two-factor. Great. Another one is not so intuitive, which is around counterintelligence. The world is a very interesting and scary place these days, and so we’re concerned about people showing up to volunteer for campaigns who may not have the best of intentions. And even recruiting that would normally take place in person in the United States, face-to-face, at a bar. You’ve seen The Americans, that kind of thing. But we’re also concerned about relationships that get spun up via Facebook and LinkedIn, and things along those lines.

The third bucket is the one that you’re here to hear more about, which is disinformation. So we started off last cycle with inviting the social media companies to come in and talk to the campaigns and to the state parties. But we really needed to supersize that for this election cycle. So that’s what we’re doing. We brought on new staff and we’re working on training. I’ll talk a bit more about that later.

So again, why would Bob bother to come out here and videotape himself reading his emails for a couple of hours? You know, there were a few things that I wanted to do. One of the things I wanted to do is show you a different kind of deepfake. So you know, I saw people kind of nodding in agreement with what Tom—fake Tom—was saying, because these sound like good things that we should be telling state parties and campaigns. But this wasn’t especially funny, although it might be funny to have a senior executive try to Skype in. We had audio problems; like, that’s a normal thing.

But really, this was not funny or dramatic. This wasn’t an impersonation of one Hollywood celebrity on top of another one. This wasn’t Jordan Peele being Obama. Those are funny. This was different. And this is the kind of thing that I’m much more worried about, which is not the big dramatic things of major candidates saying things, but other people whose voices you may not know, whose backgrounds you may not know, and where you may find it very difficult to really put together the historical context to know that person would not have said that thing. You don’t know Tom’s background. You might know a little bit. But you probably don’t know enough about him to be able to immediately judge the kinds of things that he would or would not do or say. So this is a problem.

So the other thing is, I wanted to be able to put myself out here in this awkward way to meet some of you. I need to be able to establish a link with the rest of the community that you all represent. So, I’m not a machine learning expert. I’m not an AI expert. I’m not a sociologist or an ethicist. But there are people who fulfill all of those functions here in this room. And part of what I think the real Tom wanted me to do is not just work to nudge people to turn on two-factor; I think he wanted us to be able to build much richer bridges with the research and hacker communities. So that’s a part of why I’m up here, and I hope that we can begin a dialogue that will help us take information that you have that I do not, and be able to take that back to campaigns and candidates, and to try to keep our elections safe.

So, I’m pretty nervous about the 2020 elections. We’ve seen a lot of little deepfakes here and there. And I suspect it’s not going to surprise you to say that I’m worried that things are going to get far, far worse and far more nuanced. And here’s the other thing, you know. If you’re studying the world of ethics, it occurs to me that there are a lot of people doing a lot of really fun stuff with deepfakes. I was watching a whole bunch last night, and some of them are genuinely very funny and clever, and disturbing. And you know, I wonder to what degree creating and distributing these fun videos actually creates…you know, a second-order effect which degrades people’s ability to tell what’s real? Or maybe even, it may cause them to not try to figure it out? It may also cause them to really start to distrust everything. And so when they start to see real media, if it doesn’t agree with their existing belief systems they may decide to simply tag it as fake news, and inappropriately. So, I’m certainly not going to be the guy to get up and say, “Never do deepfakes,” but it’s a question. It’s an open question to what degree each of us plays a role in creating an environment that can then be used by people with bad intentions. Or not.

I also want to take a moment just to talk a little bit about the larger context. You know, I’ve heard a lot of people ask me like, “Oh, tell me about how scared you are of deepfakes.” And you know, I talk to them a little bit about that, and it’s sort of like when somebody comes to a security person and says, “Can you talk to me about security,” we’re so happy that they come to talk to us about security that we’ll just answer the question. Like, yay, somebody cares. But I think that there’s a larger context here that we should really be thinking about, and deepfakes are just one of the things that we worry about.

So we also worry about the shallowfakes or the cheapfakes, which many people here are probably up on. We’ve already seen all sorts of examples in the wild that’re not deepfakes, but they’re very disturbing cheapfakes. So we’re talking about doctored political videos. Some things you may have seen recently. The Sunrise Movement splicing of a conversation with Senator Feinstein—anybody see that? Probably everybody. Come on. You must’ve seen it, it was everywhere. There’s the CRTV splicing of a video of an interview with Representative Ocasio-Cortez. Representative Gaetz asserting that women and children receiving money in Guatemala were Honduran migrants being funded by George Soros. This actually got traction. Isolated clips of Representative Omar saying “some people did something” without the larger proper context. These are just editing tricks. There’s a deceptively edited video of Representative Omar saying that she supported profiling of white males. There’s another one which is doctored video of Speaker Pelosi appearing to slur her words. Everyone—I mean, you must have seen that, right? Okay.

So, what sort of technology did that take? I mean, it didn’t take [indistinct] to do this, right. He was just… I mean, it took somebody with some very light video editing skills. So, I do want people to be concerned about the deepfakes. I don’t want them to have a sense of fatalism, like there’s nothing that we can do about this. But I also want them to understand that there are a whole host of things that we actually are very concerned about, and we’re seeing far more of those take root today. So we can’t just focus on one without the other.

And those of you who studied psych will know this far better than I do, but when we start to see something and believe it and attach a label of truth to it, it becomes incredibly hard to unseat that. And video’s especially implicated in this kind of thing. And there’s something counterintuitive here: the more that we try to convince people that it’s fake, the more they double down on their existing beliefs. This has been studied widely and it can be replicated in university studies. So it’s very difficult for us to know how to attack it. If we simply tell somebody “this is a deepfake,” it may not…even if they kind of understand what we’re talking about, it may not actually change their minds in any meaningful way. So we’ve got some real burdens with regard to the cognitive biases that we all have. And the people who’re playing these games—whether they’re aware of those biases by name or simply able to harness them to create this disruption doesn’t really matter, but that’s what they’re doing.

So you know, I think this stuff is kind of new, and so, like I said, we get fixated on the world of the deepfakes. But I was doing some research the other day and I saw a reference to active measures. Who knows what active measures are? Come on, you’ve all seen The Americans. Come on, raise your hands.

So this is…you know, the Soviet active measures programs led by the KGB and other parts of the organization in Russia were really quite effective. And these active measures were well-documented in the 80s. I was looking at a few things and I saw a footnote that referenced something from 1982, and I was like, there’s no way they were talking about active measures and disinformation and forgeries back in the 80s. Or were they? So I went and actually found the Senate testimony, or the House testimony, that was referenced, and it was a CIA deputy director who was literally laying out exactly what we’re seeing today. He was laying out the ways in which they do it. This was a high priority of the Politburo at the time. He talked about the funding models. He talked about the ways that they had prioritized various kinds of activities. And the terminology was exactly the same that we see today, and the strategy was exactly the same. You could literally take out the words KGB and put in FSB or GRU, take out the word Soviet and put in Russia, and the sentence would just hold up. So this for me was sort of remarkable. I’d sort of known of this, but then actually seeing page after page after page of testimony was really key.

And then I saw the second half of this huge document. There were dozens and dozens of pages that laid out real-life examples of Soviet forgeries. These were documents like fake letters from President Reagan to some diplomat. The CIA had compiled all of these and put them into their record. So, forgeries are really nothing new. And I guess one of my concerns is not just the deepfakes and it’s not just the cheapfakes, but the fact that this is part of a larger strategy that can be used against us, and it’s been going on for a very, very long time—longer than many of you have been alive. And so I think by focusing on the specific tactics, we’re doing ourselves a disservice, because we’re going to be victimized by these kinds of things again and again, because we don’t understand this as part of a long game. So, this is a long con and it has a long horizon. And people are willing to invest many millions of dollars and many great experts from many different fields to work against us. And it’s of course not just the Russians. There are all sorts of other intelligence agencies in various countries that are going to be doing the same thing now that we know that these particular attacks can work.

So, what kind of goals do they have? Classic, age-old goals. They want to be able to reinforce people’s existing biases. So, if you can get the right news to the right people and convince them to double down on their existing beliefs rather than to try to understand what other people are saying, you’re making a big impact. If you try to drive a wedge into naturally-occurring cracks in a society, then you’re going to be able to move the needle. So, anything around immigration or abortion, gun control, the environment, racism; any of these things are great fodder for somebody who wants to actively work against us.

And of course, when all else fails, just create some chaos, you know. Put up a disinformation campaign that tells people to vote from home by SMS. Like, this was a real thing. We actually saw this in the midterms. There was another one which was called the “no man midterms.” This was a campaign that was aimed at getting men to sit out the election and let the ladies take charge. This was a real thing. And so we actually had to work hard with the social media companies to find these and try to stamp them out—of course, they were already out there. So imagine using a deepfake for something like this, having somebody in a position of authority say something like this. Even when you go back and debunk it, people are still going to remember it. People remember the false story; they don’t remember the retraction.

So, what would we like to see? Well, one of the ways that we approach this is we currently think of these things, disinformation campaigns, as cybersecurity problems. Yes, it’s a content problem too, so it’s a quality problem. There are elements of that. But at the end of the day, this isn’t about people being wrong or being misinformed; this is about an active attacker who’s trying to do something against us. And this looks enough like the world of cybersecurity that we really want to find ways to work with people to come up with the right frameworks to understand them. We sometimes call these kill chains; they’re sometimes called attacker life cycles. But we need to find ways to define these so that we can work against them: find ways to prevent them, to detect them, and to respond and recover.

So there’s one that we found which is called AMITT, the Adversarial Misinformation and Influence Tactics and Techniques framework. Boy, it just rolls off the tongue. So anyway, this is an example of a group that has sat down and tried to figure out what the major stages in an attack are when it comes to disinformation. And that gives us, the defenders, an opportunity to figure out what it is that we can do against that.
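A staged framework like this becomes useful once it’s written down as data a defender can query. As a rough illustration only (the stage names, attacker goals, and counter-measures below are invented for this sketch and are not the actual AMITT entries), the chain might be encoded like this:

```python
# Sketch of a disinformation kill chain as a lookup structure.
# All stage/counter names are illustrative, not real AMITT content.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Stage:
    name: str
    attacker_goal: str
    counters: List[str] = field(default_factory=list)


KILL_CHAIN = [
    Stage("planning", "pick divisive themes and target audiences",
          ["monitor known narrative seeds"]),
    Stage("preparation", "create accounts, personas, and content",
          ["flag coordinated account creation"]),
    Stage("execution", "push content into target communities",
          ["rapid detection and labeling", "platform takedowns"]),
    Stage("evaluation", "measure reach and refine the campaign",
          ["public attribution to raise attacker cost"]),
]


def counters_for(stage_name: str) -> List[str]:
    """Return the defender options catalogued for one stage."""
    for stage in KILL_CHAIN:
        if stage.name == stage_name:
            return stage.counters
    return []
```

The value of the exercise is less the code than the discipline: every stage the attacker must pass through is a place where prevention, detection, or response can be attached.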

I think we need more guidelines on what is acceptable editing practice. You know, the media are very quick to put up things that they find online. But some of them are more clearly doctored than others, and so we think coming up with acceptable editing practices is going to be a useful thing.

People in this room are probably able to help with this next one a great deal, which is that we need better ways of detecting these deepfakes. But detecting them is not enough. I think we have to detect them quickly, so they can be part of the initial news story and not something that follows on the next day.
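One concrete building block for the blinking cue mentioned in the fake-Tom tips is the eye-aspect-ratio (EAR) heuristic from Soukupová and Čech’s 2016 work on real-time blink detection. This is only a sketch: a real pipeline needs a face-landmark detector (dlib, MediaPipe, and similar tools provide one) to supply the six eye points per frame, and the threshold and frame counts below are common rules of thumb, not tuned values.

```python
# Sketch of the eye-aspect-ratio blink heuristic. Landmark
# coordinates must come from a face-landmark detector in practice.
import math


def ear(p1, p2, p3, p4, p5, p6):
    """Eye aspect ratio from six (x, y) landmarks around one eye.
    p1 and p4 are the horizontal corners; (p2, p6) and (p3, p5) are
    the vertical pairs. An open eye typically measures roughly
    0.25-0.35; a closed eye drops well below 0.2."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))


def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least min_frames consecutive frames below the threshold."""
    blinks, run = 0, 0
    for value in ear_series:
        if value < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

An abnormally low blink count over a long clip is one weak signal among many; as the talk notes, single cues like this get engineered away, so detectors have to combine many of them and, above all, run fast enough to inform the first news story.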

And the other thing is even harder, which is that we need to find ways to make sure that people believe and trust these results. I don’t know how we’re gonna do that, but I’m going to look to many of the people in this room to help figure that out.

And of course we need the social media companies to continue to find ways to slow the spread of disinformation, and to really recognize that they have a role in either uplifting or pushing down disinformation as they find it.

There’s of course a role for government in finding a way to really build out their programs. I think that’s going to be key. It’s not clear who’s really in charge from the disinformation standpoint these days. And so I think we need to figure out what their role is, too.

And then of course, with the media, we would like them to keep improving their ability to educate people. I’d like to see information that is known to be fake, or stolen, or manipulated called out immediately, and to teach people to be more skeptical than they have been in the past.

And then finally, right now I don’t think there are a lot of deterrents. There’s really no penalty for somebody who builds some of these things maliciously and then spreads them around. So, why would they not do that, if we tolerate it? With other kinds of activities, especially when they’re driven by an actual government, we have techniques for holding them accountable, for calling them out publicly, for imposing economic sanctions—there are all sorts of tools that we have at our disposal. We haven’t really gotten to the point where we are able to hold people accountable and create those disincentives.

So those are some of the things that caused us to want to participate and to be here today, to learn from many of you and to give you the opportunity to see us as a possible partner in a way that we can work together. So, those are my prepared remarks, and then if—I don’t know if we have time, we can take a question or two.

Moderator: I think we have time for maybe two questions. Does anyone have a question they would like to ask Bob?

Audience 1: [inaudible]

Bob Lord: Yeah. So the question was around the ethics of some of these cheapfakes, and labeling in particular. So I think that there is somebody who’s an ethicist coming up later today? I think that’s right. So I’m certainly not going to be able to speak as eloquently, but yeah. Labeling is a key thing. So, labeling something as known disinformation seems like a key thing. I would also want to hear from psychologists as to whether or not that creates a backfire effect; whether that actually has these unintended consequences. So these are very good questions. I would definitely like to make sure that anytime there is a foreign government-sponsored message, that that’s clearly labeled as coming from a foreign government. That sometimes happens, but not always. So I think labeling is probably very key, but I would want to hear from people who have actually studied not just the first-order effects but the second-order effects. We’re in new territory here, so I definitely want to hear from some of those folks.

Audience 2: [inaudible]

Lord: Right. So the question is really around some of the other mechanisms. So I represent the DNC, and so when I was talking about “us” I was…somewhat talking about the Democratic ecosystem, but I think largely as a sort of a proxy for the larger thing. So I personally don’t have a lot of contacts with the folks at the RNC. They may have similar kinds of programs. I just don’t really know.

But the one thing I would also mention is that what we saw in 2016 were state-sponsored attacks that had a certain flow to them. What we’re seeing now is that these playbooks are organically sprouting up in a lot of different places. And so there are Americans who are taking some of these playbooks and running with them too. So we’ve seen this transition. That’s not to say that we don’t worry about all of the other adversaries in cyberspace that we have—we’re definitely worried about them. They can definitely scale, they can definitely be funded, and they can definitely be patient. A lot of these activities take a long time to really germinate. One of the more impressive ones the KGB did in the 80s was one where they planted a story—I think it was in an Indian newspaper or research paper or something like that. And then they were able to wait, and maybe nudge things a little bit here and there. And then it started showing up in more mainstream newspapers, and then eventually—you can go find this online—Dan Rather is saying that there’s concern that the CIA may have been the originator of the AIDS virus with the goal of killing black people. So, this got from that initial source all the way up. So we definitely worry about the large nation-states doing what nation-states do against us. But now we have the secondary problem where people are using the same playbook internally. So that’s another set of headaches that we have to worry about.

Moderator: Unfortunately for time, we won’t be able to take another question. But Bob, thank you.

Lord: Okay. Thank you.

Further Reference

DEF CON 27 event page

AI Village at DEF CON 27 event page