B. Cavello: Thank you all so much. So you've been hearing a lot about the Navy SEAL and I just want to say that that is…not me. Our team is an interesting one. Peaks unfortunately couldn't be with us here tonight. But we're a small team, although the image here might mislead you.

We are small in part because we're working on a pretty controversial topic, and that is the topic of algorithmic warfare. That's what brought us together. And in particular, when people hear that term they think of a lot of different things. They may think of this guy, the Terminator, Skynet. Thinking about robots, and here in Cambridge, Boston Dynamics.

But we actually wanted to challenge that paradigm a little bit. When we began this project we were looking into what is the role of AI in warfare, what is algorithmic warfare. And we came to the conclusion that there's kind of this interesting misunderstanding, right.

Some robots kill you with a gun.

So there's a really big effort underway to address the dangers of these kinds of robotics. And it's real, right. That's a valid effort.

Some robots send a person instead.

But there's also this very real threat that a lot of people kind of overlook, which is that at the end of the day, if you're being targeted, it doesn't actually matter whether a robot is the delivery mechanism of the violence.

And so that's really what inspired this project. We wanted to look at how surveillance, how these algorithmic decision-making systems and surveillance systems, feed into this kind of targeting decision-making. And in particular what we're going to talk about today is the role of the AI research community: how that research ends up being used in the real world, with real-world consequences. And then we'll talk a little bit about what we found in our investigation of the space, and what we invite you all to join us in doing as we move forward.

So, to take a step back, there's a pretty diverse crowd here so I just want to contextualize a little bit what we're talking about when we talk about this kind of algorithmic surveillance. Here you're seeing depicted a video surveillance system that has image recognition technology. So it's identifying, it's putting boxes around, various people and vehicles and so on. So we're looking at a couple of different types of systems here: computer vision systems, systems that can listen to voice or audio, as well as systems that may look at social networks or things that we post on social media to come to conclusions about whether or not we fall into a certain group.

Now, in this project in particular we're talking about an incredibly rich and interrelated system, and I'll talk a little bit about this data visualization in a second. But I want to first just contextualize that when we're talking about this research community, we're talking about a lot of different players in a global ecosystem. And you know, as researchers we want to share, we want to collaborate, and that's a beautiful thing. But what can be challenging about this space is that these threads of connection can lead to places that we didn't originally intend, and that we as researchers may not even be aware of. And so our work in particular is exploring how some of those outcomes, how some of the end uses of research that may be thought of as theoretical or benign, can actually do real harm in our world.

So with that I’m going to turn it over to Carl. 

Carl Governale: I'm Carl. I'm the one they've been talking about. And I'm going to teach you how to hunt people.

Screenshots of multiple surveillance systems and methods

So, many practitioners in this space will often say that the technology just isn't mature enough to be a true threat; you couldn't use facial recognition alone to build a target deck. And I promise you that grossly understates the threat. How you hunt people is actually with a series of overlays. You don't use one individual technology; you use them together to home in and find the correlations that build a smaller list, until you have your approved target deck.

So, if I wanted to identify all of the protesters in Hong Kong yesterday, I wouldn't just use facial recognition on CCTV camera footage. I would take that and put it on an overlay of metro cards used in the subway system in Hong Kong, right. And that might give me a more accurate deck to begin our interrogations.
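To make the overlay idea concrete, here is a minimal sketch of the technique described above. Every data source, ID, and function name here is invented purely for illustration: the point is only that each surveillance source yields a set of candidate identities, and intersecting those sets shrinks the deck.

```python
from functools import reduce

def overlay(*candidate_sets):
    """Intersect candidate ID sets from independent surveillance sources."""
    return reduce(set.intersection, candidate_sets)

# Hypothetical outputs from two independent systems:
cctv_matches = {"id_001", "id_002", "id_003"}   # facial recognition on CCTV footage
metro_matches = {"id_002", "id_003", "id_004"}  # metro cards tapped near the protest

# Overlaying the two sources yields a smaller, higher-confidence deck:
# only the IDs present in both sets remain (id_002 and id_003).
target_deck = overlay(cctv_matches, metro_matches)
print(target_deck)
```

No single source needs to be accurate on its own; each additional overlay simply narrows the intersection, which is why "the technology isn't mature enough" misses the point.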

So, jumping off of that short example, a quick deep dive. Non-exhaustive, but if you're not familiar with the Uyghur crisis in Xinjiang, China, I'll give you just a couple of soundbites. Since 2014 the Chinese "People's War on Terror" has been a systematic effort to oppress the Uyghur population, an ethnic minority in China that practices Islam. Roughly 10 million individuals, so 1.5% of the Chinese population, but one that accounts for 20% of all arrests in China. Consequently there are over one million Uyghurs currently assessed to be interned in re-education camps in Xinjiang province. These are the camps of Holocaust lore, where there's killing, torture…primarily this method of assimilation via submission.

It's a heavy workload; genocide is tough. So the People's Liberation Army has leveraged the tech industry. This region, this small province and cross-section of society, accounts for about $7.5 billion of the security industrial complex in China. And they leverage facial recognition, voice recognition, and other forms of AI to build lists of people who are either demonstrating Islamic practice or who display phenotypic features of this ethnic minority.

Now, where you stand depends on where you sit, and in the interest of positionality we're going to bring this home to MIT. A bunch of institutions here in America serve as key nodes in the surveillance supply chain. But here at MIT, through our research we found there are about fifteen projects ongoing that in some way, shape, or fashion involve surveillance-type technology.

So the nature of this ecosystem lends itself to a really complex interdependence. The research ongoing here at MIT is done in affiliation with, or with funding from, both private and public institutions. One private institution would be NEC, a Japan-based security firm whose product is basically predictive policing. We use it here in the States; it's used in the UK.

Other companies that have contributed funds to this type of research at MIT include SenseTime and iFlytek. Those two companies are both implicated in the provision of services in Xinjiang province.

Also, one fantastic supplier, if you can use that term, of funding for this type of research is the US government. So, these same researchers that are linked to these companies are also receiving funding from DARPA, the Office of Naval Research, the Marine Corps Warfighting Lab, and the Army Research Lab. Occasionally these funders actually appear side by side in the acknowledgments sections of the same publications.

A screen full of news headlines regarding surveillance technology

Cavello: So, we wanted to present this, and for some folks in the room it's probably new information, but we are not the first people to talk about this issue. The issue of surveillance tech is really important, and it's been an emerging and burgeoning conversation here in the United States as well as around the world. There are actually folks in the room here who are experts on the topic. But really there's this common theme, which is that researchers and product teams have an important role to play in thinking about how this technology is used, or in preventing it from being used irresponsibly. And in particular I wanted to talk a little bit about what we've covered so far.

So, we've looked at, in reality, thousands of different projects and contracts from the US government, as well as journal publications. We honed that down to a list of a couple hundred that were surveillance-tech-specific. And then we ultimately came down to about sixty-five research institutions here in the United States, with forty-nine funders, as Carl said, ranging from Chinese surveillance tech companies, to US surveillance tech companies, to our own US government. And they covered a broad breadth of different types of surveillance research, including facial recognition, social network analysis, and person re-identification (which for me was a new phrase), which is a little bit like what Carl was talking about at the beginning of the presentation.

So, what we've been doing is putting together all of this information and really trying to literally visualize it. I know this kind of looks like scribbles on the screen here, but this is a data visualization we've been putting together representing all of those different connected nodes. And ultimately what we would like to do is to move this project forward with a better understanding of the reality on the ground. We want to know: is this data reliable? Can we get access to more information? For instance, to date we have only been using public sources. And we also want to really involve the voices of the people who are developing these tools. One of the common themes in the work we did to research this project is that a lot of the people involved actually didn't know how their work was being used in the world, and that is both concerning and a really exciting opportunity for us to educate and do better.
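As a rough sketch of how a visualization like this can be assembled, each funder-to-institution relationship becomes an edge in a graph. This is not the team's actual tooling; it assumes the networkx and matplotlib libraries, and the edge list below is invented purely for illustration.

```python
# Funders and research institutions are nodes; a funding or
# affiliation relationship is an edge. All names are placeholders.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("Funder A", "University X"),
    ("Funder A", "University Y"),
    ("Funder B", "University X"),
    ("Funder C", "University Z"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Draw the network; shared institutions make the interdependence visible.
nx.draw_networkx(G, node_color="lightgray")
plt.axis("off")
plt.show()
```

Even with only public sources, a graph like this makes it easy to spot institutions that sit between multiple funders, which is exactly the kind of connected-node structure shown on screen.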

But ultimately, even if you're not a machine learning researcher, even if you're not in this kind of surveillance tech space, there are a couple of key points that we would really like you to push on as you're having conversations with your representatives about what we can do about surveillance tech, at least here in the United States.

Governale: So this is where we go from fact to opinion, so the opinions herein are not representative of the US government. But they are ours.

So, call to action. Two points where, through our research, we feel we can apply some leverage to this problem set and ideally make a difference rather than just admire the problem.

First is the Department of Commerce. That's the office that holds the Export Administration Regulations. These are essentially regulations that govern the export of commodities, including dual-use technologies and software. Adding facial recognition and surveillance technologies to that list would be a forcing function for anybody in this complex ecosystem to apply some level of risk mitigation and threat modeling to their decision calculus.

And then finally, we should probably call for a moratorium on US government funding for research into these technologies. That's not to say the US government's requirements won't be fulfilled. But the acquisition of this technology should probably happen in a highly regulated commercial market, as opposed to being directly funded, because the complexity and the low density of these skill sets just don't lend themselves to ethically sound and transparent funding profiles. So let's take that basic research funding and put it somewhere else. Thank you.