Trevor Paglen: So…

Kate Crawford: Hi. It’s great to see you here.

Paglen: I think we should get right into it because we have half an hour, and there's a lot of material to cover, a lot of ideas. So just… I guess, like, I can kind of start where I came to this topic from, which is that I started thinking a lot about the social and political implications of— You know, I don't even think we can call it the Internet. We need something else. I know in Berlin here at the Haus der Kulturen der Welt there's this concept, the technosphere, that they've been interrogating. So we could just use that for now, despite any kind of nitpicky problems one might have with that.

But I think for me, you know, being somewhat attached to the Snowden project was really when I started thinking about what these kinds of planetary infrastructures of communication and surveillance were, and what their implications might be. And of course there's been a lot of concern about the implications of mass surveillance at a global scale in terms of democracies, in terms of state power, in terms of culture and the like. But I think that when we look at the NSA's kind of mass surveillance infrastructures, the questions they pose are: what's our relation to politics? There's a question about privacy, etc.

But at the same time that those infrastructures have been built, you know, the NSA might be tapping the cables between the Google data centers. But Google has the data centers, and has the data. And I think we're kind of arriving at a moment where it's starting to become clear what that's going to mean. And a big part of what that is has to do with AI. So I think you should just talk about, like, what are we talking about when we're talking about AI? Are we talking about the Singularity that's going to take over the world, or are we going to talk about [inaudible]?

Crawford: Well it's funny. I actually think, despite that excellent introduction, that the big concerns that I have about artificial intelligence are really not about the Singularity, which frankly computer scientists say is…if it's possible at all, it's hundreds of years away. I'm actually much more interested in the effects of AI that we are seeing now.

So to be clear, I'm talking about this constellation of technologies, from machine learning, to natural language processing, to image recognition, to deep neural nets. "AI" is a very loose term, but it's really this cluster of technologies that allows us to work with it today. Just know that that's the underlying technological layer that we're interested in here.

And I think what's sort of fascinating about this moment is that people don't realize how much AI is part of everyday life. It's already sort of part of your device. Often it has a personality and a name, like Siri or Cortana or Alexa. But most of the time it doesn't. Most of the time it is a nameless, faceless, backend system that is working at the seams of multiple data sets and making a set of inferences, and often a set of predictions, that will have real outcomes.

And in the US, which is where I've been doing a lot of my research, I'm looking at how these kinds of large AI predictive systems are being deployed in core social institutions. So here I'm talking about things like the health system, education, criminal justice, and policing. And what is, I guess, to me as a researcher so interesting and I think concerning about this moment is that we're deploying these things with no agreed-on method for studying them: what effects they might be having; how there might be disparate impact on people from different communities, from different races, low-income populations. The downsides of these systems seem to be very much clustered around populations who differ from the norm. So that's the project that I think is really open and really needs a lot of us to be thinking about and working on.

Paglen: It seems like the kind of policy framework, and even the kind of intellectual framework, that we have to think about this is very much the words privacy and surveillance. And for a long time I've kind of felt like these are really inadequate concepts to hang our hats on when we're talking about how data is being used in society and how it will be used in society.

Crawford: And I think, you know, it doesn't mean that they're not important. They are absolutely crucially important ideas. I think the limitation that I've found with privacy is that it comes from a very legalistic, individualist perspective. You know, you have individual privacy rights—

Paglen: It's a bourgeois concept in the first place.

Crawford: It is inherently a bourgeois concept that emerges from, you know, basically around the late 1880s. And you see with the emergence of new technologies—at the time it was popular newspapers—that people are like, "Oh, this newspaper is writing a scandalous story about me. There should be some sort of right of privacy." And so we start to see the emergence of a juridical framework of privacy. But it was always designed to protect the elites, to protect individuals.

What I think is going to be needed to address this new set of challenges with machine learning and artificial intelligence is a much more collective-based set of practices, both in terms of how we represent political action together as groups, but also concepts around ethics and power are very important here. Because particularly when we talk about AI, we're really talking about seven companies in the world who are deploying this at scale. Who have the data infrastructure. Who have the capacities to really be doing this. That is extraordinarily concentrated. That is the thing I think we have to really think about, in that they're going to be the companies that decide what education looks like, what health looks like. So that's why I think we need to be thinking about power and ethics, and move on, perhaps, from the individualistic framing of privacy.

Paglen: Yeah. So I think we should kind of go into that. We can ask some questions about what these implications are. You know, I think for me—and we've talked about this a lot—a few months ago, you know, earlier this year, the company DeepMind kinda famously won this Go game, and it was considered this huge advancement, this really spectacular thing. Because nobody thought that you would be able to beat a grandmaster at Go. It's a much, much more complicated game than chess. DeepMind did it, and there was lots of media attention about it.

But then after that they did something that didn't get as much media attention, which was to apply that same AI framework to power consumption at Google data centers. And what they were able to do was reduce the power consumption for cooling by 40%. And that sort of efficiency is really, really remarkable. I mean, you don't see that kind of thing happen every day. And to me that's a kind of microcosm of a phenomenon that might become more widespread. And that kind of optimization is going to have massive effects for things like labor. For things like logistics. For things like healthcare. For insurance, credit. These kinds of things that are very much a part of our everyday lives, right. So maybe we can talk through some of that.

Crawford: Yeah no, it's interesting. I mean, both Trevor and I were sort of particularly fascinated with this story. I mean, what happened with Go was that part of the reason there was such media attention was because it was predicted that we wouldn't be able to defeat a human Go master for another ten years. So there was a sense that we'd taken a leap forward in time.

But what was interesting about the energy consumption project is that they used exactly the same technique. It was a game engine. So they thought about data centers as a game that you play, where you could open windows, increase the temperature or decrease the temperature according to the energy load. And they got very precise about times, and then how you would play with all of these levers. And the result I still think is extraordinary. And if we're going to talk about where I think there's real positive upside to how we can start thinking about using AI, imagine if we could do that across a whole range of different energy consumption technologies. I mean, that's astounding.
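The "game" framing Crawford describes can be sketched in miniature: treat a cooling setpoint as the move and energy use as the score, then pick the move that minimizes it. The energy model below is invented purely for illustration; the actual DeepMind system learned its model from thousands of real sensor readings rather than a hand-written formula.

```python
# Toy sketch of "cooling as a game": choose the temperature setpoint that
# minimizes a hypothetical energy-cost function for a given server heat load.
# Both coefficients below are made up for illustration.

def energy_cost(setpoint_c, load_kw):
    """Pretend energy model: over-cooling and under-cooling both waste power."""
    chiller = max(0.0, 27.0 - setpoint_c) ** 2 * 1.5          # over-cooling penalty
    airflow = max(0.0, setpoint_c - 18.0) * load_kw * 0.02    # fans work harder when warm
    return chiller + airflow

def best_setpoint(load_kw, candidates=range(18, 28)):
    """One 'move' of the game: evaluate every lever position, keep the cheapest."""
    return min(candidates, key=lambda s: energy_cost(s, load_kw))

for load in (100, 400, 800):
    s = best_setpoint(load)
    print(load, s, round(energy_cost(s, load), 1))
```

As the heat load rises, the cheapest setpoint drops: the same exhaustive "try every lever" logic, scaled up with learned models, is what made the real system's 40% saving possible.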

However. If you apply that same logic to, alright, you're a worker in a basic supermarket job. And we've basically got you on the clock, coming in at the peak optimal time, when there are going to be maximum crowds. And you're really only going to have like a two-hour shift, and then we're going to get rid of you. That's great for me as the person who runs the supermarket. I'm maximizing my value for buying your labor. But for you it's terrible, because you're waiting to see if you've been summoned… It's this kind of idea of the sharing economy (terribly misnamed) spread to everything. So that shift—

Paglen: Like flexible labor.

Crawford: Flexible labor, absolutely. Absolutely. Or so-called "flexible," but flexible for whom? Certainly not flexible for the person who's doing the labor.

So I think these are the sorts of shifts that we have to attend to. There has been this way of talking about AI as a singular thing: if applied to everything, everything will be brilliant, everything will be more efficient. But I think we need a much more granular analysis that asks, where are we going to get maximum benefits with minimum human costs? And that is not an easy question to ask right now, because we have so little data.

Paglen: No, absolutely. I mean, we're talking about systems that can radically transform everyday life. And that has political implications, it has cultural implications, it has sociological implications.

But… How do you… There are a couple of questions here in relation to power. And one of the things that you mentioned before was that this is really five companies, right. So we're talking about like, that right there—

Crawford: Five to seven, depending.

Paglen: Five to seven, depending how you slice it. Whether IBM is in there…

But what… First of all, how did that concentration happen? Why can't I just go and create my own AI and figure out how to run my studio more efficiently or something like that? I mean, maybe I can.

Crawford: Well, this is actually something that I think is fascinating. Because compare it to the last sort of extraordinary technological shift that I think most of us in this room witnessed, which was around the Internet and the Web, right. So we had this sense of like, well, you could more or less teach yourself some forms of code. And you could, you know, pretty much create a web site. You could do a whole lot of things with just a bit of self-teaching. It was pretty straightforward.

The difference with AI is that, first of all, the cost of having large-scale training data is huge. Just to get that training data: it's extremely valuable, and companies don't share it because it's proprietary. There are open data equivalents, but then the issue becomes processing. So then you're running big GPUs. It's very expensive. And actually another artist, Darius Kazemi, just did a really interesting short paper looking at, if he were trying to start again now as a kid saying, "I wanna do DIY AI, how would I do it?" And he's like, "I could not afford this."

So I think that's part of the issue. Also, these are all companies who have been collecting data for some time. They have different types of data, so they'll be producing different types of AI interventions. But what is interesting, once they start deploying those models, is that we're starting to see this pattern. Which is that we're really good at machine learning for some things. But keep in mind, machine learning systems are really looking for patterns, and they're very, very bad at unpacking why those patterns are there, or thinking about the context.

So let me give you an example to make this concrete. There was a really interesting study done at the University of Pittsburgh Medical Center, where they were studying pneumonia patients. So they thought, look, let's basically train a deep neural net on this—which we don't really know what it's doing, but we'll see what the outputs are—and also train a more open, rule-based system where we can actually see what patterns it finds.

And what they found with the DNN version, the deep neural net, was that it was extremely good at figuring out who was actually likely to have complications from pneumonia. Except in one case: it wanted to send home all of the people who had chronic asthma. Of course, they're the people who are the most vulnerable and most likely to die. So it was a very bad decision.

But the reason why it came to that conclusion is actually quite logical, which is that the data indicates the doctors had been so efficient— If you came to me and you said you had pneumonia and you had chronic asthma, I'm like, "Straight to intensive care. Off you go." So you are actually now unlikely to get complications, because I've moved you straight into an intensive care system.

But of course if I'm a data model, I just see that, oh, people with chronic asthma don't have complications—send them home. So it's a really interesting study to show the difference between interpretability and data patterns. There's a pattern there, but how are you interpreting it? So I think we have a set of issues there that also relate to how we think about the deployment of AI into social systems.
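The asthma trap can be reproduced with a few lines of counting. The cohort below is invented, but it encodes the same confound Crawford describes: asthmatics were routed straight to intensive care, so their recorded complication rate comes out low, and a model scoring risk purely from observed outcomes would rank them as the safest patients to send home.

```python
# Invented cohort illustrating the pneumonia finding:
# (has_asthma, went_to_icu, had_complications), one tuple per patient.
patients = (
    [(True, True, False)] * 18 +    # asthmatics sent to ICU early: complications prevented
    [(True, True, True)] * 2 +
    [(False, False, True)] * 30 +   # everyone else treated on the normal pathway
    [(False, False, False)] * 50
)

def observed_risk(group):
    """Complication rate as a naive outcome-only model would estimate it."""
    return sum(p[2] for p in group) / len(group)

asthma = [p for p in patients if p[0]]
others = [p for p in patients if not p[0]]

print(observed_risk(asthma))  # 0.1  -> model concludes "low risk, send home"
print(observed_risk(others))  # 0.375
# The low asthma rate is an artifact of aggressive treatment (the ICU column),
# not of asthma being safe: the pattern is real, the interpretation is wrong.
```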

Paglen: So we can think about its deployment in healthcare, labor. What about other… I'm thinking of, like, the classic sectors in the post-Fordist economies, like insurance, real estate, credit, right, which very much affect our everyday lives. And it's almost like credit is a kind of…right, in a way. What I mean by that is that these are de facto things that you can do as a human in the world, right. And if your credit score's modulating, you effectively have different rights than somebody with a different kind of credit score.

And so, one of the things is, when we think through the integration of AI into those sorts of industries, what will be the effects of that, do you think? I was thinking in terms of our everyday privileges, basically.

Crawford: Well, it's going to be really interesting. One of the things I'd suggest is that these systems are going to get really good at hyperpersonalizing to you. To the point where if you're an 18-year-old who's having a few beers at a party, and there are your Facebook photos, an insurance company's like, "Huh, interesting. And you're driving a car. We might be increasing your insurance premiums," on this very granular, like—oh, this week, this month.

But actually, and again I'm going to speak to the context that I know best, which is the US legal system. We do have some protections that we can use around credit. Because let's face it, credit and insurance agencies have been using data to really pinpoint people for some time. So there's some pushback there.

But I'm actually more worried about when this gets deployed into areas like the criminal justice system. So I'm sure some of you read the ProPublica story "Machine Bias". That was based on Julia Angwin's work over fourteen months, with five journalists basically FOIAing the hell out of this company called Northpointe. Northpointe has a software platform used throughout courtrooms in the US. What it does is give a criminal defendant a number between one and ten to indicate the risk of them being a violent offender in the future—so it's basically a recidivism risk score.

And what she found in this big investigation was that black defendants were getting a false positive rate twice that of white defendants. So the race disparity was extraordinary, and the failure rates were really very clear. But what was fascinating about this huge story—it blew up, everyone was concerned: we still don't know why. Northpointe hasn't released the data. They won't reveal how these calculations are being made, because it's "proprietary."
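The metric at the center of that analysis is the false positive rate: among defendants who did not go on to reoffend, the share the tool nonetheless flagged as high risk. With illustrative counts (not the actual ProPublica figures), the two-to-one disparity described here looks like this:

```python
# False positive rate = FP / (FP + TN): among people who did NOT reoffend,
# the share the tool wrongly flagged as high risk. The counts below are
# invented, chosen only to mirror a roughly 2:1 disparity.

def false_positive_rate(flagged_no_reoffend, cleared_no_reoffend):
    fp, tn = flagged_no_reoffend, cleared_no_reoffend
    return fp / (fp + tn)

fpr_black = false_positive_rate(flagged_no_reoffend=450, cleared_no_reoffend=550)
fpr_white = false_positive_rate(flagged_no_reoffend=230, cleared_no_reoffend=770)

print(round(fpr_black, 2), round(fpr_white, 2))  # 0.45 0.23
# Same tool, same threshold: non-reoffending defendants in one group are
# flagged "high risk" at roughly twice the rate of the other.
```

The point of the metric is that a score can look "accurate" overall while its errors fall very unevenly across groups, which is exactly what the investigation surfaced.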

So this system that was being deployed to judges, that they were using to make these really key decisions, is still a complete black box to us. So that's where we're actually really bad at thinking about, you know, what are the due process structures? How do we make these kinds of predictive systems accountable?

Now, that of course is not an AI system. To be super clear, it's a predictive data system. I wouldn't call it autonomous in the way that I would call AI. But I think it's a precursor system.

Paglen: Yeah, I mean there's another company called Vigilant Solutions. And what they do is—

Crawford: Doesn't sound ominous at all. That's clearly great.

Paglen: No, exactly. Not at all. It's a company that mostly caters to law enforcement. And what they do is deploy license plate reader (LPR) cameras. So they have cameras all over cities, and all over their own fleet of cars, that just go and take a picture of everybody's license plate. And they sell this to law enforcement, insurance, collections agencies, and that sort of thing.

They had a program in Texas where they were partnering with local law enforcement agencies. And they were installing ALPR cameras on cop cars, so anywhere the cop car drove it would record the license plates. Vigilant would ingest that data, merge it with their own, and then make it available back to the cops.

The other move Texas made was to give police the ability to swipe people's credit cards as a way to pay fines, traffic tickets, take care of arrest warrants, and that sort of thing. So then what the police had was: here's a record of where everybody is, and here's everybody we have something on. We can drive to their house—take out your credit card. And so this is like Ferguson, the very predatory kind of municipalities, kind of gone—you know, is this a vision of the future?

Crawford: That's extraordinary. I mean, it's interesting—and so that we don't depress you too much and leave a little time for questions—the thing that I'm also really interested in, and we should talk about this too, Trevor, is like, what do we do about it? What are the things that we could do about this? And we've talked a little bit about existing legislative frameworks. I think most of the time, they're not actually up to this challenge. I think we have a lot of work to do to think about where we get accountability and due process in these kinds of quite opaque systems.

The thing that I've been working on recently: we did a White House event on the social and economic implications of AI with Meredith Whittaker, who's here tonight, looking specifically at what we could do. And I know this was something that's interesting to you, Trevor, because one of these questions is how do we give access to people? How do we make sure that people get access to these tools? But then secondly, if you're being judged by a system, how do we start thinking about due process mechanisms?

So I think that's one of the areas where I think we have the most work to do. But I also think that collectively, we could actually really start pressuring for these kinds of issues.

Paglen: So in the due process case, you can't have a black box that's sending people to prison or not. I mean, that's a real simple thing, right?

Crawford: Yeah. And of course predictive policing is another big thing here, too. Are you having much predictive policing in Germany? Is this a thing that's happening here, not that anybody knows about— Yes, a little bit? A little bit? Okay, alright.

Well, I would be keeping a close eye on that. This is tragically one of the areas where the US has really been leading the way. There are predictive policing systems in New York, in Miami, in Chicago, in LA. And there's been a really interesting set of studies looking at how these systems are working. They're often built by Palantir. I'm sure many of you are familiar with Palantir as a company that provides a lot of technologies to various military organizations around the world.

But this interesting thing has just happened. We just got the first study that looked at the effectiveness of predictive policing in Chicago. This was by RAND, so it's not a radical organization. And they found that it was completely ineffective at predicting who would be involved in a crime. But it was effective at one thing, which was increasing police harassment of the people on the list. So you know, if you're on a heat list you're going to get a lot of attention, but it's not necessarily going to help predict who's going to be involved in a violent crime. So what we're already starting to see, just from empirical testing of these systems, is that they're not even meeting the baseline criteria of what they say they're going to do.

So I think this is where we have a lot of potential to move, and potential to work collectively around political issues: to say, "Show us the evidence that this predictive policing system will actually work, and work without producing disparate impact."

Paglen: Yeah, I mean I think there are two layers of concerns here. I tend to take the bigger concer— Like, you know, more of a meta concern, which is that the problem with these AIs being used in policing is not that they're racist. It's that the idea of quantifying human activity in the first place I find very violent, you know. Like for example labor, or something like that. If you're going to have a capitalist society, then—capitalism is all about optimization, right? Creating efficiencies. That's one of the ways in which you make money. So how do we start to reconceive of… I guess my concern is that we actually don't even have a political or economic framework within which to address something like a 40% increase in efficiency across a logistics sector or something like that, you know.

Crawford: No, I think that's right. And I think this is part of the issue around what's happening now. And I want to really avoid the kind of technological inevitability arguments which come up a lot, where people say this is the new thing, so it's going to happen, and it's going to touch every part of life.

Not necessarily. And what's interesting, what I've been doing, is going back to a lot of the early works written about AI in its first decades of development, basically back to the 1970s. There's an extraordinary AI professor called Joseph Weizenbaum, who wrote the program ELIZA—you might have seen this program. It's a natural language processing…very early program designed to simulate conversation. Very basic. But he was amazed by how people were taken in by it. And it was, you know, a very simple kind of Turing test. Like, we have a conversation and oh, it sounds like a real person.

He very quickly started to ask critical social questions about AI. And he had this total conversion moment where he was like, if we start deploying AI into all of our social systems, it will be a slow-acting poison. So it's a pretty harsh critique. But what it did was start to make people think about where this can work, and where it might not work. I don't think we're going to win, Trevor, between you and me, in trying to say, "Well, not all of life should be metricized." I think that's been happening for well over a century.

But I think we have the chance to push back when it comes to this issue of where should this be deployed? Are there areas where we simply don't have sophisticated enough systems to produce fair outcomes?

Paglen: You know, one of the things that I know you've done a huge amount of work on, too, is the ethics of the research that goes into this. You know, like what are the human subjects implications for people in universities doing the kind of groundwork, you know… Doing the kinds of studies, writing the kinds of algorithms, that will eventually become a DeepFace or a DeepMind, or whatever Google has, or what have you. Could you talk about that a little bit? Just, what are the research ethics?

Crawford: This is a really interesting space. And I'm going to basically give away a forthcoming research paper that we have that's about to be published. But basically we've been looking into what I think is a really, really interesting shift. We already had a culture where a lot of scientists and academic researchers—particularly computer scientists—felt as though, "This is data that we've just collected from mobile phones. It's not human subjects data. We can do what we want with it. We don't have to ask about consent. We don't have to think about the lifetime of the data. We don't have to think about risk." Because computer science has never really thought of itself as a human subjects discipline. So it has been outside of all of the human subjects work that happened in the critical social sciences and humanities in the late 20th century.

But here's where it gets really weird. There's this thing that has just started to happen—and by just I mean probably in the last twenty-four months—where we're moving to forms of autonomous experimentation. What that means is that these are systems where there isn't a person "designing" the experiment and looking at the result. This is basically a machine learning algorithm that is looking at what you're doing, poking you to see if you will click on our ads if we show you these images in quick succession. If that gets a good response it will continue to optimize and optimize, and re-experiment and re-experiment.

And this could happen to you thousands of times a day, and you won't be aware of it. There certainly isn't any kind of ethics framework around autonomous experimentation. But there's a new set of platforms, things called multiworld testing, where this is being deployed into everything from basically how you read news—so experimenting and seeing what kinds of news will make you buy more ads—to traffic directions, right. So if you're in an autonomous experiment, someone will be allocated the optimal route so they'll get to work faster. But somebody has to be allocated the suboptimal route, otherwise we'd put everyone on the same road, and that won't work.
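The optimize-and-re-experiment loop described here can be sketched as a simple epsilon-greedy bandit: mostly serve whichever variant has performed best so far, while continually assigning a slice of people to the other conditions in order to keep learning. All variant names and click rates below are hypothetical, and real multiworld-testing platforms evaluate many policies at once, but the mechanics are the same.

```python
import random

random.seed(0)

# Hypothetical click probabilities per page layout -- unknown to the system.
TRUE_CLICK_RATE = {"layout_a": 0.04, "layout_b": 0.11, "layout_c": 0.07}

shown = {v: 0 for v in TRUE_CLICK_RATE}
clicked = {v: 0 for v in TRUE_CLICK_RATE}

def choose_variant(epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-looking arm, sometimes explore."""
    if random.random() < epsilon or not any(shown.values()):
        return random.choice(list(TRUE_CLICK_RATE))  # someone draws the experiment
    return max(shown, key=lambda v: clicked[v] / shown[v] if shown[v] else 0.0)

for _ in range(20000):          # each iteration is one unwitting visitor
    v = choose_variant()
    shown[v] += 1
    if random.random() < TRUE_CLICK_RATE[v]:
        clicked[v] += 1

print(max(shown, key=shown.get), shown)
# The system typically converges on the best-clicking layout, but thousands
# of people were still assigned the worse conditions along the way, with no
# consent step and no way to opt out of the suboptimal arm.
```

The exploration slice is the "somebody has to be allocated the suboptimal route" of the traffic example: it is structurally required for the system to keep learning.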

Now, that might be okay if all it means is that you're going to be five minutes late to work. No big deal. But what if you're rushing to hospital? What if you've got a sick kid? What if you have no way to say, "Do not assign me to the experimental condition which is suboptimal, please." Like, there's no consent mechanism, there's no feedback mechanism. So once you start deploying something at that scale—we're kind of used to the traffic optimization thing, because we can see it. But what happens when that's in a whole range of backend data sets where you're being optimized and experimented on multiple times a day?

So for me, I've been collaborating with people specifically in machine learning and information retrieval. And we've been testing these systems, and looking at them, and going, okay, what are the possible downsides here? How might you create mechanisms of feedback so people would be able to say, "Look, to me it's worth it really not to be experimented on when I'm sick and racing to hospital."

But these are mechanisms that haven't been designed yet. So what I'm most interested in doing right now, and where I think we have a big job to do, is to create a field around the social implications of AI. Get people working on these systems, trying to test them. Sometimes that will mean reverse engineering from afar. There are legal restrictions there, like the CFAA in the US, that really worry me. But I think that process of really trying to hack and test these systems is going to be critical.

Paglen: One of the recommendations that you made in the [2016] AI report that I think is actually quite important is that you're calling for more diversity in the research. We were playing with some AI systems in the studio, and sure, there are autonomous experiments where there's nobody in control. But sometimes you see the very specific subjectivities of the people who are creating these systems.

So for instance, we were running object recognition on a painting, and it says, "Oh, this looks like a burrito." A burrito is only a thing you would think of as a class of things worth identifying if you were, in particular, a young white person living in San Francisco. So there are these moments where you really do see the specificities of the experience of the people developing the software.

And I think that translates into many other kinds of spheres. So for example, if you are a big data corporation and you decide, oh, we're not going to encrypt this data because it doesn't really hurt anybody—I don't really have anything to hide—that is you coming from a class position and a race position where yeah, maybe you don't have anything to hide. You are not being preyed upon by police, by other kinds of agencies. And so to me that was really interesting, one of your recommendations: that you actually need more diverse people working—

Crawford: Yeah…I mean, this is actually where we are not doing very well at all. So, right now if you look at the stats on what the engineering departments are like at the big seven technology companies, the ratio is basically around 80 to 90% men, depending on which company. So just getting women into those rooms has been extremely difficult, for a whole lot of reasons.

And then if you look at people of color, the numbers are even more dismal. And underrepresented minorities. I mean, this is an extraordinarily homogeneous workforce. The people in these rooms designing these systems look like each other, think like each other, and come from, generally speaking, very upwardly-mobile, very wealthy sectors of society.

So they're mapping the world to match their interests and their way of seeing. And that might not sound like a big deal. But it is a huge deal when it comes to the fact that certain ways of life simply don't exist in these systems. I mean, it's interesting that, strangely, race and gender, which have always been an issue in computer science, are actually even more important in AI. Because with AI, it's not just an "economic" argument about getting people jobs and getting people "skills." It's that these are the people mapping the world, and they are only seeing this narrow, dominant slice. If we don't get more diversity in those spaces, or at least different ways of thinking about the world, we are going to create some serious problems.

Paglen: Absolutely. And so for me, one of the takeaway things to think about when we're thinking about AI is that this is not neutral. There are specific kinds of power that these systems are optimizing for. Some of them are maybe unconscious, you know, kind of racial positions or that sort of thing. But some of them are quite conscious. You know, the kinds of systems that are going to become more profitable and reproduce themselves are the ones that are going to make money. Ones that are going to enhance military effectiveness. Ones that law enforcement would want to capitalize on. These are the kinds of vectors of power that are flowing through these systems. And so I think for me it's always important to make that point: this is not happening in a vacuum. It's not a level playing field. And that's probably part of the civic project: to think about what kinds of power we want flowing through these optimizations.

Crawford: And I think showing people how power works is really key here. And this is where I think of your work now on machine vision: you're really showing people, like, these are the different ways that bodies are tracked and understood. It's very different to human seeing. It has a whole range of capacities that people are not used to looking at. And I know you're making a series of works that will really start to show people what this quite alien way of seeing looks like. And I think that is quite a radical and important act right now, simply because a lot of people are not aware of how much these systems are around us all the time.

So part of what I think we can do now, and where I think artists and activists and academics can all really start to work together, is first of all, how do we show people the materiality of these systems? And how do we start to think politically, not just about hiding from them, or assuming that encryption is going to be the answer? Because I fear that we're in an arms race now. It's actually going to take a lot more political pressure. It's going to take a lot more research. And it's going to take a lot more public interest in this question. Because I mean, one of the things that I know we agree on is that this feels like a very big storm cloud on the horizon. Like, a lot of changes are about to happen, and a lot of people are just not aware of it yet. So making this public awareness a bigger issue I think is really important at this point.

Paglen: Absolutely. So I think that with that, maybe we have time for a question or two. I'm not quite sure, but—

Crawford: Are we allowed to do questions? No, we're not. Sorry about that. You can come and talk to us later, or tonight at the party. But thank you so much. Thank you, Trevor.

Paglen: We'll see you at the party. Thank you guys.



