Kevin Bankston: I am not going to introduce these fine folk. They're going to introduce themselves, and what I'm asking them to do is introduce themselves in the usual way while also answering the question of what are the key questions around AI that're coming up in your field as you characterize your field, or put another way, what do you find yourself usually talking about when you are talking about AI. So, let's start with you, Rumman.

Rumman Chowdhury: My name is Dr. Rumman Chowdhury. I lead Responsible AI, which is our ethics in AI arm of Accenture, a massive global consulting firm. So, I actually talk about a lot of things when I talk about artificial intelligence, but specifically how it impacts communities, things like bias, discrimination, fairness. But also how do we get the right kinds of narratives to build our tools and products as companies start to actually implement and enact artificial intelligence. When I say "companies," the interesting thing is I'm not actually talking about the Elon Musks and Jeff Bezoses of the world. I'm talking about, like, Nestlé, and Unilever, and Coca-Cola. So as the companies that are already in our daily lives adopt artificial intelligence, what does that mean and how do we do it responsibly?

Miranda Bogen: My name's Miranda Bogen. I'm a Senior Policy Analyst at Upturn, which is a nonprofit based here in DC that promotes equity and justice in the design, use, and governance of digital technology. And what that means is that we're looking at two main areas: economic opportunity and criminal justice. And so when we think about AI, what we're often thinking about is scoring people; you know, how are people finding jobs, credit, housing; how are people being rated on their risk of going on to commit a crime, things like that. And while it often is talked about in terms of AI, very rarely is what we're actually seeing AI. We're still at very early stages of, like, statistics. But at the same time, using the frame of "AI" has gotten a whole range of new people interested in these sort of legacy issue areas of civil rights who weren't interested before, because there's both…it's kind of a sexy new thing, but also there's a new opportunity to make change, that maybe we can break out of some of the policy patterns that we've done in the past.

Elana Zeide: I'm Elana Zeide. I'm a PULSE Fellow in AI, Law, and Policy at UCLA's School of Law. And there I also study automated decision-making, looking at it mostly from the realm of education technology throughout life-long learning. So, I'm looking at scoring systems that are supposed to be scoring systems, and how they structure human value, human capital, and affect human development. And also in that vein, things like efficiency, productivity, access to opportunity.

Lindsey Sheppard: Hi, and I'm Lindsey Sheppard. I'm an Associate Fellow at the Center for Strategic & International Studies. We're a defense and security think tank here in DC, focused primarily on emerging technology, national security, and defense issues. So, with what I work on, artificial intelligence, primarily we're thinking about how do we dispel the myths? How do we set expectations? What does reasonable use, and what does use, actually look like? And then how do you actually go about the process of bringing these technologies into our defense and intelligence structures? So I would say kind of the big macro question that we focus on is not the algorithms themselves; we look at the underdeveloped ecosystems surrounding the algorithms. So, looking at how do you bring in the right workforce? How do you train your workforce? How do you get the computing infrastructure and networking infrastructure? And how do you have that top-level policy guidance to actually bring this technology in to support US values and interests?

Bankston: Great. So, we're talking a lot about algorithmic decision-making. Or we could also characterize that as narrow, or artificial narrow intelligence, as opposed to say, artificial general intelligence like Skynet or most things you see in science fiction. We're talking about algorithms that're trained on sets of big data—remember when we used to say "big data" all the time? We don't say that anymore, we say AI. But often that data can reflect biases in our real society, or can be biased data sets, which leads to issues of algorithmic fairness. Which is the center of your work, Elana, in many ways, so I was wondering if you could start by talking generally about what are the top issues around algorithmic bias as they apply to human potential generally?

Zeide: Sure. So, there are many ways bias can creep into algorithms. It can come in from the data itself: historical data that reflects patterns of inequity. It can trickle into the models that are then used to judge people. And it can trickle in, in terms of what I talk about on a day-to-day basis, into the technologies that are then used to determine where people should be in life. What level they should be at in school. We're looking increasingly at the idea of completely automated personalized teaching systems, so what you should learn, what level you should be at. And recommendation systems. Where should you go to college? What should your major be? What should your professional development be?

And then it moves into the hiring realm. So, in this way you get…because you're using predictive analytics, you're really replicating existing patterns. And the question is, do we want to do that in human development, and in places where opportunity at least is the rhetoric that we use?

Bankston: Miranda, you often address these issues in the context…well, on a variety of issues, but especially in the context of criminal justice. Can you talk a bit about that?

Bogen: Yeah. I think that's another place where some of the sci-fi tropes honestly have inspired what we're seeing in criminal justice. You know, RoboCop type of things.

Bankston: Minority Report.

Bogen: Yeah, Minority Report certainly. 

Zeide: Yeah, I forgot to do that.

Bankston: We'll talk more about sci-fi. [crosstalk] You don't have to introduce it to every point.

Bogen: Yeah. But just to plant the seeds. But I do think that that has motivated things, because we see body-worn cameras. We see the vendors who are building those cameras thinking about how to incorporate facial recognition, which I'd say is one of the closest things to AI that I actually see on a day-to-day basis. It's enormous amounts of data that, you know, a human mind maybe can't draw connections to, but with enough data you can. Theoretically, if it's accurate.

The other thing we're seeing is in the criminal justice system, deciding who can be released on bail or not. Where police should be deployed. You know, and sometimes this is justified as making the system "more fair," with the idea that if we're relying on data we're kicking out human biases. In other cases it's that there's a limitation in resources, and so by using data we can more efficiently deploy resources. But I think it's the exact same problem as in the opportunity space, which I kind of straddle. All of that data, especially in the criminal justice system and especially in the US, is so tainted with our own history. You know, if you're looking at where police ought to go based on where they've gone in the past, where did they go in the past? Where they thought crime was going to be, which was based on their stereotypes of which neighborhoods were gonna be bad neighborhoods. So, pretending—and I think a lot of the technologists building these tools either pretend or really truly believe that there's a ground truth out there that they can just vacuum up and then turn into a predictive model—if we rely on that data as if it's reality, we're going to again be not only replicating the past but entrenching it. Because if it ends up in these systems, and then the systems get more complicated and we get closer to what we think of when we mean AI, it'll be harder and harder for us to actually change that in the future. And I think that's one of the big risks when we're talking about bias kind of creeping in.

Bankston: Yeah, so like over-policing of communities of color, for example. That data then feeds into these processes, which results in more over-policing of communities of color, [crosstalk] and on and on and on.

Bogen: Exactly. And it just augments, because it says, you know, "Go back to this neighborhood. Oh, there was more police activity in this neighborhood last week; clearly that means there was more crime." It doesn't, but that's what the system can kind of interpret. That's the only data it has, and so then more police will go back the following week. And they never collect data on those neighborhoods where they didn't go, and so there could be crime happening in other neighborhoods. There could be reason to be dealing with the community out there, but if they're relying on data that's steering them in a certain direction, you get into a feedback loop that prevents the system from ever learning that there are other examples.
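To make the feedback loop Bogen describes concrete, here is a small illustrative simulation. It is a toy sketch, not anything presented on the panel; the neighborhoods, rates, and allocation rule are all invented.

```python
# Toy model of a predictive-policing feedback loop: patrols are allocated
# wherever past *recorded* incidents are highest, but incidents can only be
# recorded where patrols actually go. All names and numbers are invented.
import random

random.seed(0)

TRUE_CRIME_RATE = {"Neighborhood A": 0.30, "Neighborhood B": 0.30}  # identical underlying rates
recorded = {"Neighborhood A": 5, "Neighborhood B": 1}               # A starts with more recorded incidents
PATROLS_PER_WEEK = 10

for week in range(1, 11):
    # The "predictive" step: allocate patrols in proportion to recorded incidents so far.
    total = sum(recorded.values())
    plan = {hood: round(PATROLS_PER_WEEK * count / total) for hood, count in recorded.items()}
    # Crime is only observed where patrols are present.
    for hood, patrols in plan.items():
        observed = sum(random.random() < TRUE_CRIME_RATE[hood] for _ in range(patrols))
        recorded[hood] += observed
    print(f"week {week:2d}: patrols {plan}, recorded so far {recorded}")

# Even though the true crime rates are identical, recorded incidents pile up in
# Neighborhood A, so the system keeps sending patrols there and never "learns"
# anything about Neighborhood B.
```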

Bankston: I'm just glad to hear you, because I know you have opinions in this area.

Chowdhury: I have many thoughts. I always have thoughts. So just to frame what my two colleagues just talked about from a bias perspective… As a data scien—so I'm a data scientist by background; I'm also a social scientist by background. So when I give this talk in, let's say, Silicon Valley, I highlight the fact that when we talk about bias there's actually a lost-in-translation moment that happens.

When data scientists talk about bias, we talk about quantifiable bias that is a result of, let's say, incomplete or incorrect data. This could be a measurement bias. This could be maybe a design bias, collection bias—so if you've ever, like, taken a survey: if you ask people whether or not they voted in the last election, there's some incorrectness to it. And data scientists love living in that world—it's very comfortable. Why? Because once it's quantified, if you can point out the error you just fix the error. You put more black faces in your facial recognition technology. What this does not ask is, should you have built the facial recognition technology in the first place?

So when non-data scientists talk about bias, we talk about isms: racism, sexism, etc. So interestingly, we'll have this moment where data scientists will say, "You can't get rid of bias," and what they actually mean is when we build models, it is literally like an airplane model. It is a representation of the real world. It will never be perfec—and actually it should not be perfect. That's what a data scientist means.

What a lay person hears is, "I am not going to bother to get rid of the isms." So that is a conversation that my group tries to bridge. So when we build things like Accenture's fairness tool, etc., to the point of my colleagues, there's a context to it that's absolutely critical and important. And it is bridging that lexicon between what we mean in society and what we mean quantitatively that's absolutely critical.

Bankston: So, y'all have mentioned facial recognition, which is a type of artificial intelligence or applied machine learning-based technology. That has been a very hot policy topic, not only for privacy reasons but because of…ism reasons. Anyone want to talk about what the state of the debate is there, and what people are talking about when they're talking about bias in facial recognition?

Chowdhury: Sure. I mean, I can kick it off, yeah. Well, it's been an evolving narrative. I think the initial narrative was about, well…and this is Joy Buolamwini and Timnit Gebru's work, about there not being enough diversity in these data sets. So what Gender Shades showed was that face recognition is about 98% accurate for white men, and only about 60-something percent accurate for darker-skinned African American women. Clearly showing this gap, which was a function of, like, lack of diversity in the data set.

The narrative now is more about application. And again, creating a more diverse data set so that police can then go harass minority children is not necessarily where the AI ethics space wants to be going. So there are actually a number of bills about banning facial recognition. I think one of the most prominent debates was in the state of Washington. There's actually a bill in Oakland, San Francisco, and other…you know, I'm not going to be able to laundry-list all of them.

So to your point, Kevin, it has been the issue that, from a legislative perspective but also from a human psyche perspective, we've latched onto the most. And I think it's because it's related to these sci-fi narratives that we're so famil—like, we all know the story of Minority Report, so it's much easier, as a person who works in the AI ethics space, to be able to talk that talk. I don't have to explain what facial recognition is. People may not necessarily know how it works, and there's a lot of gaps to fill about how inaccurate it actually is in general. But people will understand the general narrative enough to know where the problems may come from. And this is where, you know, having this commonly watched science fiction lexicon is quite helpful.
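The gap Gender Shades documented is, mechanically, accuracy computed per demographic subgroup rather than in aggregate. A minimal sketch of that kind of audit, with entirely fabricated records, might look like this:

```python
# Minimal sketch of a subgroup accuracy audit in the spirit of Gender Shades:
# report accuracy per demographic group instead of one aggregate number.
# The records below are fabricated; a real audit would use a labeled benchmark
# of faces annotated by gender and skin type.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("lighter-skinned men",  "male",   "male"),
    ("lighter-skinned men",  "male",   "male"),
    ("darker-skinned women", "female", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    total[group] += 1
    correct[group] += (truth == predicted)

for group in total:
    print(f"{group:>20}: {correct[group]}/{total[group]} correct "
          f"({correct[group] / total[group]:.0%})")

# A single aggregate accuracy (3/5 here) hides that one subgroup is served far
# worse than another, which is exactly what per-group reporting surfaces.
```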

Bogen: But I think, you know, facial recognition is not just a problem in the criminal justice context. That's the most frequent one we hear about. But facial recognition and facial analysis are both popping up in so many other contexts. There are tools out there that are being used to help interview people, that are using facial analysis to try to map whether people are qualified for a position. And the people building those tools are doing interesting things to test for fairness…but does that justify the collection of your face to try and map onto this thing that shouldn't necessarily have to do with how your face moves?

Chowdhury: And to your point it's like building on— It's not just faces—it's also the field of affective computing, which essentially puts all of human emotion into about six buckets? So everything about who we are and what we feel falls into, like…six buckets.

Bogen: Which I think the most recent research was showing, that black men are more likely to be scored as angry with a neutral face—

Chowdhury: Right.

Bogen: —than white faces, so. [Bankston sighs loudly] We're really…pretty far behind any really good use at this time.

Chowdhury: But to your point, like, that's being used to make hiring decisions. So while we can latch onto this narrative of, like, we understand the Minority Report thing of catching criminals, oh that might be bad…there are all these ways it's creeping into our daily lives. And the thing is, from a business perspective it's always sold as this efficiency gain. It is a product you sell to help people do their job. And the reason why it often goes under the radar is that it's sold as a tech deployment. So it is not sold in a way that has to go under…like, has to be reviewed by city council. Or, you know, these different groups. If you were to try to sell a team of people to monitor and predict policing, that may actually have to undergo, like, city council review, etc. If I sell you a tech deployment, I am actually under vendor licenses. I may not actually have to go through the same channels, and this is where things are sort of being deployed and we find out later and are like, what the heck, how come nobody knew?

Bankston: So we are entering a phase where we don't have a bunch of crazy robots or megaintelligences wandering around, but we do have this mesh of algorithms in the background of our lives, doing things. Often shaping what we see online, which, Miranda, was the subject of some research you did. Could you briefly talk about that? And then we'll move on to some other issues.

Bogen: Sure. So, a lot of people have potentially heard of the controversy around employers maliciously or in a discriminatory manner targeting ads online for housing, for jobs, for credit, saying "don't show this job or housing to black people." That's a big problem. There've been lots of collaborations, lots of meetings, lots of lawsuits about dealing with that.

What we were looking at was, what's going on in the background? So let's say I was running an ad for a job. And I really wanted to reach everyone. I wanted anyone to have the opportunity to work for my organization. So I post my ad online. On Facebook was where we had tested it. And I said, you know, "send it out."

And what we found was, when we did that, we said anyone in the United States could see this, or anyone in Georgia. North Carolina, I'm sorry. But what we found was that the algorithm that was deciding who sees what ad was making its own determinations about who should see which job, who should see which housing opportunities. I think we found that lumberjack jobs were being shown to 90% white men. Taxi jobs, on the other hand, were being shown to about 70% African American users. And this was without us telling the system who we wanted to see it. We were trying not to discriminate.

But the system was learning from past behavior of users what they were most likely to engage in. What they were most likely to click on. What people like them were most likely to engage in or click on. And it was using that to show those people what it thought they wanted to see, what was going to be most interesting to them, or what they were most likely to click on.

So we were looking at it in terms of ads, in terms of jobs and housing, but you know, this has come up in the past as well with, like, filter bubbles. Are we only seeing news that we want to read because algorithms are deciding that that's what we're most interested in and so we should see more of that? And I think that is similar to facial recognition. When we're talking about "AI," that's a use case where we're talking about hundreds of thousands of pieces of data that are going into deciding what should be shown to you when, on Facebook or on Google.

And that's the closest to AI that I get to, compared to, like, say criminal justice contexts like pre-trial risk assessment or who could be released on bail. When people say "AI in the courtroom is going to decide who's released on bail," often what they're talking about is, like, a numerical model that's scoring people on a scale of one to six. Which is not really super highly complex math. But these other sort of online systems that are learning from people as they interact with information are closer to that. And it's really shaping what opportunities people have access to—exactly what you were talking about.
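The delivery dynamic Bogen describes, where an untargeted ad still ends up skewed because the platform optimizes for predicted engagement learned from past behavior, can be sketched as a toy model. Everything here is hypothetical: the groups, click rates, and allocation rule are stand-ins, not any platform's actual system.

```python
# Toy model of engagement-optimized ad delivery: the advertiser targets
# everyone, but the platform shows each ad to whoever its historical data says
# is most likely to click, so delivery skews anyway. All rates are made up.

# Click-through rates the platform has "learned" per (group, ad type).
learned_ctr = {
    ("group_a", "lumberjack_job"): 0.040,
    ("group_b", "lumberjack_job"): 0.015,
    ("group_a", "taxi_job"):       0.010,
    ("group_b", "taxi_job"):       0.035,
}

def deliver(ad_type, audience, impressions=1000):
    """Allocate impressions in proportion to predicted click probability."""
    scores = {group: learned_ctr[(group, ad_type)] for group in audience}
    total = sum(scores.values())
    return {group: round(impressions * s / total) for group, s in scores.items()}

audience = ["group_a", "group_b"]  # the advertiser excluded no one
print("lumberjack_job:", deliver("lumberjack_job", audience))
print("taxi_job:      ", deliver("taxi_job", audience))
# Identical targeting, very different delivery: each ad's audience mirrors
# whatever historical engagement patterns (and the biases behind them) say
# about "people like you".
```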

Zeide: Yeah. And following that, I often think of my job as scaring people. And then hopefully making them act on the basis of that fear? And what you were saying in terms of these scoring systems, they're in the background. They're not often visible in the way they would be in, like, a criminal justice system, an explicit decision-making mode. And so I often use sci-fi as my references, to sort of help people understand. "Nosedive," from Black Mirror, is the one that seems to chime with people the most. But Minority Report… Gattaca, even, sort of in previews. Brave New World.

I say these things and people grasp the weight of what I'm talking about in a way that is different than if you just seemingly talk about what seems like an administrative tool. And it is often acquired, you know, as an administrative tool.

Bogen: And I think anytime you hear the word "personalized"… This is a personalized job board. It's a personalized news service… What I hear is "stereotype." It doesn't know you, it knows what type of person you look like.

Bankston: In the realm of the content we see, there's also emerging AI that is going to be used to deceive us in a variety of ways. We've now seen deepfakes, which is basically using AI to create a video image of someone saying something they never [said]. There was also this amazing thing, if you didn't see it. OpenAI, which is the AI group that Elon Musk amongst others founded, came up with an algorithm called [GPT-2] that was trained on 40 gigabytes of Internet text to predict the next word if you gave it a word.

And so then they started feeding headlines into this thing to see if it could write a news story. And my favorite one was, they wrote a headline about scientists discovering a tribe of unicorns in the Andes that spoke English. And it wrote…something that read like a human wrote it. And so just imagine armies of these things just spewing out propaganda BS.
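For reference, the behavior Bankston is describing is next-token prediction: given a prompt such as a headline, the model keeps predicting the most plausible next word. A minimal sketch using the publicly released GPT-2 model through the Hugging Face transformers library, assuming that package and the model weights are available; the prompt is a paraphrase, and output quality will vary.

```python
# Sketch of headline-conditioned text generation with the released GPT-2 model
# via the Hugging Face "transformers" library (assumes the package and model
# weights are installed).
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

headline = ("Scientists discover a tribe of unicorns living in a remote "
            "valley in the Andes, and the unicorns speak perfect English.")

result = generator(headline, max_length=120, num_return_sequences=1)
print(result[0]["generated_text"])
# The model only ever predicts a plausible next word, which is why, run at
# scale, the same trick can be pointed at propaganda or spam.
```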

Which gets us closer to the realm of geopolitical conflict, which is Lindsey's bag. And so I'm wondering if you could talk a bit about the role that AI is starting to play in the realm of international conflict and international sort of geopolitics.

Sheppard: Absolutely. So, this is a great example that illustrates kind of the broader trend that artificial intelligence is living in. So we are at a time where we have the democratization of software, and the commoditization of key priority technologies. So this means that more people, more countries, more non-state actors now have access to highly capable, diverse, robust portfolios than they ever did before. And we, the US, are quite used to kind of being that capability provider, and increasingly other countries, other actors, don't have to work with us because of this kind of global trend of easy access to highly capable, low-cost capability.

And so that really brings us back to this question of, is there an AI arms race? And it's often framed in the context of, are we winning versus China? How are we doing? Are we falling behind? What is going on? And you have to kind of understand the way in which entities apply artificial intelligence or data analytics. You apply them to achieve your goals and accomplish your needs and support your values. So the way in which, for example, China applies AI and facial recognition, and the abhorrent human rights abuses, should not and will not look like the way that the US applies AI. Because those fundamental value structures are different.

So when we think about—

Bankston: Knock on wood.

Sheppard: Yeah. Well I mean it—it has been a little depressing, but I tell myself those fundamental value structures are different.

So when we think about who is going to win the race, the race is going to be won by the countries that figure out, how do we make AI work for us? How do we make AI and data-driven techniques, and this new portfolio of highly capable, easily accessed technology, work for us? And that's going to be the country or those entities that win the race.

If we want to really pick apart how we're doing versus China: we're still leading the way in research and development and innovation within the United States. And I think there is a certain emulation of our model that permeates across the globe. But we're really falling behind on the deployment. And that's where a lot of the narrative comes from of we're falling behind China, we're falling behind these authoritarian regimes that're figuring out how to make AI work for them…we're not thinking well about how do we actually take the technology, lead in research, development, and innovation, and how do we deploy it in ways that support our ethical and normative values. And so I think conversations like this are about thinking of this as a highly capable system: how do we make it work for us?

Bankston: So I'm glad you brought up ethics. We're gonna spend the next few minutes talking about, now that we've set out some of the issues, what're the sort of policy interventions we're seeing? And I'd say we're seeing sort of self-regulation to some extent, usually under the frame of AI ethics or AI fairness, and then some interesting legislative and regulatory moves.

But Rumman, you do AI ethics… What the hell are we talking about, when we're talking about AI ethics?

Chowdhury: Yeah, so I actually have a lot of thoughts on the statement you made. So first, I actually have serious problems with framing it as the AI arms race. Number one, if we're going to talk about the inclusion of diverse narratives, framing everything in terms of a war-like patriarchal structure of a zero-sum game is literally the worst way, and the least inclusive way, to talk about the use of a technology. So by naming it that way, we're setting it up to be, A, combative; and B, about some "winner" and "leader," which sets up the hero narrative that we were just talking about as problematic. So even in that name, we have set this up to be patriarchal and war-like. So I actually don't like to refer to it as an arms race, and actually, interestingly, I have been talking to some folks who want to frame the discussion more as, like, the Space Race, about creating, like, the International Space Station, etc. Something more collaborative. Because it's not as if we're all just gonna be fighting each other over values. That is a framed narrative.

The other thing I may actually take issue with you on is, you know, to the average citizen in China the use of artificial intelligence deployment has been fabulous. We like to harp on their treatment of the Uighurs. That is a small minority group.

Now, if we were to take that same narrative and flip it on the US, some of our deployments have been no different. We should point the finger at ourselves, at other counterparts. If you want to look at India's Aadhaar system and the exclusion of lower-caste groups… And it's by design. It is to fulfill an internal political design, right.

So I don't think we should sit on a high horse and act as if our values are better, or that we're going to do it better. Because when we take the AI arms race narrative and we talk about it in Silicon Valley? The concern is not so much oh, how do we do it in a way that's better or more ethical. It's actually, "Shit, China's beating us, how do we get there faster?" So no one's even thinking about— Because we have— The arms race narrative pushes this imperative of running faster? We—much like we did with the nuclear arms race—don't actually bother to stop and think what we should be doing, because we're so busy looking at the other guy "beating us." And the problem with the "beating us" part is the other…the opponent…(our imaginary opponent)…has shaped the narrative and the metrics for us. So it's harder to—actually, if we are going to have a values-aligned system, it's harder for us to adhere to our values if someone else is defining what the race is all about, right. Because we're gonna have to adhere to their metrics to get there. So that was my spiel.

But when we talk about ethics, when we talk about—

Sheppard: Just to say, I agree with you more [crosstalk] than you may think I do.

Chowdhury: Yeah yeah yeah. Okay. Good. I’m glad.

So, to talk about— And this…it's such a complex issue. Because this is actually, you know, a global issue. I mean, really just reminding us that borders and states and boundaries are artificial constructs of politics, right. Like, that is the number one thing working in AI reminds you of. So if you think about a law like GDPR, the General Data Protection Regulation, it transcends borders and boundaries, and that's why it's actually impactful. If it were just focused on the EU, it would not actually have the level of impact that it does on tech companies.

So when we think about fairness, ethics, etc., it needs to actually transcend borders and think more about communities, and groups, and narratives that can filter upwards. And the difficulty has been—and this is sort of why I take issue with this top-down framing. So much of what we talk about is about governance. Governing, whether it's systems, or how do we create sets of values. And that needs to, by design, be inclusive, and what we have not actually figured out is, how do we understand what ethics means to all the different impacted groups? Because, you know, who does Gmail impact? I don't—like, everyone? Great. Now let's get the "diverse perspectives" to figure out what the ethical framework is for that… Well, good luck. So it's a tough nut to crack, but it has to do with the fact that all these technologies and these companies transcend borders and boundaries, and they impact literally every community out there.

Bankston: So, that all sounds very straightforward, and it'll get solved by ethics boards, right?

Chowdhury: Super easy. Yeah yeah, no absolutely.

Bankston: Miranda. Ethics boards. [panelists all laugh]

Bogen: Oh boy. Well, so I think the problem with the framing of ethics, and how I hear it around ethics boards but also just in general about "we need to make our AI ethical and how are we going to do that?" All of that presumes that at some point we're going to come to an agreement or consensus on what ethics are… And have we ever done that in our society? No. We've been struggling over that for the history of not only our country but the entire world and, you know, the history of humanity. All of humanity. And that's what societies have been structured around: struggling over those values and structures of governance and ethics.

And so I think what's really important here is to set up structures such that whatever we build in today is malleable, so that if our values change in society, we can ensure that the tools that we've implemented to fulfill those values are also changed. Like, if we had had the technical capability to build AI systems 100 years ago, what would our society look like today? It's super frightening. And so I think boards and things like that, they're not so useful in the sense that they're gonna come up with a solution, but we do need to come up with mechanisms so that people are thinking about these systems in an ongoing way over time. But not only, you know, the privileged sort of high-level people who are in those boardrooms. How are they talking to the people who are not only using the technology but affected by it, as Rumman said?

Chowdhury: So, what we're sort of all laughing about and referring to, if you're not familiar, is the Google ATEAC board issue that happened in April. So what had happened is there was a lot of pushback from the academic and activist community that led to the board being disbanded.

Interestingly, in the AI ethics space, we have these unique roles of industry ethicists. People like myself and my counterparts in these other companies. That's kind of a…a new thing. And for those of us in these jobs, what I pulled together was a Medium article where we talked about how essentially what Silicon Valley is now "disrupting" is democracy. That's actually what they're trying to do. They're trying to create these democratic systems, but they're doing it in the way only Silicon Valley knows how. Which is very problematic. So what that Medium article was about was actually fielding the industry ethicists who were able to contribute, and some thoughts on how we believe we can govern the use of these AI systems in an ethical way.

Bankston: Some have suggested that these various boards are attempts at ethics-washing, sort of giving the appearance of some sort of self-regulation but really as a way of forestalling actual regulation. That said, there are some ideas around actual legislation on the table, particularly coming up in the context of the debate over new privacy legislation? I was wondering if anyone could or would speak to…[crosstalk]

Zeide: Yeah. So I’ve been in—

Bankston: …how that is shaping up.

Zeide: I've been in the privacy space, which is how I got into the data space, which is how I got into the AI space, for a little while now. And I'm amazed at the legislation we're seeing, and the conversation around it. Last week there was a horde of privacy professionals in town. And for the first time I heard people talking realistically about the idea of legislation that would take into account intangible privacy harm. So not just an economic harm, which is what you usually need for a law like that to work. And talking about it as imminent, in some way, shape, or form. I think that's remarkable. And it shows we've come a long way. And that there seems to be an agreement that privacy is no longer the really classical idea of notice and consent. That people do not read terms of service. And I think increasingly, which is something I've argued, that they don't have a lot of choice or alternatives in terms of many mainstream tools, so expecting people to opt out of those is a poor way to ensure privacy.

Bogen: I mean, I think the reason people are paying more attention to privacy now is we're realizing what can be done with our data. It's not just a theoretical "your data's being collected and maybe it will leak, and someone will steal it, and then they'll steal your credit card." It's being used to make decisions. It's being used to shape your information environment. And I think that's what's instigating a lot more attention from the Hill at the moment, and why people are focusing on privacy as the remedy.

There's also another intervention that was introduced recently called the Algorithmic Accountability Act, which is intended to compel companies or entities that are building predictive systems to check those systems beforehand for their impact. To check them for bias, or discrimination, or other types of harms. And I think that's interesting because what it's trying to do is get people to slow down. You know, don't go full speed ahead; try and think before you act. There are still a lot of questions in that proposal, like who gets to—you know, I think they envision the Federal Trade Commission enforcing that and creating rules around it. But who gets to see those impact assessments? Do companies really have to do anything if they find some kind of harm? Who's defining, like, how much harm would make them have to change their model?

But what I think is interesting there is, again, the incentive to move to artificial intelligence or machine learning is often "remove the friction," you know. Make everything more efficient and easy. And I think the reason we have laws, and especially the reason we have civil rights laws, which is what I mostly focus on, is because pure efficiency led to an awful lot of bad outcomes. And so there's a reason to slow down. There's a reason to not be efficient. There's a reason to not be hyper-personalized. Because if we do that, we're catering to only a certain part of society that can take advantage of that ease, whereas other people can't. And so I think those types of proposals, of forcing us to not be as efficient…well, I think businesses don't like them and we still don't know what they'll look like. But there's a purpose for that type of intervention.
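The core mechanism of the Algorithmic Accountability Act that Bogen describes, assessing a system's impact before deployment, can at its simplest look like comparing a model's outcomes across groups before shipping it. A minimal sketch of one such check follows; the four-fifths-style threshold and the data are illustrative assumptions, not anything the bill itself prescribes.

```python
# Minimal sketch of a pre-deployment impact check: compare a model's selection
# rates across groups. The threshold and data are illustrative only.

def selection_rates(decisions):
    """decisions: list of (group, selected: bool) pairs."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def impact_check(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the
    highest group's rate (a four-fifths-style rule of thumb)."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * highest}
    return rates, flagged

decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60 +
             [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates, flagged = impact_check(decisions)
print("selection rates:", rates)
print("flagged groups: ", flagged or "none")
# A real assessment would also have to settle who sees this report and what a
# company must do when a group is flagged, which is exactly the open question
# raised above.
```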

Bankston: Moving on to the question of this event: what can sci-fi teach us, or not, about AI policy? I'm curious for y'all's takes on how AI in sci-fi has been helpful or hurtful to the discourse around AI in policy, or helpful or hurtful to your attempts to engage in that discourse personally. You know, I already flagged what my pet peeve is, which is that sci-fi has conditioned us to worry more about Skynet and less about housing discrimination. And I often think that Kafka is actually our best representation of AI, in the sense that his books are all about baseless bureaucratic systems that don't make sense and control your life. But I'm curious what y'all think.

Sheppard: So I think, in engaging with policymakers, particularly in the national security space, the equation of consciousness or sentience with intelligence, or with replicating intelligent function, prevents us from having an honest conversation about when and where and how do you best use these systems. To think about it as a conscious being, versus an algorithm and data and all of the problems that we're talking about, really masks the ability to come in to your problem area and to have an honest conversation about what are the true pitfalls, what're the true benefits, and how do we actually bring artificial intelligence or machine learning or computer vision into a workflow.

Zeide: So for me…I gave you some of my touchpoints a little earlier. But for me the anthropomorphizing of technology is a real issue. So when I talk about education technology, people often think about replacing teachers and the idea of robot teachers. And they picture the Jetsons, for those of you who may be old enough to know that. You know, a robot at the front of the classroom talking. And there are things that can automate instruction right now that don't look like that, that are simply a platform. And yet they have the same sort of impact that putting a teacher at the front of the room would have in terms of what students learn and how they advance.

I also think that the all-or-nothing aspect of a lot of science fiction is…it impedes some conversation. So, for reasons that make sense, most science fiction starts once this technology has been developed and deployed. They don't see it developing, they don't see it being adopted ad hoc, they don't see it messing up. And every single technology that I have ever used has messed up at some point. And I don't think that our narratives account for that, in the way that even accountability is… You know, forget something as sophisticated as bias. Like, what about typos?

Bogen: For me it's two sides of a coin. One is that I think sci-fi has helped journalists frame old questions in new ways. Like, back to the criminal justice context, if we're talking about robots in the courtroom or Minority Report, that gives people an immediate frame of reference that something they thought they knew was happening is changing, and it's changing because of technology, and it's worth paying attention to. So I think that has, as I mentioned earlier, kind of broadened the community of people that are interested in these issues.

You know, just last week or two weeks ago, the Partnership on AI, which is one of the self-governing entities that's been created in recent years to try and think about some of these issues, released a report about pre-trial risk assessment, about using AI in the courtroom, coming out and saying this technology is not ready yet; we should consider whether…I believe they said whether it ever ought to be. But that there are many open questions and some severe limitations to using this technology. That's a totally different stakeholder group than has been involved in the criminal justice context for quite some time, and it lends some credibility to have the technologists saying, you know, "We know what's going on here, and we can't build this yet, and you don't want us to build this." So that's interesting.

On the other hand, when the media frames some of these kinds of news stories using a sci-fi trope, people can presume that they understand what's happening when in fact it's a completely overblown perspective of what's happening. So for instance, if we're talking about social credit scoring in China, I think of the "Nosedive" episode of Black Mirror, the episode where everyone is scoring every interaction that they have, and you have, like, a score that you walk around with and that determines what you have access to. People have that vision when they think of what China's doing, and that's just not the case. They're much more rudimentary. Still working on sort of patchworks of blacklists that are based in their value system, and so it's not as jarring to mainstream Chinese society as I think we imagine it would be, because a lot of us have this frame of a pop culture example of what a social credit scoring system looks like. And so it kind of redirects energy, where maybe that energy could be used in coming up with different solutions or thinking about how to prevent what's actually going on here in this country that we ought to care about, because we're distracted by this frame that we think we're familiar with.

Bankston: And Rumman, and then we'll get a Q&A.

Chowdhury: Sure. I love the points that everyone's made. I wholeheartedly agree, especially with the anthropomorphizing one. It's extremely problematic.

I guess the one that I would raise is a problem I see in Silicon Valley a lot. A fundamental belief, in maybe the tech industry as a whole but definitely in Silicon Valley, which is driven by some of this literature, is that the human condition is flawed, and that technology will save us. And this is the obsession behind having microchips in our brains so that we have perfect memories. Guess what, we don't want perfect memories. Because there are people who—

Sheppard: There was a Black Mirror episode [inaudible]

Chowdhury: Yeah. But there are people who actually are alive who have a condition where they vividly remember everything that ever happened, and they live in constant trauma. Imagine being able to relive your parent dying with the same level of intensity you did when they actually died. We are meant to forget things. So I think there is not— This notion that technology will perfect us or fix us, in a way in which, you know, humanity is weak and flawed, is problematic, because when we try to create artificial intelligence we don't create it around human beings; we retrofit human beings to the technology. And especially living in a world of limited technology, technology that is not quite where it should be in the stories but, as Elana very accurately said, is maybe 30% of the way there, we actually try to force ourselves to fit the limitations of the technology rather than appreciating that maybe we are the paradigm to which technology should fit.

Bogen: I had one more thing, Kevin. I think the other thing is, even when we're reading sci-fi that's intended to be dystopian, and we're intended to read or watch it as being dystopian, it's acculturating us to the idea, often, of this constant surveillance. That in order for the technology to work that's in that story, the data's needed, and that that's just inevitable. And so even when we see it going wrong, I think we're getting used to that idea, and that's what we're seeing today in the pushback to facial recognition. There are just not enough people pushing back on facial recognition, because we recognize that it's something that's inevitably coming. Maybe it would be bad, but it's going to come. And I think that's something to think about as well, even if it's clear the story is going south because of that surveillance.


Bankston: So we don't have a whole lot of time for Q&A 'cause we're jamming a lot of content in today. But we do have time for a few. Ground rules: questions in the form of a question. Keep them brief. Answers responsive to the question; keep them brief. Hands raised. Yes ma'am, please wait for the mike to come to you.

Audience 1: You know, the conversation has two sides, in a way. One is government kind of issues, and we have, you know, civil rights kinds of protections against that. The other is private sector kind of intrusions against privacy and surveillance and things of that nature. Putting aside the government side for the moment, where on the commercial side do we have options to push back from a legal action perspective? Are there causes of action? And you know, just a final footnote: and yet we all go out and buy Alexa and install it all over our house and leave it on, voluntarily. But yeah, I'm interested in the private against…as you know, the new phrase "surveillance capitalism."

Chowdhury: I can maybe start that one. So what we're seeing, and actually Miranda mentioned this, the HUD… So what government is trying to do now is, how can we take existing law and existing protections and apply them in these new settings? It's a bit of an uncharted space, because as Miranda said, you can put an ad out in good faith and then the algorithm is making decisions based on how it was trained. You may not even realize that it's been deployed in a biased manner. So we had to come to the realization that that happened, and then be able to figure out what the "angle" is. And what we're seeing in a lot of the— And I know you want to sort of separate government, but you kind of can't in this. With the UK group the ICO (Information Commissioner's Office) and the FCC and some of the language of the bills, we see, like, latching on to the notion of protected classes. So what are the groups that are already protected, and then how can existing law sort of be leveraged to further that, and that's a starting point for then starting to build further protections.

And to your point about Alexa, you raise an issue in the AI ethics space, which is, what do we have to offer? Technology companies have nice shiny gadgets, the ability to look cooler than the Joneses or whatever. They offer you incremental ease. What do we offer? We offer scare narratives. We offer… So in our space, we actually have to figure out— And yes, the notion of liberties, freedoms, and protection is less tangible than a shiny new watch. Unfortunately. So what can we as the AI ethics space offer people that can combat this narrative that tech companies have honed so well?

[another panelist begins speaking; indistinct]

Bankston: I'd like to fit it in; let's keep moving. Questions. That gentleman near the back.

Damien Williams: Hi. Thank you all very, very much for your conversation today, it was really great. My question is for Rumman. You mentioned the translation problem between different communities about bias. But I wanted to kind of dig down a little bit on that and maybe challenge it a bit, and ask you: is there not a space in which some of us might mean we can't remove bias because we're not talking about isms but we're talking about the foundations of isms?

Chowdhury: Oh, I like that. 

Williams: Perspectives.

Chowdhury: Yes, absolutely. Well, and actually there's an entire narrative now that we really shouldn't be thinking about fairness, we should be thinking about justice. We shouldn't talk about bias, we should talk about pain and harm. So absolutely. And I think Miranda raised this really well, that this is not going to be a solved space. And I think we just all have to get comfortable with that. And it's funny, because in industry we've been saying, like, change is the new norm, and everything's going to be— It's just been boilerplate narrative for years when talking about technology. And I think we will actually have to grapple with the fact that we will just be living in a space of constant change, growth, and evolution. So, absolutely, you're totally right.

Bankston: One more quick question—

Chowdhury: Which I will not answer. 

Bankston: Hands? Hands. This gentleman right there.

Audience 3: To what extent is what's imaginable in AI ethics a function of the imperative for scalability that venture capital funding of AI development demands? I'm thinking of the scalability of returns on investment. Scale-free versus concentration of capital.

Zeide: So, I've thought a lot about that in terms of the practicality of being able to implement accountability, explainability, transparency, ethical models…algorithmic impact models. When you have a profit impera— And people inside these companies, some of whom are not…evil, actually. [Chowdhury laughs] Anyway. But there's an imperative. There's a commercial imperative, especially for the publicly-owned companies: they need to produce profit for their shareholders. And when that is the ultimate bar, and when those results are scrutinized incredibly carefully, I think it leaves companies in a very difficult position to be able to slow down, and increase friction, and be thoughtful about implementation. Because they all seem to be racing against each other.

Bogen: But I think there're some really interesting examples, and to your question earlier of how do we push back against corporations that are doing this: we're doing that. The advocacy community is learning how to advocate to the tech companies using shareholder action, using sort of public campaigns, using directed research to say "here's your problem and here's how you can fix it." And I think, especially as the companies…the people building this technology are creating these systems that are making really important decisions in people's lives, that may fit with the law or may not, we can, I think, come to expect those actors to also be playing a role of governance, which we have the responsibility to pay attention to in that way, and to tell them what we expect as the public. What we expect them to do, what we expect them not to do. And to get other people to appreciate that fact as well.

Bankston: Well, that's a nice closing note of agency and hope. So please thank the panelists. Thank you.
