This discussion follows from Lucas Introna’s presentation of his draft paper “Algorithms, Performativity and Governability,” and responses from Matthew Jones and Lisa Gitelman. A recording of Gitelman’s response is unavailable, but her written comments are available on the Governing Algorithms web site.

Solon Barocas: Thanks so much. I will offer Lucas the opportunity to respond, if he cares to?

Lucas Introna: Yeah, I just want to be clear that I’m not saying that the details of the algorithms are irrelevant. In a way they can matter very much, and you know, in a certain circumstance, in a certain situated use, it might matter significantly what the algorithm does, but we can’t say that a priori. So we need to both open up the algorithms, we need to understand them as much as possible, but we must not be seduced into believing that if we understand them we therefore know what they do. That’s the shift, that’s the dangerous shift.

So for example I think it’s really relevant that I know that the Turnitin detection system essentially uses a certain technique for identifying the sequence of character strings. And because I know that, I can understand how certain editing procedures by students, when they write over their text, might make them either detectable or not detectable. And that helps me to understand the sort of performativity that might flow from the actual use of that. So I do think knowing the algorithm is necessary, but it’s irreducible, of course.
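
As a rough illustration of the kind of character-string matching being described here, the sketch below shows a toy n-gram fingerprinting check in Python. This is an assumption made purely for illustration; Turnitin’s actual technique is proprietary and not specified in this discussion. Long runs of copied characters give a high overlap score, while writing over the text breaks those runs up and lowers it, which is exactly the detectability question Introna raises.

# Toy n-gram overlap check; a hypothetical stand-in, not Turnitin's system.
def ngrams(text, n=8):
    """Return the set of all character n-grams in a lightly normalized text."""
    normalized = " ".join(text.lower().split())
    return {normalized[i:i + n] for i in range(len(normalized) - n + 1)}

def overlap_score(submission, source, n=8):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / len(sub) if sub else 0.0

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over the lazy dog"
rewritten = "a fast brown fox leaps over a sleepy dog"

print(overlap_score(copied, source))     # high: long shared character strings
print(overlap_score(rewritten, source))  # low: rewording breaks the strings up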

I think the point about the proxies is really valid. And I didn’t sort of make the point at the end, but I think there is a real issue with the fact that we will only know… Yeah. We’re all in the same space of ignorance, as it were. And we will only know what we govern when we engage with it. And when we engage with it, we will of course also be enacting changes and there would be a response to that, etc. So in a sense governance is experimentation in a certain way. So there’s a certain experimentation that is implied in governing, which I think was a good point you [Matthew Jones] made.

Barocas: Great. So I should have mentioned that people can line up. I’ll take questions from the floor. And just in the interest of time I’m actually going to take two at a time, and we’ll let the respondents handle them in one go. So, please.

Audience 1: Okay, two at a time. This is related to knowing the algorithm and being governed by algorithms. I just wanted to point out sort of an analogue. Right now it’s potentially possible that we can catch every single time you speed going down the highway. Every single time you go over 65, I can put a black box recorder and ticket you every single time. The second your parking meter goes off, I can ticket you. And that’s going to be very very very possible when cameras are three dollars and you have these progressive things.

Here’s the question. I was talking to you, Solon, about this. What is our desired sphere of obscurity? Or non-scrutinizability? Do we want to guarantee ourselves a certain amount of lawbreaking? One of the things the algorithms— More data is available about us. You say you can read into a person’s personal preferences because it’s now exposed. Before, we had obscurity because we just didn’t know. Now, do we want to try and guarantee some level of obscurity for people, some freedom like that? It’s not necessarily the algorithm’s fault, it’s because the data is now available for the algorithms to use. This can be used in many spheres, not just search. We can now put up cameras and watch everybody working, all your work email is monitored, we can do app monitoring. We do— But companies choose not to look at it because they don’t want to know. And that’s what kind of creates this sphere now. But I don’t know how we would address that. How do we create, you know, a sphere of obscurity?

Lev Manovich: Lev Manovich, professor of computer science, CUNY Graduate Center. So, my question is about what I see as maybe the key kind of dimension of this day so far, with transparency versus opacity, and I think your notion of flux connects to that. So as it was already pointed out, right, no real software system involves a single algorithm, right. There are hundreds of algorithms. Plus servers, plus databases. So that’s one challenge. The second challenge, the systems are very complex, right. So Gmail’s about 50 million lines of code. You know, Windows, a hundred million lines of code. So, no single programmer can actually examine it.

But I want to point out a third challenge, which I think is kinda the elephant in the room, because I haven’t heard anybody address it so far. Most…a very large proportion of contemporary software systems—search engines, recommendation systems, reservation systems, pricing systems…they’re not algorithms in a conventional sense where there’s a set of instructions you can understand. So, even if you publish those algorithms…it doesn’t do you any good, because we use what in computer science is called supervised machine learning. Meaning that there is a set of inputs, it goes into a black box, and the black box produces output, and in some cases there’s a formal model. In most cases, because those black boxes turn out to be more efficient when we don’t produce a formal model, right, you don’t know how a decision has been made. [indistinct] with neural networks [indistinct] it becomes much worse.

So basically, millions of software systems in our society are these black boxes where there is nothing to see, right. Even if you tried to make them transparent. And that I think was kind of the elephant in the room which I hope you can address. So things are much more serious and dark than we imagined.
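
To make that point concrete, here is a minimal sketch assuming scikit-learn, chosen purely for illustration; no particular system or library is named in the discussion. Every learned parameter of the supervised model can be printed out, yet the decision for any single input is an aggregate of hundreds of small, data-driven adjustments rather than a rule anyone wrote down, so publishing the model is not the same as understanding it.

# Toy illustration, not any production system: a supervised model whose
# parameters are fully "published" yet don't read as an explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = GradientBoostingClassifier(n_estimators=100).fit(X, y)

# Full "transparency": every tree, split threshold, and leaf value is here...
print(len(model.estimators_), "boosted trees, each with its own learned thresholds")

# ...but the decision for any one input is the sum of hundreds of tiny,
# data-driven adjustments, not a rule anyone wrote or can simply read off.
print(model.predict(X[:1]), model.predict_proba(X[:1]))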

Introna: Yeah. So yeah, the issue of the sphere of obscurity I think is a really important one. Because one of the areas of research I’m interested in is privacy. And one of the classical arguments for privacy is that we need privacy for autonomy, because in a sense if we have obscurity, if we have spaces where we’re not observed, we feel free to act in the ways in which we would want to act. But if we are aware— I mean, this is the Foucauldian point. The point of the panopticon is that if we’re always observed, then we internalize that observation to the point that we observe ourselves on behalf of the others. And normalize ourselves. And so in a sense, as we become tracked, profiled—

I mean, I felt— You know, the point about the Amazon…if I go to Amazon and I want to buy a book, and I go to the bottom and I look at “other people who bought this book also looked at these titles,” I look at those titles and I think hmm, maybe I should be reading those things. Maybe there’s something in them that I’m missing. And I’m starting to conform, to become, the person in that strange category of people who read, you know, X and Y, etc. So, there’s a certain sense in which I become normalized through these systems, and I do think there is a point that we need a zone of obscurity. In the new EU regulations there’s a whole data protection regulation. There’s a whole issue of what they call the “right to be forgotten,” yeah? And I think this tries to speak to that, but that’s deeply problematic.

I think the issue of machine learning is obviously absolutely correct. I mean, one of the areas of research that Helen and I have done is looking at facial recognition systems. And one of the things that the research has shown is that facial recognition systems are better at identifying dark-skinned people than white-skinned people. And you know, you can imagine how that might play out in terms of race and so forth. And so we asked the programmers, the people, why. And they said, “Well, we don’t know,” right. So we have these sets, and these algorithms learn through being exposed to these sets. You know, we can open the box, but there are just these layers of variables, and you know, we don’t know why, but for some reason or other they are better at identifying dark-skinned people than— We have a hypothesis, we have some suggestions why that might be the case, but we just don’t know. Yeah, so I do agree that that’s a really serious issue.

Jones: I’ll just say, one thing I think that’s challenging in thinking through the issue of obscurity is that…many people who have strong intuitions about personal privacy outside the realm of thinking about these algorithms don’t really have very good intuitions about how easily that obscurity can be destroyed by [traces?]. And I think it means that in thinking about obscurity and privacy we also need to think about what consent is when people don’t have an imagination of what is possible because of extremely powerful algorithms. And I think part of a discussion, and indeed part of an informational role that people like the group here can have, is to begin to understand that there’s something very different about consent, in all sorts of ways. That we might all agree on the sort of privacy, but that’s easily violated through us consenting to things that seem to us perfectly trivial but have turned out not to be. That were trivial fifteen years ago but aren’t today.

Gitelman: Yeah, maybe I’ll riff on that for one second, on the question of the sphere of obscurity. Because I’ve always wondered about the speeding question. You know, because we’ve had turnpikes for a long time, and E-ZPass for a little while. And I think, you know, it actually would not take an algorithm to catch us all speeding, it’d just take a subtraction problem. You know, because we go a certain distance and we get through with our ticket on the turnpike in too short an amount of time. So I guess what I’m saying is that as much as I’m, you know, embracing all the intricacies of this conversation about algorithms from its very multidisciplinary perspectives, we also can’t let the problem of algorithms get mystified to the extent that it keeps us from seeing things we can see without the question of algorithms, too: just a little subtraction problem.

Jones: Yeah, my first calculus textbook in fact had the use of the mean value theorem to catch speeders, I remember.
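
For readers keeping score, the subtraction problem is just average speed, and the mean value theorem supplies the guarantee: if your average over the stretch exceeds the limit, your instantaneous speed must have exceeded it at some point. A tiny worked example, with invented numbers:

# Toy turnpike check with made-up numbers; any real tolling system differs.
distance_miles = 140.0   # between the entry and exit toll plazas
elapsed_hours = 2.0      # stamped exit time minus stamped entry time
speed_limit_mph = 65.0

average_speed = distance_miles / elapsed_hours   # 70.0 mph
# By the mean value theorem, instantaneous speed equaled the average at some
# moment of the trip, so an average above the limit proves speeding occurred.
if average_speed > speed_limit_mph:
    print(f"average {average_speed:.0f} mph: must have exceeded the limit somewhere")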

Barocas: Right, the next two.

Daniel McLachlan: Hi, I am Daniel McLachlan. I’m a technologist at The Boston Globe. It seems like in a lot of these discussions of algorithms and governance, a lot of the concerns that come up are concerns that exist about large organizations and large bureaucracies even without, or sort of before, algorithms enter into the discussion. And the increase in the usage and the power of algorithms seems to have two main effects. I mean, the first is obviously that it allows us, theoretically, to catch every speeder. It sort of multiplies the power of the bureaucracy. But on the other hand, I’m interested in teasing out what your thoughts are on how the at least notional transparency of the algorithm as an object, as opposed to a kind of tangle of roles and rules enacted by people in an organization, changes how those organizations behave and how people envision them. Does it make it…you know, does it help or does it hurt?

Daniel Schwartz-Narbonne: Hi, I’m Daniel Schwartz-Narbonne. I’m a post-doc here at Courant. I already introduced myself. So, a couple things. First of all, when you had your bubble sort algorithm, are you sure it shouldn’t be a less-than-or-equal-to in the for loop? [audience laughs]
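
For context, an earlier presentation apparently included a bubble sort example that is not reproduced in this transcript; a generic version is sketched below, purely as a reconstruction and not the slide itself, with the kind of loop bound the questioner is teasing about marked in a comment.

# A generic bubble sort, reconstructed for context; not the original slide.
def bubble_sort(items):
    n = len(items)
    for i in range(n - 1):
        # The tease is about a boundary like this one: strictly-less-than versus
        # less-than-or-equal-to is exactly the off-by-one detail that
        # line-by-line scrutiny of "the algorithm" fixates on.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]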

And second of all, a lot of the stuff that people have been talking about has been…you know, these are problems that have already existed, right. Law is an algorithm. When the IRS decides, you know, will they allow this particular tax dodge or not, and then the lawyers come up with some new way around it, they’re actually playing off an algorithm that’s simply implemented in a human head instead of implemented on a computer. And I think the real difference is not, you know, are we dealing with algorithms or not. The real difference is the relative cost of doing various things.

So, there’s a lot of stuff where we never really worried about it because it wasn’t…practical, right. We didn’t worry about the huge amount of information that was in some government database because you literally had to send some guy to go photocopy it to get it out, and so that was not a risk to your privacy. And now that it’s on the Web and you can scrape it, that same data is now a risk to privacy because the cost of getting it is a lot lower. And in general the costs are dropping, and just an example with the deanonymization— I don’t know if people are familiar with deanonymizing the Netflix data. But Netflix released their data in order to allow people to have a competition to improve their recommender algorithm, and it turned out that you can actually figure out who people are from simply a list of what movies they watched at what times and what rankings they gave them, and then use this to predict other movies, by looking at things like people’s blogs. So, the ability to collect all this data has become huge. And that I think is really the big question we have to look at, you know, as the cost of doing things is changing. But the fundamental question of dealing with algorithms doesn’t seem to me to have really changed from when we were dealing with the law.
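
A minimal sketch of the kind of linkage being described, loosely in the spirit of the Netflix Prize de-anonymization work; the toy data and matching rule below are invented for illustration and much cruder than the real study. A handful of (movie, approximate date, rating) triples gleaned from someone’s public blog can be enough to single out one row in an “anonymized” ratings table.

# Toy re-identification sketch with invented data; the real Netflix Prize
# de-anonymization was far more careful and statistical than this.
from datetime import date

# "Anonymized" release: subscriber id -> {movie: (rating, date)}
released = {
    "user_0417": {"Movie A": (5, date(2005, 3, 2)), "Movie B": (2, date(2005, 4, 9))},
    "user_0912": {"Movie A": (4, date(2005, 3, 3)), "Movie C": (5, date(2005, 6, 1))},
}

# Auxiliary information scraped from a named person's public blog posts.
blog = {"Movie A": (5, date(2005, 3, 1)), "Movie B": (2, date(2005, 4, 10))}

def match_score(profile, aux, day_slack=3):
    """Count auxiliary ratings that agree in score and fall within a few days."""
    hits = 0
    for movie, (rating, when) in aux.items():
        if movie in profile:
            r, w = profile[movie]
            if r == rating and abs((w - when).days) <= day_slack:
                hits += 1
    return hits

best = max(released, key=lambda uid: match_score(released[uid], blog))
print(best)  # the blogger's supposedly anonymous row: user_0417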

Introna: Yeah, those are two really good points. I think your point is almost an answer to the person before you. And that is, what’s the difference, what changed? We always have had bureaucracies and we’ve always been concerned about these things. But you know, with algorithms, the cost of doing it has reduced so significantly, for exactly that reason—the issue of electronic voting, for example. Why are we so concerned? People would say “Well you know, when we had paper voting, people could also rig the election, so why have we got all these hugely complex processes to try and verify the algorithm for the electronic voting, you know? We don’t have these huge processes when we do paper voting.” Well yeah, but you cannot really rig the election if you have a couple of people get together and— You know, it’s quite costly to go and find the ballot papers, get hold of them illegally, then put all the crosses on and get them all in the box. It’s quite a complex process. Whereas if you get to the algorithm, you could change the election. I mean, you could change a million votes in…a click. So you know, the cost is really the issue. And because that is the case, it really matters where these algorithms sit, what they do, etc. So, I think your sort of point is almost an answer to his, the relative cost point.

Jones: I would just say one of the things that I didn’t comment on in Lucas’ paper but I did in my written thing is that it’s enormously helpful in making us look very c—of not fetishizing the algorithm. That is, in many cases things we’re going to claim are differences of scale. And the place where we need to look is the material and social conditions under which these algorithms are being deployed. And that’s where the continuity with say, bureaucratic or legal procedure— And I think that’s enormously analytically and very practically important. It’s also important to get at those moments precisely when there has to be something different about how we think of them, those moments in which the fact that it is something being done with computers and [indistinct] is qualitatively distinct from what might’ve happened with bureaucratic procedure. I suspect there are fewer of those than we expect. Because it’s easy to get caught up in the technological determinist narrative of the necessity of these sorts of things. But I think it focuses our attention on the one hand on what it is that enables algorithms and material conditions, and then on those special examples, what is distinct about them. And I think that’s important, both analytically and then very practically.

Nick Seaver: Hi, I’m Nick Seaver again. Thanks for a bunch of really interesting papers. I have a question about another thing that is an old question but feels sometimes new, about expertise. And it was really interesting to me to hear how all three of you touched upon this knowledge question. I think Matthew you stated it very straightforwardly in “even if they handed us the algorithm, we wouldn’t know it in any way that really matters.” And what that gets at, it seems to me, is this question that sort of animates a lot of this discussion, in that we’ve got two different camps of expertise, right. You’ve got people who know algorithms, and people who know society, law, whatever, ethics on the other hand. And we want to somehow bridge this gap between these two sets of people. And personally I wonder whether that assumption is unfounded, and that you’ve got interesting ethical and legal thinking that happens on the side of them, and you’ve got interesting sort of algorithmic thinking that happens on the side of us. But I’m also wondering what that means when we, speaking as a “we” on the, like, ethics, whatever, side, want to talk about algorithms: what do we make of these claims to expertise about what they are like, and about how they work? And how do we sort of reconfigure our question, say, for example, if it turns out that the bubble sort algorithm per se may not actually be what we mean when we say “I care about the Google algorithm” or something like that. How do we redefine our questions in response to this sort of presumed expertise of others?

Helen Nissenbaum: Hi, I’m Helen Nissenbaum, NYU MCC and ILI. This is more an invitation to reflect on what I’m calling a defense that says “it’s better than nothing.” And that is, in order to run certain experiences through the algorithms we have, we have to perform the reduction that Lisa talks about. And then we find that the results are good—like, you know, take Turnitin. All these millions of documents it’s able to adjudicate. And it’s true that it may not capture all the plagiarists, but it’s gonna capture many of them, so isn’t that better than nothing? And we have—you know, even say with the facial recognition study that you mentioned, Lucas, you might say well okay, so it recognizes darker faces better than lighter faces, but at least it’s recognizing faces, so what’s the problem? And I think it also gets to the question that Sasha was asking last night of Claudia Perlich, and that is, Claudia was saying well, we only get say a 4% bump in accuracy by doing this entire backend machinery and then targeting. But that’s good enough for me. You know, that can make my business run.

So I think these are issues of justice at root, but I still don’t know how we’re gonna defend ourselves against this rejoinder that these algorithms, as imperfect as they are, are better than nothing.

Introna: Yeah. That’s what my colleagues tell me all the time when I don’t want to use Turnitin. I think it’s… You know I— Well… We can use Turnitin after we’ve had the debate on why we’re doing this. Which we don’t do; we just use the technology. And that’s the problem for me. The “better than nothing” is… What’s interesting about the defense for Turnitin some of my colleagues have is they say the reason why we use this is because the non-native speakers, when they plagiarize, when they copy, we can identify it easily because we can see there’s a change in the style of the writing. But the native speakers, they have the linguistic ability to write over the stuff they copy in such a way that it becomes indistinguishable from the text around it. And therefore actually the reason why we should use Turnitin is because it’s fairer, you know. It’s fairer because it catches everybody equally.

It seems to me one of the things there—and some people have touched on this—is the idea that there’s sort of a mathematical or computational objectivity. And that this computational objectivity somehow is valuable enough so that, you know, it’s better than nothing, we do catch some of them. Yes, but what about the ones we don’t catch? And the consequences for the people who are caught, against those who are not caught… I mean, in most university systems you get expelled, right? So is this a matter of justice? And is it justice for all?

So my response to my colleagues is, let’s first have a debate. Let’s understand the limitations of the system. Let’s understand what it does and what it doesn’t do. And if we have that debate, and we then use it, and we can use it in a formative way, if we use it in a way that is not punitive, that’s not legalistic, and we say “Let’s use it to identify the students that copy. Let’s talk to them about copying and why they copy. And let’s use that as an opportunity to educate them in terms of the sort of writing that we expect from them,” etc., now that’s a completely different sociomaterial configuration that we’re putting together. So yes, I think it can serve a purpose, but that purpose needs to be understood within the way in which it operates within those situated practices.

And similarly, you know, yes, we want to catch people who speed. But do we understand how that technology operates? Do we understand the conditions under which it operates? Have we had a discussion of what we’re really trying to do here? Are we really just— Are we trying to help people drive safely, or are we simply trying to make money? And in the UK, most local authorities will tell you speeding is a serious form of income for them, and they want speed cameras. The more they have the better, because the more money they make. This is not about road safety.

So you know, I think what we need to understand is the sociotechnical practices within which it operates. Why it operates in the way it does. So yes, “better than nothing,” but.

Gitelman: I guess I would agree with that. I think we could transpose that “better than nothing” rejoinder into a kind of acceptance of “good enough,” and there sort of press the conversation: if you say good enough for your detecting faces or cheats, what’s good, right, and what’s enough? To really kind of push those issues there, the good enough can be a question of optimization. So broaden that discussion and try and get people engaged, not letting a single vendor, say, answer the question. I think by and large it’s an argument or a discussion that we can persuade people to have. I mean, I think that we could make some rhetorical adjustments to the “better than nothing” that might make a more productive channel there.

The they/we question…I mean, the other kind of strategic, rhetorical question that I think is really hard to address. I mean I do, just over the last day and a half, have a kind of an intuitive response that the they/we…you know, is something that we need to run from, to find ways around. And I mean really this is Helen’s incredible talent, with her co-organizers, of putting so many different people in the room together who don’t make a single they and a single we. And to somehow sort of go forward with that and think strategically about how that happens and how that can happen in more settings.

Jones: Yeah. I would…just combining the two questions. I mean, I guess, to third what has just been said: considering any situation of something being judged to be good enough, or better than nothing, it’s not that an algorithm is necessarily neutral, but it’s probably not the right place to look when making that decision. And that’s asking for the kind of expertise of people who look at sociomaterial conditions.

But the expertise of the people who actually build algorithms I think is also useful. It’s the people in between who celebrate them without much understanding that are the sort of— Because if you ask the people who build the algorithms, or you ask data min—you know, industry or machine learning people, what you get is refreshing candor about limitations. The whole field’s about like, you know, we don’t know… You know, “How does this work? We don’t know.” Or these complicated models. That refreshing candor, that conversation, it’s a rich resource for saying this is the wrong kind of thing to be doing if we want to regulate this sort of system. Even if we agreed that it was the value we wanted to have.

So I think actually getting into these different pockets of expertise, and away from sort of rather unreflective celebration or denunciation, is going to be a more powerful way to think about this.

Katherine Strandburg: So, Kathy Strandburg from NYU Law School. So I just had two comments. One was, I thought maybe there would be something that could be added to the list of what we’re concerned about with algorithms. Because I think in many but not all cases of concern about algorithms, in addition to the secrecy concern and the automated concern…and maybe even more important in many cases, it is this fact that in many applications algorithms are using probabilistic inference to make decisions or have implications for individuals. It seems to me that’s something we haven’t really talked about, and that might be okay in some circumstances and not in others.

The second thing I wanted to do was suggest that one concept that might be helpful to us in thinking about this whole area is a conception from economics of “credence goods.” So, sometimes in the case of algorithms, we are in a situation where we’re getting output and we don’t know how to evaluate whether this output is good or not. So Google says “These are the top ten search results” and you know, we don’t know whether we’d like some other arrangement of search results better. We can only say okay, it seemed alright. And that’s actually an area that we’ve— So many cases I think we’re in that situation. And that actually is a situation that at least in the law we’ve dealt with quite a bit, but we’re not thinking there about… You know, nobody’s too bothered about the fact they don’t understand exactly what’s going on inside their television set, right? Because you know, you see the TV show or you don’t see the show, and it’s working or it isn’t.

So instead of thinking about technologies like that, I think we should be thinking about people like lawyers and doctors, people who are providing things that even after you get it you can’t really tell whether it was good or not. And legally, we deal with those things in a couple different ways. So one is certain kinds of regulatory regimes. But one of the big ways that we deal with this kind of issue is through professional ethics. And I’m wondering if sort of the fact that that isn’t really happening, or we don’t know how to make that happen with some of these things that really are the equivalent of credence goods, is part of what’s disturbing us.

So for example I think the Turnitin example is interesting, because if the output is plagiarism or not, and we don’t know anything about what they’re doing, then it’s a credence good. Once we know they’re counting a certain number of characters, we might or might not think that’s a good way of measuring plagiarism, but we’re in a situation where we can decide whether we think it’s good—we can evaluate it.

So…going on for too long. But anyway, I think maybe that point about whether we can evaluate the output is an important one.

Barocas: So I’d love to take another question but I’m afraid we probably have to end there with questions. But please, panel.

Introna: Yeah. Thank you. Yeah, I absolutely agree with you about the probabilistic inference. I think that’s a really important point. And indeed this is something I think where people are really concer— When you go to Google, you can go to the dashboard, right. And you can look at who they think you are. So you go to the dashboard and there’s an option; you can see what are the categories under which they have classified you. So I went there and I discovered that I was a woman. And I was younger than I am. So I thought that’s not a bad classification. [audience laughs] But clearly that’s what they use to serve me ads. So maybe that’s not such a great idea. So I do think that’s a really important point.

The issue of evaluation…yeah. I just think professional ethics is not really the way to go. I mean, not that I think there’s a problem with professional ethics. But one of my areas of research is business ethics. And one of the areas in which business ethics has gone for a long time now is the whole notion of codes of ethics, and the idea that organizations have codes of ethics and that employees sign up for the codes of ethics and so forth. The problem is those very codes of ethics become a way of avoiding doing ethics, right. So we can say we have a code of ethics, but yet, you know, the practices don’t conform. But if you question the practices you’ll always be referred back to “Well, we have a code of ethics.” So I think professional ethics is a complex thing, and I don’t think it’s a sort of simple… Well, I’m not suggesting you’re saying it’s simple, but I think it’s a very complex route and may even become a way of avoiding addressing the issues that we want to address.

Barocas: Okay. Well I think we’re exactly on time. Please join me in thanking the panel for a good session.

