Lucas Introna: Thank you very much for the invitation, and I’m really awestruck at the quality of the debate this morning. And so I’m a bit nervous to present an 8,000-word paper in twenty minutes, which I can’t read at all. I timed myself this morning; it takes five minutes to read a page. And I have seventeen pages, so…that’s not gonna work. And if I’m going to try and present it by not reading it, I’ll probably present a very unsubtle, crude version of the paper. So I do encourage you to hopefully just read the paper and not depend too much on my presentation.

Okay so, I’ve been interested in algorithms for many years. I started my life as a computer programmer and I’ve always been very intrigued by the precision of that language and the power of it. So I was seduced by it early on in my life. And I’ve been very interested in the way in which algorithms function to produce a world for us which we engage in. So this paper really is trying to take a very particular perspective on algorithms, and that’s the perspective of performativity, drawing on the work of Butler, Latour, Whitehead, and so forth. And I’m really interested to see whether this helps us or not address the question which is central to the program, which is governability.

Okay, so I’ll do a few things. I did ten slides, which means I have two minutes per slide, so I’m gonna save on this one and not go through the agenda, I’m just gonna jump in.

So, the question of governability of course has to do—is very centrally connected to the notion of agency. And that is the question of what algorithms really do. And you know, there’s this question that comes up as we’ve had the debates: what do algorithms do, how do they do it, whether they do it, etc.

So when I was a computer programmer, the first task I was given was to write…well, one of the first, was to write the bubble sort algorithm. And the bubble sort algorithm is an algorithm that tries to sort a list of things (numbers, or letters, whatever) by comparing adjacent pairs. And if a pair is not in the right order it swaps them around to put them in the right order. And each pass you do one less, until you’ve sorted the whole list. And that’s the algorithm. On the left it’s presented in words and on the right-hand side it’s presented in C++, which is something I didn’t write in at all. My programming language was Fortran and I did it on punch cards. So the whole object-orientation came after my life as a programmer.

But what I can do is I can still understand some of this code. And so if you look at the green box, the green box does what the green text says, which is to compare every adjacent pair. And the blue box does what the blue text says, and that is to swap their position if they’re not in the right order, okay.
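
Since the slides themselves aren’t reproduced in this transcript, here is a minimal sketch of the kind of C++ the slide might show; the identifiers are illustrative rather than the slide’s actual code, and the comments mark the parts the green, blue, and red boxes pick out.

#include <cstddef>
#include <utility>
#include <vector>

// A bubble sort sketch in the spirit of the slide's C++ version.
// The comments mark what the talk describes as the red box (sorts),
// the green box (compares), and the blue box (swaps).
void bubbleSort(std::vector<int>& items) {                // red box: sorts
    for (std::size_t pass = 0; pass + 1 < items.size(); ++pass) {
        // Each pass works through one fewer adjacent pair than the last.
        for (std::size_t i = 0; i + 1 < items.size() - pass; ++i) {
            if (items[i] > items[i + 1]) {                 // green box: compares
                std::swap(items[i], items[i + 1]);         // blue box: swaps
            }
        }
    }
}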

And so at a very basic level we could say the green box compares. That’s what it does. The green box compares. The blue box swaps, alright. And the red box sorts. Okay. So, we compare in order to swap, and we swap in order to sort. And we sort in order to…? So what is the “in order to”? To allocate? So maybe this is a list of students and we want to allocate some funding and we want to allocate it to the student with the highest GPA, okay. So we sort in order to allocate. We allocate in order to…administer funds. We administer funds in order to… Right.
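
As a purely hypothetical illustration of that chain, a sorted list might be put to work like this; the student records, GPAs, and funding decision are just the talk’s example scenario, not real code or data.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical illustration of the "in order to" chain from the talk:
// we sort student records by GPA in order to allocate funding to the
// student at the top. Names and numbers are made up.
struct Student {
    std::string name;
    double gpa;
};

int main() {
    std::vector<Student> students = {{"A", 3.2}, {"B", 3.9}, {"C", 3.5}};

    // Sort in order to allocate: highest GPA first.
    std::sort(students.begin(), students.end(),
              [](const Student& a, const Student& b) { return a.gpa > b.gpa; });

    // Allocate in order to administer funds: the funding goes to the front.
    std::cout << "Funding allocated to " << students.front().name << "\n";
}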

So the point is, what the algorithm does is in a sense always deferred to something else, right. So, the comparing is deferring to the swapping, the swapping to the sorting, the sorting to the allocating, and so forth. So, at one level we could say what the algorithm does is clearly not in the code as such. The code of course is very important, but that’s not exactly what the algorithm does.

So one response to this problem is to say well, the algorithm’s agency is systemic. It’s in the system, right. It’s in the way in which the system works. That’s what algorithms do.

Now, why do we think algorithms are problematic or dangerous? I’m reminded of Foucault’s point that power is not good or bad, it’s just dangerous. So algorithms are not good or bad, they’re just dangerous.

So why do we think they’re dangerous? So one argument is they’re dangerous because they’re inscrutable. They’re subsumed within information technology infrastructures, and we just don’t have access to them. And so one response to that would be we need more transparency. But as I sort of indicated just before, even if we look at the code, we would not know what the algorithm does. So if we inspect the code, are we going to achieve more? Not that I’m saying that’s not important, but that’s not going to help us in terms of understanding what algorithms do as such.

So, another problem— And I mean, this inscrutability’s deeply complex, because one of my PhD students did a PhD on electronic voting systems. And one of the problems in electronic voting systems is exactly how do you verify that the code that runs on the night of the election is in fact allocating votes in the way it should do? Now, you can do all sorts of tests. You can look at the code, you can run it as a demonstration. And there’s all sorts of things, but you can’t answer that really in the final instance.

The other problem is we say that it’s automatic. And the argument is always made that code or algorithms are more dangerous because they can run automatically. They don’t need a human, right. They can run in the background. And we heard the presentation last night from, what’s her name? Claudia, yeah. So she explained not only that the algorithms run automatically, but that the supervision of the algorithms is algorithmic, right. So we see that as dangerous.

Okay. So what to do? So I think this question “what do algorithms do,” which points to the question of agency, I think is an inappropriate way to ask the question. I think we should rather ask the question, what do algorithms become in situated practices? And that’s different to just saying how they are used, okay. So how they are used is one of the in-order-to’s. So how do the users use it, in order to do something? But that “in order to do something” is connected to a logical context of, say, a business model or something. So there’s a whole set of things that are implicated in the situated practice. Not just the user, it’s the business logic, it’s many other things. Legal frameworks, regulations, and so forth.

So I guess what I’m saying is that what we need is to understand the performative effects of algorithms within situated sociomaterial practices. Sociomaterial practices meaning the ways in which the technical and the social act together as a whole, as an assemblage. So why performative effects, or performativity, which is just a shorthand for that?

So, the ontology of performativity, which…you know, I’m not going to talk about Deleuze or any of those. Basically this ontology says one of the problems we have is that we start with stability. So we start with an algorithm. And then we try and decode it. Or we start with attributes, and then we try and classify it. Whereas in fact, what we have is flux and change, and stability is an ongoing accomplishment. Stability takes hard work to achieve. And this accomplishment is achieved by the incorporation of various actors into each other. And so, there is no agency in the algorithm, or in the user, or in…whatever. It is the way in which they interact. Or as Barad says, intra-act; Karen Barad. And through the intra-action they define and shape and produce each other.

And therefore there’s a certain fundamental indeterminacy that’s operating in these sociomaterial assemblages. This indeterminacy means that we can’t simply locate the agency in any specific point. A beautiful example was used this morning, about the spider’s web. There’s a spider’s web, but we don’t know where the spider is. When we try and get to it, it moves, right. And it’s moving all the time. And that creates a very fundamental problem for us in terms of governance.

This is a quote from Donald MacKenzie’s book, which has also been quoted a number of times this morning. His point is that these performative effects are there even though those who use them are skeptical of them, of their virtues, unaware of their details, or even ignorant of their very existence. So we do not need to know or understand for these effects to happen.

Okay so, I want to illustrate this by using a really small example, which is the example of plagiarism detection systems. And I have three questions which I sort of use in the paper to guide this analysis.

So the question for me is why does it seem obvious to us to incorporate plagiarism detection systems into our teaching and learning practice? Why is it obvious that something like 100 thousand…I don’t know, Turnitin quotes on their site something like 100 thousand institutions in 126 countries use Turnitin, and that 40 million texts of various sorts (essays, dissertations, etc.) are submitted to this database every day? Why is it so obvious to us that we need this?

The second question is what do we inherit when we do that? This incorporation of Turnitin into our teaching and learning practice, what does it produce? What are the performative outcomes?

So, the first thing to note, I think, and which I make a point of in the paper, is… And of course plagiarism…you know, academics…you know, this is not the best topic to choose when you talk to academics. Because plagiarism, that’s obviously a big no-no. And a couple of times when I’ve talked about plagiarism and plagiarism detection algorithms, the discussion was not on the algorithms but on why we should actually use these systems to catch these cheats.

So, the question for me is why do we have plagiarism detection systems in our educational practice? Now the issue of plagiarism of course is connected to the whole issue of commodification. And the historical root of this is the Roman poet Martial, who basically first coined the phrase “plagiarism” because his poetry was in fact copied by other poets. And there’s a whole issue of how poetry moved from an oral tradition to a manuscript tradition. And when it moved to the manuscript tradition, there was this claim of plagiarism.

So it’s the commodification that is the important issue. And what I say in the paper is I think one of the reasons why plagiarism, and the way in which it’s instantiated in our educational practice, is an issue is because education has become commodified. And in the commodification of education, plagiarism as a charge makes sense, right. Because this is about property. This is about production. So the argument in the paper is that we have this process of commodification where writing is to get grades, grades are to get degrees, degrees are to get employment, and so forth. So we have a certain logic there that is making it obvious to us that what we want to do is catch those that are offering counterfeit product, right. And we call them cheats and thieves because they’re offering counterfeit product in this practice.

And in a sense the point is, these systems—plagiarism detection systems—don’t detect plagiarism. They detect strings of characters that are maintained from a source. And if you keep a long-enough string of characters, you get detected, because you have copied. These are copy detection systems, right. And there’s a whole debate as to why people copy, and why copied texts end up in students’ manuscripts, etc., which is educational and has nothing to do with stealing.
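
To make the copy-detection point concrete, here is a minimal sketch, assuming a deliberately naive approach, of flagging a submission that keeps a long-enough run of characters from a source text. It illustrates the general idea only; it is not Turnitin’s actual matching algorithm, and the run length and example strings are made up.

#include <cstddef>
#include <iostream>
#include <string>
#include <unordered_set>

// Naive copy detection in the sense described in the talk: flag a submission
// when it shares a long-enough run of characters with a source text.
// This is an illustration only, not Turnitin's actual matching algorithm.
bool sharesLongRun(const std::string& source, const std::string& submission,
                   std::size_t runLength) {
    if (source.size() < runLength || submission.size() < runLength) return false;

    // Collect every substring of the given length from the source.
    std::unordered_set<std::string> sourceRuns;
    for (std::size_t i = 0; i + runLength <= source.size(); ++i)
        sourceRuns.insert(source.substr(i, runLength));

    // Flag the submission if any run of that length also appears in the
    // source, i.e. a "long-enough set of characters" was kept verbatim.
    for (std::size_t i = 0; i + runLength <= submission.size(); ++i)
        if (sourceRuns.count(submission.substr(i, runLength)))
            return true;
    return false;
}

int main() {
    std::string source = "the quick brown fox jumps over the lazy dog";
    std::string essay  = "my essay notes that the quick brown fox jumps here";
    std::cout << (sharesLongRun(source, essay, 20) ? "match flagged" : "no match") << "\n";
}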

So for me the question is why did we frame the educational practice in that way? Secondly, once we’ve incorporated that, what does it produce? And for me the important thing, as I try to show in the paper, is that it produces a certain understanding of what writing assessments are about, and it produces a certain understanding of the sort of person that I am when I produce these assessments, namely a producer of commodities. And therefore students have no problem in selling their essays on eBay, because this is a commodity. And secondly, they have no problem in outsourcing this task to somebody else, because it’s the production of a commodity and the economic logic says, you know, you should do it in the most efficient sort of way.

So we have a situation in which the agents in this assemblage—the students, the teachers… And the teachers often have very clear reasons why they adopt this technology. So we have an assemblage in which these actors become performatively produced in a way that is very counter to any of the agents’ intentions. So there’s a logic there that is playing itself out that transcends the agency of any of the actors.

Okay so, my last slide. Two minutes. Thank you. So, performativity— In the paper I also have an example of Google and so forth, which you can have a look at.

So, the ontology of becoming means for me that governance is an ongoing endeavor. There is no once-and-for-all solution. If we could locate the agency in a particular place, if we could find that location, then of course we could address that location. But since the agency is not in any particular location but is shifting between the actors as they immerse in this practice, there is no once-and-for-all solution. We can’t find the right model. There isn’t a right model. We can’t program— Even if we would open the algorithms and look at them we wouldn’t find the agency there. It’s a part of it, of course; it is one of the actors. What we need to understand is how these various actors—the plagiarism detection system, the students, the teachers, the educational practice—how they function together. How they become what we see them to be.

And then secondly, we need to also understand that our attempt to govern is itself one of the interventions that needs to be governed. So if we start and change… For example, if we say to Turnitin, “Change the algorithm. Change it in this way. Or make it visible. Make it transparent,” that act of making it transparent, as has been pointed out many times, would lead to gaming, which would then change the game, which then would need some further intervention. So there is no once-and-for-all, and our very attempts at governing would then produce outcomes, performatively, which we have not anticipated and which themselves need governing again. So, there is no one point in the spider’s web. Thank you.

Further Reference

The final version of Introna’s paper, published as Algorithms, Governance, and Governmentality: On Governing Academic Writing
