[This is a response to Lucas Introna's presentation of his draft paper "Algorithms, Performativity and Governability."]

Matthew Jones: So I want to begin with a great failed data mining project from around 2001. It was a great data mining project that would "apply automation to team processes so more information can be exploited, more hypotheses created and examined, more models built and populated with evidence, and overall more crises dealt with automatically."

And this project—and I think it's very interesting for our point—it rested fundamentally on a critique of human reason, of human individual reason. It said that what it wished to fund were A: cognitive tools that allow humans and machines to think together in real-time about complicated problems; B: means to overcome the biases and limitations of the human cognitive system; C: cognitive amplifiers to help teams rapidly and fully comprehend complicated and uncertain situations.

This was a massive data mining effort produced in the wake of what were widely seen as failures within artificial intelligence and machine learning to produce evidence, failures of work that wasn't grounded in such an understanding of the limitations of human ability. It went under the name Total Information Awareness. Some of you may remember this. It was a massive counterterrorism effort.

And what interests me about this, and the reason I mention it, is that those of us discussing the inscrutability of algorithms—and Lucas and others today have done such a wonderful job discussing not just the fact but the ethical implications that stem from that fact—should notice that that inscrutability is in fact central to the entire effort to produce a whole new series of algorithms, algorithms whose premise is the limitations of human ability and the way to get computers to help us overcome them.

Now, haunting our discussion today… So, what I'm interested in is that we, and many of the people who are interested in discussing the possibility of governing, are simultaneously operating in a condition of the inscrutability of that which we want to know. So, haunting much of our discussion of algorithms is, I think, a concern that we would become worried when algorithms govern us and not we them.

And this isn't the sort of simple version where there's a grand, Matrix-like machine that's making us think something, or, for philosophers, the Cartesian demon, but rather a dispersion of algorithms that are even harder to pin down. To use an algorithm well, autonomously, is to lay down the law of using it. It is to know it in some fundamental way.

Now, Lucas challenges this analysis, I think, in deeply fu—in at least three ways. First, there's no simply knowing an algorithm. Second, there's no sort of pre-given human personhood or subject that is the origin that we're going to use in judging algorithms; so we need to look at people differently, and at the process of them. And thirdly, there's a really grave danger when we're doing either the sort of highfalutin academic work looking at the effects of algorithms, or thinking about concrete ethical and political, or indeed coding, challenges to examining them.

So, to begin looking at this, I want to begin with— It's something that's an old chestnut among this community. I mostly spend my time with boring historians, so they don't think this is so unexciting. But in the very first PageRank paper there's a remarkable thing, which is of course that it's always about personalized search. It was always going to have it. Brin and Page wrote, "Personalized page ranks may have a number of applications…" Indeed. "…including personal search engines. These search engines could save users a great deal of trouble by efficiently guessing a large part of their interests given simple input such as their bookmarks or home page."
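[To make the personalization in that quote concrete, here is a minimal sketch of personalized PageRank via power iteration, in Python. The toy graph, the damping value, and the seeding of the teleport vector from a user's bookmarks are illustrative assumptions, not Brin and Page's actual implementation.]

```python
import numpy as np

def personalized_pagerank(adj, seed_pages, damping=0.85, iters=100):
    """adj[i, j] = 1 if page i links to page j; seed_pages holds the
    indices of the user's bookmarked pages."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-normalize links into a transition matrix; pages with no
    # out-links keep a zero row and their mass is redirected below.
    trans = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
    # Ordinary PageRank teleports uniformly over all pages; the
    # personalized variant teleports only to the user's bookmarks,
    # which is what biases the ranking toward that user's interests.
    teleport = np.zeros(n)
    teleport[seed_pages] = 1.0 / len(seed_pages)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        dangling = rank[out.ravel() == 0].sum()  # mass on dead-end pages
        rank = damping * (rank @ trans + dangling * teleport) \
               + (1 - damping) * teleport
    return rank

# Toy web of four pages; page 0 is the user's lone bookmark.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(personalized_pagerank(adj, seed_pages=[0]))
```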

Now, from this simple beginning of course something more searching arose. Fundamental to defenses of the data miners of users is a claim indeed that the cognitive limitations—that's our inability to think well—of human beings require us both to data mine in order to learn things, but just as centrally to be data mined. In this vision, data mining helps us to become free, because the space of potential things we might go and learn about is simply too large, even if it were all accessible. In fact, being data mined allows us, in this argument, to move from a sort of formal declaration that we are free to go and learn anything towards an actualization, because we are being helped. Something that is judging us is helping us to overcome our own cognitive limitations.

Now this kind of argument is complemented by the sorts of arguments that the massive data brokers make. And there's a remarkable set of— How many minutes? Three. Okay, I'll be quick.

So the data brokers quite remarkably make a similar argument. One of them says, "for many populations for whom online services are made free, information truly is a direct conduit to Liberty." So there's a double enabling of freedom, that is, one that allows us to become the people we want to be. To actualize ourselves. To become who we really are. And what's interesting is that Lucas is giving us an account, using performative theory, of becoming in which there isn't an essence. But many of the people who are interested in governing are profoundly interested in providing services that are about becoming who we really are. And I think that shared and different ontology is well worth thinking about.

Okay. I want to end just on this question of governing without knowledge. And we keep coming back to this when we talk about transparency. We've talked about the problems of transparency, both epistemological (we can't really know these sorts of things) as well as things like trade secrets. We can't govern through knowledge, properly speaking. Many algorithms are trade secrets; but Lucas and others have reminded us that nearly all would not be surveillable by human beings even if we had access to their source code. We have to begin whatever process from this fundamental lack of knowledge. We need to start from the same epistemological place that many of the producers of algorithms do.

And so I think, curiously enough, at the moment that we're critical—as I think we rightly are—of proxies like Turnitin, we're in desperate need of proxies in thinking about how to govern the kinds of algorithms that worry us. We cannot know them. Even if someone handed them to us, we couldn't know them. And so I think that's one of the sorts of things we really need to know: we really need to produce proxies. Now I suspect, and here I'll end, that this is of course going to cause gaming of the systems. But I wonder if mutually-assured gaming isn't one of the best things we can do if we're interested in governing. And those who are interested in governing us…we have a shared interest in actually both gaming the system. Okay, I'll end there.
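[One way to read this call for proxies in code: query an algorithm we cannot inspect and fit a small, interpretable surrogate to its observed behavior. Everything below is a hypothetical illustration rather than a method from the talk: the stand-in black_box function, the probe inputs, and the choice of scikit-learn's DecisionTreeRegressor as the proxy.]

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def black_box(x):
    # Hypothetical stand-in for an inscrutable scorer: we can query
    # it and observe its outputs, but never read its internals.
    return 3.0 * x[:, 0] - 2.0 * (x[:, 1] > 0.5) + 0.1 * np.random.randn(len(x))

rng = np.random.default_rng(0)
probes = rng.random((1000, 2))   # inputs the would-be governor controls
scores = black_box(probes)       # outputs they can observe

# Fit a shallow decision tree as the proxy: crude and gameable, but
# legible enough to reason about, contest, and govern with.
proxy = DecisionTreeRegressor(max_depth=3).fit(probes, scores)
print(export_text(proxy, feature_names=["feature_0", "feature_1"]))
```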

Further Reference

Jones’ response paper

