[This presentation was in response to Tarleton Gillespie's "The Relevance of Algorithms"]

I'm really excited to be here. I don't often meet people who think about algorithms.

Just by way of prefacing, I don't actually study algorithms. What I study is a company that has been in the business of making algorithms for financial institutions for about 50 years. And my topic of research is a company called Fair Isaac, which most Americans have probably heard of. They are the company behind the American credit score that's known by the acronym FICO.

Now, because of my interest in the business of making algorithms and the history of credit markets, and of information that is sold to run the credit markets, I have certain preoccupations which don't often appear in conferences about media studies and topics of media. The history of algorithms that I'm interested in is not the mathematical history, it's the business history. And it's the business history within the era of computing. And I would just note (I'm not sure this fits yet) to Kate that agonistic pluralism is of course the basis of the free market, and of the way that things are supposed to be exchanged in free markets run by information. I think there's a link between the way I'm thinking about algorithms and the way that Kate is. I just give that background for you to keep in mind because I'm not going to step into the language of media studies. I'm going to stick to the language that I know, and I hope it doesn't feel too foreign to you.

My presentation's organized in three parts.

In the first part I'd like to talk about the ecology of commercial algorithms. What's striking to me as someone who studies algorithms in a completely different industry from Tarleton is that his statement of relevance applies first and foremost to algorithms that play a role in the distribution of media to a broad public. And Tarleton's analysis applies to those algorithms whose commercial purpose (kind of like a light switch) is to bring a flow of information, like the flow of electricity, into the room to many people in many places. And those people are conceived of as consumers and as the public.

But not all algorithms, and certainly not all commercial algorithms, actually interface with public life. So to help refine the contours of what Tarleton is up to, I thought I'd try to sketch out a very provisional and incomplete ecology of the other kinds of algorithms that circulate, so that we can situate exactly what he's talking about in a more specific way. To my mind, Tarleton's topic will be the third and youngest category of algorithm.

The first category is maybe the grandfather of the commercial algorithms. Those are the ones that were originally built and sold to business in the post-war period, that had nothing to do with public communication. The first commercial algorithms had as their purpose to help a very small number of people, a set of people in executive positions, to wrest control over large organizations and to make better decisions in their position as executives. So the first category of algorithms I would point to were managerial aids made by computational experts and offered to firms and governments to improve performance in the three-dimensional world.

And our mentor, Chandra Mukerji, who is a mentor both to me and to Tarleton, calls this preoccupation with the movement of people and paper and things outside in the three-dimensional world "logistical power." She says this power originates with the state. And the very helpful distinction she draws is that logistical power is not the power of knowledge; it is the power of engineering. So state power, she's arguing, is not necessarily founded in knowledge. It may well be founded in the power to engineer. So hold that thought because I'd like to come back to it at the end.

Somewhere along the line, computer scientists will engineer algorithms into the machine. So the algorithm will become part of the inside of the digital infrastructure. Now, inside the machine, the purpose of the algorithm will change. Its purpose will no longer be to provide information to an independent human decision maker, but its purpose appears to me (as a non-technical person) to be to move information itself inside of digital infrastructure.

So, inside information systems, algorithms seem to play the role that mechanics play in industrial production. The algorithm is an instrument of consistent replication of movement that brings the spirit of industrial consistency to bureaucracy and information management. But of course it does this with one very important difference. Unlike its industrial predecessors, the algorithm as a machine does something different than a physical mechanical system, which simply repeats the same action over and over and over. The algorithm has a kind of flexibility in its structure, through math, that allows it to execute action with a degree of responsiveness. And that internal mathematical structure allows it to adjust output depending on changing input conditions.
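To make that responsiveness a little more concrete, here is a minimal sketch in Python. The inputs, weights, and score range are invented for illustration, and this is emphatically not Fair Isaac's actual model; the point is only that the procedure stays fixed while its output adjusts to whatever inputs arrive.

```python
# A toy scoring function: the mechanism is fixed, the output responds to input.
# The variables, weights, and score range are invented; this is not FICO's model.

def toy_score(payment_history: float, utilization: float, age_of_file: float) -> float:
    """Map changing inputs to an output through one fixed mathematical structure."""
    raw = 0.5 * payment_history - 0.3 * utilization + 0.2 * age_of_file
    raw = max(0.0, min(1.0, raw))          # clamp to [0, 1]
    return 300 + 550 * raw                 # express on a familiar 300-850 scale

# Same mechanism, different input conditions, different outputs:
print(toy_score(payment_history=0.9, utilization=0.2, age_of_file=0.7))  # ~591.5
print(toy_score(payment_history=0.4, utilization=0.8, age_of_file=0.1))  # 300.0
```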

The third category of algorithm, then, moving on from algorithms that help executives make decisions, to algorithms that move things inside machines… I think that third kind of algorithm is the one that belongs to Tarleton. His algorithms are inside machines, but they are mediating the movement of information not to a small number of people, and not to the machine itself, but they are mediating the transfer of information to a broader spectrum of users. And this category of commercial algorithms obviously does not exist until you have the widespread use of personal computing. And that's why I call it the youngest algorithm; that's why I place it last in time.

So of course by now my ecology isn't really just an ecology, it's also a chronology. It's about the transformation of the use of commercial algorithms in different ways. So as the second part of my presentation, I'd like to raise the question of how algorithms have changed in time.

Since I'm not used to treating algorithms as an independent topic, I hopped over to the Department of Management at LSE to look up my new friend Keith, who is a retired operations researcher for British Airways. And of course the use of algorithms in controlling business was pioneered in the airline industry, because getting people and planes together to move between geographical locations on time is a problem that has largely been managed by algorithms. So Keith, who worked for British Airways for his entire career, seemed to me the perfect person to pose the question, "What is an algorithm, and what is the scope of things I can expect to encounter at this conference I'm going to in New York?" So this is what he said.

"That's a very good question. When I started," he said, "there was a fairly precise meaning of the term. An algorithm was a set of rules, which would generate an optimum answer to the problem that you'd posed it. It was a statement of rules that gave you the best possible answer in a finite amount of time." And he emphasizes "finite amount of time" because he's talking about the days when he was still running punch cards to do the computation. "As time has gone on," he continued, "the definition of algorithm has gotten weaker and weaker. The strong definition still applies, but it's not what most people mean when they use the term."

"So what about Google?" I asked him, sort of pressing him on.

And he says, "That's not an algorithm. At least not the way that I mean it."

So what can we make of this little fragment of empirical data? It seems to me that the two key components in Keith's response, the two key things that define an algorithm for him, are that it provides not just any answer but an optimum answer, and that it does so in a reasonably finite period of time.
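As a rough sketch of what Keith's strong sense of the term looks like in practice, here is a toy assignment problem in Python, with an invented cost matrix: the search space is finite, so the procedure is guaranteed to terminate, and it returns the optimum rather than a merely plausible answer.

```python
# A sketch of an algorithm in the "strong" sense: finitely many candidates,
# guaranteed termination, and a provably optimum answer. The crew/route
# assignment problem and its cost matrix are invented for illustration.

from itertools import permutations

def best_assignment(costs):
    """Exhaustively try every assignment of crew i to route perm[i]."""
    n = len(costs)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):    # at most n! candidates: finite time
        cost = sum(costs[i][perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return best_perm, best_cost            # the optimum, not a "good enough" guess

costs = [[4, 2, 8],     # costs[i][j] = cost of crew i flying route j
         [4, 3, 7],
         [3, 1, 6]]
print(best_assignment(costs))              # ((0, 2, 1), 12)
```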

Here's my take on what has happened: I think Keith is giving me a classic definition. And over the past fifty years there has been a permutation in how algorithms are made, what they do, and what they're sold for. And if my intuition is correct, then we need to be very careful about how we frame the question raised by content management and financial information. Because it seems to me that to confront algorithms on their own terms, we may have to modify our preoccupation with the politics of knowledge and take up an interest in the politics of logistical engineering.

So this is my way of sort of raising a question of genealogy. What is the genealogy of these algorithms? From a business perspective, if you trace the making of algorithms for sale as commercial objects, then Google looks a lot less like a public library and it looks a lot more like UPS.

Part Three, very briefly.

How might thinking in terms of control over logistics help us to figure out what it might mean to govern algorithms? I hope you don't feel like I've changed the topic too abruptly. But in case you have, let me just tie quickly back to Tarleton's work.

Tarleton has made a very pointed observation about values of knowledge and objectivity and how these become resources for content mediation companies, even though engineering interventions are made on these commercial algorithms on a routine basis, in a discretionary fashion, by the corporations that control them, all the time. This is true of credit scoring as well. The empirical question then is, what does it mean to tamper with an algorithm? And does tampering with an algorithm, does changing the algorithm, change the way that the public is being constituted?

My response to Tarleton is to say well, it seems to me that that tampering means something very different within the logic of commercial engineering than it does within the epoch of knowledge and epistemology. More specifically, from an engineering standpoint, optimization is what anchors the concept of objectivity. Optimization is the principle that allows you to test a system and then make a critique that it is less than optimal or it is biased in some way.

But what will it mean, and what can it mean, to do a critical analysis of algorithms that are commercially-engineered systems in the absence of an optimization imperative? So what Keith is suggesting to me is that today's algorithms, the things that we call algorithms, don't look anything like the ones that he calls algorithms, because they don't face an optimization imperative.
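One way to see what the optimization imperative buys a critic: if a system claims to optimize, you can measure how far its answer falls from the true optimum on a small instance. The sketch below does this for an invented knapsack problem, comparing a plausible greedy heuristic against exhaustive search; without an optimum to compare against, that yardstick for critique disappears.

```python
# If a system claims to optimize, the optimum is a yardstick for critique:
# you can measure the gap between what the system returns and the best
# possible answer. The knapsack instance and the greedy rule are invented.

from itertools import combinations

items = [(60, 10), (100, 20), (120, 30)]   # (value, weight) pairs
capacity = 50

def greedy_value(items, capacity):
    """A plausible heuristic: take items by value-per-weight until full."""
    total_value, total_weight = 0, 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if total_weight + weight <= capacity:
            total_value, total_weight = total_value + value, total_weight + weight
    return total_value

def optimum_value(items, capacity):
    """Exhaustive search over every subset: the true optimum, in finite time."""
    best = 0
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            if sum(w for _, w in subset) <= capacity:
                best = max(best, sum(v for v, _ in subset))
    return best

print(greedy_value(items, capacity))    # 160: looks reasonable on its own
print(optimum_value(items, capacity))   # 220: the heuristic left 60 on the table
```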

So I don't really know the answer to this question. But I think the ambiguity here, the tension between a pragmatics of proprietary engineering that permits constant adjustment of the technology on the one hand, and on the other hand claims that what these technologies do is manage the legacy of human knowledge, might help to explain what kind of politics and what kind of governance are at stake in the objects that Tarleton is studying.

Thank you very much.

Further Reference

The Governing Algorithms conference site with full schedule and downloadable discussion papers.

A special issue of the journal Science, Technology, & Human Values on Governing Algorithms was published in January 2016.

