[This presentation was in response to Tarleton Gillespie’s “The Relevance of Algorithms”]
I’m really excited to be here. I don’t often meet people who think about algorithms.
Just by way of preface, I don’t actually study algorithms. What I study is a company that has been in the business of making algorithms for financial institutions for about 50 years: Fair Isaac, which most Americans have probably heard of. They are the company behind the American credit score that’s known by the acronym FICO.
Now, because of my interest in the business of making algorithms and the history of credit markets, and of the information that is sold to run the credit markets, I have certain preoccupations which don’t often appear in conferences on media studies. The history of algorithms that I’m interested in is not the mathematical history, it’s the business history. And it’s the business history within the era of computing. And I would just note (I’m not sure this fits yet) to Kate that agonistic pluralism is of course the basis of the free market, and of the way that things are supposed to be exchanged in free markets run by information. I think there’s a link between the way I’m thinking about algorithms and the way that Kate is. I just give that background for you to keep in mind because I’m not going to step into the language of media studies. I’m going to stick to the language that I know, and I hope it doesn’t feel too foreign to you.
My presentation’s organized in three parts.
In the first part I’d like to talk about the ecology of commercial algorithms. What’s striking to me as someone who studies algorithms in a completely different industry from Tarleton is that his statement of relevance applies first and foremost to algorithms that play a role in the distribution of media to a broad public. And Tarleton’s analysis applies to those algorithms whose commercial purpose (kind of like a light switch) is to bring a flow of information, like the flow of electricity, into the room to many people in many places. And those people are conceived of as consumers and as the public.
But not all algorithms, and certainly not all commercial algorithms, actually interface with public life. So to help refine the contours of what Tarleton is up to, I thought I’d try to sketch out a very provisional and incomplete ecology of the other kinds of algorithms that circulate, so that we can situate exactly what he’s talking about in a more specific way. Tarleton’s topic, as I see it, will be the third and youngest category of algorithm.
The first category is maybe the grandfather of the commercial algorithms. Those are the ones that were originally built and sold to business in the post-war period, that had nothing to do with public communication. The first commercial algorithms had as their purpose to help a very small number of people, a set of people in executive positions, to wrest control over large organizations and to make better decisions in their position as executives. So the first category of algorithms I would point to were managerial aids made by computational experts and offered to firms and governments to improve performance in the three-dimensional world.
And our mentor, Chandra Mukerji, who is a mentor both to me and to Tarleton, calls this preoccupation with the movement of people and paper and things outside in the three-dimensional world “logistical power.” She says this power originates with the state. And the very helpful distinction she draws is that logistical power is not the power of knowledge; it is the power of engineering. So state power, she’s arguing, is not necessarily founded in knowledge. It may well be founded in the power to engineer. So hold that thought because I’d like to come back to it at the end.
Somewhere along the line, computer scientists will engineer algorithms into the machine. So the algorithm will become part of the inside of the digital infrastructure. Now, inside the machine, the purpose of the algorithm will change. Its purpose will no longer be to provide information to an independent human decision maker, but its purpose appears to me (as a non-technical person) to be to move information itself inside of digital infrastructure.
So, inside information systems, algorithms seem to play the role that mechanics play in industrial production. The algorithm is an instrument of consistent replication of movement that brings the spirit of industrial consistency to bureaucracy and information management. But of course it does this with one very important difference. Unlike its industrial predecessors, the algorithm as a machine does something different than a physical mechanical system, which simply repeats the same action over and over. The algorithm has a kind of flexibility in its structure, through math, that allows it to execute action with a degree of responsiveness. And that internal mathematical structure allows it to adjust output depending on changing input conditions.
The third category of algorithm, then, moving on from algorithms that help executives make decisions, to algorithms that move things inside machines… I think that third kind of algorithm is the one that belongs to Tarleton. His algorithms are inside machines, but they are mediating the movement of information not to a small number of people, and not to the machine itself, but they are mediating the transfer of information to a broader spectrum of users. And this category of commercial algorithms obviously does not exist until you have the widespread use of personal computing. And that’s why I call it the youngest algorithm; that’s why I place it last in time.
So of course by now my ecology isn’t really just an ecology, it’s also a chronology. It’s about the transformation of the use of commercial algorithms in different ways. So as the second part of my presentation, I’d like to raise the question of how algorithms have changed in time.
Since I’m not used to treating algorithms as an independent topic, I hopped over to the Department of Management at LSE to look up my new friend Keith, who is a retired operations researcher for British Airways. And of course the use of algorithms in controlling business was pioneered in the airline industry, because getting people and planes together to move between geographical locations on time is a problem that has largely been managed by algorithms. So Keith, who worked for British Airways for his entire career, seemed to me the perfect person to pose the question, “What is an algorithm, and what is the scope of things I can expect to encounter at this conference I’m going to in New York?” So this is what he said.
“That’s a very good question. When I started,” he said, “there was a fairly precise meaning of the term. An algorithm was a set of rules, which would generate an optimum answer to the problem that you’d posed it. It was a statement of rules that gave you the best possible answer in a finite amount of time.” And he emphasizes “finite amount of time” because he’s talking about the days when he was still running punch cards to do the computation. “As time has gone on,” he continued, “the definition of algorithm has gotten weaker and weaker. The strong definition still applies, but it’s not what most people mean when they use the term.”
“So what about Google?” I asked him, sort of pressing him on.
And he says, “That’s not an algorithm. At least not the way that I mean it.”
So what can we make of this little fragment of empirical data? It seems to me that the two key components in Keith’s response, the two key things that define an algorithm for him, are that it provides not just any answer but an optimum answer, and that it does so in a reasonably finite period of time.
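To make Keith’s classic definition concrete, here is a toy illustration (my own sketch, not Keith’s example, with made-up numbers): an exhaustive search over a tiny routing problem, of the airline-scheduling family, terminates after a bounded number of steps and is provably guaranteed to return the optimum, which is exactly what a modern ranking heuristic does not promise.

```python
from itertools import permutations

def best_route(distances, cities):
    """An algorithm in Keith's strong sense: it enumerates every
    possible ordering of the cities, so it halts after a finite
    number of steps and is guaranteed to return the optimum answer,
    not merely a plausible or 'relevant' one."""
    best_order, best_cost = None, float("inf")
    for order in permutations(cities):
        # Total distance travelled along this ordering of cities.
        cost = sum(distances[a][b] for a, b in zip(order, order[1:]))
        if cost < best_cost:
            best_order, best_cost = order, cost
    return best_order, best_cost

# Hypothetical distance table for three cities.
distances = {
    "A": {"B": 2, "C": 9},
    "B": {"A": 2, "C": 1},
    "C": {"A": 9, "B": 1},
}
print(best_route(distances, ["A", "B", "C"]))  # → (('A', 'B', 'C'), 3)
```

The guarantee comes at a cost, of course: exhaustive enumeration is only feasible for small inputs, which is part of why the weaker, heuristic sense of “algorithm” took over.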
Here’s my take on what has happened: I think Keith is giving me a classic definition. And over the past fifty years there has been a permutation in how algorithms are made, what they do, and what they’re sold for. And if my intuition is correct, then we need to be very careful about how we frame the question raised by content management and financial information. Because it seems to me that to confront algorithms on their own terms, we may have to modify our preoccupation with the politics of knowledge and take up an interest in the politics of logistical engineering.
So this is my way of sort of raising a question of genealogy. What is the genealogy of these algorithms? From a business perspective, if you trace the making of algorithms for sale as commercial objects, then Google looks a lot less like a public library and it looks a lot more like UPS.
Part Three, very briefly.
How might thinking in terms of control over logistics help us to figure out what it might mean to govern algorithms? I hope you don’t feel like I’ve changed the topic too abruptly. But in case you have, let me just tie quickly back to Tarleton’s work.
Tarleton has made a very pointed observation about values of knowledge and objectivity and how these become resources for content mediation companies, even though engineering interventions are made on these commercial algorithms routinely and in a discretionary fashion by the corporations that control them. This is true of credit scoring as well. The empirical question then is, what does it mean to tamper with an algorithm? And does tampering with an algorithm, does changing the algorithm, change the way that the public is being constituted?
My response to Tarleton is to say, well, it seems to me that tampering means something very different within the logic of commercial engineering than it does within the logic of knowledge and epistemology. More specifically, from an engineering standpoint, optimization is what anchors the concept of objectivity. Optimization is the principle that allows you to test a system and then make a critique that it is less than optimal or it is biased in some way.
But what will it mean, and what can it mean, to do a critical analysis of algorithms that are commercially engineered systems in the absence of an optimization imperative? What Keith is suggesting to me is that today’s algorithms, the things that we call algorithms, don’t look anything like the ones that he calls algorithms, because they don’t face an optimization imperative.
So I don’t really know the answer to this question. But I think the ambiguity here, the tension between a pragmatics of proprietary engineering that permits constant adjustment of the technology on the one hand, and on the other hand claims that what these technologies do is manage the legacy of human knowledge, might help to explain what kind of politics and what kind of governance are at stake in the objects that Tarleton is studying.
Thank you very much.