[This presentation was in response to Tarleton Gillespie’s “The Relevance of Algorithms”]
Good morning, everyone. It is a complete pleasure to be responding to Tarleton [Gillespie] today because I have found his work so generative over the last few years, and have enjoyed many conversations where we’ve grappled with these sorts of ideas that Tarleton has raised in his talk this morning.
And in fact, just listening to you today made me think about a whole new set of questions. In particular, the anxiety around the female body, which I think recurred three times in your talk: through abortion, through the Target pregnancy case, and of course through bikinis. I'm thinking, well, why that is such a focus of algorithmic anxiety is a whole other subject for a paper.
But in actual fact there’s something else I’d like to dig deeper into today. This is your claim that we need to pay attention to the way that algorithms may produce particular kinds of political effects. But what exactly do you mean by the “political” here?
So rather than walking away from the realm of theory, which Tarleton offered this morning, what I would like to do is actually take us right back there, but from the perspective of political theory. And I'd like to think about the logics of what Tarleton describes as these calculated publics. I'm going to do this by breaking it down into eight or so scenes (depending on how we go for time) about life in calculated publics.
Scene 1
A woman is sitting in a chair with a laptop on her knees, and she’s trying to buy some books for a conference that she’s about to go to called “Governing Algorithms.” When she tries to buy Tarleton’s book, Wired Shut, she finds that “customers who bought this item also bought” James Boyle’s The Public Domain, William Patry’s How to Fix Copyright, and Biella Coleman’s Coding Freedom.
So she starts imagining who this group of imagined shoppers might be. Are they interested in the same topics as she is? Should she buy a book about reforming copyright law, or should she buy a book which is an ethnographic account of the Debian community? I mean, they seem like fairly different topics.
So, who are these customers and what unites them in these particular tastes? I think we can imagine some of the answers, but we can't know for sure how Amazon has determined them. In fact, even senior Amazon developers may not be able to tell us exactly how these imagined communities of customers have been created, or how they've changed over time as millions of books have been purchased and millions of profiles have been updated.
Algorithms simply don't always behave in predictable ways, and this is why we have A/B testing: extensive, randomized testing to observe just how algorithms actually behave in the field with large datasets. So Tarleton argues that algorithms, and I quote,
“…not only structure our interactions with others as members of networked publics, but they also traffic in these calculated publics that they themselves produce.” Thus Amazon is both invoking and claiming to know a public with which we're invited to feel an affinity, even if it has nothing whatsoever to do with the kind of public that we were originally seeking out.
So the woman at the laptop types in a different author’s name, Evgeny Morozov. And she’s told that customers who bought this item also bought, amongst other things, Eric Schmidt’s The New Digital Age, and Kevin Kelly’s What Technology Wants. Are these books similar? Well… Have people like her bought Morozov and Gillespie’s books together? Not that we know. Instead we’re shown a calculated public. But we don’t know the membership, their concerns, whether they loved or hated these books. There’s simply a consensus. These books and people are frequently represented together.
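To make that consensus a little more concrete, here is a minimal sketch, in Python, of the kind of co-purchase counting that “customers who bought this item also bought” features are generally assumed to rest on. The orders and titles are hypothetical, and Amazon's actual system is, as I've said, far more elaborate and not public.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical purchase histories: each order is the set of books one customer bought.
orders = [
    {"Wired Shut", "The Public Domain", "Coding Freedom"},
    {"Wired Shut", "How to Fix Copyright"},
    {"To Save Everything, Click Here", "The New Digital Age", "What Technology Wants"},
    {"Wired Shut", "Coding Freedom"},
]

# Count how often every pair of books appears in the same order.
co_purchases = defaultdict(Counter)
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_purchases[a][b] += 1
        co_purchases[b][a] += 1

def also_bought(title, n=3):
    """Return the n titles most frequently bought alongside `title`."""
    return [other for other, _ in co_purchases[title].most_common(n)]

print(also_bought("Wired Shut"))
# ['Coding Freedom', 'The Public Domain', 'How to Fix Copyright']
```

The point is that the “public” we're shown here is nothing more than a ranked list of co-occurrence counts; everything about who those customers are, and why they bought what they bought, has been stripped away.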
Scene 2
Is talking about the political ramifications of algorithms enough? Or can we go a step further. McKenzie Wark argues that technology and the political are not separate things. “One is simply looking at the same system,” he writes, “through different lenses when one speaks of the political or the technical.”
Likewise, Alex Galloway notes that we should focus less on devices or platforms or algorithms as such, and more on the systems of power that they mobilize. So let's speak for a moment about algorithms as political theory, and vice versa.
Some thinkers treat algorithms as essentially autocratic systems: we have no input, they make the decisions, and we don't get to see the processes by which those decisions are made. Barbara Cassin, on the other hand, has described how algorithms like PageRank appear to have a more deliberative, democratic ethos, and I quote, “using graph theory to valorize pure heterogeneity, showing how quality is an emergent property of quantity.”
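To give a concrete sense of what Cassin might mean by quality emerging from quantity, here is a minimal sketch of PageRank-style power iteration over a toy link graph. The pages are entirely hypothetical, and this illustrates the general idea only, not Google's production ranking.

```python
# A toy link graph: each page lists the (hypothetical) pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until the scores settle."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# "Quality" here is nothing more than the settled pattern of who links to whom.
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))
```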
But what about alternative political frameworks to this autocracy versus deliberative democracy? What if, say, we started to think about agonistic pluralism? Which is to say, we start with the premise of ongoing struggle (agonism) between different groups and entities (pluralism), and recognize that complex, shifting negotiations are occurring between people, institutions, and algorithms all the time, and that they're acting in relation to each other.
Scene 3
Chantal Mouffe is being interviewed for a political magazine. She’s sitting on a very large, comfy chair. She’s asked, “How do you define democracy, if not as a consensus?” In response she describes the difference between the model of traditional democracy, and her notion of agonistic pluralism. And I quote,
I use the concept of agonistic pluralism to present a new way to think about democracy, which is different from the traditional liberal conception of democracy as a negotiation between interests. While they have many differences, Rawls and Habermas have in common the idea that the aim of a democratic society is the creation of a consensus, and that consensus is possible if people are only able to leave aside their passions and their particular interests and think like rational beings. However, while we desire an end to conflict, if we want people to be free we must always allow for the possibility that conflict may appear and to provide an arena where differences can be confronted.
Mouffe, C. (2000) The Democratic Paradox.
Scene 4
New York City. There’s a group chat happening in the Reddit office. The discussion is about what they call their “hot sorting algorithm,” and how posts in some areas of the site become front-page stories with relatively few upvotes.
At the same time, in southern Nebraska, a group of fifteen-year-old girls meet at a friend's house to discuss how they could influence, or game, Reddit's system. They aim to band people together to upvote a story of homophobic harassment of a boy at their school, in order to gain media attention and shame the perpetrators.
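It's worth pausing on the rules of this particular game. Reddit's ranking code was publicly available at the time, and a rough sketch of its “hot” function (reconstructed here from memory, so treat the constants as approximate) helps explain both scenes: because the vote score is log-scaled, the first ten upvotes count for about as much as the next ninety, and recency adds a steady bonus, so a small, coordinated group in a quiet corner of the site can push a post a surprisingly long way.

```python
from datetime import datetime, timedelta, timezone
from math import log10

def hot(ups, downs, posted_at):
    """Rough sketch of Reddit's open-sourced 'hot' ranking (constants approximate)."""
    score = ups - downs
    order = log10(max(abs(score), 1))             # log-scaled votes: 1 -> 10 counts like 10 -> 100
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = posted_at.timestamp() - 1134028003  # post age measured against a fixed epoch
    return round(sign * order + seconds / 45000, 7)

now = datetime.now(timezone.utc)
# A brand-new post with 40 upvotes can outrank a two-day-old post with 4,000.
print(hot(40, 0, now) > hot(4000, 0, now - timedelta(days=2)))  # True
```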
Scene 5
Can an algorithm be agonistic? Algorithms may be rule-based mechanisms, but they're also, we could argue, governing agents that are choosing between competing and sometimes conflicting data objects.
So if algorithms present us with a new knowledge logic, as Tarleton has convinced us, then it’s important to consider the contours of that logic. What are the histories and philosophies that have most strongly shaped them?
Certainly I think it's going to be difficult to describe any of these algorithms as agonistic. So much of the messy business of choosing between particular kinds of data points is essentially hidden from us, whether it's search results, which books are sold together, or which news stories are most relevant to us.
Much of the algorithmic work of picking winners in information contests is going to remain invisible. Yet these deliberations are crucial. This is the stuff of lowercase-g governance: what Maurizio Lazzarato describes as “the ensemble of techniques and procedures put into place to direct the conduct of men and to take account of the probabilities of their action and relations.”
Scene 6
If the politics of many of these algorithms is commonly located on a spectrum between autocracy and deliberative democracy, I think we could start to discuss the limitations of those approaches. In Mouffe's words, “when we accept that every consensus exists as a temporary result of a provisional hegemony, as a stabilization of power that always entails some form of exclusion, we can begin to envisage the nature of a democratic public sphere in a different way.”
And so I think we reach her strongest argument for why thinking about agonism is important. This is why a pluralist democracy, she writes, “needs to make room for dissent, and for the institutions through which it can be manifested. Its survival depends on collective identities forming around clearly differentiated positions, as well as on the possibility of choosing between real alternatives.” And I think that's a fairly key concept here.
So this is why it matters whether algorithms can be agonistic, given their roles in governance. When the logic of algorithms is understood as autocratic, we're going to feel powerless and panicked because we can't possibly intervene. If we assume that they're deliberatively democratic, we'll assume an Internet of equal agents, rational debate, and emerging consensus positions, which probably doesn't sound like the Internet that many of us actually recognize.
So instead, perhaps if we started to think about this idea of agonistic pluralism, we might see algorithms as choosing among counterposed perspectives within a field where both rationality and emotion are given. As an ethos, it assumes perpetual conflict and constant contestation. And it would ideally offer a path, I think, away from these disappointingly limited calls for “transparency” in algorithms, which are ultimately doomed to fail, given that companies like Facebook and Twitter are not going to give their algorithms away, for a whole host of competitive reasons, and also because they're afraid of users gaming the system.
Instead, I think recognizing the value of different perspectives and opposing interests involves an acceptance of what Howarth calls “the rules of the game,” and an understanding that algorithms are participants in wider institutional and capitalist logics.
Scene 7
Where else do we find agonism in the field of algorithms? Perhaps the problem here is actually the fetishizing of algorithms themselves, without widening the perspective to include the many ways in which algorithms are not stable and are always in relation to people. That is, they're in flux and they're embedded in hybrid systems.
For example, we can look to the offices and the spaces where developers are currently coming up with algorithms, and I think this is where Nick Seaver’s work is really useful, where he’s actually spending time with people who are designing music recommendation algorithms. We could also look at the spaces where people and algorithms are actually playing particular kinds of games. I’m thinking of Reddit here and the fact that Reddit makes much of its algorithmic process public.
And I think people actually like the way that they can see some of the rules of the game and can at least imagine how they might game them. It offers a kind of legitimacy, where I think some of these more closed, opaque systems like Facebook instead produce a kind of suspicion.
Or we could even look to the ways that people are currently reverse-engineering algorithms, where the troll and the hacker become key players in an agonistic system. So by using this wider optic, I think we can see that algorithms are always working in contested, human spaces.
Scene 8
The final word has to go to Tarleton.
In attempting to say something of substance about the way algorithms are shifting our public discourse, we must firmly resist putting the technology in the explanatory driver's seat. While recent sociological studies of the Internet have tried to undo all the simplistic technological determinism that plagued earlier work, that determinism still remains a fairly attractive analytic stance. Our analysis must not conceive of algorithms as abstract technical achievements, but must unpack the warm, human, and institutional choices that lie behind these cold mechanisms.
I take this as a useful reminder that we need to look beyond algorithms as kind of fetish objects, to consider the developers in their cubicle farms, the teenage hackers who are playing on Reddit, the Amazon book buyers, and the multitude of flesh and blood scenes where humans and algorithms engage.
Thanks.
Further Reference
The Governing Algorithms conference site with full schedule and downloadable discussion papers.
A special issue of the journal Science, Technology, & Human Values on Governing Algorithms was published in January 2016, including a final version of this presentation's paper.