When you make a decision to opt for an automated process, you are, to some extent, already compromising transparency by doing so. Or you could put it the other way around: it's possible to argue that if you opt for extremely strict transparency regulation, you're making a compromise in terms of automation.
More than a discussion of what's been said so far, this is a kind of research proposal for what I would like to see happening at the intersection of computer science and this audience.
The study of search, whether by people like David Stark in sociology, or by economists or others, is something I tend to see in the tradition of a really rich socio-theoretical literature on the sociology of knowledge. And as a lawyer, I tend to complement that by thinking that if there are problems, maybe we can look to the history of communications law.
How would we begin to look at the production of the algorithmic? Not the production of algorithms, but the production of the algorithmic as a justifiable, legitimate mechanism for knowledge production. Where is that legitimacy being established, and how do we examine it?
It seems to me that to confront algorithms on their own terms, we may have to modify our preoccupation with the politics of knowledge and take up an interest in the politics of logistical engineering.
This is why it matters whether algorithms can be agonistic, given their roles in governance. When the logic of algorithms is understood as autocratic, we're going to feel powerless and panicked because we can't possibly intervene. If we assume that they're deliberatively democratic, we'll assume an Internet of equal agents, rational debate, and emerging consensus positions, which probably doesn't sound like the Internet that many of us actually recognize.