[This is a response to Lucas Introna’s presentation of his draft paper “Algorithms, Performativity and Governability.”]
Matthew Jones: So I want to begin with a great failed data mining project from around 2001. It was a data mining project that would “apply automation to team processes so more information can be exploited, more hypotheses created and examined, more models built and populated with evidence, and overall more crises dealt with automatically.”
And this project—and I think it’s very interesting for our point—rested fundamentally on a critique of human reason, of human individual reason. It said that what it wished to fund were, A: cognitive tools that allow humans and machines to think together in real time about complicated problems; B: means to overcome the biases and limitations of the human cognitive system; C: cognitive amplifiers to help teams rapidly and fully comprehend complicated and uncertain situations.
This was a massive data mining effort, produced in the wake of what were widely seen as failures of artificial intelligence and machine learning projects that were not grounded in such an understanding of the limitations of human ability. It went under the name Total Information Awareness. Some of you may remember it; it was a massive counterterrorism effort.
And what interests me about this, and the reason I mention it, is that the inscrutability of algorithms, which Lucas and others today have done such a wonderful job of discussing—not just the fact itself but the ethical implications that stem from that fact—is in fact central to the entire effort to produce a whole new series of algorithms, algorithms whose premise is the limitation of human ability and whose promise is to get computers to help us overcome it.
So, what I’m interested in is that we, and the many people interested in discussing the possibility of governing algorithms, are simultaneously operating in a condition of inscrutability with respect to the very thing we want to know. Haunting much of our discussion of algorithms, I think, is the worry that algorithms will come to govern us, and not we them.
And this isn’t the simple version, where there’s some grand Matrix-like machine making us think something, or, for the philosophers, a Cartesian demon, but rather a dispersion of algorithms that is even harder to pin down. To use an algorithm well, autonomously, is to lay down the law of its use. It is to know it in some fundamental way.
Now, Lucas challenges this analysis, I think, in at least three ways. First, there’s no simply knowing an algorithm. Second, there’s no pre-given human personhood or subject that serves as the origin from which we judge algorithms; we need to look at people differently, at the processes through which they become. And thirdly, there’s a really grave danger when we’re doing either the sort of highfalutin academic work of looking at the effects of algorithms, or thinking about the concrete ethical and political, or indeed coding, challenges of examining them.
So, to begin looking at this, I want to start with something that’s an old chestnut in this community. I mostly spend my time with boring historians, so to them it doesn’t seem so unexciting. In the very first PageRank paper there’s a remarkable thing, which is that it was always about personalized search. Personalization was always going to be there. Brin and Page wrote, “Personalized page ranks may have a number of applications…” Indeed. “…including personal search engines. These search engines could save users a great deal of trouble by efficiently guessing a large part of their interests given simple input such as their bookmarks or home page.”
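In the PageRank paper, the ranking is defined by a “random surfer” who mostly follows links but periodically restarts according to a distribution over pages; personalization amounts to concentrating that restart distribution on a user’s bookmarks or home page rather than spreading it uniformly. Here is a minimal sketch of that idea, with illustrative names of my own rather than anything from Brin and Page:

```python
import numpy as np

def personalized_pagerank(adj, teleport, alpha=0.85, iters=100):
    """Power iteration for PageRank with a non-uniform restart distribution.

    adj      -- adjacency matrix, adj[i, j] = 1 if page i links to page j
    teleport -- where the random surfer jumps on a restart; concentrating
                its mass on a user's bookmarks or home page is what makes
                the resulting ranking "personalized"
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Build a column-stochastic transition matrix; a page with no
    # out-links (a dangling page) falls back to the teleport vector.
    M = np.zeros((n, n))
    for i in range(n):
        M[:, i] = adj[i] / out_degree[i] if out_degree[i] > 0 else teleport
    r = teleport.copy()
    for _ in range(iters):
        r = alpha * (M @ r) + (1 - alpha) * teleport
    return r

# Toy web of four pages; the user's lone "bookmark" is page 0, so all
# restart probability concentrates there and pages near it rank higher.
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0]], dtype=float)
bookmarks = np.array([1.0, 0.0, 0.0, 0.0])
print(personalized_pagerank(adj, bookmarks))
```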
Now, from this simple beginning something more searching of course arose. Fundamental to the defenses offered by those who data mine users is the claim that the cognitive limitations of human beings—our inability to think well—require us both to data mine in order to learn things and, just as centrally, to be data mined. In this vision, data mining helps us to become free, because the space of potential things we might go and learn about is simply too large, even if it were all accessible. Being data mined, in this argument, allows us to move from a merely formal declaration that we are free to go and learn anything towards an actualization of that freedom, because we are being helped: something that is judging us is helping us to overcome our own cognitive limitations.
Now this kind of argument is complemented by the sorts of arguments that the massive data brokers make. And there’s a remarkable set of claims here. (How many minutes? Three. Okay, I’ll be quick.)
So the data brokers quite remarkably make a similar argument. One of them says, “for many populations for whom online services are made free, information truly is a direct conduit to Liberty.” So there’s a double enabling of freedom: one that allows us to become the people we want to be, to actualize ourselves, to become who we really are. And what’s interesting is that Lucas is giving us an account of becoming, drawing on performativity theory, in which there isn’t an essence. But many of the people who are interested in governing us are profoundly interested in providing services that are about becoming who we really are. And I think that shared yet divergent ontology is well worth thinking about.
Okay. I want to end on this question of governing without knowledge. We keep coming back to it when we talk about transparency. We’ve talked about the problems of transparency, both epistemological ones (we can’t really know these sorts of things) and practical ones, like trade secrets. We cannot govern through knowledge, properly speaking. Even setting aside the fact that many algorithms are trade secrets, Lucas and others have reminded us that nearly all of them would not be surveyable by human beings even if we had access to their source code. We have to begin whatever process we undertake from this fundamental lack of knowledge. We need to start from the same epistemological place that many of the producers of algorithms do.
And so I think, curiously enough, at the very moment that we’re critical—as I think we rightly are—of proxies like Turnitin, we’re in desperate need of proxies for thinking about how to govern the kinds of algorithms that worry us. We cannot know them. Even if someone handed them to us, we couldn’t know them. And so I think that’s one of the things we really need to do: we really need to produce proxies. Now I suspect, and here I’ll end, that this is of course going to lead to gaming of those systems. But I wonder whether mutually assured gaming isn’t one of the best things we can do if we’re interested in governing. We and those who are interested in governing us actually have a shared interest in both sides gaming the system. Okay, I’ll end there.