Of all the different issues we face, three pose existential challenges to our species: nuclear war, ecological collapse, and technological disruption. We should focus on them.
When you make a decision to opt for an automated process, to some extent you're already compromising transparency by doing so. Or you could say it the other way around: if you opt for extremely strict transparency regulation, you're making a compromise in terms of automation.
More than a discussion of what's been said so far, this is a kind of research proposal for what I would like to see happening at the intersection of CS and this audience.
The study of search, be it by people like David Stark in sociology, or by economists or others, I tend to see in the tradition of a really rich socio-theoretical literature on the sociology of knowledge. And as a lawyer, I tend to complement that by thinking, if there are problems, maybe we can look to the history of communications law.
I think the question I'm trying to formulate is this: in a world of increasing optimization, the algorithms will be increasingly accurate, but their application could lead to discrimination. How do we stop that?
I consider myself to be an algorithm auditor. So what does that mean? Well, I'm inherently a suspicious person. When I start interacting with a new service, or a new app, and it appears to be doing something dynamic, I immediately begin to question what is going on inside the black box, right? What is powering these dynamics? And ultimately, what is the impact of this?
All they have to do is write to journalists and ask questions. They ask a journalist something like, "What's going on with this thing?" And journalists, under pressure to find stories to report, go looking around. They immediately search for it in Google. And that becomes the tool of exploitation.
I decided I wanted to do some accountability studies about algorithms in our lives. And it’s hard to study the newsfeed in a quantitative way, and I also wanted something with higher stakes. So I started with an algorithm that is used in the criminal justice system to predict whether a person is likely to commit a future crime.
The big concerns that I have about artificial intelligence are really not about the Singularity, which, frankly, computer scientists say is hundreds of years away, if it's possible at all. I'm actually much more interested in the effects of AI that we are seeing now.