We wanted to look at how these algorithmic decision-making systems and surveillance systems feed into this kind of targeting decision-making. And in particular, what we're going to talk about today is the role of the AI research community, and how that research ends up being used in the real world, with real-world consequences.
Positionality is the specific position or perspective that an individual takes, shaped by their past experiences, their knowledge, and their worldview. It's a unique but partial view of the world. And when we're designing machines, we're embedding positionality into those machines with all of the choices we're making about what counts and what doesn't count.
AI Blindspot is a discovery process for spotting unconscious biases and structural inequalities in AI systems.
I’m just going to say it, I would like to completely blow up employment classification as we know it. I do not think that defining full-time work as the place where you get benefits, and part-time work as the place where you have to fight to get a full-time job, is an appropriate way of addressing this labor market.
AI Policy Futures is a research effort to explore the relationship between science fiction around AI and the social imaginaries of AI, and what those imaginaries can teach us about real technology policy today. We seem to tell the same few stories about AI, and they're not very helpful.
This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool.
How people think about AI depends largely on how they know AI. And to the point, how most people know AI is through science fiction, which sort of raises the question: what stories are we telling ourselves about AI in science fiction?
We came up with the idea to write a short paper…trying to make some sense of those many narratives that we have around artificial intelligence and see if we could divide them up into different hopes and different fears.
When data scientists talk about bias, we talk about quantifiable bias that is the result of, let's say, incomplete or incorrect data. And data scientists love living in that world—it's very comfortable. Why? Because once it's quantified, if you can point out the error, you just fix the error. What this does not ask is: should you have built the facial recognition technology in the first place?
What I hope we can do in this panel is have a slightly more literary discussion, to try to answer why those were the stories we were telling, and what the point of telling those stories has been, even though they don't now necessarily align with the policy problems that we're having.