We wanted to look at how surveillance, how these algorithmic decision-making systems and surveillance systems, feed into this kind of targeting decision-making. And in particular, what we're going to talk about today is the role of the AI research community, and how that research ends up being used in the real world, with real-world consequences.
Positionality is the specific position or perspective that an individual takes, shaped by their past experiences, their knowledge, and their worldview. It's a unique but partial view of the world. And when we're designing machines, we're embedding positionality into those machines through all of the choices we make about what counts and what doesn't count.
BJ Copeland states that a strong AI machine would: one, be built in the form of a man; two, have the same sensory perception as a human; and three, go through the same education and learning processes as a human child. With these three attributes, mirroring human development, the mind of the machine would be born as a child and would eventually mature into an adult.
The big concerns that I have about artificial intelligence are really not about the Singularity, which, frankly, computer scientists say, if it's possible at all, is hundreds of years away. I'm actually much more interested in the effects of AI that we are seeing now.
I’m interested in data and discrimination, in the things that have come to make us uniquely who we are, how we look, where we are from, our personal and demographic identities, what languages we speak. These things are effectively incomprehensible to machines. What is generally celebrated as human diversity and experience is transformed by machine reading into something absurd, something that marks us as different.
One of the most important insights that I've gotten from working with biologists and ecologists is that today we don't actually know, on a scientific basis, how well different conservation interventions work. And that's because we just don't have a lot of data.
The question is: what are we doing in industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and that's the good news I wanted to focus on in my talk today.
Computers can tell stories, but they're always stories that humans have input into a computer, which are then just being regurgitated. Computers don't make stories up on their own. They don't really understand the stories that we tell. They're not aware of the cultural importance of stories. They can't watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.
The machine learning systems we have today have become so powerful, and they are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to deciding what you see in your Facebook news feed. They have a lot of societal impacts. But they're very difficult to audit.