I am profoundly envious of people who get to write about settled domains, settled states of affairs in human events. I was dealing with a set of technologies that have either recently emerged or are still in the process of emerging. And so it was a continual Red Queen’s race to keep up with these things as they announced themselves to us, to try and wrap my head around them, to understand what they were proposing and what their effects were when deployed in the world.
Computers can tell stories, but they’re always stories that humans have put into a computer, which are then just regurgitated. Computers don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And this seems like a huge gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.
Victor’s sin wasn’t in being too ambitious, nor necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.
I think one of the things I want to say from the start is it’s not like AI is going to appear. It’s actually out there, in some instances in ways that we never even notice.
Machine learning systems that we have today have become so powerful and are being introduced into everything from self‐driving cars, to predictive policing, to assisting judges, to producing your Facebook news feed, deciding what you ought to see. And they have a lot of societal impacts. But they’re very difficult to audit.
Some of the long‐term challenges are very hypothetical—we don’t really know if they will ever materialize in this way. But in the short term I think AI poses some regulatory challenges for society.
We’ve been building autonomous vehicles for about twenty‐five years, and now that the technology has been adopted much more broadly and is on the brink of being deployed, the faculty who’ve been looking at it in earnest are now really interested in questions like: a car suddenly recognizes an emergency. An animal has just jumped out in front of it. There’s going to be a crash one second from now. The human nervous system can’t react that fast. What should the car do?
The idea of putting a robot simulator inside a robot, well, it’s not a new idea, but it’s tricky, and very few people have pulled it off. In fact, it takes a bit of getting your head around. The robot needs to have, inside itself, a simulation of itself, its environment, and the others in its environment. And it needs to run in real time as well.
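The architecture being described, a robot that consults an internal copy of itself and its world before acting, can be sketched in a few lines. This is a minimal, hypothetical illustration, not any specific research system: the 1-D world, the class names, and the collision rule are all assumptions made for the example.

```python
# Hypothetical sketch: a robot that "imagines" each candidate move in an
# internal copy of its own world model, and only acts on moves that are
# safe in the simulation.

import copy

class WorldModel:
    """Crude 1-D world: the robot sits at `pos`, an obstacle at `obstacle`."""
    def __init__(self, pos, obstacle):
        self.pos = pos
        self.obstacle = obstacle

    def step(self, move):
        self.pos += move

    def collided(self):
        return self.pos == self.obstacle

class SelfSimulatingRobot:
    def __init__(self, pos, obstacle):
        self.model = WorldModel(pos, obstacle)

    def choose_move(self, candidates):
        """Run each candidate forward in a simulated copy; keep safe ones."""
        safe = []
        for move in candidates:
            imagined = copy.deepcopy(self.model)  # the robot inside the robot
            imagined.step(move)
            if not imagined.collided():
                safe.append(move)
        return safe[0] if safe else 0  # stay put if everything collides

    def act(self, candidates):
        move = self.choose_move(candidates)
        self.model.step(move)  # assume the internal model tracks reality
        return move

robot = SelfSimulatingRobot(pos=0, obstacle=1)
print(robot.act([1, -1]))  # → -1: moving right would hit the obstacle
```

The real difficulty the speaker points to is hidden in `copy.deepcopy` and `step`: a physical robot must simulate rich, uncertain dynamics, and do it fast enough to act on the result in real time.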
Imagine your privacy assistant is a computer program that’s running on your smartphone or your smartwatch. Your privacy assistant listens for privacy policies that are being broadcast over a digital stream. We are building standard formats for these privacy policies so that all sensors will speak the same language that your personal privacy assistant will be able to understand.
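The scheme described above, sensors broadcasting machine-readable policies in one standard format that an assistant evaluates against your preferences, might look like the following. This is purely an illustrative sketch: the JSON format, field names, and preference keys are assumptions for the example, not the project's actual specification.

```python
# Hypothetical sketch of a personal privacy assistant: each nearby sensor
# broadcasts its policy in a shared machine-readable format (JSON here,
# an assumption), and the assistant checks it against user preferences.

import json

USER_PREFERENCES = {
    "allow_video": False,        # user has opted out of video capture
    "max_retention_days": 30,    # longest acceptable data retention
}

def evaluate_policy(broadcast: str) -> str:
    """Return 'ok' or a warning for one broadcast policy message."""
    policy = json.loads(broadcast)
    if policy["data_type"] == "video" and not USER_PREFERENCES["allow_video"]:
        return f"warn: {policy['sensor']} records video"
    if policy["retention_days"] > USER_PREFERENCES["max_retention_days"]:
        return f"warn: {policy['sensor']} keeps data {policy['retention_days']} days"
    return "ok"

camera = json.dumps(
    {"sensor": "lobby-camera", "data_type": "video", "retention_days": 7})
mic = json.dumps(
    {"sensor": "kiosk-mic", "data_type": "audio", "retention_days": 14})

print(evaluate_policy(camera))  # → warn: lobby-camera records video
print(evaluate_policy(mic))     # → ok
```

The point of standardizing the format is exactly what the code relies on: every sensor emits the same fields, so one small program on your phone can evaluate all of them without bespoke integrations.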
We have to be aware that when you create magic or occult things, when they go wrong they become horror. Because we create technologies to soothe our cultural and social anxieties, in a way. We create these things because we’re worried about security, we’re worried about climate change, we’re worried about the threat of terrorism. Whatever it is. And these devices provide a kind of stopgap, helping us feel safe or protected or whatever.