I think that the development of AI poses both immediate challenges and long-term challenges. Some of the long-term challenges are very hypothetical—we don’t really know if they will ever materialize in this way. But in the short term I think AI poses some regulatory challenges for society. It poses ethical challenges. And there are also challenges when it comes to the marketplace, in particular the labor market.
So I like to think about the example of driverless cars, not because I’m interested only in that problem, but because I think it exemplifies many of the questions we will face across many applications of AI in the future. We recently ran very large surveys of people’s preferences over what a self-driving car should do when faced with a difficult ethical question. And the question is: what values, what principles, do we want to embed in those cars?
And what we found, interestingly, is that there is broad cultural variation in the values that people consider important. In some cultures people seem to think the car has a greater duty toward its owner, whereas in other cultures people seem to think the car has a duty to society—to minimizing total harm.
We’re still analyzing the data and we don’t have conclusive findings yet, but I think it’s very interesting that as soon as we began probing into these sorts of questions we very quickly encountered an important sort of anthropological dimension here, a cross-cultural dimension.
Traditionally, the way we think about these problems is shaped by our own training and our own way of looking at the world. So when engineers are faced with the ethical challenge of what the car should do, or how to make sure the car doesn’t misbehave, they see it as an engineering problem. But I think that can only take you so far.
On the other hand, you have people from the humanities who are aware of the history of law and regulation, and who have a very good eye for identifying potential misuse and abuse in systems. And they think about regulatory measures to keep systems from going out of control.
And the problem, to me, has been that these two groups have not been talking to each other. Engineers typically ignore these issues because they think an engineering solution will fix the problem. People coming from the humanities, on the other hand, typically don’t have the means to implement their ideas in an operational way. And this is why I think it’s important to bridge this gap by bringing both of those perspectives together.