Iyad Rahwan: I think that the development of AI poses immediate challenges and long-term challenges. Some of the long-term challenges are very hypothetical; we don't really know if they will ever materialize in this way. But in the short term I think AI poses some regulatory challenges for society. It poses ethical challenges. And there are also challenges when it comes to the marketplace, in particular the labor market.

So I like to think about the example of driverless cars, not because I'm only interested in that problem but because I think it exemplifies many of the questions that we will face in many applications of AI in the future. We recently ran very large surveys of people's preferences over what a self-driving car should do if faced with a difficult ethical question. And the question is what values, what principles, we want to embed in those cars.

And what we found, interestingly, is that there is broad cultural variation in the values that people consider important. And so in some cultures people seem to think the car has a bigger duty towards its owner, whereas in other cultures people seem to think that the car has a duty to society, to minimizing harm in total.

We're still analyzing the data and we don't have conclusive findings yet, but I think it's very interesting that as soon as we began probing into these sorts of questions we very quickly encountered an important sort of anthropological dimension here, a cross-cultural dimension.

Traditionally, the way we think about these problems is obviously shaped by our own training and our own way of looking at the world. So an engineer, when faced with the ethical challenge of what the car should do, or how to make sure the car doesn't misbehave, sees it as an engineering problem. But I think that can only take you so far.

On the other hand, you have people from the humanities who are aware of the history of law and regulation, and who have a very good eye for identifying potential misuse and abuse in systems. And they think about regulatory measures to mitigate systems basically going out of control.

And the problem to me has been that these two groups have not been talking to each other. Engineers typically ignore these issues because they think that an engineering solution will fix the problem. On the other hand, people coming from the humanities typically don't have the means to implement those ideas in an operational way. And this is why I think it's important to bridge this gap by bringing both of those perspectives together.
