Thank you, Hannes. It’s great to be here. The creation of artificial life is a very popular motif in literature and science fiction. And it’s a story that never really has a happy ending. The creator, even though he might have the best intentions, ends up summoning spirits that he can’t control.

When we talk about technologies such as AI, and policy, one of the main problems is that technological advancement is fast, while policy and democracy are very, very slow processes. And that could be a very big problem if we think that AI could be potentially dangerous. Now, we heard before from Alex Lebrun that this is all science fiction, that this is not going to happen. But there are others who disagree.

The development of full artificial intelligence could spell the end of the human race.
Stephen Hawking, “Stephen Hawking warns artificial intelligence could end mankind”

Like this guy, Stephen Hawking, who says it could spell the end of the human race.

Nuclear power gave us access to the almost unlimited energy stored in an atom, but unfortunately, the first thing we did was create an atom bomb. […] AI is going to go the same way.
Stuart Russell, Science Friday, April 10, 2015, “The Future of Artificial Intelligence”

Or Stuart Russell, who compares it to nuclear technology, where the original idea was to harness an energy source, but the first thing actually made was a bomb.

Now, this may all be science fiction, but if you have science fiction you also have an idea or a vision of how the future could be. And one of these ideas is usually this dystopia, as you can see here, the rise of the machines.

But there’s also a more utopian idea like in Star Trek, where technology can meet all material needs of mankind and therefore there’s no more poverty or greed or hunger, and no more war. And if we talk about technology, we always have to think about what kind of future we want to have.

Now I will talk about some of the issues and challenges that we face in this fourth industrial revolution.

There’s a study by Oxford University that predicts that 50% of all American jobs will be at risk in the next twenty years due to automation. And in the past, automation was usually something that concerned mostly blue-collar jobs. But it is entirely possible that, with AI and other technologies, more highly skilled jobs could be at risk.

Now, this could be a good thing. It could be an ally. It could help us and free us from boring tasks that we don’t really like to do. Or it could be mainly a good thing for employers, because they have a dream workforce that never complains, works all the time, doesn’t have to go to the bathroom, and doesn’t join a union.

So, the question is: who’s going to profit from all these developments? And if we ask this question, we have to take a look at who’s investing in AI research. You of course have universities, and you have public/private partnerships. But you also have the tech giants that invest a lot of money in these technologies. And you also have, of course, the military that invests.

Now, even if artificial intelligence won’t be as intelligent as in the science fiction movies, even if it’s just a little self-learning and self-improving, if we imagine an Internet of Things where we are surrounded by self-improving machines, we will have to deal with the question of who is liable and who is responsible if something happens. Because if you have a self-driving car and there’s an accident, who’s responsible for this? Is it the owner, like with a pet, where the owner is responsible? Or is it the manufacturer? Or maybe it’s the machine itself.

When we talk about artificial intelligence as not just programming but something that is learning and developing, this might be something we have to think about: that these machines should learn some rules, some norms, some values, as well. I’m sure you all know Isaac Asimov and his Robot Laws. So they should have some sort of rules saying that they shouldn’t harm people or shouldn’t harm themselves.
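To make that idea a bit more concrete, here is a minimal, purely illustrative Python sketch of what hard rules like Asimov’s First and Third Laws could look like as filters on a machine’s proposed actions. Everything in it, the Action type, the filter functions, the example actions, is hypothetical and not drawn from any real system:

```python
# Toy sketch of Asimov-style rule filters on an agent's candidate actions.
# All names and flags here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool  # would executing this action injure a person?
    harms_self: bool   # would it damage the machine itself?

def first_law_filter(proposed):
    """First Law: drop any action that would harm a human."""
    return [a for a in proposed if not a.harms_human]

def third_law_filter(proposed):
    """Third Law: prefer actions that don't harm the machine,
    but self-preservation yields if no other option remains."""
    safe = [a for a in proposed if not a.harms_self]
    return safe or proposed

candidates = [
    Action("swerve into pedestrian", harms_human=True, harms_self=False),
    Action("brake hard", harms_human=False, harms_self=True),
    Action("continue slowly", harms_human=False, harms_self=False),
]

allowed = third_law_filter(first_law_filter(candidates))
print([a.name for a in allowed])  # -> ['continue slowly']
```

Of course, the hard part in practice is knowing which actions actually harm people; there is no harms_human flag in the real world. That is exactly why this is a question of machines learning norms and values rather than just being programmed with them.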

If you take another picture from science fiction, you have Commander Data from Star Trek, who has an ethical subroutine [that] worked very well for him. He was probably more human and more moral than all his colleagues. But then again, he also had an evil twin brother.

If we talk about all these challenges, we should also talk about possible solutions. What can be done? What should be done? Is there anything to be done at all? The first idea would be to regulate, maybe create a regulatory agency. Maybe there’s a need for a law, or maybe just a code of conduct. Maybe we should just ban things that we don’t want, like the Geneva Protocol that bans chemical and biological warfare. We could ban artificial intelligence in warfare.

Maybe it’s just that research should always consider ethical implications, like in the life sciences. In medicine and genetics, that’s always a part of it. You have the Hippocratic Oath, you have ethics boards. So maybe that should be a part of research on artificial intelligence and artificial life, as well.

As I mentioned before, who’s going to profit? So, there is probably a need to have research that is not for profit. One initiative is OpenAI, which Elon Musk and others are funding, but that could also be conventional university research, including research on the implications for society.

Now, this is a highly, highly controversial question that we in Switzerland will vote on in June: an unconditional basic income for all. This sounds not very realistic, but if these predictions that 50% of all jobs will be lost come true, then maybe we have to think about what to do with all these people who don’t have jobs anymore, and maybe an unconditional basic income could be a possible solution.

Or we could think about what sets us apart, what sets human intelligence apart from artificial intelligence. That would probably be creativity, social skills, empathy, things like that. And maybe our education should focus more on these skills, so we can strengthen the skills that set us apart.

In the end, the question is just: if we have a powerful technology, who’s going to benefit from it? Is it going to be the wealthy? The big companies? Is it going to be the military? Or can this technology be used to solve the big problems we have, like climate change? Or diseases, or poverty? And I think we should make sure that AI, that this powerful technology, benefits everyone and not just the few. And that is possible, but we have to start the conversation now, and we have to start discussing possible solutions.

Thank you.

Further Reference

“Artificial Intelligence, Technology without Alternative?”, at the Lift Conference 2016 site.

This presentation, with complete slides, is available at Klewel.

