Thank you, Hannes. It’s great to be here. The creation of artificial life is a very popular motif in literature and science fiction. And it’s a story that never really has a happy ending. The creator, even though he might have the best intentions, ends up summoning spirits that he can’t control.
When we talk about technologies such as AI, and policy, one of the main problems is that technological advancement is fast, while policy and democracy are very, very slow processes. And that could potentially be a very big problem if we think that AI could be dangerous. Now, we heard before from Alex Lebrun that this is all science fiction, that this is not going to happen. But there are others who disagree.
The development of full artificial intelligence could spell the end of the human race.
Stephen Hawking, “Stephen Hawking warns artificial intelligence could end mankind”
Like this guy, Stephen Hawking, who says it could spell the end of the human race.
Nuclear power gave us access to the almost unlimited energy stored in an atom, but unfortunately, the first thing we did was create an atom bomb. […] AI is going to go the same way.
Stuart Russell, Science Friday, April 10, 2015, “The Future of Artificial Intelligence”
Or Stuart Russell, who compares it to nuclear technology: the original goal was a new energy source, but the first thing that was actually built was a bomb.
Now, this may all be science fiction, but if you have science fiction you also have an idea or a vision of how the future could be. And one of these ideas is usually this dystopia, as you can see here, the rise of the machines.
But there’s also a more utopian idea like in Star Trek, where technology can meet all material needs of mankind and therefore there’s no more poverty or greed or hunger, and no more war. And if we talk about technology, we always have to think about what kind of future we want to have.
Now I will talk about some of the issues and challenges that we face in this fourth industrial revolution.
There’s a study by Oxford University that predicts that 50% of all American jobs will be at risk in the next twenty years due to automation. In the past, automation was usually something that concerned mostly blue-collar jobs. But it is entirely possible that, with AI and other technologies, more highly-skilled jobs could be at risk.
Now, this could be a good thing. It could be an ally. It could help us and free us from boring tasks that we don’t really like to do. Or it could be mainly a good thing for employers, because they get a dream workforce that never complains, works all the time, doesn’t have to go to the bathroom, and never joins a union.
So, the question is who’s going to profit from all these developments? And if we ask this question, we have to take a look at who’s investing in AI research. You of course have universities, and you have public/private partnerships. But you also have the tech giants that invest a lot of money in these technologies. And you also have, of course, the military that invests.
Now, even if artificial intelligence won’t be as intelligent as in the science fiction movies, even if it’s just a little self-learning and self-improving, if we imagine an Internet of Things where we are surrounded by self-improving machines, we will have to deal with the question of who is liable and who is responsible if something happens. Because if you have a self-driving car and there’s an accident, who’s responsible for this? Is it the owner, the way the owner of a pet is responsible? Or is it the manufacturer? Or maybe it’s the machine itself.
When we talk about artificial intelligence as not just programming but something that is learning and is developing, this might be something we would have to think about. That these machines should learn some rules, some norms, some values, as well. I’m sure you all know Isaac Asimov and his Robot Laws. So they should have some sort of rules that they shouldn’t harm people or shouldn’t harm themselves.
To take another example from science fiction, you have Commander Data from Star Trek, who has an ethical subroutine that worked very well for him. He was probably more human and more moral than all his colleagues. But then again, he also had an evil twin brother.
If we talk about all these challenges, we should also talk about possible solutions. What can be done? What should be done? Is there anything to be done at all? The first idea would be to regulate, maybe create a regulatory agency. Maybe there’s a need for a law, or maybe just a code of conduct. Maybe we should just ban things that we don’t want, like the Geneva Protocol that bans chemical and biological warfare. We could ban artificial intelligence in warfare.
Maybe it’s just that research should always consider ethical implications, like in the life sciences. In medicine and genetics, that’s always a part of it. You have the Hippocratic Oath, you have ethics boards. So maybe that should be a part of research on artificial intelligence and artificial life, as well.
As I mentioned before, who’s going to profit? So, there is probably a need to have research that is not for profit. One initiative is OpenAI, which Elon Musk and others are funding, but that could also be conventional university research, including research on the implications for society.
Now, this is a highly, highly controversial question that we in Switzerland will vote on in June: an unconditional basic income for all. This may not sound very realistic, but if these predictions that 50% of all jobs will be lost come true, then maybe we have to think about what to do with all the people who don’t have jobs anymore, and maybe an unconditional income could be a possible solution.
Or we could think about what sets us apart, what sets human intelligence apart from artificial intelligence. And that would probably be creativity, social skills, empathy, things like that. And maybe our education should focus more on these skills, so we can strengthen the skills that set us apart.
In the end, the question is just, if we have a powerful technology, who’s going to benefit from this technology? Is it going to be the wealthy? The big companies? Is it going to be the military? Or can this technology be used to solve the big problems we have, like climate change? Or diseases, or poverty? And I think we should make sure that AI technology, that this powerful technology, should benefit everyone and not just the few. And that is possible, but we have to start the conversation now, and we have to start discussing possible solutions.
Artificial Intelligence, Technology without Alternative?, at the Lift Conference 2016 site.
This presentation, with complete slides, is available at Klewel.