Some of the long-term challenges are very hypothetical—we don’t really know if they will ever materialize in this way. But in the short term I think AI poses some regulatory challenges for society.
Back in 1980, working with the artificial intelligence guys, we had this idea we were going to make smart machines. But it needed to read good books, don’t you think?
The smartphone is the ultimate example of a universal computer: apps transform the phone into different devices. Unfortunately, the computational revolution has done little for the sustainability of our Earth. Yet sustainability problems are unique in scale and complexity, and they often involve significant computational challenges.
When I go talk about this, the thing that I tell people is that I’m not worried about algorithms taking over humanity, because they kind of suck at a lot of things, right? They’re really not that good at a lot of things they do. But there are things that they’re good at. And so the example that I like to give is Amazon recommender systems. You all run into this on Netflix or Amazon, where they recommend stuff to you. And those algorithms are actually very similar to a lot of the sophisticated artificial intelligence we see now. It’s the same underneath.
When we talk about technologies such as AI, and policy, one of the main problems is that technological advancement is fast, while policy and democracy are very slow processes. That mismatch could be a very big problem if we think that AI could be dangerous.
With Twitter bots and a lot of AI in pop science, it’s kind of like staying up late with your parents. Once you ask to be treated like a human being, you have to abide by a different set of rules. You have to be extra good. And the second you misbehave, you get sent to bed, because you didn’t play by the rules you agreed to be judged by.