I think that artificial intelligence absolutely poses a challenge for society and humanity. But we mean various things when we say artificial intelligence. There's artificial general intelligence, which is the idea that there's a singularity coming, that something will become so smart that we won't be able to control it, and that it might even decide human beings are kind of a bad idea and get rid of us. I think that's a real threat, but in my view it's not imminent. I think we have a little bit of time. What I'm more concerned about personally is machine learning.
The machine learning systems we have today have become so powerful, and they are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to producing your Facebook news feed and deciding what you ought to see. They have a lot of societal impact. But they're very difficult to audit. They're not like normal software programs, where you can just read the code and understand what they do. In fact, even the developers can't predict exactly what the outcome is going to be unless they test them.
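To make that auditing point concrete, here is a minimal Python sketch (assuming scikit-learn is installed; the synthetic dataset and the small neural network are illustrative stand-ins I've chosen, not any system mentioned here). The source code is short and fully readable, yet nothing in it states the decision rule; that rule lives in the learned weights, so the only way to characterize the system's behavior is to probe it empirically:

```python
# A sketch of why trained models resist code review: the program text is
# readable, but the decision logic emerges from fitted weights.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for, say, a risk-scoring dataset (hypothetical).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Nothing in this source code specifies the decision rule; it is encoded
# in roughly a thousand floating-point weights learned from the data.
model = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Auditing means empirically testing behavior, not reading code.
print("held-out accuracy:", model.score(X_test, y_test))
print("learned weight count:", sum(w.size for w in model.coefs_))
```

Reading this file top to bottom tells you how the model was trained, but not what it will decide for any given input; that is the gap between conventional software review and auditing a learned system.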
So there are estimates that self-driving cars may reduce traffic accidents by 90%. There are diagnostic tasks where machines seem to fare much better than human beings at identifying diseases. And there is a possibility that decisions like parole or bail may very quickly be shown to be better judged by machines. These raise interesting questions, because these are lives at risk, and lives that could be saved.
As we start to introduce these systems, our regulatory frameworks and the way we think about how society will work under them are going to change, whether we're talking about jobs, about the law, or about technical architecture.
At the Media Lab, we use the word antidisciplinary, because we find that the traditional disciplines, in both business and academia, tend to reinforce specialization; the cliché is that you learn more and more about less and less. That's important when you're going deep. But when you have a technology like AI, whose impact cuts across all of these disciplines, you need to create connective tissue in between. I worry a little that the people designing and deploying these systems are computer scientists trying to solve the world's problems through computer science, and that the connective tissue between machine learning and computer science on one side, and disciplines like the social sciences, law, or even philosophy on the other, is missing. Those communities aren't really able to talk to each other, because the language is so different and there isn't much of a culture of interaction between them. I think the way we address this is to start creating much more interdisciplinary work.
As we were thinking about how we might tackle some of the missing pieces in where AI should go, I thought about it wearing various hats: my MacArthur Foundation hat, my Knight Foundation hat, my Media Lab hat, and just a citizen-of-the-world hat. And I realized that all of the pieces that needed to be at the table weren't in any single institution. You couldn't give all the money to the Media Lab, or to the Berkman Center, or to anybody else, and get all of the different voices we needed. And not just voices; everybody has a different framework. The way Harvard Law School thinks about the theory of change in working through problems is very different from the way the Media Lab would do it.
So I think the key thing, and you can see it in the diversity of the people funding this initiative as well as the people coordinating it, is that we're hoping to bring in diversity of geography, diversity of field alongside the technology itself, and also diversity in the fundamental theory of change: at what layer should we intervene? In the first couple of years we're going to do a lot of really interesting experiments, and hopefully by the end of this process we'll have a pretty good idea of several things we should set up, either as institutions or as funding opportunities.
And I think it's important to start having the conversation, and not just a conversation but the actual work around policy, thinking about how society should integrate and respond to these systems, before it's too late. One of the problems is that once you move past certain points, it's going to be difficult to roll back. So timing-wise, beginning last year and this year is really the key moment to bring others into this process, because until now the computer science was only just reaching the point where it was ready to be deployed. Right now is just right, or perhaps already a little late, to get started. So I think the timing is super important.