I think one of the things I want to say from the start is that it’s not as if AI is going to appear someday. It’s actually already out there, in some instances in ways that we never even notice. So for example, checking credit card usage for fraud, or predicting which patients are likely to come back into the emergency room, and therefore intervening before they go home rather than letting them have to come back. There are some very clever uses of artificial intelligence in education. But increasingly it appears in ways in which we do notice it, for example the various personal assistants on our phones. So it’s out there making a difference, in most cases in situations where it’s not replacing people but really working with people.
So I stress that distinction between replacing people and complementing people because so much of the science fiction that’s out there, and so much that’s in the press, presumes that the goal would be to replace people. But there’s already a perfectly wonderful way to create human intelligence, you know: it takes a man, a woman, certain acts, and you’re done. And human intelligence is limited in certain ways, so why make replicating it the aim? I mean, it has fascinated people for centuries, probably tied back to religion and to people wondering, or being concerned, that humans would try to imitate God, as it were. This is the story of the golem, it’s the story of Frankenstein, it’s the story of Ex Machina.
But that’s not the best way to think about developing artificial intelligence methods, nor about embodying them in computer systems. Rather, it would be better to complement people, as many computer systems do now. So the reason I make that distinction and urge it is that regardless of which of the two aims you pick, unless we just send the systems to Mars by themselves, they’re going to exist in a world that’s populated with human beings.
You can see this playing out, actually, in something that’s been in the press a lot recently, which is autonomous and semi‐autonomous vehicles. With fully autonomous vehicles, the idea is that they just drive; no person is involved in the driving at all. Semi‐autonomous vehicles do some of the driving but then hand control back and forth with people. In both cases they’re interacting with people, so until we build roads on which the only vehicles are fully autonomous, the vehicles are going to have to interact with people. And even if all the vehicles were fully autonomous, we would still have to get rid of all of the pedestrians and all of the bicycles and everything.
That’s the issue with fully autonomous vehicles: they will still have to interact with people. Semi‐autonomous vehicles have to take into account people’s cognitive capacities in order to handle the so‐called handoff between people and computer systems appropriately.
So, except in a few instances there’s no taking people out of the picture. I think it’s much more valuable and societally useful to think from the very beginning of designing in ways to interact appropriately with people, rather than building something separate from people and then presuming people will adjust to it.
What’s crucial at this point is to bring together expertise from these different fields, and that expertise has to be brought in before the systems are designed and released to the world. And now is the time to think about this: to bring together people who are experts in artificial intelligence with people who understand ethics deeply, with psychologists who understand human cognition, with social scientists who understand social organizations, so that we can, as the rubric now is, “make AI for social good.” And that rubric also covers building systems that help low‐resource communities, building systems that protect the environment, and building systems that contribute to education and healthcare.
I think we need to teach people both about ethics and about these systems, and then make that understanding part of the process of designing the systems. And here I want to say I’m not talking about professional ethics. I’m talking about really understanding the tradeoffs between consequentialist ideas and deontological ideas, grappling with virtue ethics, thinking about justice, thinking about who you’re serving: a really deep sense of ethics. It’s a years‐long process of having people from these different fields come together, explain their work and their perspectives to each other in ways that are accessible, treat those different perspectives with respect, and develop a common vocabulary and a way of approaching things together. That can’t be short‐circuited. It’s really a years‐long process.