I think one of the things I want to say from the start is it’s not like AI is going to appear. It’s actually out there, in some instances in ways that we never even notice. So for example checking credit card usage, or predicting which patients are likely to come back into the emergency room, and therefore keeping them in care rather than sending them home only to have them return. There are some very clever uses of artificial intelligence in education. But increasingly it’s out there in ways in which we do notice it, for example the various personal assistants on our phones. So it’s out there making a difference, in most cases in situations where it’s not replacing people but really working with people.
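One of those quiet applications, readmission prediction, is at bottom a classification problem. Here is a minimal sketch in Python, assuming scikit-learn; the features, data, and threshold are all synthetic and invented purely for illustration, not drawn from any real clinical system.

```python
# Toy sketch of one application mentioned above: predicting which patients
# are likely to return to the emergency room. All features and data here
# are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features: age, number of prior visits, length of stay (days).
X = rng.normal(loc=[60, 2, 3], scale=[15, 2, 2], size=(500, 3))
# Synthetic label: older patients with more prior visits readmit more often.
logits = 0.03 * (X[:, 0] - 60) + 0.5 * X[:, 1] - 1.0
y = (rng.random(500) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Flag high-risk patients before discharge rather than after readmission.
risk = model.predict_proba(X_test)[:, 1]
print("patients flagged as high risk:", (risk > 0.5).sum())
```

The point is the last step: the model’s output is a risk score a clinician acts on before the patient goes home, which is exactly the working-with-people pattern described above.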

So I stress that distinction between replacing people and complementing people because so much of the science fiction that’s out there, and so much that’s in the press, presumes that the goal would be to replace people. But there’s a perfectly wonderful way to replace human intelligence, you know. It takes a man, a woman, certain acts, and you’re done. And human intelligence is limited in certain ways, so why make that the aim? I mean, it has fascinated people for centuries, probably tied back to religion and people wondering, or being concerned, that people would try to imitate God, as it were. This is the story of the golem, it’s the story of Frankenstein, it’s the story of Ex Machina.

But that’s not the best way to think about developing artificial intelligence methods, nor about embodying them in computer systems. Rather, it would be better to complement people, as many computer systems do now. So the reason I make that distinction, and urge it, is that regardless of which of the two aims you pick, the systems are going to exist, unless we just send them to Mars by themselves, in a world that’s populated with human beings.

You can see this playing out, actually, in something that’s been in the press a lot recently, which is autonomous and semi-autonomous vehicles. With autonomous vehicles, the idea is they just drive; no person is involved in the driving at all. Semi-autonomous vehicles do some driving but then trade control back and forth with people. In both cases they’re interacting with people, so until we build roads on which the only vehicles are fully autonomous, the vehicles are going to have to interact with people. And even if all the vehicles were fully autonomous, we would have to get rid of all of the pedestrians and all of the bicycles and everything.

That’s the issue with fully autonomous vehicles: they will still have to interact with people. Semi-autonomous vehicles have to take into account people’s cognitive capacities in order to handle the so-called handoff between people and computer systems appropriately.
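To make that handoff concern concrete, here is a minimal sketch of control-transfer logic that budgets for human reaction time rather than assuming an instantaneous swap. Everything in it, the names, the eight-second takeover budget, the three responses, is hypothetical and not any real vehicle’s policy.

```python
# Toy model of the semi-autonomous "handoff" discussed above: the system must
# account for human reaction time instead of dropping control instantly.
# All names and thresholds are hypothetical.
from dataclasses import dataclass

HUMAN_TAKEOVER_SECONDS = 8.0  # assumed time a driver needs to regain awareness

@dataclass
class DrivingContext:
    seconds_until_automation_limit: float  # e.g., construction zone ahead
    driver_eyes_on_road: bool

def plan_handoff(ctx: DrivingContext) -> str:
    """Decide how to transfer control, budgeting for human cognition."""
    if ctx.seconds_until_automation_limit > HUMAN_TAKEOVER_SECONDS:
        # Enough lead time: alert early and hand over gradually.
        return "alert_driver_then_handoff"
    if ctx.driver_eyes_on_road:
        # Tight but the driver is attentive: urgent alert, assisted handoff.
        return "urgent_alert_assisted_handoff"
    # Too little time for an inattentive driver: don't hand off at all.
    return "minimal_risk_stop"

print(plan_handoff(DrivingContext(3.0, driver_eyes_on_road=False)))
```

The design point is the branching itself: the decision depends on the driver’s cognitive state, not just the road ahead, which is what it means to take people’s cognitive capacities into account.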

So, except in a few instances, there’s no taking people out of the picture. I think it’s much more valuable and societally useful to think from the very beginning about designing systems to interact appropriately with people, rather than building something separate from people and then presuming people will adjust to it.

What’s crucial at this point is to bring together expertise from these different fields, and that expertise has to be brought in before the systems are designed and released to the world. And now is the time to think about this, to bring together people who are experts in artificial intelligence with people who understand ethics deeply, with psychologists who understand human cognition, with social scientists who understand social organizations, so that we can, as the rubric now is, make “AI for social good.” And that rubric actually also covers building systems that help low-resource communities, building systems that protect the environment, building systems that contribute to education and healthcare.

I think we need both to train and teach people about ethics, and here I want to say I’m not talking about professional ethics. I’m talking about really understanding the tradeoffs between consequentialist ideas and deontological ideas, grappling with virtue ethics, thinking about justice, thinking about who you’re serving, really a deep sense of ethics, and about these systems, and then to make that part of the process of designing the systems. It’s a years-long process of having people from these different fields come together, explain their work, explain their perspectives to each other in ways that are accessible, treat those different perspectives with respect, and develop a common vocabulary and a way of approaching things together. That can’t be short-circuited. It’s really a years-long process.

