I think that artificial intelligence absolutely poses a challenge for society and humanity. We mean various things when we say artificial intelligence. There's artificial general intelligence, which is the idea that there's a singularity coming and that something will become so smart that we won't be able to control it, and it might even decide that human beings are kind of a bad idea and get rid of us. I think that's a real threat, but in my view it's not imminent; we have a little bit of time. What I'm more concerned about personally is machine learning.

The machine learning systems we have today have become so powerful and are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to producing the news feed on Facebook that decides what you ought to see. They have a lot of societal impact. But they're very difficult to audit. They're not like normal software programs where you can just read the code and understand what they do. In fact even the developers, unless they test them, can't predict exactly what the outcome is going to be.

There are estimates that self-driving cars may reduce traffic accidents by 90%. There are types of diagnostics where machines seem to be faring much better than human beings at diagnosing diseases. There is a possibility that things like parole or bail may very quickly be shown to be better judged by machines. These raise some interesting questions, because these are lives at risk, and lives that could be saved.

As we start to introduce these things, our regulatory frameworks and the ways we think about how society will work under these new systems, whether we're talking about jobs, the law, or technical architecture, are all going to change.

At the Media Lab we use the word antidisciplinary, because we find that the traditional disciplines, both in business and academia, tend to reinforce a specialization; the cliché is that you learn more and more about less and less. That's important when you're going deep. But when you have a technology like AI that cuts across all of these disciplines in its impact, you need to create tissue in between. And I worry a little bit that the people designing and deploying these systems are computer scientists trying to solve the world's problems through computer science, and that the connective tissue between machine learning and computer science on one side, and the other disciplines, the social sciences, law, or even philosophy, on the other, is missing: those communities aren't really able to talk to each other, because the language is so different and there isn't much of a culture of interaction between them. I think the way we address this is to start creating much more interdisciplinary work.

As we were thinking about how we might tackle some of the missing pieces in where AI should go, I thought about it wearing various hats: my MacArthur Foundation hat, my Knight Foundation hat, the Media Lab hat, and just sort of a citizen-of-the-world hat. And I realized that all of the pieces that needed to be at the table weren't in a single institution. You couldn't give all the money to the Media Lab. You couldn't give all the money to the Berkman Center. You couldn't give all the money to anybody and get all of the different voices that we needed. And not just voices; everybody has a different framework. The way Harvard Law School thinks about the theory of change in working through problems is very different from the way the Media Lab would do it.

So the key thing, and you can see it in the diversity of the people funding this initiative as well as the people coordinating it, is that we're hoping to bring together a diversity of geography, a diversity of technology as well as of field, but also a diversity of fundamental theories of change and of the layers at which we should intervene. In the first couple of years we're going to be doing a lot of really interesting experiments, and hopefully by the end of this process we'll have a pretty good idea of several different things we should set up, either as institutions or as funding opportunities.

And I think it's important to start having the conversation, and not just a conversation but doing the work around the policy, thinking about how society should integrate and respond to these systems, before it's too late. Because one of the problems is that once you move past certain points, it's going to be difficult to roll back. So timing-wise, beginning last year and this year is really the key moment to bring others into this process. Up till now the computer science was just getting to the point where it was ready to be deployed. Right now it's just right, or almost a little bit too late, to get started. So the timing is super important.

