Jonathan Zittrain: Artificial Intelligence is one label for it, but another label is just forms of systems that evolve under their own rules, in ways that might be unexpected even to the creator of those systems, and that will be used in some way to substitute for human agency in a lot of instances. And that substitution for human agency might be something that is quite autonomy-enhancing for humans, individually or in groups. If you have a system that can worry about stuff you no longer have to worry about, you can turn your attention to other, possibly more interesting or important, issues.

On the other hand, if you're consigning to a system agenda-setting power and decision-making power (again, either individually or in a group), that may really carry consequences, and people aren't so much keeping an eye on it, or the people who are directly affected aren't in a position to keep an eye on it. I think that's creating some of the discomfort we see right now with the pace at which AI is growing, and with applications of machine learning and other systems that can develop under their own steam. These are the sorts of things that give us some pause.

And I think about the provision of government services, or decisions that are often uniquely made by governments: under what circumstances somebody should get bail and how much the bail should be set at, whether somebody should be paroled from prison, how long a sentence should be. These are things we usually consign to human actors, judges, but those judges are subject to their own biases and fallibility and inconsistencies. And there is now an opportunity to start thinking about what it would mean, as a matter of equal protection under the law, to treat similar people similarly. Machines could be quite helpful with that, double-checking the way in which our cohort of judges is behaving. But it could also be, I think, an unfortunate example of set-it-and-forget-it, where biases creep in, often in unexpected ways or circumstances, and that will really require some form of oversight.
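
To make that double-checking idea concrete, here is a minimal sketch, in Python, of what an automated consistency audit over sentencing records might look like. It is purely hypothetical: the case records, the judges, the notion of "similar" (same offense and same prior record), and the disparity threshold are all invented assumptions for illustration, not any real system's method.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical case records: (judge, offense_category, prior_convictions, sentence_months).
# All names and numbers are invented for illustration.
cases = [
    ("Judge A", "burglary", 0, 12),
    ("Judge A", "burglary", 1, 18),
    ("Judge B", "burglary", 0, 30),
    ("Judge B", "burglary", 1, 36),
    ("Judge A", "fraud",    0,  6),
    ("Judge B", "fraud",    0,  8),
]

def flag_disparities(cases, threshold_months=6):
    """Group 'similar' cases (same offense, same prior count) and flag
    any group where judges' average sentences diverge by more than the
    threshold. Defining 'similar' well is the genuinely hard, debatable part."""
    buckets = defaultdict(lambda: defaultdict(list))
    for judge, offense, priors, months in cases:
        buckets[(offense, priors)][judge].append(months)

    flags = []
    for group, per_judge in buckets.items():
        averages = {judge: mean(months) for judge, months in per_judge.items()}
        spread = max(averages.values()) - min(averages.values())
        if len(averages) > 1 and spread > threshold_months:
            flags.append((group, averages))
    return flags

for (offense, priors), averages in flag_disparities(cases):
    print(f"{offense} with {priors} prior(s): {averages}")
```

Even in a toy like this, the contested questions live in the parameters: who decides what counts as a similar case, and how much divergence between judges is acceptable before oversight kicks in.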

All of these systems not only have their own outputs and dependencies and people whom they affect; they may also be interacting with other systems, and that can end up producing unexpected and quite possibly counterintuitive results.

For many, many years, for the functions in society undertaken by professionals, where the professionals are the most empowered, able to really affect other people's lives, we have often organized them into a formal profession, even with a guild that you need special qualifications to join. There are professional ethics independent of what you agree to do for a customer or a client. Now, I don't know if AI is ready for that. I don't know that we would want to restrict somebody in a garage from experimenting with some cool code and neat data and doing things. At the same time, when that data gets spun up and it starts affecting millions or tens of millions of people, it's not clear that we still want it to be treated as if it's just a cool project in a garage.

Interestingly, academia in large part gave us the Internet, which in turn has been the gift that keeps on giving. So many features of the way the Internet was designed and continues to operate reflect the values of academia: an openness to contribution from nearly anywhere, and an understanding that we should try things out and let them sink or swim on their reception, rather than trying to handicap ahead of time exactly what is going to work, tightly controlled by one firm or a handful of them. These are all reflected in the Internet. And for AI, I think there's a similar desire to be welcoming to as many different ways of implementing and refining the remarkable toolset that has developed in just a few years, and the corresponding reams of data that can be used, data that in turn can go from innocuous to quite sensitive in just one flop.

To have academia not just playing a meaningful role but central to these efforts strikes me as an important societal hedge against what could otherwise be the proprietarization of some of the best technologies, and against our inability to understand how they're doing what they do. Because often we don't know what we don't know. Academia would even be able to suggest design changes or tweaks and then compare them, rigorously, against some set of criteria, criteria that in turn can be debated: What makes for a better society? What is helping humanity? What is respecting dignity and autonomy? Those are questions we may never fully settle, but we may have a sense of which things are pushing in one direction or another along that spectrum.

If we didn't have academia playing a role, it might just be a traditional private arms race. And we could find that, gosh, somehow this magic box offered by name-your-company does some cool thing, and we don't really know how it works. And because it's a robot, it's never going to quit its job and move to another company and spread that knowledge, or retire and teach. These are the sorts of things that, over the medium to longer term, mean that having a meaningful, open project that really develops this next round of technology, in the kind of open manner in which the Internet was developed and is often healthily criticized and refined, is what we should be aiming for with AI.

