Jonathan Zittrain: Artificial Intelligence is one label for it, but another label is simply systems that evolve under their own rules, in ways that may be unexpected even to their creators, and that are used in many instances to substitute for human agency. That substitution might be quite autonomy-enhancing for humans, individually or in groups: if a system can worry about things you no longer have to worry about, you can turn your attention to other, possibly more interesting or important, issues.
On the other hand, if you're consigning agenda-setting or decision-making power to a system, again either individually or as a group, that can carry real consequences while nobody is keeping an eye on it, or while the people directly affected aren't in a position to keep an eye on it. I think that's creating some of the discomfort we see right now with the pace at which AI is growing, and with applications of machine learning and other systems that can develop under their own steam. These are the sorts of things that give us pause.
And I think about the provision of government services, or decisions that are often uniquely made by governments: under what circumstances somebody should get bail and at what amount, whether somebody should be paroled from prison, how long a sentence should be. These are things we usually consign to human actors, judges, but judges are subject to their own biases, fallibility, and inconsistencies. There is now an opportunity to start thinking about what equal protection under the law would mean: treating similar people similarly. Machines could be quite helpful with that, double-checking the way in which our cohort of judges is behaving. But it could also become, I think, an unfortunate example of set-it-and-forget-it, where biases creep in, often in unexpected ways or circumstances, and that will really require some form of oversight.
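To make that double-checking concrete, here is a minimal sketch of a consistency audit, assuming hypothetical sentencing records with judge names, case features, and outcomes; the records, the features chosen, and the similarity criterion are all illustrative assumptions, not anything described in the talk:

```python
# Illustrative sketch only: auditing sentencing consistency across judges.
# The records, features, and similarity criterion here are all hypothetical.
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical records: (judge, offense_category, prior_convictions, sentence_months)
records = [
    ("Judge A", "burglary", 0, 12),
    ("Judge A", "burglary", 0, 14),
    ("Judge B", "burglary", 0, 30),
    ("Judge B", "fraud", 2, 24),
    ("Judge C", "fraud", 2, 25),
]

# "Treat similar people similarly": bucket cases that look alike on the
# recorded features, then measure how much sentences vary within each bucket.
buckets = defaultdict(list)
for judge, offense, priors, months in records:
    buckets[(offense, priors)].append((judge, months))

for key, cases in buckets.items():
    sentences = [months for _, months in cases]
    if len(sentences) < 2:
        continue  # can't measure spread from a single case
    print(f"{key}: mean={mean(sentences):.1f} months, "
          f"spread={pstdev(sentences):.1f} months")
    # A large spread among similar cases flags the bucket for human review;
    # it doesn't by itself prove bias, which is why oversight still matters.
```

The point of such a sketch is that a wide spread among similar cases is a prompt for human review, not a verdict: the oversight Zittrain calls for stays in the loop.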
All of these systems not only have their own outputs, dependencies, and people whom they affect; they may also interact with other systems, and that can end up producing unexpected and quite possibly counterintuitive results.
For many, many years, for the functions in society undertaken by professionals where the practitioners are the most empowered, able to really affect other people's lives, we have organized them into formal professions, even guilds that you need special qualifications to join, with professional ethics that apply independent of what you agree to do for a customer or a client. Now, I don't know if AI is ready for that. I don't know that we would want to restrict somebody in a garage from experimenting with some cool code and neat data. At the same time, when that work gets spun up and starts affecting millions or tens of millions of people, it's not clear that we still want to treat it as just a cool project in a garage.
Interestingly, academia in large part gave us the Internet, which in turn has been the gift that keeps on giving. So many features of the way the Internet was designed and continues to operate reflect the values of academia: an openness to contribution from nearly anywhere, and an understanding that we should try things out and let them sink or swim on their reception, rather than handicapping ahead of time exactly what is going to work or keeping it tightly controlled by one firm or a handful. These values are all reflected in the Internet. For AI, I think there's a similar desire to be welcoming to as many different ways of implementing and refining the remarkable toolset that has developed in just a few years, and the corresponding reams of data that can be used, data that can go from innocuous to quite sensitive in just one flop.
Having academia play not just a meaningful role but a central one in these efforts strikes me as an important societal hedge against what can otherwise be the proprietization of some of the best technologies, and against our inability to understand how they do what they do, because often we don't know what we don't know. Academia can even suggest design changes or tweaks and then compare them, rigorously, against a set of criteria that can in turn be debated: What makes for a better society? What is helping humanity? What is respecting dignity and autonomy? Those are questions we may never fully settle, but we can have a sense, along the spectrum, of what is pushing things in one direction or the other.
If we didn't have academia playing a role, it might just be a traditional private arms race. We could find that, gosh, somehow this magic box from name-your-company does some cool thing, and we don't really know how it works. And because it's a robot, it's never going to quit its job and move to another company and spread that knowledge, or retire and teach. Over the medium to long term, these are the reasons that having a meaningful, open project, one that develops this next round of technology in the kind of open manner in which the Internet was developed, and is often healthily criticized and refined, is what we should be aiming for with AI.