One of the challenges of building new technologies is that we often want them to solve things that have been very socially difficult to solve. Things that we don't have answers to, problems that we don't know how best to address in a socially responsible way.

Technology cannot fill that gap for us. In fact technology is more likely than not—and artificial intelligence in particular—to exacerbate existing challenges. So if we look at different issues where we have major social challenges ahead of us, whether that is in the business realm, in criminal justice, in medicine, in education, we need to think hard and deep about how we want to marry technology and artificial intelligence into the broader social challenges that we're seeing with those systems.

Artificial intelligence in many ways right now means different things to different people. For the technical community it's a very particular and narrow set of technologies, very much focused on neural networks, or advanced machine learning techniques, robotics of a particular ilk. And these kinds of techniques have been in development for an extended period of time, so most technical folks are thinking about the iterations, the evolutions, the histories of that.

For the business community, artificial intelligence has become the new buzzword: the idea of being able to do magical things with large amounts of data, the hope that problems which have in many ways become socially intractable can be solved through technical means.

Now for the public, AI really refers to the imagination that computers can do crazy things. And those crazy things can be positive—solving the world's problems, computers appearing to be smart. They can also be absolutely terrifying, and there we usually refer back to Hollywood concepts.

One of the most important things to do when we start to study artificial intelligence is to actually bring together different frameworks of thinking. We need to think about it both technically and socially. And the main reason is because the biggest problems ahead of us are not simply technical or simply social. In fact it's the marriage between the two that becomes the most important. For this reason we have to take different kinds of social issues very seriously. We have to really understand what's at stake, the biases of the data that are involved in making artificial intelligence function, the interpretation layers, the ways in which these systems can get manipulated. All of these social issues become critical to making certain that the technologies are done right.

The key challenge of figuring out how to think ethically is to actually think about how we want to marry different kinds of social mindsets and different kinds of technical mindsets. How do we get the technical folks to start articulating the realm of possibilities that are available to us? And what are the governance structures that we want to see in place to make certain that we can choose responsibly? We're entering a realm where we're paying a lot of attention to cybersecurity, where we're realizing that the security of our infrastructure can put us at risk in tremendous ways. The same will be true of artificial intelligence, but the risks that we will face aren't necessarily about traditional hacking. They're about the manipulation of data, about data being misinterpreted in different ways, about data being cleaned and processed for analysis without certain social issues being taken into account. And so this means we need to really think about the whole process by which we produce artificial intelligence systems.