Alan Cooper: So, bonjour messieurs et mesdames, haut et bas. That’s all the French I know. Thank you very much for tolerating the monoglot American. And thank you to Roberta and Gilles and Fredrik for putting on this event and for inviting me here. It’s my pleasure to be here, and it’s a pleasure to see all of your smiling faces.

It is very much a pleasure for me to be here, and it is a privilege for me to be allowed to address you all in the place of honor, the first speech of the first day of this conference. And it was ten years ago that I was accorded a similar honor when, in 2008 in Savannah, Georgia, I was the first speaker on the first day at the very first IxDA conference. It seems fitting to be here.

I’ve always supported the IxDA from its beginning, and I’m proud that former colleagues of mine were involved in the creation of the organization. Because the IxDA is the Interaction Design Association, and interaction design is a discipline with many roots; it’s a tree with many roots. And some of those roots are branching off along a thread of aesthetics and personal vision in the industry. While those are interesting, they lead away from the user. To me, interaction design is about designing the behavior of technology in service of the user, and that is the core of who we are and what we do. In other words, IxDA represents user-centered design rather than designer-centered design.

So, some of you may know, my wife Sue and I just this past October sold Cooper to Designit, a European design firm. And we have the opportunity now to think about other things, and pursue other avenues. And today in my talk you’ll see what I’ve been working on. Is everyone good? Enough coffee? You’re good? Have I had enough coffee?

Seven years ago, Sue and I sold our house in Silicon Valley and we moved to this fifty-acre ranch in the country, which we named Monkey Ranch, after our cat Monkey. It’s in West Petaluma. It’s about an hour north of San Francisco. And to my surprise, I found an entirely new perspective on the high-tech industry amongst the sheep and the chickens and the tall California grass. And that new perspective is the basis for this talk. And it’s why pictures of Monkey Ranch are the background for many of my slides.

So let’s start out. I’d like you to do a little thought experiment with me. Imagine that you work at a giant social media company. Every day you access massive collections of user data, analyzing it so that you can give users exactly the posts that they most want to see and not the posts that they don’t. Your work is so good that you’ve created a targeted advertising platform that’s nearly perfect.

Then one day, you discover that Russian government hackers have used it to influence an American presidential campaign, using the psychological profiles that you created. The Russians identified susceptible end users and flooded their feeds with hate messages, fear-mongering, and outrageous lies about progressive candidates and organizations. It’s then that you realize that your work directly contributed to the destruction of a representative democracy.

Or, imagine that you are a staff researcher at a major computer software company. You’ve been working on really cool learning algorithms for conversational user interfaces and created a chatbot to show off what your AI is capable of. Your boss is so impressed with your work that he lets you deploy the chatbot on the web.

But hackers discover your chatbot and begin filling it with lies and prejudices, and telling it terrible things. Within one day, using the software that you wrote, all your chatbot can do is spout racist, misogynistic, and hateful venom. Your boss disables it immediately. Then you realize that your work can easily be turned into something very destructive.

Or, imagine that you became an intellectual property lawyer because you want creative inventors to be inspired and supported. You worked for years to empower innovators to protect their ideas.

Then one day, you discover that the overwhelming majority of patent lawsuits are pursued by patent trolls, those who patent ideas but never actually make things. They just wait for someone else to create a product, and then they sue them for undeserved royalties. Only then do you realize that your life’s work has empowered a few greedy people to engage in legalized extortion.

Or, imagine that you’re an expert in linguistics and you’ve spent years working on new spellcheck algorithms. Your work involves machine learning, artificial intelligence, and very clever indexing methods. It’s deployed globally on a computer named after a fruit.

Then one day, it’s discovered that it autocorrects some prescription drug names into completely different drugs. When Duloxetine becomes Fluoxetine, there’s a high likelihood that no one will notice, and someone will suffer from taking the wrong medication. It’s then that you realize that your innovations have brought harm to innocent people.

Why is this happening? Is it inevitable that our coolest technical achievements become agents of evil? No. It is not inevitable. Today I’m going to talk about practical methods to avoid creating high-tech products that enable toxic behavior.

The technology we build has certainly made people’s lives easier and better. But unintended side-effects happen more and more often, and they tear at the fabric of society. We are good people, and we do our jobs with the best of intentions. But we find, to our dismay, that we have enabled bad behavior and bad outcomes.

Where did this evil stuff come from? Are we evil? I’m perfectly willing to stipulate that you are not evil. Neither is your boss evil. Nor is Larry Page or Mark Zuckerberg or Bill Gates. And yet the results of our work, our best, most altruistic work, often turn evil when they’re deployed in the larger world. We go to work every day, genuinely expecting to make the world a better place with our powerful technology. But somehow, evil is sneaking in despite our good intentions.

It’s like the way Dr. Frankenstein didn’t understand the monster he created. There is a mechanism at work here that we don’t fully understand. We need to deconstruct this phenomenon so that we can recognize it and prevent it from happening again.

In the 1940s, [J. Robert Oppenheimer] headed up the Manhattan Project, the largest scientific effort the world had ever seen. His job was to invent the atomic bomb so that the United States could use it to end World War II. But when Oppenheimer saw that first atomic explosion, he realized that he had created something terrible. This was Oppenheimer’s moment. Not only was he a god of physics and science, but now he was Mars. He was Ares. He had also become a god of war, bringing chaos, suffering, and death.

Today, we the tech practitioners, those who design, develop, and deploy technology, are having our own Oppenheimer moments. It’s that moment when you realize that your best intentions were subverted. When your product was used in unexpected and unwanted ways. It’s that moment when you realize that even though you aren’t racist, your algorithms might be. It’s that moment when you realize that your software, designed to bring people together, instead is driving them apart into tribal isolation. It’s that moment when you realize that the social and economic checks and balances that prevent excess and abuse don’t work anymore. And nothing harnesses your creation, and your creation is beginning to run amok.

My first thoughts are, who can we blame for this? We can’t blame the technology, because it does amazing good things for us, as well as bad. It’s easy to blame the founders, your coworkers, the venture capitalists, your annoying boss. But they’re just as troubled by this as we are. Tony Fadell, the founder of Nest, wants a Hippocratic Oath for designers, where they pledge to work ethically and do no harm. And Sam Altman, the president of Y Combinator, is writing an ethics constitution. And Bill Gates is giving billions to charity. Our narrative-seeking, storytelling brains want a villain. But this isn’t a Walt Disney movie, and Cruella de Vil is not coming for our puppies. Really. There’s no one to blame.

It’s a systems problem. It’s as though the Titanic ship of technology is sinking. But we never hit an iceberg. We’re filling with water but nobody can find the leak. We keep searching for the giant rip in the hull caused by the evil iceberg, or the evil captain, or the evil shipbuilders. But we find no evil, and we find no giant hole. Yet the water keeps getting deeper.

Trying to find a single point of failure or origin of malice only works on simple systems. But all of the tech products we build are complex systems, and their network environment is yet another level of complex system. The water, the evil, isn’t coming from one big hole but from a constellation of tiny ones. Like 100 million microscopic laser-drilled holes in the hull of the Titanic, they collectively add up to a fatally huge gash. And this is just what a systems problem looks like.

Systems problems are by nature distributed ones, and their solutions are distributed too. We put those millions of tiny leaks into the system. There was no malice, no evil. That’s why we have to apply our efforts to preventing tiny leaks, rather than trying to predict and then stop a single catastrophic event. The technology we use changes so fast it renders our good intentions irrelevant and inadequate. No matter how well-intentioned, our good permutes into bad. Our innocence simply isn’t sufficient. We have to master this system.

We need to identify the weaknesses in our products and business models when they are tiny, embryonic things. We have to get ahead of this phenomenon, because once it emerges it’s too late. And we need to do it in a way that transcends the technology. So it applies to everything. So it’s long-term. So that it’s sustainable.

The blogosphere is awash in longform commentary asking how we could bring ethics back to technology. People tell us to be good, but they don’t tell us how. There’s a lot of work to do. But first we have to have a clear goal. The founders of Facebook, Google, and a thousand other companies large and small have as their primary goal to make money. Their second avowed goal is to not be evil; to do no harm.

There is abundant proof that this does not work. When your primary goal is to make money, all other goals devolve into mere words. “Don’t be evil” is too vague, too simplistic, and too hard to relate to the daily work of tech. Besides, it’s always in second place, and it always loses to the imperatives of making money. We need a new goal, a new rubric for success. One that makes us better citizens, first, without stopping us from making money, second.

Here’s my proposal: I want to be a good ancestor. My goal is to create a better world for our children, both yours and mine. And their children. Every day I ask myself whether what I’m doing makes the world a better place than when I found it. When you think like a good ancestor, you’re forced to think about the whole system. You can no longer maximize isolated measurements at the expense of others. You can no longer excuse bad behavior in the interest of profit.

Conservative political dogma says that you can either make money or you can be a good citizen, but not both. This is a lie. You don’t have to behave badly to make money. Virgin, Costco, and Patagonia are all proof that you can be a very profitable, good ancestor. As Steve Jobs said, profit is a byproduct of quality. People are very loyal to good quality, and not at all loyal to products that behave badly. So while there is value in making money, I value even more making the world a better place for our children.

By making good ancestry our primary goal we can work to prevent our products from turning to the dark side, and we can still build profitable businesses. Every day instead of saying, “Do no evil,” ask, “How can I be a good ancestor?” Being a good ancestor is my goal, and I want you to make it your goal, too.

While the first step is having a clear goal, there are many subsequent steps needed to solve the challenge. It’s all giving me a strong sense of déjà vu—that’s my other French word. That sensation where you feel like you’ve been there before. Twenty-five years ago, as personal computing was exploding on the world, it had become clear that the technology was hard to use. Everyone knew that we needed to make software user-friendly, but no one knew exactly how to do that. Back then plenty of smart people thought it couldn’t even be done. Someone had to define user-friendly in measurable ways. Then develop a taxonomy for the field. Invent a set of tools. Create a process framework. Establish clear examples demonstrating the benefits. Train a cadre of skilled practitioners. And then take that show on the road and proselytize it.

Well, that’s what I did for interaction design during the nineties and the aughts. Our presence here today proves how important a role design has become. It’s well-known, trusted, and omnipresent. Now it’s time to do the same thing for being a good ancestor.

Nice hair. Renato Verdugo is my brilliant young Chilean collaborator. And we are developing a framework and tools that we call “ancestry thinking.”

The first step is awareness of the problem. It’s vital that practitioners pay attention to how their products are applied in the real world by actual users.

The second step is creating a language, a taxonomy that lets us see where and how those millions of tiny holes get drilled into the hull of technology. So far we have identified three main vectors by which bad behavior creeps into your product: assumptions, externalities, and timescale. We examine these vectors by asking ourselves three hard questions.

First we must ask, what assumptions are we making? Whenever we design a solution to a problem, we base our thinking on certain assumptions. If we don’t rigorously examine all of those assumptions, our intentions can become lost, and we open the door to bad ancestry.

Someone was making assumptions when they created their disaster relief website, requiring users to have electrical power and WiFi, scarce things in a disaster.

Someone assumed that the white engineering staff was representative of the people who would use their new sensor-equipped bathroom soap dispenser. And it works fine if your skin is white, but it fails to detect black skin. A good assumption can turn bad over time, or in different circumstances. So you have to identify, inventory, and regularly reexamine every assumption you make.

The fossil fuel industry used to provide a lot of jobs and economic opportunity in the USA. Not anymore. Solar is where the jobs are, where the growth is, where the opportunities lie, and it’s a proving ground for tomorrow’s leaders. Unexamined assumptions become dogma, and dogma is the opposite of intentionality. To be a good ancestor, we can’t let anything hide. We must be explicit.

Secondly, we must ask, what externalities are we creating? Externalities are those things that affect us, or that we affect, that are pushed out of our attention, whether by choice, neglect, or ignorance. Everything we do is part of a complex web, and nothing is completely external.

Every Monday morning, a big green truck comes to take my trash away. But there really is no “away.” It takes my trash down to the landfill by the river. My children are going to have to deal with that landfill. Whenever you say, “That’s not my problem,” you create an externality. And every externality is a hole in your boat. It’s another way bad ancestry creeps into your world.

For example, there’s a rideshare company that provides a great car hire experience for riders, but it regards the drivers’ welfare as someone else’s problem. Drivers are forced into a precarious hand-to-mouth existence, degrading our civilization for everyone.

Or how about the giant retailer that prides itself on offering the lowest prices to its shoppers, but it doesn’t pay its employees a living wage? The employees are forced to rely on second jobs and food stamps.

Machine learning algorithms, sometimes called AI, create significant externalities. Just letting the black box make decisions is an externality. Then when we trust those decisions, without having methods for oversight or verification, we create more externalities, compounding the problem. Most people are happy to abdicate their responsibilities to the algorithm. It seems easier. And it is. Because it’s an externality.

For example, a company that makes software to automate hiring proudly uses machine learning to streamline the process. Its black box algorithms recommend candidates based on those you’ve hired in the past. The problem is that any racial, age, or gender prejudices are invisibly sustained, and nobody questions it. Nobody even sees it. Externalities hide in our point of view, our ignorance, our social norms, and the systems that we create and use.

In reality, everything is connected. There’s no such thing as an externality. If you regard something as external, you’re just bequeathing trouble to your descendants. You’re slamming a door on a fire, but it’s still burning in there and you’re leaving it for your children to put out.

And thirdly, we must ask, what time scale are we using? What is the lifespan of our actions, our products, and their effects? Our tools for peering ahead are weak, so we design based on the way things are right now, even though our products will live on in that uncertain future.

Now, I’m a perfect example of this. Software that I wrote in the mid 1970s conserved precious memory by using only two digits for the year. The turn of the millennium, the year 2000, seemed very, very far away. So I helped to create the Y2K bug. (I’m very proud of that, actually.)

All of our social systems bias us toward a presentist focus: capitalist markets, rapid technological advance, professional reward systems, and industrial management methods. You have to ask yourself, how will this be used in ten years? In thirty? When will it die? What will happen to its users? To be a good ancestor, we must look at the entire lifespan of our work.

I know I said that there were three considerations, but there’s a strong fourth one, too. Having established the three conduits for bad ancestry—assumptions, externalities, and timescale—we now need some tactical tools for ancestry thinking.

Because it’s a systems problem, individual people are rarely to blame. But people become representatives of the system. That is, the face of bad ancestry will usually be a person. So it takes some finesse to move in a positive direction without polarizing the situation. You can see from the USA’s current political situation how easy it is to slip into polarization.

First we need to understand that systems need constant work. John Gall’s theory of General Systemantics says that “systems failure is an intrinsic feature of systems.” In other words, all systems go haywire, and will continue to go haywire, and only constant vigilance can keep those systems working in a positive direction. You can’t ignore systems. You have to ask questions about systems. You must probe constantly, deeply, and not accept rote answers.

And when you detect bad assumptions, ignored side-effects, or distortions of time, you have to ask those same questions of the others around you. You need to lead them through the thought process so they see the problem too. This is how you reveal the secret language of the system.

Ask about the external forces at work on the system. Who is outside of the system? What do they think of it? What leverage do they have? How might they use the system? Who is excluded from it?

Ask about the impact of the system. Who is affected by it? What other systems are affected? What are the indirect long-term effects? Who gets left behind?

Ask about the consent your system requires. Who agrees with what you are doing? Who disagrees? Who silently condones it? And who’s ignorant of it?

Ask who benefits from the system. Who makes money from it? Who loses money? Who gets promoted? And how does it affect the larger economy?

Ask about how the system can be misused. How can it be used to cheat, to steal, to confuse, to polarize, to alienate, to dominate, to terrify? Who might want to misuse it? What could they gain by it? Who could lose?

If you aren’t asking questions like these regularly, you’re probably making a leaky boat.

Lately I’ve been talking a lot about what I call working backwards. It’s my preferred method of problem-solving. In the conventional world, gnarly challenges are always presented from within a context, a framework of thinking about the problem. The given framework is almost always too small of a window. Sometimes it’s the wrong window altogether. Viewed this way, your problems can seem inscrutable and unsolvable, a Gordian Knot.

Working backwards can be very effective in this situation. It’s similar to Edward de Bono’s notion of lateral thinking, and Taiichi Ohno’s idea of the 5 Whys. Instead of addressing the problem in its familiar surroundings, you step backwards and you examine the surroundings instead. Deconstructing and understanding the problem definition first is more productive than directly addressing the solution.

Typically you discover that the range of possible solutions first presented is too limiting, too conventional, and suppresses innovation. When the situation forces you to choose between Option A or Option B, the choice is almost always Option C. If we don’t work backwards we tend to treat symptoms rather than causes. For example, we clamor for a cure for cancer, but we ignore the search for what causes cancer. We institute recycling programs, but we don’t reduce our consumption of disposable plastic. We eat organic grains and meat, but we still grow them using profoundly unsustainable agricultural practices.

The difficulty presented by working backwards is that it typically violates established boundaries. The encompassing framework is often in a different field of thought and authority. Most people, when they detect such a boundary, refuse to cross it. They say, “That’s not my responsibility.” But this is exactly what an externality looks like. Boundaries are even more counterproductive in tech.

A few years ago, a famous graphic circulated on the Web that said, “In 2015, Uber, the world’s largest taxi company, owns no vehicles. Facebook, the world’s most popular media owner, creates no content. Alibaba, the most valuable retailer, has no inventory. And Airbnb, the world’s largest accommodation provider, owns no real estate.”

The problem is that taxi companies are regulated by taxing and controlling vehicles. Media is controlled by regulating content. Retailing is controlled by taxing inventory. And accommodations by taxing rooms. All of the governmental checks and balances are side-stepped by business model innovation. These new business models are better than the old ones, but the new ideas short-circuit the controls we need to keep them from behaving like bad citizens, bad ancestors.

All business models have good sides and bad sides. We cannot protect ourselves against the bad parts by legislating symptoms and artifacts. Instead of legislating mechanisms, we have to legislate desired outcomes. The mechanisms may change frequently, but the outcomes remain very constant, and we need to step backwards to be good ancestors.

And when we step backwards, we see the big picture. But seeing it shows us that there’s a lot of deplorable stuff going on in the world today. And a lot of it is enabled and exacerbated by the high-tech products that we make. It might not be our fault, but it’s our responsibility to fix it.

One reaction to looking at the big picture is despair. When you realize the whole machine is going in the wrong direction, it’s easy to be overwhelmed with a fatalistic sense of doom. Another reaction to seeing this elephant is denial. It makes you want to just put your head back down and concentrate on the wireframes. But those paths are the Option A and the Option B of the problem, and I am committed to Option C. I want to fix the problem.

If you find yourself at the point in a product’s development where clearly unethical requests are made of you, when the boss asks you to lie, cheat, or steal, you’re too late for anything other than brinksmanship. I applaud you for your courage if you’re willing to put your job on the line for this, but it’s unfair for me to ask you to do it. My goal here is to arm you with practical, useful tools that will effectively turn the tech industry towards becoming a good ancestor. This is not a rebellion. Those tools will be more of a dialectic than a street protest. We can only play the long game here.

Our very powerlessness as individual practitioners makes us think that we can’t change the system. Unless, of course, we are one of the few empowered people. We imagine that powerful people take powerful actions. We picture the lone Tiananmen protester standing resolutely in front of a column of battle tanks, thus making us good ancestors. Similarly, we picture the CEO Jack Dorsey banning Nazis from Twitter and thus, in a stroke, making everything better.

This is a nice fantasy but it’s not actually true. The tanks in China had already been given the order to stop. Otherwise they would’ve driven right over the Tank Man. And Jack Dorsey is stuck in a dilemma he wishes desperately to get out of. If he bans Nazis, he asserts that censoring hate speech is Twitter’s responsibility. And if he doesn’t ban Nazis, he asserts that everyone will play nicely together. Because as soon as he bans a single Nazi he opens himself up to a tsunami of criticism and worse. There will be a wave of lawsuits from those who think he’s banned too many Nazis, and from those who think he’s banned too few. The fact that his refusal to ban a Nazi is in itself a choice, and opens him up to an equally large wave of criticism, is why you don’t want to be in his shoes. Dorsey is at the end of a whip, jerking back and forth. There’s no good decision for him to make. He’s looking down the barrel of Option A and Option B. So he does what is easiest: nothing.

But you and I know that the only correct answer is Option C. Make no mistake about it, while Dorsey faces the twin evils of choices A and B, he isn’t an evil person. And he’s not guilty of any crime other than not thinking things through. And ultimately, that’s the solution: taking the time to think things through.

The more practitioners who do this, and the earlier in the creation process we do it, the more effective it becomes. Because there is no evil agenda, there’s no anti-evil agenda, either. This is all about our collective oversight and gentle intervention early in the process.

When we stand in the center of North America, watching the mile-wide Mississippi River flow by, our powerlessness to affect the mighty waterway is tangible. But if we ascend to the continental divide at the crest of the Rocky Mountains, where the river rises, it’s just a tiny rivulet, and we can divert the course of the Mississippi with a shovel.

This is the nature of how we divert the course of the tech industry. Neither Jack nor any of us are going to fix Twitter’s misbehavior with a single dramatic action. Twitter went off the rails one millimeter at a time, and the only way to put it back is with an equal number of tiny corrections. The way to vanquish evil is to find it at the source, in the headwaters, when it is a tiny and vulnerable thing.

I don’t mean to pick on Twitter or Jack Dorsey, but they’re a perfect example of the challenge that we face. Monitoring and curating an open public forum is hard, expensive work. Dorsey, as an idealistic Silicon Valley entrepreneur, believed that he could create a fully automated platform that would police itself. For that to work, he had to ignore the real-world behavior of anonymous strangers. He assumed that respectful public discourse would be self-perpetuating. He externalized responsibility for policing his forum. And he only thought about how things were, right now.

Like most libertarians, he failed to recognize how hard it is to be effortless. How much work goes into making sure that nasty people don’t shout down nice people. Because nice people never shout down nasty people. He’s been in denial about that behavior since day one.

Despite Jack Dorsey’s role as CEO of Twitter, he lacks the power to fix it. He is as unable to change the course of a mighty river as anyone else inside his company. He has power, but he lacks agency. Remarkably, the most junior practitioner at Twitter, while having none of Dorsey’s power, has the exact same amount of agency. Now true, neither of them has much, but it’s not zero.

Power is the ability to change macro structures. Agency works on the micro level. Agency is local; power is global. Power is banning Nazis from Twitter. Agency is one person pointing out that there’s no mechanism in Twitter to identify suspected Nazis, and that there should be one. Power is being able to end homophobia; agency is one person coming out of the closet.

Agency in its embryonic state manifests simply as talking. We ask questions. We seek explanations. We point out the considerations. But the more you talk, the more you get heard. And the more you get heard, the more influence you have. Agency grows the more you exercise it.

Start by paying attention. This evolves into asking questions, which grows into discussions, followed by learning, then cooperation, then teamwork, and ultimately action. In this way we identify the assumptions we’re making, the externalities we’re creating, and the timeframes we’re working within.

When you start a dialogue with people you can make them think. You can show them a different point of view. Agency is a mirror you can hold up to your colleagues. It’s an amplifier, a newsfeed, a loudspeaker, a book, a friend. And you create a relationship. And you become more human in their mind. Admittedly this is a gradual process—an incremental process—but it’s the only viable process. And it works for the long term.

In 2016, I spoke with activist blogger Anil Dash about the state of the tech industry. He posed a rhetorical question: Why aren’t we teaching ethics in engineering schools? His challenge really got under my skin and I couldn’t stop thinking about it. But ethics, ooh! There’s nothing more boring, useless, old, and pedantic. It’s hard to imagine a subject less interesting than technology and the “e word.”

Now fortuitously, I had recently been talking with folks at the engineering school at the University of California at Berkeley about teaching something there. Renato Verdugo, my new friend and collaborator with the great hair, agreed to help. And we just completed co-teaching a semester-long class called “Thinking Like a Good Ancestor” at the Jacobs Institute for Design Innovation on the Berkeley campus. Renato works for Google, and they generously supported our work.

We’re introducing our students to the fundamentals of how technology could lose its way. Of awareness and intentionality. We’re giving the students our taxonomy of assumptions, externalities, and time. Instead of focusing on how tech behaves badly, we’re focusing on how good tech is allowed to become bad. We’re not trying to patch the holes in the Titanic but prevent them from occurring in future tech. So we’re encouraging our students to exercise their personal agency. We expect these brilliant young students at Berkeley to take ancestry thinking out into the world. We expect them to make it a better place for all of our children.

Like those students, we are the practitioners. We are the makers. We are the ones who design, develop, and deploy software-powered experiences. At the start of this talk I asked you to imagine yourself as a tech practitioner witnessing your creations turned against our common good. Now I want you to imagine yourself creating products that can’t be turned towards evil. Products that won’t spy on you, won’t addict you, and won’t discriminate against you. More than anyone else, you have the power to create this reality. Because you have your hands on the technology. And I believe that the future is in the hands of the hands-on.

Ultimately, we the craftspeople who make the artifacts of the future have more effect on the world than the business executives, the politicians, and the investment community. We are like the keystone in the arch. Without us it all falls to the ground. While it may not be our fault that our products let evil leak in, it is certainly within our power to prevent it. The welfare of our children, and their children, is at stake, and taking care of our offspring is the best way to take care of ourselves.

We need to stand up, and stand together. Not in opposition but as a light shining in a dark room. Because if we don’t, we stand to lose everything. We need to harness our technology for good and prevent it from devouring us. I want you to understand the risks and know the inflection points. I want you to use your agency to sustain a dialogue with your colleagues. To work collectively and relentlessly. I want you to become an ancestry thinker. I want you to create products you could be proud of. Products that make the world a better place instead of just making yet another billionaire. I want you to change the vision of success in tech from making money to making a just and equitable world for everyone. You have the power to do this with your leadership, your agency, and with your hands-on. You can be a good ancestor. Thank you.
