https://www.youtube.com/watch?v=Rw9FSYH6kL8

Yuval Noah Harari: Of all the different issues we face, three problems pose existential challenges to our species. These three existential challenges are nuclear war, ecological collapse, and technological disruption. We should focus on them.

Now, nuclear war and ecological collapse are already familiar threats, so let me spend some time explaining the less familiar threat posed by technological disruption. In Davos, we hear so much about the enormous promises of technology. And these promises are certainly real, but technology might also disrupt human society and the very meaning of human life in numerous ways, ranging from the creation of a global useless class to the rise of data colonialism and of digital dictatorships.

First, we might face upheavals on the social and economic level. Automation will soon eliminate millions upon millions of jobs. And while new jobs will certainly be created, it is unclear whether people will be able to learn the necessary new skills fast enough. Suppose you're a 50-year-old truck driver, and you just lost your job to a self-driving vehicle. Now, there are new jobs, in designing software, or in teaching yoga to engineers. But how does a 50-year-old truck driver reinvent himself or herself as a software engineer or as a yoga teacher?

And people will have to do it not just once but again and again throughout their lives, because the automation revolution will not be a single watershed event following which the job market will settle down into some new equilibrium. Rather, it will be a cascade of ever bigger disruptions, because AI is nowhere near its full potential. Old jobs will disappear. New jobs will emerge. But then the new jobs will rapidly change and vanish. Whereas in the past, humans had to struggle against exploitation, in the 21st century the really big struggle will be against irrelevance. And it's much worse to be irrelevant than to be exploited.

Those who fail in the struggle against irrelevance would constitute a new useless class. People who are useless, not from the viewpoint of their friends and family of course, but useless from the viewpoint of the economic and political system. And this useless class will be separated by an ever-growing gap from the ever more powerful elite.

The AI revolution might create unprecedented inequality not just between classes but also between countries. In the 19th century, a few countries like Britain and Japan industrialized first, and they went on to conquer and exploit most of the world. If we aren't careful, the same thing will happen in the 21st century with AI. We are already in the midst of an AI arms race, with China and the USA leading the race and most countries being left far, far behind. Unless we take action to distribute the benefits and power of AI between all humans, AI will likely create immense wealth in a few high-tech hubs, while other countries will either go bankrupt or become exploited data colonies.

Now we aren't talking about a science fiction scenario of robots rebelling against humans. We are talking about far more primitive AI, which is nevertheless enough to disrupt the global balance. Just think what will happen to developing economies once it is cheaper to produce textiles or cars in California than in Mexico. And what will happen to politics in your country in twenty years, when somebody in San Francisco or in Beijing knows the entire medical and personal history of every politician, every judge, and every journalist in your country, including all their sexual escapades, all their mental weaknesses, and all their corrupt dealings? Will it still be an independent country, or will it become a data colony? When you have enough data, you don't need to send soldiers in order to control a country.

Alongside inequality, the other major danger we face is the rise of digital dictatorships that will monitor everyone all the time. This danger can be stated in the form of a simple equation, which I think might be the defining equation for life in the 21st century: B times C times D equals AHH. Which means biological knowledge, multiplied by computing power, multiplied by data, equals the ability to hack humans ("ahh"). If you know enough biology, and you have enough computing power and data, you can hack my body and my brain and my life, and you can understand me better than I understand myself. You can know my personality type, my political views, my sexual preferences, my mental weaknesses, my deepest fears and hopes. You know more about me than I know about myself. And you can do that not just to me but to everyone. A system that understands us better than we understand ourselves can predict our feelings and decisions, can manipulate our feelings and decisions, and can ultimately make decisions for us.

Now in the past, many tyrants and governments wanted to do it, but nobody understood biology well enough, and nobody had enough computing power and data to hack millions of people. Neither the Gestapo nor the KGB could do it. But soon, at least some corporations and governments will be able to systematically hack all the people. We humans should get used to the idea that we are no longer mysterious souls. We are now hackable animals. That's what we are.

The power to hack human beings can of course be used for good purposes, like providing much better healthcare. But if this power falls into the hands of a 21st-century Stalin, the result will be the worst totalitarian regime in human history, and we already have a number of applicants for the job of 21st-century Stalin. Just imagine North Korea in twenty years, when everybody has to wear a biometric bracelet which constantly monitors your blood pressure, your heart rate, your brain activity, twenty-four hours a day. You listen to a speech on the radio by the Great Leader, and they know what you actually feel. You can clap your hands and smile, but if you're angry, they know. You'll be in the gulag tomorrow morning.

And if we allow the emergence of such total surveillance regimes, don't think that the rich and powerful in places like Davos will be safe. Just ask Jeff Bezos. In Stalin's USSR, the state monitored members of the communist elite more than anyone else. The same will be true of future total surveillance regimes. The higher you are in the hierarchy, the more closely you will be watched. Do you want your CEO or your president to know what you really think about them?

So it's in the interest of all humans, including the elites, to prevent the rise of such digital dictatorships. And in the meantime, if you get a suspicious WhatsApp message from some prince, don't open it.

Now, even if we indeed prevent the establishment of digital dictatorships, the ability to hack humans might still undermine the very meaning of human freedom. Because as humans rely on AI to make more and more decisions for us, authority will shift from humans to algorithms. And this is already happening. Already today, billions of people trust the Facebook algorithm to tell us what is new. The Google algorithm tells us what is true. Netflix tells us what to watch. And Amazon and Alibaba algorithms tell us what to buy. In the not-so-distant future, similar algorithms might tell us where to work and whom to marry, and also decide whether to hire us for a job, whether to give us a loan, and whether the central bank should raise the interest rate. And if you ask why you were not given a loan, or why the bank didn't raise the interest rate, the answer will always be the same: because the computer says no.

And since the limited human brain lacks sufficient biological knowledge, computing power, and data, humans will simply not be able to understand the computer's decisions. So even in supposedly free countries, humans are likely to lose control over our own lives and also lose the ability to understand public policy. Already now, how many humans really understand the financial system? Maybe one person, to be very generous. In a couple of decades, the number of humans capable of understanding the financial system will be exactly zero.

Now, we humans are used to thinking about life as a drama of decision-making. What will be the meaning of human life when most decisions are taken by algorithms? We don't even have philosophical models to understand such an existence. The usual bargain between philosophers and politicians is that philosophers have a lot of fanciful ideas, and politicians patiently explain that they lack the means to implement these ideas.

Now we are in the opposite situation. We are facing philosophical bankruptcy. The twin revolutions of infotech and biotech are now giving politicians and businesspeople the means to create Heaven or Hell, but the philosophers are having trouble conceptualizing what the new Heaven and the new Hell will look like. And that's a very dangerous situation. If we fail to conceptualize the new Heaven quickly enough, we might easily be misled by naïve utopias. And if we fail to conceptualize the new Hell quickly enough, we might find ourselves entrapped there with no way out.

Finally, technology might disrupt not just our economy and politics and philosophy, but also our biology. In the coming decades, AI and biotechnology will give us god-like abilities to reengineer life and even to create completely new lifeforms. After four billion years of organic life shaped by natural selection, we are about to enter a new era of inorganic life shaped by intelligent design. Our intelligent design is going to be the new driving force of the evolution of life. And in using our new divine powers of creation, we might make mistakes on a cosmic scale. In particular, governments, corporations, and armies are likely to use technology to enhance the human skills that they need, like intelligence and discipline, while neglecting other human skills, like compassion, artistic sensitivity, and spirituality. The result might be a race of humans who are very intelligent and very disciplined but who lack compassion, artistic sensitivity, and spiritual depth.

Of course this is not a prophecy. These are just possibilities. Technology is never deterministic. In the 20th century, people used industrial technology to build very different kinds of societies: fascist dictatorships, communist regimes, liberal democracies. The same thing will happen in the 21st century. AI and biotech will certainly transform the world, but we can use them to create very different kinds of societies.

And if you're afraid of some of the possibilities I've mentioned, you can still do something about it. But to do something effective, we need global cooperation. All three existential challenges we face are global problems that demand global solutions. Whenever any leader says something like "My country first," we should remind that leader that no nation can prevent nuclear war or stop ecological collapse by itself. And no nation can regulate AI and bioengineering by itself.

Almost every country will say, "Hey, we don't want to develop killer robots or to genetically engineer human babies. We're the good guys. But we can't trust our rivals not to do it. So we must do it first." If we allow such an arms race to develop in fields like AI and bioengineering, it doesn't really matter who wins the arms race. The loser will be humanity.

Unfortunately, just when global cooperation is needed more than ever before, some of the most powerful leaders and countries in the world are now deliberately undermining global cooperation. Leaders like the US President tell us that there is an inherent contradiction between nationalism and globalism, and that we should choose nationalism and reject globalism. But this is a dangerous mistake. There is no contradiction between nationalism and globalism, because nationalism isn't about hating foreigners. Nationalism is about loving your compatriots. And in the 21st century, in order to protect the safety and the future of your compatriots, you must cooperate with foreigners. So in the 21st century, good nationalists must also be globalists.

Now globalism doesn't mean establishing a global government, abandoning all national traditions, or opening the border to unlimited immigration. Rather, globalism means a commitment to some global rules. Rules that don't deny the uniqueness of each nation but only regulate the relations between nations. And a good model is the football World Cup.

The World Cup is a competition between nations, and people often show fierce loyalty to their national team. But at the same time, the World Cup is also an amazing display of global harmony. France can't play football against Croatia unless the French and the Croatians agree on the same rules for the game. And that's globalism in action. If you like the World Cup, you're already a globalist.

Now hopefully, nations could agree on global rules not just for football but also for how to prevent ecological collapse, how to regulate dangerous technologies, and how to reduce global inequality. How to make sure, for example, that AI benefits Mexican textile workers and not only American software engineers.

Now, of course this is going to be much more difficult than football, but not impossible. Because we have already accomplished the impossible. We have already escaped the violent jungle in which we humans have lived throughout history. For thousands of years, humans lived under the law of the jungle in a condition of omnipresent war. The law of the jungle said that for every two nearby countries there is a plausible scenario that they will go to war against each other next year. Under this law, peace meant only the temporary absence of war. When there was peace between, say, Athens and Sparta, or France and Germany, it meant that now they are not at war, but next year they might be.

And for thousands of years, people had assumed that it was impossible to escape this law. But in the last few decades, humanity has managed to do the impossible: to break the law and to escape the jungle. We have built the rule-based liberal global order that, despite many imperfections, has nevertheless created the most prosperous and most peaceful era in human history. The very meaning of the word "peace" has changed. Peace no longer means just the temporary absence of war. Peace now means the implausibility of war. There are many countries in the world which you simply cannot imagine going to war against each other next year. Like France and Germany.

There are still wars in some parts of the world. I come from the Middle East, so believe me, I know this perfectly well. But it shouldn't blind us to the overall global picture. We are now living in a world in which war kills fewer people than suicide, and gunpowder is less dangerous to your life than sugar. Most countries, with some notable exceptions like Russia, don't even fantasize about conquering and annexing their neighbors. Which is why most countries can afford to spend maybe just about 2% of their GDP on defense, while spending far, far more on education and healthcare. This is not a jungle.

Unfortunately, we've gotten so used to this wonderful situation that we take it for granted, and we are therefore becoming extremely careless. Instead of doing everything we can to strengthen the fragile global order, countries neglect it and even deliberately undermine it. The global order is now like a house that everybody inhabits and nobody repairs. It can hold on for a few more years, but if we continue like this, it will collapse, and we will find ourselves back in the jungle of omnipresent war. We've forgotten what it's like, but believe me as a historian, you don't want to go back there. It's far, far worse than you imagine. Yes, our species evolved in that jungle, and lived and even prospered there for thousands of years. But if we return there now, with the powerful new technologies of the 21st century, our species will probably annihilate itself.

Of course, even if we disappear, it will not be the end of the world. Something will survive us. Perhaps the rats will eventually take over and rebuild civilization. Perhaps then the rats will learn from our mistakes. But I very much hope that we can rely on the leaders assembled here, and not on the rats. Thank you.

