I’m responsible for the success of the School of Computer Science at Carnegie Mellon, where we happen to be a place with more than five hundred faculty and students earnestly working towards making AI real and useful. And one of the things that has swept through the whole college over the last eighteen months, and really changed a lot of people’s direction, is the question of AI safety. Not just that it’s important, but what can we practically do about it?

And as we’ve been looking at it, the problem splits into two parts, which I want to talk about. Policy, where we need people, the folks in this room, to work on together with us. And then verification, which is the responsibility of us engineers, to actually show the systems we’re building are safe.

We’ve been building autonomous vehicles for about twenty-five years, and now that the technology has become adopted much more broadly and is on the brink of being deployed, our earnest faculty who’ve been looking at it are now really interested in questions like: a car suddenly realizes there’s an emergency. An animal has just jumped out in front of it, and there’s going to be a crash one second from now. The human nervous system can’t react that fast. What should the car do?

We’re really excited, because we’re writing code which is in general going to save a lot of lives. But there’s a point in the code, and our faculty identified these points, where they’re having to put in some magic numbers.

For example, if you’re going to hit an animal, should you go straight through it, almost certainly killing it but with maybe only a one in a million chance of hurting the driver? Or should you swerve, as most of us humans do right now, and swerve well enough that perhaps you’ve only got a one in a hundred thousand chance of hurting the driver, and you probably save the animal? Someone has to write that number: how many animals is one human life worth? Is it a thousand, a million, or a billion? We have to get that from you, the rest of the world. We cannot be allowed to write it ourselves.
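To make the shape of that decision concrete, here is a minimal sketch, not anything from our actual vehicle code. The two probabilities are the ones from the example above; the constant ANIMALS_PER_HUMAN_LIFE is the hypothetical "magic number" that someone other than the engineers has to supply.

```python
# A minimal sketch of the swerve-or-not tradeoff described above.
# ANIMALS_PER_HUMAN_LIFE is a hypothetical policy input, not a value
# anyone has actually decided on.

ANIMALS_PER_HUMAN_LIFE = 1_000_000  # the "magic number" from the talk

def expected_cost(p_driver_harm: float, p_animal_killed: float) -> float:
    """Expected cost of one maneuver, measured in human-life equivalents."""
    return p_driver_harm + p_animal_killed / ANIMALS_PER_HUMAN_LIFE

go_straight = expected_cost(p_driver_harm=1e-6, p_animal_killed=1.0)
swerve      = expected_cost(p_driver_harm=1e-5, p_animal_killed=0.0)

print("go straight" if go_straight < swerve else "swerve")
```

With the exchange rate set at a million, the sketch says go straight; drop it to a thousand and it says swerve. The code is trivial; the number is the hard part, and it isn’t ours to write.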

This is really important. Two of our other faculty have pushed on understanding the potential movements of dozens of pedestrians and vehicles at a busy intersection, so that they can pull the kill switch if someone’s about to have a serious injury. If that goes off all the time, it will be unacceptable. If it never goes off, then we won’t be saving lives. Someone’s got to put in the number.
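Again, a minimal sketch of where that number sits, with a made-up name and value. INJURY_PROBABILITY_THRESHOLD is the hypothetical policy input: set it too low and the switch trips constantly, set it too high and it never trips and saves no one.

```python
# A minimal sketch of the kill-switch threshold question above.
INJURY_PROBABILITY_THRESHOLD = 0.01  # hypothetical policy input

def should_pull_kill_switch(predicted_injury_probability: float) -> bool:
    """Trip the kill switch if the predicted chance of serious injury is too high."""
    return predicted_injury_probability >= INJURY_PROBABILITY_THRESHOLD

# e.g. a pedestrian-motion model predicts a 3% chance of serious injury
print(should_pull_kill_switch(0.03))  # True with the threshold above
```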

So that’s policy. And that’s my real urgency here: asking for help to make sure that we all get this policy in place so we can start saving lives. After policy, it gets back to being on us, the engineers, to do verification. And this is an area of AI which is exploding because it’s so important.

In the old days, when you had non-autonomous systems like regular cars, you had to crash-test them in maybe fifty different tests using about eight cars each. Now, with an autonomous system which has this combinatorial space of possible lives it could live, the testing problem seems impossible. And some of the faculty have really been looking at that question, for example in helping the Army test out autonomous convoys to travel through Iraq. There’s a new kind of computer science available to help, almost like solving a game of Mastermind: quickly figuring out the vulnerabilities, the areas where the autonomous system is in the most danger.
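A rough sketch of that Mastermind-style idea, under assumptions of my own: instead of enumerating every possible scenario, you sample scenario parameters and keep track of the most dangerous one found so far. The simulate() function and its parameters here are hypothetical stand-ins, not anyone’s real convoy simulator.

```python
# A minimal sketch of search-based testing: sample scenarios at random and
# keep the one with the smallest safety margin. All names and numbers are
# illustrative assumptions.
import random

def simulate(scenario: dict) -> float:
    """Hypothetical stand-in for a full convoy simulation: returns a safety
    margin for the scenario (lower means closer to an unsafe outcome)."""
    closing_speed = max(scenario["convoy_speed"] - scenario["lead_vehicle_speed"], 0.0)
    return scenario["visibility"] * 10.0 - closing_speed

def find_most_dangerous(n_iterations: int = 1000) -> dict:
    worst, worst_margin = None, float("inf")
    for _ in range(n_iterations):
        scenario = {
            "convoy_speed": random.uniform(5, 30),        # m/s
            "lead_vehicle_speed": random.uniform(0, 30),  # m/s
            "visibility": random.uniform(0.0, 1.0),       # 0 = none
        }
        margin = simulate(scenario)
        if margin < worst_margin:
            worst, worst_margin = scenario, margin
    return worst

print(find_most_dangerous())
```

The real methods are much smarter about where they look next, but the point is the same: the search finds the dangerous corners of the space far faster than exhaustive testing could.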

Now, interestingly, academically, there are testing-based people absolutely at war with another group of people, exemplified by André Platzer, who uses formal proof methods. They say if you’re going to deploy an autonomous system, it must come with a mathematical proof of safety. For example, when his proof system was recently applied to the new aircraft collision avoidance system, the attempt to prove it safe failed, and they discovered some cases where it was actually going to be a disaster. So now the system is being updated.

That’s the whole story, policy and verification. I want to finish up by talking about four groups of stakeholders, and my question and message going out here.

Policymakers, we need your help. Most AI labs around the world want you to come visit us. Spend time with the scientists who want to express to you the tradeoffs they’re worried about.

If you’re a company executive right now, and one of your divisions says, “We’ve got a new autonomous system we want to field,” you have to ask them what they’re doing about testing, because they cannot be using 1980s testing technology.

We scientists and engineers, we have to take this really seriously or we will be closed down, and that would be tragic, because we’re doing this because we think we can save tens of millions of lives over the next few years.

Finally, for entrepreneurs: the service industry of testing autonomous systems has huge growth potential, and I very much encourage it.

Thank you.

