Joey Eschrich: Thank you Ed, and thank you to the first panel for setting a high bar and kicking us off and making me nervous. So I’m here to talk to you about a really bad Frankenstein adaptation that I love, Splice. Has anybody seen this movie? Show of hands. Ah, there’s like six or seven Spliceheads in here. Very exciting.

Okay. Splice is a science fiction/horror hybrid. It was released in 2009. And the film follows the efforts of a married couple of genetic engineers played by Adrien Brody and the very talented Sarah Polley, who work for a big pharmaceutical company, and their job is to create genetic hybrid creatures for medical applications. They start off creating these kind of worm-like beings. But they’re not satisfied with that, and so they decide to splice human DNA into the mix. And they’re hoping in a kind of Victor Frankenstein-y, hand-wavy way to, like, revolutionize the human race, right. Like they want to create an organism that will produce genetic material that could cure cancer, that could cure Parkinson’s, that would, you know, in some again very hand-wavy way just solve all the problems that we have.

And you know, they end up creating something sentient and it’s kind of cute in a creepy squid-like way. And so they decide to raise it in secret, of course. Because as Nancy said, something horrible has to happen right off the bat or else you don’t have a story. So Splice is a modern-day Frankenstein story. And for those of you who are sort of science fiction and horror heads, it’s crossed with the gruesome biohorror of the classic science fiction movies Alien and The Fly.

It’s also frothy and overwrought. It’s a little nuts. It goes totally off the rails near the end. And that messiness is precisely why I love it so much. I think it and bad movies like it—bad but kinda smart movies like it—tell us a lot about the moment we live in. And in this case I think about the sense of distrust and paranoia we have about biotechnology and these other Frankensteinian technologies like AI and geoengineering and things like that in this moment, as we’ve started to talk about already, of great possibility and perhaps great peril as well.

So in adapting Frankenstein to this contemporary moment of actual human/pig hybrids, for those of you who have been reading your science and tech news this week, with designer babies—as Nancy talked about—on the horizon, the filmmakers behind Splice make important decisions about which elements of Shelley’s novel to carry through and which to transform or leave out. You know, just like any adapters of a myth or well-worn story, they want to tailor it to their own social and, in this case, technological moment.

And my basic premise is these decisions are really meaningful. And in this case they shape the themes and ethical messages of the film, and they shape the ways that it departs from its source material. And so today I want to talk about one really important departure that Splice makes from Shelley’s novel as a way to kind of set up this panel. My panel is about unintended consequences.

So without further ado, here’s a brief clip. And this is happening when the creature, which is developing at a vastly accelerated rate, is very young.

[clip was excluded from recording]

Rush out and see it, seriously.

So, names are really important. And giving something a name, whether it’s a child or a pet or, like, your car—right, an inanimate object—lets us imbue it with a personality. To see it as an independent being, with goals and emotions, deserving of our attention and affection. It’s no surprise that so many of our friendly technology conglomerates these days are creating virtual assistants that have names and personalities. They’re encouraging us to build emotional connections with their brands and to kind of imbue those brands with all kinds of associations about desires and senses of humor and things like that.

In Frankenstein, Shelley has Victor never give his creation a name. And I think this is really quite intentional. It’s awkward, I think, as a novelist to have a major character with no name. And it makes the writing harder. When referring to the creature, Shelley has Victor use a bunch of different substitutes for a name. He calls the creature a wretch, a demon, a monster, and many other terrible, insulting things.

Shelley goes to all this trouble, I think, because the lack of a name symbolizes in a really powerful way Victor’s rejection of the creature. He abandons it right after he brings it to life. He makes no attempt to care for it, to teach it, to help it acclimate to the world. In the novel the creature becomes violent and vengeful precisely because he’s rejected, first by Victor, then by other people, largely because he’s so large, so ugly. He’s scary-looking, right. His lack of a name brings home the idea that he’s barred and shunned from human society, and the pain of that exclusion is what turns him bad—he’s not born bad.

Which brings us to Splice. In this movie, on the other hand—you can start to see it here—Dren is socialized, educated, loved. Later in the film the scientists hide with her in a barn, where they create a sort of grotesque, Lynchian parody of a traditional ’50s suburban nuclear family. This movie has a kind of dark comedic underside to it, and it really comes out in this pastiche of nuclear family life.

And these aren’t perfect parents by a longshot. But they do try, and they care for Dren. They screw up a lot, but they try. And you can really see in this clip, of course, Sarah Polley’s character starting to really build a bond with this creature. And this is a really pivotal scene, because you can see in the conflict between the two scientists that this is the start of the creature transitioning from being a specimen to being a daughter. That name “specimen” really becomes this sticking point between the two of them.

But of course this ends in violent mayhem. This movie ends horribly, just like Frankenstein, with death and with a really shocking, brutal sexual assault, actually. Sarah Polley’s character ends up alone and despondent, just like Victor at the end of the novel. So we end up in the same place.

And so to go back to the novel, the lesson I drew from it is that Victor’s sin— This is one reading, anyway. That Victor’s sin wasn’t in being too ambitious, not necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.

Splice, on the other hand, has a very different ethical program. It has a very different lesson for us. It says that some lines shouldn’t be crossed. Some technologies are too dangerous to meddle with. It’s possible for scientists, these sort of well-meaning scientists who we kinda like and, you know, we like the actors, to fall victim to hubris. They can shoot too high. And even though they try their best, again the experiment ends in blood and sorrow. These people, these characters, do something truly groundbreaking and they fail to predict and understand the consequences of their actions. They avoid Victor’s mistake. They stick around and hold the creature close. But the unintended consequences of their actions are still catastrophic.

And as we’ve already started to talk about, we’re in a moment when these Frankensteinian technologies seem to be becoming more and more a reality. AI, genetic engineering, robotics, and geoengineering promise to make us healthier and more efficient, and even to help combat the existential threat of climate change.

But Splice warns us that even if we try to do these radically ambitious things right, and make an earnest effort to do them right, we might unleash terrible unintended consequences anyway. We might wipe out the economy. We might give rise to the robot uprising that everybody likes to reference in their Future Tense pieces. We might wreck our environment even faster. And for Splice it’s just not about how responsibly we do science or whether we stick around and care and love. It’s about the idea that some innovations are just a bridge too far.

And so to help me continue to explore this theme of unintended consequences, I would like to welcome our three expert panelists to the stage. First, Sam Arbesman is the Scientist in Residence at Lux Capital and the author of the book Overcomplicated: Technology at the Limits of Comprehension. Susan Tyler Hitchcock is the Senior Editor of books for the National Geographic Society and the author of the book Frankenstein: A Cultural History, which has been immensely helpful to me in understanding and untangling all of this. And Cara LaPointe is an engineer who has worked with autonomous systems for both science and defense applications, across development, fielding, operations, and policy development. Thank you so much for being here with me.

Joey Eschrich: So I’m sort of interested, whether you’re new to Frankenstein—relatively new, like Patric—or whether you’re kind of someone who’s lived and breathed Frankenstein your whole life, what got you interested in the first place? Susan, you have this entire very encyclopedic and helpful book about the Frankenstein phenomenon. Sam, your work with inventors and technology startups seems to me to be evocative of some of the themes of the story, these creators at the cusp of something new. And Cara, I’m guessing that there’s some connection between your work with autonomous systems and the autonomous systems that we see in the novel in the 19th century. So I’m interested to hear from each of you kind of what resonates with you, to start us off.

Susan Tyler Hitchcock: So, my fascination with Frankenstein goes back to my graduate—well no, really my education, my fascination with the literature of the Romantics, the British Romantics. They represent a time of culture wars as interesting as the ’60s, when I started my fascination with these characters and their literature.

And also today. I mean, there were a lot of amazing things happening in their day, and I began with an interest in Percy Bysshe Shelley. I ultimately taught humanities to engineering school students, and I had the great opportunity one Halloween day of teaching a class on Frankenstein. And for that class, I brought—I actually wore—a Halloween mask. A green, ugly, plastic Frankenstein mask. And we started talking about the difference between the novel and the current cultural interpretation. And that’s what started me. From that point on I started collecting Frankensteiniana. And I have hundreds of objects. And then I wrote a book.

Eschrich: We should’ve done this at your house. Sam, how about you?

Hitchcock: I have them hidden away.

Samuel Arbesman: So I guess my interest in the themes of Frankenstein, the themes of the societal implications of technology more generally, began through influences from my grandfather. My grandfather, he’s 99. He’s actually been reading science fiction since essentially the modern dawn of the genre. He read Dune when it was serialized, before it was actually a book. He gave me my first copy of the Foundation trilogy. And a lot of the science fiction that I’ve been especially drawn to is the kind that really tries to understand a lot of the societal implications of the gadgets, as opposed to just the gadgets of the future themselves.

And in my role at Lux— It’s a VC firm that does early-stage investments in, I guess, anything that’s at the frontier of science and technology. And so one of my roles there involves trying to connect groups of people that are not traditionally connected to the world of venture and startups. And related to that, when a lot of technologists and people in the world of Silicon Valley are building things, there’s often this kind of technoutopian sense of like, you build something, it’s this unalloyed good, it must be wonderful.

But of course there are often a lot of people who are thinking about the social, regulatory, ethical, and legal implications of all these different technologies. But they’re often in the world of academia, and they’re often not talking to the people who are building these things in the startup world. And so one of the things I’ve actually been doing is trying to connect these two different worlds together, to make sure that both parties are as engaged as possible.

And actually, even going back to the science fiction part. Since science fiction more holistically looks at a lot of the implications of these kinds of things, as opposed to just saying oh, the future is the following three gadgets and here’s what they’re going to be—science fiction is really good at saying, “Okay, here is a scenario. Let’s actually play it out”—I’ve been working to try to get science fiction writers involved in talking to the world of startups and really trying to make them think about these kinds of things. I don’t think I’ve actually gotten people involved in, like, explicitly Frankensteinian stories, but yes—

Eschrich: Everybody who gets money from you guys has to watch Splice before [inaudible].

Cara LaPointe: Well it’s interesting that Sam talks about this kind of holistic approach. So, I’m an autonomy systems engineer, but I’ve worked in developing systems, using systems, the policy implications. So I kind of come at autonomous systems from a lot of different angles. So what’s really interesting to me about the Frankenstein story is it really seems to delve into the idea of the ethics of creation. Should we or should we not create the technology?

But I think it was brought up in the first panel: when it comes to autonomy, when it comes to artificial intelligence, this technology is being developed. So I think it’s really more productive to think about, okay, what is the ethics of how, where, when, and why you’re going to use these types of technologies. Because I think someone said the genie’s out of the bottle, right. These things are being developed. So that’s what to me is very interesting, kind of moving from that conversation about creating to how these technologies are actually used.

The thing about autonomous systems is you start to move into a world where—we’ve used machines for a long time to do things, right. But now we’re starting to get to a place where machines can move into the cognitive space, in terms of the decision-making space. There’s a really interesting construct that we use in the defense world sometimes called the OODA loop—Observe, Orient, Decide, and Act. It’s just kind of a way to describe doing anything. So observing the world, you’re sensing the things around you. Orienting is kind of understanding what you’re sensing. And then deciding what you want to do to achieve whatever your purpose is. And then you act.

We’ve used machines for a long time to do the sensing. We have all kinds of cameras and other types of sensors. And we’ve used machines to act for us for a long time. But what’s really interesting with technology today is we’re on this cusp where these cognitive functions—machines can move into this cognitive space. So figuring out kind of where and when and how we want machines to move into the cognitive space, that’s what I think is really interesting. And I think even from very early on, Frankenstein was bringing up those ideas of when you bring something into that cognitive space. So that’s why I think it’s pretty fascinating.
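[The OODA loop LaPointe describes can be sketched as a simple control cycle. This is only an illustrative sketch; every name in it is invented for the example, and real autonomous systems implement each stage with sensors, models, planners, and actuators.]

```python
# Illustrative sketch of the OODA loop (Observe, Orient, Decide, Act).
# All function names here are invented for the example.

def observe(world):
    """Sense the raw state of the world."""
    return {"obstacle_ahead": world.get("obstacle_ahead", False)}

def orient(observation):
    """Interpret what the sensed data means."""
    return "blocked" if observation["obstacle_ahead"] else "clear"

def decide(situation):
    """Choose an action that serves the system's purpose."""
    return "turn" if situation == "blocked" else "forward"

def act(action):
    """Carry out the chosen action (here, just report it)."""
    return f"executing: {action}"

def ooda_step(world):
    # One pass through the full Observe -> Orient -> Decide -> Act cycle.
    return act(decide(orient(observe(world))))

print(ooda_step({"obstacle_ahead": True}))   # executing: turn
print(ooda_step({"obstacle_ahead": False}))  # executing: forward
```

[The point of the sketch is where autonomy enters: machines have long handled `observe` and `act`; what is new is handing them `orient` and `decide`.]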

Eschrich: So, Susan, I was hoping you could ground us in how people at Mary Shelley’s historical moment are thinking about unintended consequences. As Ed said, the word “scientist” isn’t even in use yet. But are there other ways people are thinking and talking about the ethics of creation and responsibility? And how is Shelley kind of building on the context that she’s in to create this theme in Frankenstein and develop it?

Hitchcock: Yeah, well there’s an interesting intersection between her legacy from her parents and the science going on. Her father I find a really important influence on the novel, because William Godwin was really— I think of him as being the father of our modern-day liberal concept that people aren’t evil, that bad actions come because people have been influenced by hatred, by anger, by negative outside influences. That is, that evil is made, not born. And I think that really carries through. It’s as if Mary Shelley wanted to animate that philosophy of her father’s.

But at the same time, there are these fascinating experiments going on at the time. Galvani, the whole idea of the spark of life, what is the spark of life, in these amazing experiments. Not only with frogs, which is sort of the famous one, but even with corpses. Introducing electrical stimuli to bodies and making them move, making the eyes of a corpse open up, making it sit up, that sort of thing. Those things were being done at the time, and they were kind of like sideshow events that the public would go to.

So there was a lot of science happening that opened up a lot of questions of should we really be doing this, and that is a lot of the inspiration behind Frankenstein as well. You don’t really see that happening in the novel, but it’s so interesting that instantly the retellings of the story bring electricity in as “the spark of life.”

Eschrich: So, that point about social context and those sort of social constructionist beliefs of William Godwin is really appropriate, I think, and also something that her mother Mary Wollstonecraft was very adamant about. She wrote a lot about women’s education and the idea that the way that women were educated socialized them to be submissive, and sort of…she called them “intellectually malformed” and things like that. This idea that they were kind of violently socialized away from being intellectuals and citizens and full members of society.

Both Sam and Cara, I think you both have some interaction, Sam through your book and through Lux, and Cara through your engineering work, with systems that learn and adapt. Systems that work in a social context and have to solve problems in complex ways. So, this sort of social constructionist thinking, this idea that the social context for the operation of these technologies actually affects the way they work and what they become… How do we kind of react to that in this moment?

Samuel Arbesman: One of the clear examples of this kind of thing is artificial intelligence and machine learning—especially deep learning; we’re having a moment of deep learning right now. With these systems, even though the algorithms for how they learn are well understood, once you kind of pour a whole bunch of data into them, the resulting system might actually be very powerful, it might be very predictive. It can identify objects in images, or help cars drive by themselves, or do cool things with voice recognition. But how they actually work—kind of the underlying components and the actual threads within the networks—is not always entirely understood. And oftentimes because of that, there are moments when the creators are surprised by their behavior.

So we were talking about this earlier: there’s the Microsoft chatbot Tay, from I guess a little more than a year ago, which was designed to be a teenage girl and ended up being a white supremacist. It was because there was this mismatch between the data that they thought the system was going to get and what it actually did get. The socialization in this case was wrong. And you can actually see this also in situations with IBM Watson, where the engineers who were involved in Watson wanted the system to better understand slang, just kind of everyday language. And so in order to teach it that, they kind of poured in Urban Dictionary. And then it ended up just cursing out its creators. And that was also not intended.
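[The mismatch Arbesman describes—a system whose behavior is shaped by whatever data it happens to absorb—can be shown with a toy imitation learner. This is a deliberately crude sketch, not how Tay or Watson actually worked: the point is only that identical code yields very different behavior depending on its “socialization” data.]

```python
from collections import Counter

class ImitationBot:
    """Toy bot that replies with the phrase it has seen most often.

    Its 'personality' is nothing but the data it was exposed to, so
    unexpected training data produces unexpected behavior -- the
    creators' intent never enters into it.
    """
    def __init__(self):
        self.phrases = Counter()

    def absorb(self, messages):
        """Socialization step: tally every phrase the bot is exposed to."""
        self.phrases.update(messages)

    def reply(self):
        """Imitate: return the most frequently seen phrase."""
        if not self.phrases:
            return "..."
        return self.phrases.most_common(1)[0][0]

bot = ImitationBot()
bot.absorb(["have a nice day", "have a nice day", "hello"])
print(bot.reply())  # have a nice day

# The same code, exposed to a hostile environment, behaves differently:
bot.absorb(["you are terrible"] * 5)
print(bot.reply())  # you are terrible
```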

And so I think there’s a lot of these kinds of things of recognizing that the environment that you expose a system to, and the way it kind of assimilates that, is going to affect its behavior. And sometimes you only discover that when you actually interact with it. And so I think that’s kind of this iterative process of— As opposed to in Frankenstein, where it’s like you build the thing, build the creature, hopefully it’s perfect. Oh no, it sucks. I kind of give up and run away.

I think in technology, ideally, there’s this iterative process of understanding. You build something. You learn from it. You actually kind of find out that there’s a mismatch between how you thought it was going to work and how it actually does work, embodied by glitches and failures and bugs. And then you debug it and make it better. So rather than kind of just viewing it as we fully understand it or we never understand it, there’s this constant learning process, and socialization is kind of really making sure you have the right environment so it gets as close as possible to the thing you actually want it to be.

Hitchcock: There’s a lack of knowledge, though, of what the forces are that you’re putting onto— Whether it’s the creature or the systems. You know, maybe we don’t have the capability of fully understanding, or fully knowing. Like pouring the Urban Dictionary in. They didn’t know what influences they were making on the thing.

Arbesman: And actually related to that, there’s this idea from physics. It’s a term used when looking at a complex technological system, or complex systems in general: robust yet fragile. The idea is that when you build a system, it’s often extremely robust to all the different eventualities that you’ve planned in, but it can be incredibly fragile to pretty much anything you didn’t think about. And so there are all these different exceptions and edge cases that you’ve built in and you’re really proud of handling them, and suddenly there’s some tiny little thing that just makes the entire thing cascade and fall apart. And so yeah, you have to be very wary of recognizing the limits of how you actually designed it.

LaPointe: I think it’s really interesting to think about the system. We’re using the word “system” to talk about a machine that’s being created. When I think of “system,” I actually think of the interaction between machines and people. Time and time again in history, technology comes in, innovative emerging technologies come in, and actually change the fabric of our lives. Think of the whole Industrial Revolution, right. I live thirty miles outside of DC, but I can drive in every day. I mean, that would be unheard of centuries ago.

But then think of the personal computer, think of the Internet. You actually live your life differently because of these technologies. And so we’re on the cusp of the same kind of social change when it comes to autonomous systems. Autonomy is going to change the fabric of our lives. I don’t know what it’s going to look like. But I can tell you it is going to change the fabric of our lives over the coming decades. So it’s interesting when you’re talking about a system to understand that there isn’t just one way—it’s not just how we’re teaching a machine, teaching a system. You’ve got to understand how we collectively, as a system, evolve. And so I think that’s just an interesting way to frame it as you move forward talking about these types of technologies.

Hitchcock: What do you mean when you say autonomy is going to be shaping our future? What is autonomy, that you’re talking about?

LaPointe: So, autonomy— You know what, there is no common definition of autonomy. Many days of my life have been spent in the debate about what autonomy and autonomous mean. So you know, at some point you just move beyond that. But autonomy is when you start to get machines to move into the cognitive space. Machines can start making decisions about how they’re going to act.

So the example I love to use, because a lot of people have had them or seen them: the Roomba vacuums, right? I got a Roomba—I love it. But it’s funny, because when you think of a traditional vacuum, you have a traditional vacuum, you turn it on, and what’s it doing? Its job is to suck up the dirt, right. And you move it and decide where it’s going to go. Okay well, a Roomba, what’s its job? Its job is to suck up dirt and clean the floor. But it now decides the pattern it’s going to follow around your room, or around whatever the set space is, to clean it. So autonomy is when you start to look at machines getting into the decision space…

And I think one of the things that we really need to address and figure out as these machines come in— And it’s much more than just a technical challenge, it’s all these other things we’re talking about— …is how do we trust these systems? You trust somebody when you can rely on them to be predictable, right. And we have this kind of intuitive trust of other people, and we know that they’re not going to be perfect all the time. We have kind of this understanding, and your understanding of what a toddler’s going to do is different from what a teenager’s going to do, is different from what an adult’s going to do. So you have kind of this intuitive knowledge.

So as we’re developing these autonomous systems that can act in different ways, it’s really important for us to also spend a lot of time developing and understanding what this trust framework is for systems. So as Sam was saying, when you have an autonomous system—when I turn that Roomba on, I don’t know the path it’s going to take around the room. I don’t know if it goes straight or goes left or does a little circle. I have three kids and a dog, so it does a lot of the little circles where it finds those dirt patches, right. I don’t know, just looking at it instantaneously, if it’s doing the right thing. I have to kind of wait to see, once it’s done its job, if it did the right thing. So figuring out how you really trust systems, and test and evaluate systems, is going to be fundamentally different with autonomous systems, and this to me is one of the real challenges that we are facing as a society.

So think about autonomy in self-driving cars. A lot of people like to talk about self-driving cars. And this is a technology that is developing apace. Well, what are the challenges? The challenges are how do you integrate these into the existing human system we already have? How do you trust the self-driving cars? I mean, if there’s ever one accident, does that mean you don’t trust all self-driving cars? I know a lot of drivers who’ve had an accident, and they’re still trusted to drive around, right. But you know, we don’t have that same level of intuitive understanding of what is predictable and reliable.

Arbesman: And to relate to that, within machine learning— Going back to what I mentioned about how these systems are making these somewhat esoteric decisions that work, but we’re not always entirely sure why they’re making them, and that makes it difficult to trust them. And so there’s been this movement of trying to create more explainable AI, actually kind of gaining a window into the decision-making process of these systems.

And so related to the self-driving cars, it’s one thing when— We have a pretty decent intuitive sense of, like, when I meet someone at an intersection, how they’re going to kind of interact with me, my car versus their car. They’re not entirely rational, but I kind of have a sense. But if it’s a self-driving car, I’m not really entirely sure of the kind of decision-making process that’s going on. And so if we can create certain types of windows into understanding that decision-making process, that’s really important.

And I think back in terms of the history of technology. The first computer my family had was the Commodore VIC-20. And I guess William Shatner called it the wonder computer of the 1980s. He was the pitchman for it. I was too young to program at the time, but one of the ways you would get programs is you had these things called type-ins. You would actually just get a magazine and there would be code there, and you would just type the actual code in.

And so even though I didn’t know how to program, I could see this clear relationship between the text and what the computer was doing. And now we have these really powerful technologies, but I no longer have that connection. There’s a certain distance between them, and I think we need to find ways of creating sort of a gateway into kind of peeking under the hood. And I’m not entirely sure what those things would be. It could be a symbol, maybe just like a progress bar—although I guess those are only tenuously connected to reality. But we need more of those kinds of things in order to create that sort of trust.

Eschrich: Yeah, it seems to me that the ruling aesthetic is magic, right. To say oh, you know, Netflix, it works according to magic. The iPhone—so much of what happens is under the hood, and it’s sort of for your protection. You don’t need to worry about it. But I think we’re realizing, especially with something like cybersecurity, which is a big unintended consequences problem, right—we offload everything onto the Internet to become more efficient, and suddenly everything seems at risk and insecure, in a way. We’re realizing we might need to know a little bit more about how this stuff actually works. Maybe magic isn’t good enough all the time.

Arbesman: And one of the few times you actually learn about how something works is when it goes wrong. Sometimes the only way to learn about a system is through failure. And you’re like, oh! It’s like, I don’t know, the chatbot Tay becoming racist. Now we actually realize it was assimilating data in ways we didn’t expect. And yeah, these kinds of bugs are actually teaching us something.

Hitchcock: Which brings us back to Frankenstein.

Arbesman: Yes.

Eschrich: Thank you. Thank you so much.

Hitchcock: Because Victor was so fascinated and excited and proud and delighted with what he was doing. And then when he saw what he had done, it’s like…checking out. Horrible. End of his fascination and delight. And beginning of his downfall.

Eschrich: I want­ed to say, and I’m going to kind of prompt you, Sam. That Frankenstein’s very…haughty about his real­ly— You know, I think you can read it psy­cho­log­i­cal­ly as a defense mech­a­nism. But he’s so haughty lat­er about the crea­ture. He’s very dis­dain­ful of it, he sort of dis­tances him­self from it. All the unin­tend­ed con­se­quences it caus­es, he sort of works real­ly hard to con­vince his lis­ten­ers and the read­er that he’s not respon­si­ble for that. As if not think­ing ahead some­how absolves him.

But Sam, in your book Overcomplicated, you talk a bit about this con­cept of humil­i­ty, which dates all the way back to the Medieval Period. And I feel like the con­ver­sa­tion we’ve been hav­ing reminds me of that con­cept. Talking about how to live with this com­plex­i­ty in a way that’s not scorn­ful, but that’s also not kind of mys­ti­fied and help­less.

Arbesman: Yeah, so when I was writing about humility in the face of technology, I was contrasting it with two extremes which we often tend towards when we're confronted with technology we don't fully understand. So, one is fear in the face of the unknown, and we're like oh my god, self-driving cars are going to kill us all, the robots are going to rise up.

And the oth­er extreme is kind of like the mag­ic of Netflix or the beau­ti­ful mind of Google. This almost like reli­gious rev­er­en­tial sense of awe. Like the­se things are beau­ti­ful, they must be per­fect… And of course, they’re not per­fect. They’re built by imper­fect beings. Humans.

And the­se two extremes, the down­side of both of the­se is they end up cut­ting off ques­tion­ing. When we’re so fear­ful that we can’t real­ly process the sys­tems that we’re deal­ing with, we don’t actu­al­ly try to under­stand them. And the same thing, if we think the sys­tem is per­fect and won­der­ful and wor­thy of our awe, we also don’t query. And so humil­i­ty, I’ve kind of used that as like the sort of halfway point, which actu­al­ly is pro­duc­tive. It actu­al­ly ends up allow­ing us to try to query our sys­tem, but rec­og­nize there are going to be lim­its.

And so going back to the Medieval thing, I kind of bring in this idea from Maimonides, the 12th century philosopher/physician/rabbi. And in one of his books, The Guide for the Perplexed, he wrote about how there are clear limits to what we can understand, and that's fine. He had made his peace with it. And I think in later centuries there was a sort of scientific triumphalism that if we apply our minds to the world around us, we'll understand everything.

And in many ways we've actually been extremely successful. Which is great. But I think we are recognizing that there are certain limits, there are certain things we're not going to be able to understand. And I think we need to import that into the technological realm and recognize that even with the systems we ourselves have built, there are certain cases where— And it's one thing to say, “Okay, I don't understand the iPhone in my pocket.” But if no one understands the iPhone completely, including the people who created it and work with it on a daily basis, that's an interesting sort of thing.

And I think this humility is powerful in the sense that it's better to work with that and recognize our limits from the outset, so that we can build upon them and constantly try to increase our understanding, but recognize that we might not ever fully understand a system. As opposed to thinking we are going to fully understand it, and then being blindsided by all of these unintended consequences.

Eschrich: So Susan, I'm going to query you on this first. Because I feel like the other two are going to have stuff to say too, but I want to get the Frankenstein angle on it. So what do we do? Like, should we… How do we— What does Frankenstein tell us about how we prepare for unintended consequences, since they're inevitable, clearly. Like, we're innovating and discovering very quickly, things are changing quickly. Should we ask scientists and engineers to regulate themselves? Should we create rigid laws? Do researchers need more flexible norms that they agree— You know, what does Frankenstein— What does this modern myth we're constantly using to frame these debates have to say about them?

Hitchcock: What does the—oh, gosh. 

Eschrich: I have in my mind some­thing that you said.

Hitchcock: You do? Maybe you should say it, because— 

Eschrich: You said something about— Well, I want to prompt you. So, you said, when we were talking in advance and I was picking your brains about this, something about how secretive Victor is. About how he removes himself from his colleagues.

Hitchcock: Well, it’s true. Yes, indeed. Victor is rep­re­sen­ta­tive of a sci­en­tist who works in secret, all by him­self, does not share. And as a mat­ter of fact even the James Whale film, it’s the same thing. I mean, Victor goes up into a tow­er and he locks the door, and his beloved Elizabeth has to knock on the door to ever see him. I mean, it is per­pet­u­at­ed in the retelling of Frankenstein, this whole idea of a sci­ence that is solo and not shared.

And you know, thanks for the prompt, because maybe that's a good idea: that we share the science, that we talk about it. And I think sharing it not only with other scientists but also with philosophers, psychologists, humanists. You know, people who think of— And bioethicists. People who think about these questions from different vantage points, and talk about them as the science is being developed. That is what human beings could do, I think. That's about the best we could do.

LaPointe: I think this idea of sharing is really critical. So, from the kind of developer/operator perspective—and I come from a Navy background, a military background—it's really important that you get the people who are developing systems talking to the people who are using systems, right. We get into trouble when people have an idea of “Oh, this is what somebody would want,” and they go off and develop it in isolation, right. Maybe not secret, but in isolation; there are a lot of stovepipes in large organizations. And it's really important to create these robust feedback loops.

And we have this saying in the Navy that sailors can break any system, so whenever you build something you want to make it sailor-proof, right. But it's really a fabulous thing to take a new idea, take a new design, take a prototype, and give it to sailors. Because there's nothing like 19- and 20-year-olds to completely take apart what you just gave them, tell you all the reasons you thought it was going to be great are completely stupid and useless, and tell you the thousand other things you can do with a system.

So I think this idea of sharing in the development— So, sharing in terms of talking to the people who are operators, talking to the people who are the infrastructure developers, right. Going back to the self-driving cars, think about how we interact with the driving infrastructure. When you come to a stoplight, what are you doing? You are visually looking at a stoplight that will tell you to stop or go. Do you think that is the best way to interact with a computer? That's really, really hard. It's really, really hard for a computer to look and visually see a different-colored light and take from that the instruction of whether to stop or go.

So you have to include the people who are developing the infrastructure, and include the policymakers, include the ethicists. I mean, you have to bring—back to this holistic idea—you have to bring everybody in as you're developing technology, to make sure you're developing it in a way that works, in a way that's useful, in a way that's going to actually be the right way to go with the technology. And I think that's a really good example from Frankenstein: he's solo, designing something that to him is brilliant, and maybe if he had stopped and talked to anybody about it they would've said, “Hey, maybe that's not the most brilliant idea in the world.”

Arbesman: Yeah, and related to this, in the open source software movement there's this maxim that “given enough eyeballs, all bugs are shallow.” The idea that if enough people are working on something, then all bugs are going to be rooted out and discovered. Which is not entirely true. There are bugs that can actually be quite serious and last for a decade or more.

But by and large you want more peo­ple to actu­al­ly be look­ing at the tech­nol­o­gy— And also going back to the robust yet frag­ile idea, that you want to make it as robust as pos­si­ble, and to do that you need as many peo­ple involved to deal with all the dif­fer­ent even­tu­al­i­ties. But you also just need the dif­fer­ent kinds of…like, dif­fer­ent peo­ple from dif­fer­ent mod­es of think­ing, to real­ly try to under­stand. Not just to make the sys­tem as robust as pos­si­ble but real­ly as well thought-out as pos­si­ble. And I think that’s a real­ly impor­tant thing.

LaPointe: Kind of crowdsourcing your development. If you think about what's going on with self-driving cars, one of the most important things happening today that's going to feed into that is actually just the autonomous features in other cars that are being deployed, and all this information-gathering—because there are so many people out there and so many cars out there with autonomy algorithms in them.

And little things—there are lots of cars today that help you park, help you do all these other things. They help you stay in the lane, right. And those can all have unintended consequences. But you learn from that, and the more widely you're testing, this kind of incremental approach— You know, I like to say “revolution through evolution,” right. You build a little, test a little, learn a lot. And I think that's a really good way to try to prevent unintended consequences.

So instead of just talk­ing about man­ag­ing unin­tend­ed con­se­quences when they hap­pen, try to bring as many peo­ple as you can in from dif­fer­ent fields and try to think through what could be pos­si­ble con­se­quences, and try to mit­i­gate them along the way.

Arbesman: And related to the process of science more broadly: in science people have recently been talking a lot about the reproducibility crisis, and the fact that there's certain scientific research that can't be reproduced. And I think that really speaks to the importance of opening science up: actually making sure we can share data, really seeing the entire process, putting your computer code online to allow people to reproduce all these different things, and allowing people to partake in the wonderful messiness that is science as opposed to trying to sweep it under the rug. And I think that's really important, to make sure that everyone is involved in that kind of thing.

Eschrich: So we have time for one more quick ques­tion. I actu­al­ly want to address it to you, Susan, at least first. And hope­ful­ly we’ll get a quick answer so we can go to ques­tions and answers from every­body else.

Listening to you all talk about diversifying this conversation and engaging non-specialists, it strikes me that one irony there is that Frankenstein itself—this poisonous framing of Frankenstein as “don't innovate too far; disastrous outcomes might happen; we might transgress the bounds of acceptable human ambition”—is actually a roadblock to having a constructive conversation, in a way, right. All of these themes that we're talking about today, of unintended consequences and playing God, are in fact difficult for people to grapple with in big groups. I wonder if you have any thoughts about that, Susan. Other ways to think of the novel, maybe, or recode it for people.

Hitchcock: Well, yeah. You know, I think that culture has done the novel a disservice. Because I actually think that the novel doesn't end— The novel does not end with everybody dead. Nor does Splice, by the way.

Eschrich: There’s a cou­ple peo­ple alive at the end of Splice. [crosstalk] And of course the com­pa­ny is mas­sive­ly pop­u­lar.

Hitchcock: Oh, there’s also a preg­nant wom­an at the end.

Eschrich: That is true.

Hitchcock: Uh huh, that's what I'm thinking about. So, Frankenstein ends with the monster, the creature—whatever we want to call him, good or bad—going off into the distance and potentially living forever. And also, Victor Frankenstein, yes indeed, he is saying, “I shouldn't have done that.” And Walton, who is our narrator, who's been going off to the North Pole, indeed listens to him, still wants to go to the North Pole, but his crew says, “No no, we want to go home. We're too cold—”

Eschrich: They’re wor­ried they’re going to die.

Hitchcock: Yeah.

Eschrich: Yeah.

Hitchcock: I know. But there are still the­se fig­ures in the nov­el, both the crea­ture and Walton to some extent, who are still quest­ing. [crosstalk] Still quest­ing.

Eschrich: They have moral agen­cy, to some extent.

Hitchcock: Yeah. And I don’t know why I got onto that from your ques­tion, but—

Eschrich: You refuse to see the nov­el as pure­ly black at the end, I think.

Hitchcock: Yeah. Oh, I know. I was going to say culture had done it a disservice because I think culture has simplified the story to say science is bad, pushing the limits is bad. This is a bad guy, and he shouldn't have done it. And I don't think that it is that simple, frankly. In the novel or today, for that matter.


Joey Eschrich: Alright. Well, I am going to ask if anybody out here has questions for any of our panelists.

Audience 1: Thank you very much for a great discussion. I'm curious about what segment of society really wants the self-driving cars. And one of the concerns is that there'll be a lethargy that will come upon the rider, perhaps, or the one who's in the car and such and not really ready to—

Let's say your Roomba couldn't get through somewhere—it would stall, and you'd have to interact with it to reset it or something. So I'm just wondering, in a self-driving car, if you're not going to have to do anything, then you're maybe not going to be aware of what's really going on around you. So, Musk is the one who started the whole idea, and yet is it going to target just a certain segment of society, as opposed to, you know, everyone has to be in a self-driving car?

Cara LaPointe: Well, I'm not going to speak to who wants them; I think a lot of people have been driving the self-driving cars. But your idea about people who were formerly driving and are now the passengers—I think this is actually a really important issue with autonomous systems: one of the most dangerous parts of any autonomous system is the handoff. The handoff of control between a machine and a person. And it doesn't matter whether you're talking about cars or other systems—a plane going from autopilot to pilot is a perfect example. It's that lack of full situational awareness; when you have this handoff, that's a really dangerous time for any system.

So I think this is one of the challenges, and that's why when I define the system, I don't think you can just define the machine, right. You have to define the system in terms of how the machine and the person are going to work together.

Audience 1: We as humans don't have that capacity of putting off [inaudible]. We're not going to [inaudible] ask the machine to figure out. That's the consequence of that. So I don't think we humans are wired at that level to understand how they all fuse together and what consequence results from it.

LaPointe: Well, I think cognitive load is a really big issue for engineers as well. Just think about it—we live in an age of so much information, right. How much information can a person process? And frankly, you have data. There's tons of data. You have so many sensors, you can bring in so much data—how do you take that data and get the knowledge out of it, and turn it into information? And I think part of the art of some of this is how you take so much data, turn it into information, and deliver it to the human part of a system—or even the machine part of a system. The right information at the right time to make the overall system successful.

Samuel Arbesman: And related to that, there's the computer scientist Danny Hillis. He's argued that we were living in the Enlightenment, when we applied our brains to understand the world around us, and that we've moved from the Enlightenment to the Entanglement: this era where everything is hopelessly interconnected and we're no longer fully going to understand it. And I think to a certain degree we've actually been in that world already for some time. It's not just that self-driving cars are going to herald this new era. We're already there, and I think the question is how to actually be conscious of it and try our best to make sure we're getting the relevant information and constantly, iteratively trying to understand our systems as best we can.

And I think that goes back to thinking about what understanding means for these systems. It's not a binary situation—it's not either complete understanding or total ignorance and mystery. There's a spectrum: you can understand certain components, you can understand the lay of the land without understanding all of the details. And I think our goal when we design these technologies is to make sure that we have the ability to move along that spectrum towards greater understanding, even if we never get all the way there. I think that's in many cases fine.

Eschrich: I'd like us to move on to our next question.

Audience 2: I want to dig in a little on the dialogue that we may all agree it would be a good idea to involve more people in at the start of conceiving of these technologies. And ideally, I think we might agree that some public morality would be a good element to include. But say hypothetically we lived in a society where practically we're not really good at having conversations among the public that are thorny and especially that include technical details. I mean, just say that that happened to be the case.

And I just want to clarify, is the value of broad public consensus on input, or is the value more on having a diversity of representative thought process? And if the value's on something like openness and transparency, that might have a different infrastructure of feedback whereas if it's on something about diversity of thought, you might think of a sort of council where you have a philosopher and a humanist and whatever. So I think oftentimes we end up saying something like, "We should have a broad conversation about this and that's how we'll move forward," but sort of digging in on what that might actually look like and how to get the best value in our current society.

Eschrich: Thank you for that question. I'm just going to ask that we keep our responses quick just so we can take one more really quick question before we wrap up.

Susan Tyler Hitchcock: They're not mutually exclusive.

Eschrich: Oh look at that. A quick response. Either of you want to add anything?

Arbesman: So one thing I would say is…this is maybe a little bit to the side of it. But people have actually looked at what happens when you bring in a real diversity of opinions when it comes to innovation, and oftentimes the more diverse the opinions, the lower the average value of the output but the higher the variance.

So the idea is like, for the most part when you bring lots of people together who might speak lots of different languages and jargons, it often fails. But when it does succeed it succeeds in a spectacular fashion in a way it wouldn't have otherwise. And so I think we should aim towards that but recognize that sometimes these conversations involve a lot of people talking past each other and so we need to do our best to make sure that doesn't happen.

LaPointe: But I think specifically to making sure you bring diverse voices from different segments of society and different backgrounds into the conversations is really important. I always like to tell people autonomy, autonomous systems, it's not a technical problem. It's not like I can put a bunch of engineers in a room for a couple of months and they could solve it. There are all these other aspects to it. So you need to make sure you bring all the other people. You bring the lawyers, you bring the ethicists, you bring everybody else. You know, the users, all the different people. So I think you just have to be very thoughtful whenever you are looking at developing a technology to bring all those voices in at an early stage.

Eschrich: Okay. One more very quick question.

Tad Daley: Yeah, thanks. I'm Tad Daley. It's for you, Cara. In the last panel, Nancy Kress I thought made a very complex, sophisticated argument about genetic engineering. It has great benefits, also enormous risks. I think Nancy said gene editing, some aspects of that are illegal. But then Nancy said but of course you can go offshore.

So I want to ask you to address those same things, Cara, about autonomous systems. I think you've made clear that they have both risks as well as great benefits. Do you think it ought to be regulated at all, and if so who should do the regulating, given that if Country A does some regulation, in our globalized world it's the easiest thing in the world to go to Country B?

LaPointe: I think it's a great question and something that we internally talk a lot about. I think the thing about autonomy to understand is that every… Autonomy is ultimately software, right. It is software that you're putting into hardware systems that helps move into this cognitive decision-making space. Now, every piece of autonomy that you develop, this software you develop, it's dual-use.

So that was my earlier point in terms of I don't think it's really useful to talk about should you regulate development, because autonomy is being developed for a lot of different things. So what you really need to think about is okay, this technology is being developed so how, where, when should the technology be used? I think those are the useful conversations to have in terms of how it's regulated, etc. You know, where is autonomy allowed to be used, where it's not allowed to be used. But the idea that you could somehow regulate the development of autonomy I just don't think is feasible or realistic.

Eschrich: Okay. I, with a heavy heart, have to say that we're out of time. We will all be around during the happy hour afterwards, so we'd love to keep talking to you and answering your questions and hearing what you have to say. And thank you to all of you for being up here with me and for sharing your thoughts with us.

And to wrap up I'd like to introduce our next presenter Jacob Brogan, who is an editorial fellow here at New America. And Jacob also writes brilliantly about technology and culture for Slate magazine. And he's here to talk to you about a fantastic Frankenstein adaptation.

Further Reference

The Spawn of Frankenstein event page at New America, recap at Slate Future Tense, and Futurography's series on Frankenstein
