Joey Eschrich: Thank you Ed and thank you to the first panel for setting a high bar and kicking us off and making me nervous. So I'm here to talk to you about a really bad Frankenstein adaptation that I love, Splice. Has anybody seen this movie? Show of hands. Ah, there's like six or seven Spliceheads in here. Very exciting.
Okay. Splice is a science fiction/horror hybrid. It was released in 2009. And the film follows the efforts of a married couple of genetic engineers played by Adrien Brody and the very talented Sarah Polley, who work for a big pharmaceutical company, and their job is to create genetic hybrid creatures for medical applications. They start off creating these kind of worm-like beings. But they're not satisfied with that, and so they decide to splice human DNA into the mix. And they're hoping in a kind of Victor Frankenstein-y, hand-wavy way to, like, revolutionize the human race, right. Like they want to create an organism that will produce genetic material that could cure cancer, that could cure Parkinson's, that would, you know, in some again very hand-wavy way just solve all the problems that we have.
And you know, they end up creating something sentient and it's kind of cute in a creepy squid-like way. And so they decide to raise it in secret, of course. Because as Nancy said, something horrible has to happen right off the bat or else you don't have a story. So Splice is a modern day Frankenstein story. And for those of you who are sort of science fiction and horror heads, it's crossed with the gruesome biohorror of classic science fiction movies Alien and The Fly.
It's also frothy and overwrought. It's a little nuts. It goes totally off the rails near the end. And that messiness is precisely why I love it so much. I think it, and bad movies like it—bad but kinda smart movies like it—tell us a lot about the moment we live in. And in this case I think about the sense of distrust and paranoia we have about biotechnology and these other Frankensteinian technologies like AI and geoengineering and things like that in this moment, as we've started to talk about already, of great possibility and perhaps great peril as well.
So in adapting Frankenstein to this contemporary moment of actual human/pig hybrids, for those of you who have been reading your science and tech news this week, with designer babies—as Nancy talked about—on the horizon, the filmmakers behind Splice make important decisions about which elements of Shelley's novel to carry through and which to transform or leave out. You know, just like any adapters of a myth or well-worn story, they want to tailor it to their own social and in this case technological moment.
And my basic premise is these decisions are really meaningful. And in this case they shape the themes and ethical messages of the film, and they shape the ways that it departs from its source material. And so today I want to talk about one really important departure that Splice makes from Shelley's novel as a way to kind of set up this panel. My panel is about unintended consequences.
So without further ado, here's a brief clip. And this is happening when the creature, which is developing at a vastly accelerated rate, is very young.
[clip was excluded from recording]
Rush out and see it, seriously.
So, names are really important. And giving something a name, whether it's a human or a child or a pet or like, your car—right, an inanimate object—it lets us imbue it with a personality. To see it as an independent being, with goals, and emotions, deserving of our attention and affection. It's no surprise that so many of our friendly technology conglomerates these days are creating virtual assistants that have names and personalities. They're encouraging us to build emotional connections with their brands and to kind of imbue those brands with all kinds of associations about desires and senses of humor and things like that.
In Frankenstein, Shelley very intentionally has Victor never give his creation a name. It's awkward, I think, as a novelist to have a major character with no name, and it makes the writing harder. When referring to the creature, Shelley has Victor use a bunch of different substitutes for a name. He calls the creature a wretch, a demon, a monster, and many other terrible, insulting things.
Shelley goes to all this trouble, I think, because the lack of a name symbolizes in a really powerful way Victor's rejection of the creature. He abandons it right after he brings it to life. He makes no attempt to care for it, to teach it, to help it acclimate to the world. In the novel the creature becomes violent and vengeful, precisely because he's rejected, first by Victor then by other people, largely because he's so large, so ugly. He's scary-looking, right. His lack of a name brings home the idea that he's barred and shunned from human society, and the pain of that exclusion is what turns him bad—he's not born bad.
Which brings us to Splice. And on the other hand, in this movie—you can start to see it here—Dren is socialized, educated, loved. Later in the film the scientists hide with her in a barn, where they create a sort of grotesque, Lynchian parody of a traditional 50s suburban nuclear family. This movie has a kind of dark comedic underside to it and it really comes out in this pastiche of nuclear family life.
And these aren't perfect parents by a longshot. But they do try, and they care for Dren. They screw up a lot but they try. And you can really see of course, in this clip Sarah Polley's character starting to really build a bond with this creature. And this is a really pivotal scene, because you can see in the conflict between the two scientists that this is the start of the creature transitioning from being a specimen to being a daughter. That name "specimen" really becomes this sticking point between the two of them.
But of course this ends in violent mayhem. This movie ends horribly, just like Frankenstein, with death and with a really shocking, brutal sexual assault, actually. Sarah Polley's character ends up alone and despondent, just like Victor at the end of the novel. So we end up in the same place.
And so to go back to the novel, the lesson I drew from it is that Victor's sin— This is one reading, anyway. That Victor's sin wasn't in being too ambitious, not necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.
Splice on the other hand has this just very different ethical program. It has a very different lesson for us. It says that some lines shouldn't be crossed. Some technologies are too dangerous to meddle with. It's possible for scientists, these sort of well-meaning scientists who we kinda like (and you know, we like the actors), to fall victim to hubris. They can shoot too high. And even though they try their best, again the experiment ends in blood and sorrow. These people, these characters, do something truly groundbreaking and they fail to predict and understand the consequences of their actions. They avoid Victor's mistake. They stick around and hold the creature close. But the unintended consequences of their actions are still catastrophic.
And as we've already started to talk about, we're in a moment when these Frankensteinian technologies seem to be becoming more and more of a reality. AI, genetic engineering, robotics, and geoengineering promise to make us healthier and more efficient, and even to help combat the existential threat of climate change.
But Splice warns us that even if we try to do these radically ambitious things right, even if we make an earnest effort to do them right, we might unleash terrible unintended consequences anyway. We might wipe out the economy. We might give rise to the robot uprising that everybody likes to reference in their Future Tense pieces. We might wreck our environment even faster. And for Splice it's just not about how responsibly we do science or whether we stick around and care and love. It's about the idea that some innovations are just a bridge too far.
And so to help me continue to explore this theme of unintended consequences, I would like to welcome our three expert panelists to the stage. First, Sam Arbesman is the Scientist in Residence at Lux Capital, and the author of the book Overcomplicated: Technology at the Limits of Comprehension. Susan Tyler Hitchcock is the Senior Editor of books for the National Geographic Society and the author of the book Frankenstein: A Cultural History, which has just been immensely helpful to me in understanding and untangling all of this. And Cara LaPointe is an engineer who has worked with autonomous systems for both science and defense applications, across development, fielding, operations, and policy. Thank you so much for being here with me.
Joey Eschrich: So I’m sort of interested, whether you’re new to Frankenstein, relatively new like Patric, or whether you are kind of like someone who’s lived and breathed Frankenstein your whole life, what got you interested in the first place? Susan, you have this entire very encyclopedic and helpful book about the Frankenstein phenomenon. Sam, your work with inventors and technology startups seems to me to be evocative of some of the themes of the story, these creators at the cusp of something new. And Cara, I’m guessing that there’s some connection between your work with autonomous systems and the autonomous systems that we see in the novel in the 19th century. So I’m interested to hear from each of you kind of what resonates with you to start us off.
Susan Tyler Hitchcock: So, my fascination with Frankenstein goes back to my graduate—well no, really my education, my fascination with the literature of the Romantics, the British Romantics. They represent a time of culture wars as interesting as the 60s, when I started my fascination with these characters and their literature.
And also today. I mean, there were a lot of amazing things happening in their day, and I began with an interest in Percy Bysshe Shelley. I ultimately taught humanities to engineering school students, and I had the great opportunity Halloween day of teaching a class on Frankenstein. And for that class, I brought—I actually wore—a Halloween mask. Green, ugly, plastic Frankenstein mask. And we started talking about the difference between the novel and the current cultural interpretation. And that’s what started me. From that point on I started collecting Frankensteiniana. And I have hundreds of objects. And then I wrote a book.
Eschrich: We should’ve done this at your house. Sam, how about you?
Hitchcock: I have them hidden away.
Samuel Arbesman: So I guess my interest in the themes of Frankenstein, the themes of I guess societal implications of technology more generally, began I guess through influences from my grandfather. My grandfather, he’s 99. He’s actually been reading science fiction since essentially the modern dawn of the genre. He read Dune when it was serialized, before it was actually a book. He gave me my first copy of the Foundation trilogy. And a lot of the science fiction that I’ve been especially drawn to is the ones that kind of really try to understand a lot of the societal implications of the gadgets, as opposed to just the gadgets of the future themselves.
And in my role at Lux— It’s a VC firm that does early‐stage investments in kind of— I guess anything that’s at the frontier of science and technology. And so one of my roles there is trying to connect groups of people that are not traditionally connected to the world of venture and startups. And related to that, when a lot of technologists and people in the world of Silicon Valley are building things, there’s often this kind of techno-utopian sense of like, you build something, it’s this unalloyed good, it must be wonderful.
But of course there’s often a lot of people who are thinking about the social, regulatory, ethical, and legal implications of all these different technologies. But they’re often in the world of academia, and they’re often not talking to the people who are building these things in the startup world. And so one of the things I’ve actually been doing is trying to connect these two different worlds together, to make really sure that both parties are as engaged as possible.
And actually, going back to the science fiction part: science fiction more holistically looks at a lot of the implications of these kinds of things, as opposed to just saying oh, the future is the following three gadgets. Science fiction is really good at saying, “Okay, here is a scenario. Let’s actually play it out.” So I’ve been working to try to get science fiction writers involved in talking to the world of startups and really trying to make them think about these kinds of things. I don’t think I’ve gotten people who write explicitly Frankensteinian stories involved yet, but yes—
Eschrich: Everybody who gets money from you guys has to watch Splice before [inaudible].
Cara LaPointe: Well it’s interesting that Sam talks about this kind of holistic approach. So, I’m an autonomous systems engineer, but I’ve worked on developing systems, using systems, and the policy implications. So I kind of come at autonomous systems from a lot of different angles. What’s really interesting to me about the Frankenstein story is it really seems to delve into the idea of the ethics of creation. Should we or should we not create the technology?
But I think it was brought up in the first panel, when it comes to autonomy, when it comes to artificial intelligence, this technology is being developed. So I think it’s really more productive to think about okay, what is the ethics of how, where, when, why, you’re going to use these types of technologies. Because I think someone said the genie’s out of the bottle, right. These things are being developed. So that’s what to me is very interesting, is kind of moving from that conversation about creating, to how are these technologies actually used.
The thing about autonomous systems is you start to move into a world where—we’ve used machines for a long time to do things, right. But now we’re starting to get to a place where machines can move into the cognitive space, the decision‐making space. There’s a really interesting construct that we use in the defense world sometimes called the OODA Loop—Observe, Orient, Decide, and Act. It’s just kind of a way to describe doing anything. Observing the world: you’re sensing the things around you. Orienting is kind of understanding what you’re sensing. And then deciding what you want to do to achieve whatever your purpose is. And then you act.
We’ve used machines for a long time to do the sensing. We have all kinds of cameras and other types of sensors. And we’ve used machines to act for us for a long time. But what’s really interesting with technology today is we’re on this cusp where machines can move into this cognitive space. So figuring out where and when and how we want machines to move into the cognitive space, that’s what I think is really interesting. And I think even from very early on, Frankenstein was bringing up those ideas of what happens when you bring something into that cognitive space. So that’s why I think it’s pretty fascinating.
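The Observe, Orient, Decide, Act cycle described above can be sketched as a toy program. This is a minimal sketch under invented assumptions (a made-up one-dimensional "floor" world and hypothetical function names), purely illustrative and not any real robot's control code:

```python
# Toy sketch of the OODA loop (Observe, Orient, Decide, Act).
# The "world" here is an invented one-dimensional floor: a position
# and a list of dirty/clean patches. Nothing below is a real API.

def observe(world):
    # Observe: sense the environment (is the current patch dirty?)
    return {"dirt_here": world["dirt"][world["pos"]]}

def orient(observation):
    # Orient: interpret what was sensed
    return "dirty" if observation["dirt_here"] else "clean"

def decide(situation):
    # Decide: pick an action that serves the goal (a clean floor)
    return "vacuum" if situation == "dirty" else "move"

def act(world, action):
    # Act: carry out the decision, changing the world
    if action == "vacuum":
        world["dirt"][world["pos"]] = False
    else:
        world["pos"] = (world["pos"] + 1) % len(world["dirt"])

def ooda_step(world):
    # One full pass around the loop
    act(world, decide(orient(observe(world))))

world = {"pos": 0, "dirt": [True, False, True]}
for _ in range(6):
    ooda_step(world)
print(world["dirt"])  # every patch ends up clean
```

The point of the construct, as described, is that autonomy moves the machine into the orient and decide steps; a traditional machine only ever performs the sensing and acting steps on a human's behalf.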
Eschrich: So, Susan, I was hoping you could ground us in how people at Mary Shelley’s historical moment are thinking about unintended consequences. As Ed said, the word “scientist” isn’t even in use yet. But are there other ways people are thinking and talking about the ethics of creation and responsibility? And how is Shelley building on the context that she’s in to create this theme in Frankenstein and develop it?
Hitchcock: Yeah, well there’s an interesting intersection between her legacy from her parents and the science going on. Her father I find a really important influence on the novel because William Godwin was really— I think of him as being the father of our modern day liberal concept that people aren’t evil, that bad actions come because people have been influenced by hatred, by anger, by negative outside influences. That is, that evil is made not born. And I think that that really carries through. It’s as if Mary Shelley wanted to animate that philosophy of her father’s.
But at the same time, there are these fascinating experiments going on at the time. Galvani, the whole idea of the spark of life, what is the spark of life, in these amazing experiments. Not only with frogs, which is sort of the famous one, but even with corpses. Introducing electrical stimuli to bodies and making them move, making the eyes of a corpse open up, making it sit up, that sort of thing. Those things were being done at the time, and they were kind of like sideshow events that the public would go to.
So there was a lot of science happening that opened up a lot of questions of should we really be doing this, and that is a lot of the inspiration behind Frankenstein as well. You don’t really see that happening in the novel, but it’s so interesting that instantly the retellings of the story bring electricity in as the “spark of life.”
Eschrich: So, that point about social context and those sort of social constructionist beliefs of William Godwin is really appropriate, I think, and also something that her mother Mary Wollstonecraft was very adamant about. She wrote a lot about women’s education and the idea that the way that women were educated socialized them to be submissive and sort of…she called them “intellectually malformed” and things like that. This idea that they were kind of violently socialized away from being intellectuals and citizens and full members of society.
Both Sam and Cara, I think you both have some interaction, Sam through your book and through Lux, and Cara through your engineering work, with systems that learn and adapt, systems that work in a social context and have to solve problems in complex ways. So, this sort of social constructionist thinking, this idea that the social context for the operation of these technologies actually affects the way they work and what they become… How do we react to that in this moment?
Samuel Arbesman: One of the clear examples of this kind of thing is artificial intelligence and machine learning—especially deep learning; we’re kind of having a deep learning moment right now. And with these systems, even though the algorithms for how they learn are well understood, oftentimes, once you pour a whole bunch of data into them, the resulting system might actually be very powerful, it might be very predictive. You can identify objects in images, or help cars drive by themselves, or do cool things with voice recognition. But how they actually work, the underlying components, the actual threads within the networks, is not always entirely understood. And oftentimes because of that, there are moments when the creators are surprised by their behavior.
So we were talking about this earlier: there’s the Microsoft chat bot Tay, I guess a little more than a year ago, which was designed to be a teenage girl and ended up being a white supremacist. It was because there was this mismatch between the data that they thought the system was going to get and what it actually did get. The socialization, in this case, was wrong. And you can actually see this also with IBM Watson, where the engineers who were involved in Watson wanted the system to better understand slang, just kind of everyday language. And so in order to teach it that, they kind of poured in Urban Dictionary. And then it ended up just cursing out its creators. And that was also not intended.
And so I think there’s a lot of these kinds of things of recognizing that the environment that you expose a system to, and the way it assimilates that, is going to affect its behavior. And sometimes you only discover that when you actually interact with it. And so I think that’s kind of this iterative process of— As opposed to in Frankenstein, where it’s like: you build the thing, build the creature, hopefully it’s perfect. Oh no, it sucks. I kind of give up and run away.
I think in technology ideally there’s this iterative process of understanding. You build something. You learn from it. You find out that there’s a mismatch between how you thought it was going to work and how it actually does work, embodied by glitches and failures and bugs. And then you debug it and make it better. So rather than viewing it as we fully understand it or we never understand it, there’s this constant learning process, and socialization—really making sure you have the right environment—to make sure it gets as close as possible to the thing you actually want it to be.
Hitchcock: There’s a lack of knowledge, though, of the forces that you’re putting onto— Whether it’s the creature or the systems. You know, maybe we don’t have the capability of fully understanding, or fully knowing. Like pouring the Urban Dictionary in. They didn’t know what influence they were having on the thing.
Arbesman: And actually, related to that, there’s this idea from physics, a term used when looking at complex technological systems, or complex systems in general: robust yet fragile. The idea is that when you build a system, it’s often extremely robust to all the different eventualities that you’ve planned for, but it can be incredibly fragile to pretty much anything you didn’t think about. And so there’s all these different exceptions and edge cases that you’ve built in and you’re really proud of handling, and suddenly there’s some tiny little thing that just makes the entire thing cascade and fall apart. And so yeah, you have to be very wary of recognizing the limits of how you actually designed it.
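The "robust yet fragile" idea can be made concrete with a toy example. The little parser below (its input format and function name are invented for illustration) handles every edge case its author planned for, and still falls over on one input shape nobody anticipated:

```python
# "Robust yet fragile": robust to every planned eventuality,
# fragile to the one thing nobody thought about.

def parse_celsius(reading: str) -> float:
    """Parse a temperature string into degrees Celsius (toy example)."""
    reading = reading.strip().upper()   # planned for: whitespace, lowercase
    if reading.endswith("C"):
        reading = reading[:-1]          # planned for: "21.5C"
    elif reading.endswith("F"):
        # planned for: Fahrenheit readings, converted to Celsius
        return (float(reading[:-1]) - 32) * 5 / 9
    return float(reading)               # planned for: bare numbers, signs

# Robust to everything the author imagined:
assert parse_celsius(" 21.5c ") == 21.5
assert parse_celsius("-4") == -4.0
assert parse_celsius("32F") == 0.0

# Fragile to a shape nobody imagined (a thousands separator):
try:
    parse_celsius("1,024C")
except ValueError:
    print("unplanned input slipped past every planned defense")
```

The failure here is the point: each handled case adds confidence, but confidence about the planned cases says nothing about the unplanned ones.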
LaPointe: I think it’s really interesting to think about the system. We’re using the word system to talk about a machine that’s being created. When I think of “system,” I actually think of the interaction between machines and people. Time and time again in history, technology comes in, innovative emerging technologies come in, and actually change the fabric of our lives. Think of the whole Industrial Revolution, right. I live thirty miles outside of DC but I can drive in every day. I mean, that would be unheard of centuries ago.
But then think of the personal computer, think of the Internet. You actually live your life differently because of these technologies. And so we’re on the cusp of the same kind of social change when it comes to autonomous systems. Autonomy is going to change the fabric of our lives. I don’t know what it’s going to look like. But I can tell you it is going to change the fabric of our lives over the coming decades. So when you’re talking about a system, it’s interesting to understand that it’s not just how we’re teaching a machine, teaching a system. You’ve got to understand how we collectively, as a system, evolve. And so I think that’s just an interesting way to frame it as you move forward talking about these types of technologies.
Hitchcock: What do you mean when you say autonomy is going to be shaping our future? What is autonomy, that you’re talking about?
LaPointe: So, autonomy— You know what, there is no common definition of autonomy. Many days of my life have been spent in the debate about what autonomy and autonomous mean. So you know, at some point you just move beyond it. But autonomy is when you start to get machines to move into the cognitive space. Machines can start making decisions about how they’re going to act.
So the example I love to use, because a lot of people have had them or seen them: the Roomba vacuums, right? I got a Roomba—I love it. But it’s funny, because when you think of a traditional vacuum, you turn it on and what’s it doing? Its job is to suck up the dirt, right. And you move it and decide where it’s going to go. Okay well, a Roomba, what’s its job? Its job is to suck up dirt and clean the floor. But it now decides the pattern it’s going to follow around your room, or around whatever the set space is, to clean it. So autonomy is machines starting to get into the decision space…
And I think one of the things that we really need to address and figure out as these machines come in— And it’s much more than just a technical challenge, it’s all these other things we’re talking about— …is how do we trust these systems? You trust somebody when you can rely on them to be predictable, right. And we have this kind of intuitive trust of other people, and we know that they’re not going to be perfect all the time. We have kind of this understanding, and your understanding of what a toddler’s going to do is different than what a teenager’s going to do, is different than what an adult’s going to do. So you have kind of this intuitive knowledge.
So as we’re developing these autonomous systems that can act in different ways, it’s really important for us to also spend a lot of time developing and understanding what this trust framework is for systems. As Sam was saying, when you have an autonomous system, when I turn that Roomba on I don’t know the path it’s going to take around the room. I don’t know if it goes straight or goes left or does a little circle. I have three kids and a dog, so it does a lot of the little circles where it finds those dirt patches, right. I don’t know, just looking at it instantaneously, if it’s doing the right thing. I have to wait to see, once it’s done its job, if it did the right thing. So figuring out how you really trust systems, and how you test and evaluate systems, is going to be fundamentally different with autonomous systems, and this to me is one of the real challenges that we are facing as a society.
So think about autonomy in self-driving cars. A lot of people like to talk about self-driving cars. And this is a technology that is developing apace. Well, what are the challenges? The challenges are: how do you integrate these into the existing human system we already have? How do you trust the self-driving cars? I mean, if there’s ever one accident, does that mean you don’t trust all self-driving cars? I know a lot of drivers who’ve had an accident, and they still are trusted to drive around, right. But you know, we don’t have that same level of intuitive understanding of what is predictable and reliable.
Arbesman: And to relate to that, within machine learning— I was mentioning earlier how these systems are making these somewhat esoteric decisions that work, but we’re not always entirely sure why they’re making them, and that makes it difficult to trust them. And so there’s been this movement of trying to create more explainable AI, actually gaining a window into the decision-making process of these systems.
And so related to the self-driving cars: we have a pretty decent intuitive sense of, like, when I meet someone at an intersection, how they’re going to interact with me, my car versus their car. They’re not entirely rational, but I kind of have a sense. But if it’s a self-driving car, I’m not really sure of the kind of decision-making process that’s going on. And so if we can create certain types of windows into understanding that decision-making process, that’s really important.
And I think back in terms of the history of technology, and so the first computer my family had was the Commodore VIC‐20. And I guess William Shatner called it the wonder computer of the 1980s. He was the pitchman for it. I was too young to program at the time, but one of the ways you would get programs is you had these things called type‐ins. You would actually just get a magazine and there would be code there and you would just type the actual code in.
And so even though I didn’t know how to program, I could see this clear relationship between the text and what the computer was doing. And now we have these really powerful technologies, but I no longer have that connection. There’s a certain distance between us and them, and I think we need to find ways of creating a gateway into kind of peeking under the hood. And I’m not entirely sure what those things would be. It could be something simple, maybe just a progress bar—although I guess those are only tenuously connected to reality. But we need more of those kinds of things in order to create that sort of trust.
Eschrich: Yeah, it seems to me that the ruling aesthetic is magic, right. To say oh, you know Netflix, it works according to magic. The iPhone, so much of what happens is under the hood and it’s sort of for your protection. You don’t need to worry about it. But I think we’re realizing especially with something like cybersecurity, which is a big unintended consequences problem, right. We offload everything onto the Internet to become more efficient, and suddenly everything seems at risk and insecure, in a way. We’re realizing we might need to know a little bit more about how this stuff actually works. Maybe magic isn’t good enough all the time.
Arbesman: And one of the few times you actually learn about how something works is when it goes wrong. Sometimes the only way to learn about a system is through failure. And you’re like, oh! It’s like, I don’t know, the chat bot Tay becoming racist. Now we actually realize it was assimilating data in ways we didn’t expect. And yeah, these kinds of bugs are actually teaching us something.
Hitchcock: Which brings us back to Frankenstein.
Eschrich: Thank you. Thank you so much.
Hitchcock: Because Victor was so fascinated and excited and proud and delighted with what he was doing. And then when he saw what he had done, it’s like…checking out. Horrible. End of his fascination and delight. And beginning of his downfall.
Eschrich: I wanted to say, and I’m going to kind of prompt you, Sam. That Frankenstein’s very…haughty about his really— You know, I think you can read it psychologically as a defense mechanism. But he’s so haughty later about the creature. He’s very disdainful of it, he sort of distances himself from it. All the unintended consequences it causes, he sort of works really hard to convince his listeners and the reader that he’s not responsible for that. As if not thinking ahead somehow absolves him.
But Sam, in your book Overcomplicated, you talk a bit about this concept of humility, which dates all the way back to the Medieval Period. And I feel like the conversation we’ve been having reminds me of that concept. Talking about how to live with this complexity in a way that’s not scornful, but that’s also not kind of mystified and helpless.
Arbesman: Yeah, so when I was writing about humility in the face of technology, I was kind of contrasting it with two extremes which we often tend toward when we think about, or when we’re kind of confronted with, technology we don’t fully understand. So, one is fear in the face of the unknown, and we’re like oh my god, self-driving cars are going to kill us all, the robots are going to rise up.
And the other extreme is kind of like the magic of Netflix or the beautiful mind of Google. This almost like religious reverential sense of awe. Like these things are beautiful, they must be perfect… And of course, they’re not perfect. They’re built by imperfect beings. Humans.
And these two extremes, the downside of both of these is they end up cutting off questioning. When we’re so fearful that we can’t really process the systems that we’re dealing with, we don’t actually try to understand them. And the same thing, if we think the system is perfect and wonderful and worthy of our awe, we also don’t query. And so humility, I’ve kind of used that as like the sort of halfway point, which actually is productive. It actually ends up allowing us to try to query our system, but recognize there are going to be limits.
And so going back to the Medieval thing, I bring in this idea from Maimonides, the 12th-century philosopher/physician/rabbi. In one of his books, The Guide for the Perplexed, he wrote that there are clear limits to what we can understand, and that's fine. He had made his peace with it. And I think in later centuries there was a sort of scientific triumphalism: if we apply our minds to the world around us, we'll understand everything.
And in many ways we’ve actually been extremely successful. Which is great. But I think we are recognizing that there are certain limits, there are certain things we’re not going to be able to understand. And I think we need to import that into the technological realm and recognize that even the systems we ourselves have built, there are certain cases where— And it’s one thing to say okay, “I don’t understand the iPhone in my pocket.” But if no one understands the iPhone completely, including the people who created it and work with it on a daily basis, that’s an interesting sort of thing.
And I think this humility is powerful in the sense that it's better to start from that recognition of our limits at the outset, so that we can build upon them and constantly try to increase our understanding, while recognizing that we might not ever fully understand a system. As opposed to thinking we are going to fully understand it, and then being blindsided by all of these unintended consequences.
Eschrich: So Susan, I'm going to query you on this first. Because I feel like the other two are going to have stuff to say too, but I want to get the Frankenstein angle on it. So what do we do? Like, should we… How do we— What does Frankenstein tell us about how we prepare for unintended consequences, since they're inevitable, clearly. Like, we're sort of innovating and discovering very quickly, things are changing quickly. Should we ask scientists and engineers to regulate themselves? Should we create rigid laws? Do researchers need more flexible norms that they agree— You know, what does Frankenstein— You know, what does this modern myth we're constantly using to frame these debates have to say about them?
Hitchcock: What does the—oh, gosh.
Eschrich: I have in my mind something that you said.
Hitchcock: You do? Maybe you should say it, because—
Eschrich: You said something about— Well, I want to prompt you. So, you said, when we were talking in advance and I was picking your brains about this, something about how secretive Victor is. About how he removes himself from his colleagues.
Hitchcock: Well, it's true. Yes, indeed. Victor is representative of a scientist who works in secret, all by himself, and does not share. And as a matter of fact, even in the James Whale film it's the same thing. I mean, Victor goes up into a tower and he locks the door, and his beloved Elizabeth has to knock on the door to ever see him. This whole idea of a science that is solo and not shared is perpetuated in the retellings of Frankenstein.
And you know, thanks for the prompt, because maybe that's a good idea: that we share the science, that we talk about it. And I think sharing it not only with other scientists but also with philosophers, psychologists, humanists. You know, people who think of— And bioethicists. People who think about these questions from different vantage points, and talk about them as the science is being developed. That's about the best we as human beings could do, I think.
LaPointe: I think this idea of sharing is really critical. So, from the kind of developer/operator perspective—and I come from kind of a Navy background, military background, it’s really important that you get the people who are developing systems talking to the people who are using systems, right. We get into trouble when people have an idea of “Oh, this is what somebody would want,” and they go off and develop it kind of in isolation, right. Maybe not secret, but in isolation, just…there’s a lot of kind of stovepipes and large organizations. And it’s really important to create these robust feedback loops.
And we have this saying in the Navy that sailors can break any system, so when you build something you always want to make it sailor-proof, right. But it's really a fabulous thing to take a new idea, a new design, a prototype, and give it to sailors. Because there's nothing like 19- and 20-year-olds to completely take apart what you just gave them, tell you all the reasons you thought it was going to be great are completely stupid and useless, and tell you the thousand other things you can do with the system.
So I think this kind of idea of sharing in the development— So, sharing in terms of talking to people who are operators, talking to people who are the infrastructure developers, right. You think about, kind of going back to the self‐driving cars, think about how we interact with the driving infrastructure. When you come to a stop light, what are you doing? You are visually looking at a stop light that will tell you to stop or go. Do you think that is the best way to interact with a computer? That’s really really hard. It’s really really hard for a computer to look and visually see a different‐colored light and take from that the instructions of whether to stop or go.
So you have to include the people who are developing the infrastructure, and include the policymakers, include the ethicists. I mean, you have to bring—back to this holistic idea—you have to bring everybody in as you're developing technology, to make sure you're developing it in a way that works, in a way that's useful, in a way that's going to actually be the right way to go with the technology. And I think that's a really good example from Frankenstein: because he's solo, designing something that to him is brilliant, maybe if he had stopped and talked to anybody about it they would've said, "Hey, maybe that's not the most brilliant idea in the world."
Arbesman: Yeah, and related to this, in the open source software movement there's this maxim that given enough eyeballs, all bugs are shallow. The idea that if enough people are working on something, then all the bugs are going to be rooted out and discovered. Which is not entirely true. There are bugs that can actually be quite serious and last for a decade or more.
But by and large you want more people to actually be looking at the technology— And also, going back to the robust-yet-fragile idea, you want to make it as robust as possible, and to do that you need as many people involved as possible to deal with all the different eventualities. But you also need different people with different modes of thinking to really try to understand it. Not just to make the system as robust as possible, but as well thought-out as possible. And I think that's a really important thing.
LaPointe: Kind of crowdsourcing your development. If you think about what's going on with self-driving cars, one of the most important things happening today that's going to feed into that is actually just the autonomous features being deployed in other cars, and all the information-gathering that comes with them, because there are so many people and so many cars out there running these autonomy algorithms.
And little things—there's lots of cars today that help you park, help you do all these other things. They help you stay in the lane, right. And so those can all have unintended consequences. But you learn from that, and the more widely you're testing with this kind of incremental approach— You know, I like to say "revolution through evolution," right. You build a little, test a little, learn a lot. And I think that's a really good way to try to prevent unintended consequences.
So instead of just talking about managing unintended consequences when they happen, try to bring as many people as you can in from different fields and try to think through what could be possible consequences, and try to mitigate them along the way.
Arbesman: And related to the process of science more broadly: in science, people have recently been talking a lot about the reproducibility crisis, the fact that there's certain scientific research that can't be reproduced. And I think that really speaks to the importance of opening science up: making sure we can share data, really seeing the entire process, putting your computer code online to allow people to reproduce all these different things, and allowing people to partake in the wonderful messiness that is science as opposed to trying to sweep it under the rug. And I think that's really important, to make sure that everyone is involved in that kind of thing.
Eschrich: So we have time for one more quick question. I actually want to address it to you, Susan, at least first. And hopefully we’ll get a quick answer so we can go to questions and answers from everybody else.
Eschrich: I'm listening to you all talk about diversifying this conversation and engaging non-specialists, and it strikes me that one irony there is that Frankenstein itself, this poisonous framing of Frankenstein as "don't innovate too far; disastrous outcomes might happen; we might transgress the bounds of acceptable human ambition"—that this is actually a roadblock to having a constructive conversation, in a way, right. All of these themes that we're talking about today, of unintended consequences and playing God, are in fact difficult for people to grapple with in big groups. I wonder if you have any thoughts about that, Susan. Other ways to think of the novel, maybe, or recode it for people.
Hitchcock: Well, yeah. You know, I think that culture has done the novel disservice. Because I actually think that the novel doesn’t end— The novel does not end with everybody dead. Nor does Splice, by the way.
Eschrich: There’s a couple people alive at the end of Splice. [crosstalk] And of course the company is massively popular.
Hitchcock: Oh, there’s also a pregnant woman at the end.
Eschrich: That is true.
Hitchcock: Uh huh, that’s what I’m thinking about. So, Frankenstein ends with the monster, the creature—whatever we want to call him, good or bad—going off into the distance and potentially living forever. And also, Victor Frankenstein is you know, yes indeed he is saying, “I shouldn’t have done that. And you, Walton,” who is our narrator who’s been going off to the North Pole, indeed listens to him, still wants to go to the North Pole, but his crew says, “No no, we want to go home. We’re too cold—”
Eschrich: They’re worried they’re going to die.
Hitchcock: I know. But there are still these figures in the novel, both the creature and Walton to some extent, who are still questing. [crosstalk] Still questing.
Eschrich: They have moral agency, to some extent.
Hitchcock: Yeah. And I don’t know why I got onto that from your question, but—
Eschrich: You refuse to see the novel as purely black at the end, I think.
Hitchcock: Yeah. Oh, I know. I was going to say culture has done it a disservice because I think culture has simplified the story to say science is bad, pushing the limits is bad. This is a bad guy, and he shouldn't have done it. And I don't think that it is that simple, frankly. In the novel or today, for that matter.
Joey Eschrich: Alright. Well, I am going to ask if anybody out here has questions for any of our panelists.
Audience 1: Thank you very much for a great discussion. I'm curious about what segment of society really wants the self-driving cars. And one of the concerns is that there'll be a lethargy that will come upon the rider, perhaps, or the one who's in the car, and they won't really be ready to—
Take your Roomba. If your Roomba couldn't get through somewhere, it would stall because it couldn't get into it, and you'd have to interact to reset it or something. So I'm just wondering, in a self-driving car, if you're not going to have to do anything, then maybe you're not going to be aware of what's really going on around you. So, Musk is the one who started the whole idea, and yet is it going to target just a certain segment of society, as opposed to, you know, everyone has to be in a self-driving car?
Cara LaPointe: Well, I'm not going to speak to who wants the self-driving cars; I think a lot of people have been driving them. But your idea of when you have people who were formerly driving and who are now the passengers—I think this is actually a really important issue with autonomous systems: one of the most dangerous parts of any autonomous system is the handoff. The handoff of control between a machine and a person. And it doesn't matter whether you're talking about cars or other systems. A plane going from autopilot back to the pilot is a perfect example. It's that lack of full situational awareness; when you have this handoff, that's a really dangerous time for any system.
So I think this is one of the challenges, and that's why when I define the system, I don't think you can just define the machine, right. You have to define the whole system: how the machine and the person are going to work together.
Audience 1: We as humans don't have that capacity of putting off [inaudible]. We're not going to [inaudible] ask the machine to figure out. That's the consequence of that. So I don't think we humans are wired at that level to understand how they all fuse together and what consequence results from it.
LaPointe: Well, I think cognitive load is a really big issue for engineers as well. Just think about it: we live in an age of so much information, right. How much information can a person process? And frankly, there's tons of data. You have so many sensors, you can bring in so much data—how do you take that data, get the knowledge out of it, and turn it into information? I think really part of the art of some of this is how you take so much data, turn it into information, and deliver it to the human part of a system, or even the machine part of a system: the right information at the right time to make the overall system successful.
Samuel Arbesman: And related to that, there's the computer scientist Danny Hillis. He's argued that we were living in the Enlightenment, where we applied our brains to understanding the world around us, and that we've moved from the Enlightenment to the Entanglement: this era where everything is hopelessly interconnected and we're no longer going to fully understand it. And I think to a certain degree we've actually been in that world for some time already. It's not just that self-driving cars are going to herald this new era. We're already there, and I think the question is how to actually be conscious of it and try our best to make sure we're getting the relevant information and constantly, iteratively trying to understand our systems as best we can.
And I think that goes back to how we approach what understanding means for these systems. It's not a binary situation. It's not either complete understanding or total ignorance and mystery. There's a spectrum: you can understand certain components; you can understand the lay of the land without understanding all of the details. And I think our goal when we design these technologies is to make sure we have the ability to move along that spectrum toward greater understanding, even if we never get all the way there. In many cases, I think that's fine.
Eschrich: I'd like us to move on to our next question.
Audience 2: I want to dig in a little on the dialogue that we may all agree it would be a good idea to involve more people in at the start of conceiving of these technologies. And ideally, I think we might agree that some public morality would be a good element to include. But say hypothetically we lived in a society where practically we're not really good at having conversations among the public that are thorny and especially that include technical details. I mean, just say that that happened to be the case.
And I just want to clarify: is the value in broad public consensus and input, or is the value more in having a diversity of representative thought processes? If the value is on something like openness and transparency, that might call for a different infrastructure of feedback, whereas if it's on something like diversity of thought, you might think of a sort of council where you have a philosopher and a humanist and whatever. So I think oftentimes we end up saying something like, "We should have a broad conversation about this and that's how we'll move forward," but I'm digging in on what that might actually look like and how to get the best value in our current society.
Eschrich: Thank you for that question. I'm just going to ask that we keep our responses quick just so we can take one more really quick question before we wrap up.
Susan Tyler Hitchcock: They're not mutually exclusive.
Eschrich: Oh look at that. A quick response. Either of you want to add anything?
Arbesman: So one thing I would say is…this is maybe a little bit to the side of it. But people have actually looked at what happens when you bring in a real diversity of opinions when it comes to innovation, and oftentimes the more diverse the opinions, the lower the average value of the output but the higher the variance.
So the idea is like, for the most part when you bring lots of people together who might speak lots of different languages and jargons, it often fails. But when it does succeed it succeeds in a spectacular fashion in a way it wouldn't have otherwise. And so I think we should aim towards that but recognize that sometimes these conversations involve a lot of people talking past each other and so we need to do our best to make sure that doesn't happen.
LaPointe: But I think specifically, making sure you bring diverse voices from different segments of society and different backgrounds into the conversation is really important. I always like to tell people that autonomy, autonomous systems, it's not a technical problem. It's not like I can put a bunch of engineers in a room for a couple of months and they could solve it. There are all these other aspects to it. So you need to make sure you bring all the other people in. You bring the lawyers, you bring the ethicists, you bring everybody else. You know, the users, all the different people. So I think you just have to be very thoughtful whenever you are developing a technology to bring all those voices in at an early stage.
Eschrich: Okay. One more very quick question.
Tad Daley: Yeah, thanks. I'm Tad Daley. It's for you, Cara. In the last panel, Nancy Kress I thought made a very complex, sophisticated argument about genetic engineering. It has great benefits, also enormous risks. I think Nancy said some aspects of gene editing are illegal. But then Nancy said, but of course you can go offshore.
So I want to ask you to address those same things, Cara, about autonomous systems. I think you've made clear that they have both risks as well as great benefits. Do you think it ought to be regulated at all, and if so who should do the regulating given that if Country A does some regulation, in our globalized world it's the easiest thing in the world to go to Country B.
LaPointe: I think it's a great question and something that we internally talk a lot about. I think the thing about autonomy to understand is that every… Autonomy is ultimately software, right. It is software that you're putting into hardware systems that helps move into this cognitive decision-making space. Now, every piece of autonomy that you develop, this software you develop, it's dual-use.
So that was my earlier point in terms of I don't think it's really useful to talk about should you regulate development, because autonomy is being developed for a lot of different things. So what you really need to think about is okay, this technology is being developed so how, where, when should the technology be used? I think those are the useful conversations to have in terms of how it's regulated, etc. You know, where is autonomy allowed to be used, where it's not allowed to be used. But the idea that you could somehow regulate the development of autonomy I just don't think is feasible or realistic.
Eschrich: Okay. I, with a heavy heart, have to say that we're out of time. We will all be around during the happy hour afterwards, so we'd love to keep talking to you and answering your questions and hearing what you have to say. And thank you to all of you for being up here with me and for sharing your thoughts with us.
And to wrap up I'd like to introduce our next presenter Jacob Brogan, who is an editorial fellow here at New America. And Jacob also writes brilliantly about technology and culture for Slate magazine. And he's here to talk to you about a fantastic Frankenstein adaptation.