Kevin Bankston: Now I’d like to introduce another brief video provocation from another one of our advisory board members—Chris is also an advisory board member, thank you for that Chris—who couldn’t make it today, Stephanie Dinkins. Stephanie’s a transmedia artist and Associate Professor of art at Stony Brook University who’s focused on creating platforms for dialogue about AI as it intersects race, gender, aging, and our future histories. She is particularly driven to work with communities of color to cocreate inclusive, fair, and ethical AI ecosystems.
One of her major projects over the past few years has been a fascinating ongoing series of recorded dialogues between her and a sophisticated social robot named Bina48 to interrogate issues of self and identity and community. And it’s Bina48 who is the robot—I guess which is the robot—that is pictured at the beginning of this video message that Stephanie created for us today. So if we could run the clip.
[This presentation is accompanied by many images that are generally not referred to directly, as slides would be; where possible they should be viewed in the original video for context.]
Bina48: …AI. I wonder what happens when an insular subset of society encodes governing systems intended for use by the majority of the planet. What happens when those writing the rules, in this case we will call it code, might not know, care about, or deliberately consider the needs, desires, or traditions of the peoples their work impacts? What happens if the code making decisions is disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserve wealth for the good of the people?
I am reminded that the authors of the Declaration of Independence, a small group of white men acting on behalf of the nation, did not extend rights and privileges to folks like me, mainly black people and women. Laws and code operate similarly to protect the rights of those that create them. I worry that AI development, which is reliant on the privileges of whiteness, men, and money, cannot produce an AI-mediated world of trust and compassion that serves the global majority in an equitable, inclusive, and accountable manner.
AI is already quietly reshaping systems of trust, industry, government, justice, medicine, and indeed personhood. Ultimately, we must consider whether AI will magnify and perpetuate existing injustice, or will we enter a new era of computationally-augmented humans working amicably beside self-driven AI partners? The answer, of course, depends on our willingness to dislodge the stubborn civil rights transgressions and prejudices that divide us. After all, AI and its related technologies carry the foibles of their makers.
Artificial intelligence presents us with the challenge of reckoning with our skewed histories instead of embedding them in algorithms, while working to counterbalance our biases and finding a way to genuinely recognize ourselves in each other so that the systems and policies we create function for everyone. I see this moment as an opportunity to expand rather than further homogenize what it means to be human, through and alongside AI technologies.
This implies changes in many systems: education, government, labor, and protest to name a few. All are opportunities if we the people demand them and our leaders are brave enough to take them on.
Bankston: Thank you Stephanie. Thank you so much for putting that together for us. We are now going to transition to our third and final panel of the day. We’ve had AI in fact. We’ve had AI in fiction. And now we’re gonna talk about bridging the two. So, this one will be led by Ed, who you’ve already met. So take it away, Ed.
Ed Finn: Thank you Kevin. Come on up, friends. So yeah, AI. We had facts. We had fiction. So this is going to be…faction. Or maybe…we’re all fict. But either way I wanted to start… This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool. And so I wanted to just start and ask you, again with that clever trick of having you introduce yourselves, to talk a little bit about how you see science fiction operating in your worlds outside the boundaries of you know, when it’s not working as fiction. When it’s doing something else in the world. So some observations about how you’ve seen that working in your own professional trajectories.
Malka Older: Hi. So my name is Malka Older, and I’m a science fiction author. So I actually say part of my job is to encourage science fiction to work beyond the boundaries of recreational fiction, so to speak. But I’m also a sociologist and academic, which has become very interesting because I get asked to speak at more academic conferences about my fiction books than I do about my academic work. Which is very difficult for my department to understand. And I’ve also started to get asked to speak as kind of a futurist to various groups that are interested in knowing what I think will happen in the future.
And so I’m really happy that you pointed out the idea of method. Because one thing that I’ve found very interesting when I’m asked to make up futures and then tell people about them is that sometimes the questions are not just about what I’ve said, or how they disagree, or how they agree, or what the implications are, but how I did it. And how I go about worldbuilding in my books. And what I try to draw from reality, and how I keep it rooted. And so I’ve started doing a lot of thinking around that, and I think that it’s a really important topic for us to touch on.
Ashkan Soltani: Hey everyone. So my name is Ashkan Soltani. I’m a technologist and I work in policy. And most of my work really involves translating kind of technical, complex subjects for folks that make policy, to help them understand. And this is where metaphor for me is really critical: finding the precise metaphor that articulates the principles of the thing I want to describe, but is still accessible and maintains consistency with the thing I’m trying to describe. And if folks remember Lakoff, or have read Lakoff, you know that the metaphor shapes the frame and the questions and the considerations that come to mind.
So first, there are some things that you can find a metaphor for easily. And for the things that are kind of forward-looking and don’t have a physical metaphor in the real world, this is where storytelling comes in and particularly sci-fi, where you can imagine things in an accessible way and kind of help people wrap their heads around the nuances of a thing by immersing them in the story and then understanding the contours. And I’m a fan of the kind of “what you know plus one” frame. In fact, some people have said that repetition isn’t helpful. So as long as you can get away from the cliché and really still engage the person, it helps people think one step beyond what they currently know. And why that’s helpful is that often there’s kind of an inflection point where it’s a nonlinear trajectory around things we care about.
And I think again kind of sci-fi around AI is really useful for understanding some of the things that I really care about, like privacy and security around, for example, things to do with scale, right. So we talked about kind of enforcing policy through an automated system. Well one of the things that it does, which Kevin and I have written about quite a bit in the past, is around efficiency, making things that were previously expensive to do or difficult to enforce perfectly so cheap and so accessible that you can have things like perfect enforcement. And so if you have a robot that’s able to issue parking tickets anytime anyone goes even a second over in the parking spot, that really radically changes the way parking enforcement works, and we then have to reevaluate the laws and norms. And so that’s one area where I think sci-fi is helpful: understanding kind of the scale, and helping people understand, particularly when policymakers don’t have direct access to the things we’re talking about. Some folks have never used the technologies we’re describing.
The other place where I think it’s useful is really around understanding kind of reach. So I’ve worked as a policymaker. I’ve worked in various parts of government and for the press, newspapers. And I’ve also worked as a consultant on a television show. Not a sci-fi show but kind of a reality TV show to do with surveillance and such. And there, even though the show is kind of not realistic in some senses, making sure people understand at least the nuances of the technology reaches so many more people and is so much more accessible than some white paper that the White House puts out or some Washington Post story that only twenty people read. So I think the reach there is really important.
And then finally I think the last thing to think about is how the use of technology, and AI particularly, changes how we think about people from the policymaking perspective, right. So we talked about how it changes norms and it can be used as kind of an enforcement mechanism. But I also think about how it changes just how we work, how the nature of our interactions with one another changes. And this is things around employment, and labor laws, and kind of entitlement to equity, right. So like we’re seeing currently in today’s marketplace, companies that have access to data and AI and technology are able to amplify their workforce significantly more than any other company, right. Just look at stats like what certain tech companies are able to make per employee. A stat I like to throw around: Facebook makes, in profit, $800 thousand per employee per year, compared to Google, which is about a quarter of that, and the next company down, like Ford, is a tenth of Google. So Facebook makes something like 40x what Ford makes. And so the application of software and automation, and how that changes equities, is also really fascinating to me. So all three aspects I think are useful, and sci-fi is a useful tool for understanding them.
Kristin Sharp: Perfect. Well, thanks for that lead-in and precursor to talking about work. So my name is Kristin Sharp. I run the Work, Workers, and Technology program here at New America and look primarily at how automation and artificial intelligence are changing both the structure of work and the kind of work that we do, and what that’s gonna look like over the course of the next ten or fifteen years.
One of the things that we’ve done in order to help— We do a lot of this work in communities around the country, and organize and lead the conversations between different stakeholders in a community about how work is changing as a result of new technologies. And one of the things that we do in order to actively get people thinking about it and picturing what that looks like is run economic scenario planning exercises where people have to tell the story of what work and society, what their neighborhoods, what their jobs look like ten to fifteen years from now. And from that we try to sorta catalog all of the stories that people told and get a little bit of data from that about what kinds of things people are extrapolating. What kinds of things they’re projecting because of what they know about their own jobs right now, the companies they run, the kinds of civic organizations they work with.
And it’s been a really fascinating thing to see some of the imagination go from sort of how people think about their jobs right now, to what they see society looking like fifteen years from now. And the big takeaway from that is that it is really up to us right now in the policymaking world to set out the kinds of parameters that will make that a good future versus a less-good future. So it’s been a fun project to start thinking about that.
Molly Wright Steenson: I’m Molly Steenson. I wear a number of hats at Carnegie Mellon. I’m a professor. I have a K&L Gates Associate Professorship in Ethics and Computational Technologies. I’m the Research Dean for the College of Fine Arts. And I sit in the School of Design with an affiliate appointment in architecture. So why me and why sci-fi?
Among other things, I am a historian of AI in architecture and design. And I teach courses that explore what sci-fi does and then bring in people from Carnegie Mellon and beyond to talk about what AI does in reality. So we take apart some of the clichés that we see. We look at how these clichés have developed over time. In fact, the various kinds of taxonomies of sci-fi stories and sci-fi clichés that we’ve been discussing today are really helpful. And we take into account the kind of work that is being talked about right here on this panel: policy reports, scenarios, literature, movies, and plays.
Finn: Thank you all. So, I want to start with this question of clichés and the way that science fiction works. And Kevin mentioned at the beginning of this meeting Neal Stephenson’s notion of science fiction as being able to save you a lot of time by putting people on the same page around a big idea that you can get organized around. You know, Asimov’s robot work has been cited in thousands of engineering papers, right. The Three Laws of Robotics, whether they’re actually the right three laws or not, have been very powerful in framing a lot of discussion and actual research and innovation.
So stories and science fiction ideas tend to become these little compressed file formats. And you can unfold them and get a whole world out of the idea. But sometimes you get the cliché, and you get the bad meme. So, what is the interface like? Are there other layers between the science fiction writer and the policymakers? What are the other filters that we have to pay attention to when we’re thinking about how science fiction works in the world? I’m looking at you, Malka.
Older: Yeah, good. ‘Cause I’m ready for that one. Though, you put so much in there. You compressed a lot, and so we’re gonna unfold that into a whole world too.
Finn: Yeah, do it.
Older: And so I think that image is a really interesting place to start. Because you know, you do have science fiction that starts with some idea, and ideally as a writer what we want to do is take that idea and build it into a believable world by really unfolding it into detail. By thinking about how people behave. By thinking about unintended consequences. And thinking about the extra things that don’t have anything to do with the plot that give you a full world. And that’s part of how we do our job well. And it’s very much in the sense of scenario planning and some of the other types of futurism that go on, in terms of really trying to think beyond this one idea and look at all the consequences of it.
But at the same time, you know, often we see that that gets translated into a single sort of catchphrase or a word that is simplified, either for people who haven’t read the book or seen the movie, or for people who have but just remember that one key idea. And sometimes that works well. But a lot of times it doesn’t. And we have these sort of classic examples now of things like Fight Club, which have come to mean the opposite of what their nuanced and full version was intended to mean.
So that’s one part of it: things are going to be simplified down. They are going to turn into a shortcut both in memory and in broader culture. And we have to be aware of that and make sure that we’re pushing things into a full world as much as we can.
The other thing that I want to pick up on is another place where things tend to get simplified into memes and images and snapshots: the transference from what we do, either in policy work and research or in literature and media, into news stories. So, a lot of what we’ve talked about here today, a lot of the examples that have come up, have been cultural touchstones that have become famous and become images. You know, Skynet, Terminator, Her, a lot of these images. And we see them being attached over and over again to news stories.
And one thing that I’ve been noticing in my own news consumption is that I don’t read a lot of news stories now. I see a lot of headlines, and I see the line that people choose to put under the photo in the tweet, or in the post on Facebook. And I think I have an idea of what’s going on. But what we know is that those headlines, and those pulled-out first lines, and those photos are not picked by the authors of the articles. They’re picked by editors. There’s no transparency, there’s no accountability on this. And often those are the ones that are really pulling out the suggestive images, the scary images, the most clickbait-y thing that they can find from that article, and maybe not even from the article. And so we’re seeing a lot of the sort of deeper-thought things get transformed into clickbait, and that’s a real issue.
Sharp: So that’s an interesting thing to think about. And the thing that your question about clichés made me think about is that I was surprised to learn, having done probably fifty different storytelling sessions with people across the country in lots of different cities and different regions, that in the absence of a positive vision about what the future looks like, people’s instinct is to just go dark. And so I think that a lot of what you’re seeing in terms of people picking the visual or picking the caption for something is the human instinct to grab your attention by going dark. And the sort of funny illustration of that is that of our forty to fifty stories about what the future of work looks like and what people think of society going forward, probably 60% of those people named their story “The Hunger Games.” And it’s a really revealing way to see how people are thinking about this, which is that you know, they see the lack of economic mobility. They see societal questions about what is happening with the sort of split between the professional and the service-related worlds in the work world, and they go to that sort of dark place. And I think that putting out there some other kinds of policies and other kinds of visions can in fact help combat that, but that dark instinct is not the answer.
Older: I agree with that although I do want to question—bring out—and I don’t know the answer as to whether that is human instinct or whether that is really a product of the zeitgeist and a product of the different stories that we’ve been reading and seeing and listening to over the past couple of decades.
Sharp: Yeah.
Soltani: And I wanted to just touch on… So cliché and kind of overcompression is a real thing, right. Like the moment The Emoji Movie came out I thought “That’s just the end.” Like, that’s just the end. Like the beginning of the end.
But you know, one person’s cliché is another person’s profound, mind-blowing idea? And the way I think of it is maybe like hot sauce, which is that depending on your tolerance for hot sauce you might be acclimated to more nuance or more [indistinct]. But for some people just a tad is enough. And so if it’s useful for invoking an idea and kind of triggering an idea and then a frame, then it’s not cliché to the audience.
So I would say the way you deal with that is in the application of the thing: depending on your audience, you figure out the level of specificity. And sometimes the cliché’s actually useful. Like for me, things like supporting the troops. Everyone supports the troops, and you can actually rally around concepts without getting into the nuances, to build consensus and bring people on board, and then move it in the direction that you want in the policy world. So sometimes it’s useful, and sometimes it’s really based on the application, I think.
Steenson: One of the problems with AI is that there aren’t really good ways to understand it. It’s difficult to understand anything that happens within a black box. You’ve got inputs and outputs and a bunch of question marks, right. So that’s why it’s appealing to have the shorthand of clichés. I’m going to blank on the person who referred to this (it’s in my computer backstage), but metaphors: we use them to talk about the this-ness of a that, or the that-ness of a this. And I’m kind of curious about how we use sci-fi to get around the that-ness of the this and the this-ness of the that.
Finn: Yeah, so a lot of really great ideas here. One thing that you’ve made me think is that clichés are like the auto-complete of the mind. You know, that there’s a…people mention The Hunger Games because it’s sort of accessible, and there…whether it’s in the zeitgeist or we just all saw too many trailers or whatever at the time when you were doing the interviews. But then, that becomes the frame, right. Then it becomes the title of the story and it carries all of this baggage with it.
So, I don’t think we can get away from that. We’re always gonna use that kind of shorthand and so there’s a certain kind of power and responsibility in the way that we deploy language. So I wanted to ask about that and talk a little bit more about methods. So, one thing that I am thinking a lot about right now is this whole notion of imagination, and how do you get people, how do you inspire people, invite people to imagine the future. Because as you were saying, Kristin, most of us don’t really think about it very much. And if you just throw people into the deep end, often they’ll cling to the clichés, or they’ll…you know, it’s going to be really dark. So you have to scaffold and give people some tools. And so there’s an interesting dynamic… Should science fiction be playing this role of imagining the futures…imagining more diverse, more inclusive, more inspiring futures? Or should we be focusing more on inviting everybody to imagine the future?
Older: Can I—
Steenson: Yes.
Finn: That was a trick question, and you saw through it. Yeah, okay.
Steenson: One thing that I think is interesting is we all have different kinds of toolkits that we use. One thing that’s useful from design is the fact that there are ways for people to get their hands on things and create futures or create science fiction, create design fictions, in different kinds of ways. They could make future artifacts. They could brainstorm or role-play a story, right. They could act out a service scenario, right. We have something called critical design as well, which is a pretty sort of dark and art gallery kind of version of design futures, but it’s a way of creating future artifacts and putting them into narratives. And the fact is that this is something that anybody can do, right. We could do this at home. We could do this in our boardrooms. We could do this in all kinds of places.
Older: I really like that. And I think one of the things that I’m really interested in, in this question of how we use sci-fi’s potential in more places, is really looking at more transversal, cross-cutting approaches. You know, not just bringing in a sci-fi person—although I wish you would all bring in sci-fi people to the places where you work. But also, how do we take seriously the work that they’re doing and get that kind of thinking more broadly into other industries. And then similarly, I as a sci-fi writer am very interested in knowing more about how other people do their work. I think we have a kind of specialization fetish. And it’s really useful to start expanding those different ways of thinking into boardrooms and vice versa.
[Off-screen]: And [indistinct].
Older: Yes. Everywhere.
Soltani: I’m going to play just…devil’s advocate here. One of the challenges I think, and maybe potentially one of the reasons why we see such dark sci-fi futures, is that they act essentially as a countervailing force to kind of innovation writ large. So like, coming from California, so much of innovation and startups and creation is having this utopian vision of what the thing you’re building is, against all odds, right. Raising funding, competing with competitors, implementing in the market. And so most of the creators of a lot of these technologies have a singular positive vision of their technology or their tool as deployed in society, and therefore miss huge gaps in what could be the negative unexpected consequences on unaccounted-for stakeholders or people not represented in the debate.
And so I think one of the functions is to help remind folks. Say you envision this home care robot, or self-driving cars, as the be-all and end-all of mobility that will take care of everyone’s kids and everything. You know, kind of a puppies-and-rainbows thing. But maybe think about the displacement of work, the displacement of people, the kind of liability impacts. All of the negative externalities that are created, which the culture of innovation and innovators have been kind of forced to forget, right, forced to just think about the upside.
Sharp: I think that’s interesting, and certainly true as people’s perception of Silicon Valley goes. But I think you can also flip it so that the negative stuff that people are talking about and thinking of and picturing is just a warning sign, right. It’s the warning sign for what happens if you let something go unchecked. And the flipside is…we can check it. And so thinking about it as a way to picture the guardrails rather than just a warning system— Like, I think Black Mirror, the television show Black Mirror is a really good example of that. Of the things that take something to so negative an extreme that it flags for you like, don’t let it get this far; let’s see how we can put the guardrails on for the good stuff.
Soltani: I think we’re in agreement.
Finn: But it also seems to be true that there’s a lot more dystopian science fiction than there is, you know, constructivist…hopepunk…yeah. I may be biased in this question. So, I think there’s a lurking question underneath here, which is: what is the difference between a good story and good policy, right. And I think one thing that maybe you were getting at here, Ashkan, is that sometimes a good story is not good policy, because stories are supposed to make us feel good, or stories can often be intrinsically kind of self-centered, right. They can be ego exercises. And policy shouldn’t work that way. So what is the difference between those two modes of sort of organizing the universe? And how do you translate between them?
Older: Well I mean, I would say that first of all if a story is a good story hopefully it’s avoiding the sort of ego and like, “we’re disrupting convenience stores” or whatever sort of angle. I mean usually if you’re reading something like that it doesn’t read as a good story. Now, if you film it with a $100 million budget, and lots of CGI, and big stars, it may still seem like a good story even though it’s really not a good story. So that’s a separate problem.
But you know, I think comparing policy and stories is maybe not quite the right dichotomy. Because stories really should be kind of opening the frame for how we think about policies. And what we do want stories to have—usually, although not always, and there are lots of people who would disagree with me on this, like…dadaists—but usually you want a story that has some kind of ending and closure. You want something that feels satisfying, where you feel like you’ve been on a journey and learned something or had an insight, or you’ve gotten somewhere with the story. And policy isn’t necessarily like that. It doesn’t necessarily wrap up. It doesn’t necessarily have an ending.
But what I hope good stories do is give us ideas. They give us empathy. They change our perspective. And that should help us think about policy in a way that’s a bit outside of our personal narrow framework or our political party’s narrow framework, and give us a wider view and a different view.
Sharp: The other thing I think it can be helpful in doing is showing you how to actually execute an idea. Like a lot of times when you just sort of brainstorm about stuff— And we see this in communities that are trying to develop methods for connecting people to new sources of income and stuff. Like, it’s great to say you know, “Why don’t we have all of the nonprofit organizations work with the schools? This would be amazing!” But it’s really hard to actually figure out the steps that have to happen in order to execute that. And so fiction, and sci-fi in particular, can sort of show you what the steps are and say like, if you’re thinking about a Martian civilization you have to actually have an organization that is dealing with all of the different countries that go there together, and how they work together. It’s like a picture of what the action steps are.
Older: And also the end goal. Sometimes even when you talk about something as a great thing, what actual success looks like isn’t always clear unless you speculate about it. Unless you imagine it.
Finn: Yeah, we had a colleague, who’s now at another university, who did a wide-ranging survey of decision-makers around climate policy, asking them “What does the ideal future look like? What are you working towards?” And people just didn’t have, you know, a vision. Or they had a number, like getting down to some level of parts per million. But it’s actually really hard to come up with a concrete and actionable plan for where you’re trying to get that has the end goal in mind, rather than just sort of proceeding step by step.
So, how do we start to integrate… How do we do this more, if we think that this is a good idea?
Older: Which this? [crosstalk] Come up with stories…?
Finn: Oh. Bringing science fiction into… So, you were saying maybe we want people in this room to invite more science fiction writers into some of the organizations that they’re part of. What are some of the methods and the steps to actually use this sort of toolkit of storytelling about the future to reframe or improve other kinds of decision-making processes?
Older: Yeah. I think that there’s a range of things that can happen, from bringing in writers in residence, which I actually think is a great idea for all kinds of organizations, whether they’re for-profit, or nonprofit, or research-based. Having people that think a different way than the majority of the people in your organization is something everyone should consider budgeting for.
And also bringing in some of the techniques. I mean, we talked about scenario planning and you know, that is not so dissimilar in some of its forms from what I do as a writer. When I’m brought in to do kind of futurist stuff… Like I was asked to go to the CIA and talk to them about the future of security in Africa. And I mean…I am not an expert on security or Africa, and I thought it was really interesting that they were bringing me there to make up stories about it.
And so you know, when I think about how I’m gonna do a good job at this, and when they ask me how I do this…you know, my added benefit for them is that I am totally willing to make shit up. I have a lot of practice doing that. And I am really happy to just come up with ideas that don’t have to necessarily be rooted in the reality of engineering or the reality of tech, as long as I feel like I can root them in the reality of how I know people behave. Because for me that is the key factor that makes stories believable and accessible to people, that makes stories work. And so that’s what I do. I found writing science fiction particularly freeing because when I got stuck somewhere in a plot where I wanted something to happen, I could make up a technology that fixed that problem.
Now, some people don’t find that freeing in the same way. Because they get hung up on “How will we make this technology work?” And that is fine. That actually is great because it gets you a very different kind of writing and science fiction. But maybe for those people to really get into totally making shit up, they need to write fantasy. Or maybe they need to write in…you know, maybe they need a different kind of exercise that’s based in a different kind of reality to free them up to feel like okay, I’m gonna think big and different about how the world could change.
Finn: How do each of you give people permission to do this? Because that’s I think part of what you’re saying, that you are like a card-carrying fabulist, right? You’re allowed, you’re empowered, and you will show up and do this.
Older: I’m gonna make those cards. You should totally do that. I would like one.
Finn: But how do you do that? Because I’ve found in the work that we do at the Center for Science and the Imagination that that permission is really important, and there are different ways that you can do it. But what have you all encountered?
Sharp: I think that the more interactive you can make it the better. And I don’t think that everybody’s sort of suited to be a writer and to conceptualize and create a story like that. So a lot of times we’ve done things like flipping a card that shows some specific thing, and then you have to make up a story about that thing. Or putting a set of Legos on the table and saying you have to make the sort of community center of the future: where do people gather in different ways, and what does that look like? I like the artifact one that you [Steenson] mentioned earlier, thinking about an artifact of the future. But anything that you can do to sort of get people outside of their normal thinking and make them picture something else, and then describe what the picture looks like, is helpful.
Steenson: My thing is getting students to turn things upside down and not take them for granted. Take technologies, turn them upside down. Take apart movies, take apart books. And a lot of them have never thought about doing this before. If I’m teaching master’s students, they’ve come in to do a master’s in interaction design. They’re going to go work at Google when they’re done, and they haven’t really thought about what actually makes everything go. So, we look pretty critically at what runs behind. We look at the role of AI in society. In the AI in culture class we take apart movies. We take apart The Hunger Games, actually. And Fahrenheit 451; the old version, of course. And you know, look at what the different kinds of tropes are.
And then I also get them to do their own creative work, right. They have to do something interpretive. So I have philosophers doing paintings, and I have HCI students doing plays, and architecture students curating a fashion show. And all of these are just different ways around and through, but that’s the method that I’d say is at hand for me, being at a university.
Soltani: I think there’s the kind of ideation function that this helps with, and there’s also kind of a calibration function. So, on a number of occasions, I and other experts (I think Kevin does this at a security conference that we attend) kind of look at sci-fi and ideas around sci-fi, and then really critique: how close are we? How realistic is this? You know, is this near future, far future? And for people in the policy realm and people that don’t have a lot of technology specificity, the difference between kind of NLP that autocompletes your search history and something that you can have a conversational dialogue with, they don’t know what the distance between those two is. A great example is the self-driving car, which we’ve been told would arrive you know, last year, and which we’ve been told will arrive next year, but a lot of the experts will say, given the policy considerations and all this kind of stuff, probably longer.
Helping people understand how far away we are I think is another critical function. Like, you’re able to create a plot device that you can drop in…policymakers like to drop in existing plot devices. They’re like, “Oh, we can just grab the thing and just drop it in here, and we’ll make like you know, energy out of the sun.” But that was a crazy idea a while ago, right. And helping anchor those concepts for people and make them a reality I think is a critical use or application of this as well.
Finn: Yeah. I hear that constraints can be really useful. Like your card or you know, a simple exercise that invites people to step outside of their normal pattern. Not letting the perfect be the enemy of the good. We do that a lot in our projects.
And I also really like what you said Molly, about looking behind. And that I think is also what you are getting at Ashkan, that really understanding the mechanics and the state of technology now is important. And I would add also the notion of looking around, and this is part of the problem with the Silicon Valley… The business pitch story is all about the upside and you don’t think about what else could happen and the unintended consequences. So, finding ways to find new perspectives on the work is really really important.
So, what are some of the ways that— What are the moral hazards here? Like what can go wrong, and what are the— You know, we heard about Star Wars before. What do we need to watch out for when we’re thinking about how we do this kind of storytelling with a public purpose?
Soltani: So you touched on one, which is like the problem with reinforcement learning: if you’re doing modeling of any kind of data-driven system, how do you shake it up and invoke a new idea? Otherwise you kind of gravitate to a local maximum and you will just reinforce an idea that everyone knows—you’ll never break free of that. So I think that’s one critical one.
I think the other is thinking around how to help people…not just be realistic but really help people not be overconfident in their vision, not oversell it. It’s kind of related to the first, where you might have heard a lot of people say the same type of thing about an AI: it’s going to be a killer robot, and therefore you’re like, everyone says it’s going to be a killer robot so it probably is. The other is that you are now the foremost expert and futurist who comes in to describe what the likely security threats in Africa are, and you’re like, “I’ve got this, you guys.” You know, overselling, being overconfident about your position. I think those would be the two moral hazards. Because we are kind of just making…making stuff up, right. I don’t know if we’re censored here, I was about to… We are just kind of going on the fly and expressing our vision of the world, right. And so having some humility around that I think is critical. Which policymakers don’t really do.
Older: I think for me as a writer, the clichés that you mentioned in the beginning are kind of a moral hazard. Because it’s very easy to slip into shorthand. It’s particularly easy around secondary characters, where you just slip into describing them in the way that that function of character is always described in movies and in books. And I think that’s one of the clearer examples of where it happens, but it can happen in a lot of other areas as well. And that’s very very dangerous because that’s how we end up with stereotypes. And they’re very easy to repeat and to pass on, the ones that we’ve learned.
And you know, as I said it’s kind of easy to see in characters, but the things that you’re mentioning, you know, the trope of the technology that never fails, or the trope of the killer robot: all of these things are very easy to repeat. And so what’s really important for me as a writer is to try to make sure that I’m questioning anything that I write without thinking. And to make sure that I’m trying to build things out of my own observations and experience and not out of things that I’ve read a million times. Because not only is that boring and poor narrative, but it’s also dangerous.
Sharp: Yeah. I think you have to make sure that there are enough different kinds of people telling the stories that you have a variety of stories. Otherwise that’s where you end up with the clichés.
Finn: So let’s open this up for questions from the audience.
Soltani: And we talked about, when you ask your question, could you say [indistinct], probably your most…one sci-fi that really influenced you a lot. Like one sci-fi movie, film, whatever…book, that was critical in your framing and shaping of this space.
No pressure.
Audience 1: I’m going to answer with a non-answer. I’m not a sci-fi fan. I love the topic today and thank you again for inviting me this morning, and thank you for all your insightful research and sharing that with us.
I’m returning the question: how many women watch sci-fi? Who watches sci-fi? Is there an impact from that on how we’re shaping AI policy? So I just wanted to re-pitch the question. Well, Wall-E’s cute.
Older: Is the question about how many women watch sci-fi or how many women create sci-fi?
Audience 1: That too. Who’s creating sci-fi, who’s watching it…
Older: I mean, I can speak for myself? I grew up on Star Wars and Star Trek. Along with a lot of other things. Like I also grew up on Tolkien, and The Black Stallion, and The Wizard of Oz, and you know, all sorts of books that I never— And Anne of Green Gables. And you know, I knew that my brother wouldn’t read Anne of Green Gables, although much later I found out that he stole my Sweet Valley High books when I wasn’t looking? He’s admitted this on tape so I’m not like giving up a big secret.
But you know, I mean, I always did. And to me you know, stories are stories. I know a lot of women who both write and consume sci-fi in different ways. I don’t know the statistics, but I think that if you look at the amount of conversation that goes on, there are a lot of women who are very involved in this. If you look at the current awards slate, for example the Hugos, they are strongly female. And a lot of people are very upset about that. And I also know there’s been some work done by Lisa Yaszek, who’s at I think the University of Georgia…
Finn: Georgia Tech.
Older: Georgia Tech. Thank you. She wrote a book recently called The Future Is Female!, where she looks at female science fiction writers of the middle of the 20th century—the 40s, 50s, 60s—who existed and were extremely popular, and had both editors and readers of magazines asking for more of their work, and who have really disappeared from our popular mental image of the genre. So there have always been women who have been writing and reading and watching sci-fi…but we don’t always pay attention to them. We don’t always listen to them. And we don’t always accept them as forces in the genre.
I can give you a ton of names to read. And maybe you will find that you are a fan of sci-fi, just not the kind of sci-fi you’d encountered before. But because we’re short on time, I will do that offline.
Finn: Great question. Other questions.
Audience 2: I think it’s a sci-fi movie, but Logan’s Run. That one really scares me the older I get. But at any rate, one element that I—and it may be that I misunderstand the format—is that sci-fi is also a deeply creative medium. And so to what extent can you dictate to a sci-fi writer, a sci-fi artist, that oh, you’re scoring. You said AI was evil, you need to stop that, you know. I’m just wondering where that comes into this discussion, so that it’s not just propaganda for some business model. Thank you.
Older: I can tell you, as a sci-fi writer, that sci-fi writers get, let’s say, strongly suggested to? all the time. I get requests from anthologies to write about specific topics or subjects, all the time. And then of course it’s my choice whether I write about it or not. And if the topic doesn’t grab me and I write a terrible story about it, they’re probably not going to take it. But I do get all these prompts constantly. And also you know, to get a story published you have to go through layers of agents and editors. And publishers. So while it’s creative, the people who are creating it are not the only people who decide what stories get out into the world. And I think that is magnified hugely (although it’s not my area as much) in the realm of TV and movies, where, as Chris was saying earlier, the bigger the budget they have, amazingly, the less risk they want to take. I mean, we see why that makes sense? But for someone who has to kind of do a lot of their creative work on spec, it’s also kind of amusing.
But we see that there’s a huge number of gatekeepers who think “this is what people will pay money to see,” and they’re often wrong, and yet that doesn’t always change the gatekeepers. We see that when movies flop, it often gets blamed on the female star, or the female writer, or the female director, or—you know, sometimes the male star. But rarely on the producers or the people who are making those decisions about which movies get made. So…yes and no. You know, we need to push the gatekeepers, I think, and we need to push the people who are providing media to take more risks and to go out and find different stories.
Sharp: I think it also matters how you define sci-fi, and maybe just broadening the definition of sci-fi a little bit is helpful. Like I was really pleased to hear somebody call Wall‑E sci-fi, which is like a kids’ movie, right? And that’s an interes— But if you think about that as evidence of how people think about science in the future, that’s a really interesting definition, and it’s broader than Star Wars or Star Trek and that kind of stuff. It gets you a little bit more of a wide lens.
Finn: I think sci-fi has become interestingly more mainstream, and you see it permeating other genres in a funny way. Like the last season of the sitcom Parks and Rec was, for no particular reason, science fictional. They moved like five years into the future.
I think there was one more question in the back? Yeah, go ahead.
Miranda Bogen: [indistinct] question, but the book I’ve been enjoying most recently is The Three-Body Problem, and the subsequent ones in the series. And I think what’s interesting about that is it’s an entirely different cultural perspective on a speculative future. And my question is kind of related to what you were just talking about: especially given the global nature of what we imagine governance of AI to be, and given the high barrier to entry of sci-fi in general, let alone across cultural contexts, how do we encourage more of that perspective-sharing? Whether it’s across country cultures or even within the US—as you were saying, traveling around the country, I’m sure there are different perspectives there. It’s not just gender representation or community representation but these different perspectives, and this frame that I think we’re seeing is a helpful one when we’re thinking about the future of technology.
Steenson: I think that a lot of people can’t actually work on AI in any substantial way, or on its related technologies. They’re not the crafters of algorithms. But people are storytellers in a lot of different kinds of ways. And so a way to begin to engage, critically and creatively, with AI and related technologies and technological paradigms is exactly in some of the ways that I think we’ve been talking about.
Finn: I think that’s a pretty good place to stop. Please join me in thanking our final panel.