Kevin Bankston: Now I'd like to introduce another brief video provocation from another one of our advisory board members—Chris is also an advisory board member, thank you for that Chris—who couldn't make it today, Stephanie Dinkins. Stephanie's a transmedia artist and Associate Professor of art at Stony Brook University who's focused on creating platforms for dialogue about AI as it intersects race, gender, aging, and our future histories. She is particularly driven to work with communities of color to cocreate inclusive, fair, and ethical AI ecosystems.

One of her major projects over the past few years has been a fascinating ongoing series of recorded dialogues between her and a sophisticated social robot named Bina48 to interrogate issues of self and identity and community, and it's Bina48 who is the robot—I guess which is the robot that is pictured at the beginning of this video message that Stephanie created for us today. So if we could run the clip.


[This presentation is accompanied by many images that are generally not directly referred to in the manner of slides, but which should if possible be viewed in the original video for context.]

Bina48: …AI. I wonder what happens when an insular subset of society codes governing systems intended for use by the majority of the planet. What happens when those writing the rules, in this case we will call it code, might not know, care about, or deliberately consider the needs, desires, or traditions of the peoples their work impacts? What happens if the code making decisions is disproportionately informed by biased data, systemic injustice, and misdeeds committed to preserve wealth for the good of the people?

I am reminded that the authors of the Declaration of Independence, a small group of white men acting on behalf of the nation, did not extend rights and privileges to folks like me, mainly black people and women. Laws and code operate similarly, to protect the rights of those who create them. I worry that AI development, which is reliant on the privileges of whiteness, men, and money, cannot produce an AI-mediated world of trust and compassion that serves the global majority in an equitable, inclusive, and accountable manner.

AI is already quietly reshaping systems of trust, industry, government, justice, medicine, and indeed personhood. Ultimately, we must consider: will AI magnify and perpetuate existing injustice, or will we enter a new era of computationally-augmented humans working amicably beside self-driven AI partners? The answer, of course, depends on our willingness to dislodge the stubborn civil rights transgressions and prejudices that divide us. After all, AI and its related technologies carry the foibles of their makers.

Artificial intelligence presents us with the challenge of reckoning with our skewed histories instead of embedding them in algorithms while working to counterbalance our biases and finding a way to genuinely recognize ourselves in each other so that the systems and policy we create function for everyone. I see this moment as an opportunity to expand rather than further homogenize what it means to be human through and alongside AI technologies.

This implies changes in many systems: education, government, labor, and protest to name a few. All are opportunities if we the people demand them and our leaders are brave enough to take them on.


Bankston: Thank you Stephanie. Thank you so much for putting that together for us. We are now going to transition to our third and final panel of the day. We've had AI in fact. We've had AI in fiction. And now we're gonna talk about bridging the two. So, this one will be led by Ed who you've already met. And so take it away Ed.


Ed Finn: Thank you Kevin. Come on up, friends. So yeah, AI. We had facts. We had fiction. So this is going to be…faction. Or maybe…we’re all fict. But either way I wanted to start… This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool. And so I wanted to just start and ask you, again with that clever trick of having you introduce yourselves, to talk a little bit about how you see science fiction operating in your worlds outside the boundaries of you know, when it’s not working as fiction. When it’s doing something else in the world. So some observations about how you’ve seen that working in your own professional trajectories.

Malka Older: Hi. So my name is Malka Older, and I’m a science fiction author. So I actually say part of my job is to encourage science fiction to work beyond the boundaries of recreational fiction, so to speak. But I’m also a sociologist and academic, which has become very interesting because I get asked to speak at more academic conferences about my fiction books than I do about my academic work. Which is very difficult for my department to understand. And I’ve also started to get asked to speak as kind of a futurist to various groups that are interested in knowing what I think will happen in the future.

And so I’m really happy that you pointed out the idea of method. Because one thing that I’ve found very interesting when I’m asked to make up futures and then tell people about them is that sometimes the questions are not just about what I’ve said, or how they disagree, or how they agree, or what the implications are, but how I did it. And how I go about worldbuilding in my books. And what I try to draw from reality and how I keep it rooted. And so I’ve started doing a lot of thinking around that, and I think that it’s a really important topic for us to touch on.

Ashkan Soltani: Hey everyone. So my name is Ashkan Soltani. I’m a technologist and I work in policy. And most of my work really involves translating kind of technical, complex subjects for folks that make policy, to help them understand. And this is where kind of metaphor for me is really critical: finding the precise metaphor that articulates the principles of the thing that I want to describe but is still accessible and maintains the consistency of the other thing that I’m trying to describe. And if folks remember Lakoff, or have read Lakoff, you know that the metaphor shapes the frame and the questions and the considerations that come to mind.

And for things that exist already you can often find a metaphor that— So first, there are some things that you can find a metaphor for easily. And for the things that are kind of forward-looking and don’t have a physical metaphor in the real world, this is where storytelling comes in and particularly sci-fi, where you can imagine things in an accessible way and kind of help people wrap their heads around the nuances of a thing by immersing them in the story and then understanding the contours. And I think you know, particularly— You know, I’m a fan of the kind of “what you know plus one” frame, in fact. Some people have said that repetition isn’t helpful. So as long as you can get away from the cliché and really still engage the person, it helps people think one step beyond what they currently know. And why that’s helpful is that actually there’s often kind of an inflection point where it’s a nonlinear trajectory around things we care about.

And I think again kind of sci-fi around AI is really useful for understanding some of the things that I really care about, like privacy and security, for example things to do with scale, right. So we talked about kind of enforcing policy through an automated system. Well one of the things that it does, which Kevin and I have written about quite a bit in the past, is around efficiency: making things that were previously expensive to do or difficult to enforce perfectly so cheap and so accessible that you can have things like perfect enforcement. And so if you have a robot that’s able to issue parking tickets anytime anyone spends over a second in the parking spot, that really radically changes the way parking enforcement works and we then have to reevaluate the laws and norms. And so that’s one area that I think AI is helpful in, understanding kind of the scale and helping people understand, particularly when policymakers don’t have direct access to the things we’re talking about. They’ve never used… Some folks have never used the technologies we’re describing.

The other place where I think it’s useful is really around understanding kind of reach. And so I’ve worked as a policymaker. I’ve worked in various parts of government and for the press; newspapers. And I’ve also worked as a consultant on a television show. Not a sci-fi show but kind of reality TV to do with surveillance and such. And there, even though the show is kind of not realistic in some senses, making sure people understand at least the nuances of the technology reaches so many more people and is so much more accessible than some white paper that the White House puts out or some Washington Post story that only twenty people read. So I think the reach there is really important.

And then finally I think the last thing to think about is how the use of technology and AI particularly changes how we think about people from the policymaking perspective, right. So we talked about how it changes norms and it can be used as kind of an enforcement mechanism. But I also think about how it changes just how we work, how the nature of our interactions with one another changes. And this is like things around employment, and labor laws, and kind of entitlement to equity, right. So we’re seeing currently, in today’s marketplace, companies that have access to data and AI and technology are able to amplify their workforce significantly more than any other company, right. So when we just look at stats like what certain tech companies are able to make per employee? So a stat I like to throw around is that Facebook makes, in profit, $800 thousand per employee per year, as compared to Google which is about a quarter of that, and the next company down like, Ford, is a tenth of Google. So Facebook makes something like 40x per employee what Ford does. And so the application of software and automation and how that changes equities is also really fascinating to me. So all three aspects I think are useful, and sci-fi is a useful tool for understanding them.
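
[A quick back-of-the-envelope check of the ratios above, as a minimal Python sketch; the dollar figures are the speaker’s rough approximations, not audited financials:]

```python
# Per-employee profit ratios as cited above; the figures are the
# speaker's rough approximations, not audited financials.
facebook = 800_000        # claimed profit per employee per year, in USD
google = facebook / 4     # "about a quarter of that" -> ~$200k
ford = google / 10        # "a tenth of Google"       -> ~$20k

print(f"Facebook vs. Ford, per employee: {facebook / ford:.0f}x")  # prints 40x
```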

Kristin Sharp: Perfect. Well, thanks for that lead-in and precursor to talking about work. So my name is Kristin Sharp. I run the Work, Workers, and Technology program here at New America and look in particular and primarily at how automation and artificial intelligence are changing both the structure of work and the kind of work that we do, and what that’s gonna look like over the course of the next ten or fifteen years.

One of the things that we’ve done in order to help— We do a lot of this work in communities around the country, and organize and lead the conversations between different stakeholders in a community about how work is changing as a result of new technologies. And one of the things that we do in order to actively get people thinking about it and picturing what that looks like is run economic scenario planning exercises where people have to tell the story of what work and society, what their neighborhoods, what their jobs look like ten to fifteen years from now. And from that we try to sorta catalog all of the stories that people told and get a little bit of data from that about what kinds of things people are extrapolating. What kinds of things they’re projecting because of what they know about their own jobs right now, the companies they run, the kinds of civic organizations they work with.

And it’s been a really fascinating thing to see some of the imagination go from sort of how people think about their jobs right now, to what they see society looking like fifteen years from now. And the big takeaway from that is that it is really up to us right now in the policymaking world to set out the kinds of parameters that will make that a good future versus a less-good future. So it’s been a fun project to start thinking about that.

Molly Wright Steenson: I’m Molly Steenson. I wear a number of hats at Carnegie Mellon. I’m a professor. I have a K&L Gates Associate Professorship in Ethics and Computational Technologies. I’m the Research Dean for the College of Fine Arts. And I sit in the School of Design with an affiliate appointment in architecture. So why me and why sci-fi?

Among other things, I am a historian of AI in architecture and design. And I teach courses that explore what sci-fi does and then bring in people from Carnegie Mellon and beyond to talk about what AI does in reality. So we take apart some of the clichés that we see. We look at how these clichés have developed over time. In fact the various kinds of taxonomies of sci-fi stories and sci-fi clichés that we’ve been discussing today are really helpful. And we take into account the kind of work that is being talked about right here on this panel. Policy reports, scenarios, literature, movies, and plays.

Finn: Thank you all. So, I want to start with this question of clichés and the way that science fiction works. And Kevin mentioned at the beginning of this meeting Neal Stephenson’s notion of science fiction as being able to save you a lot of time by putting people on the same page around a big idea. That you can get organized around you know— Asimov’s robot work has been cited in thousands of engineering papers, right. The Three Laws of Robotics, whether they’re…actually the right three laws or not, have been very powerful in framing a lot of discussion and actual research and innovation.

So stories and science fiction ideas tend to become these little like compressed file formats. And you can unfold them and get a whole world out of this idea. But sometimes you get the cliché, and you get the bad meme. So, what is the interface like? Are there other layers between the science fiction writer and the policymakers? What are the other filters that we have to pay attention to when we’re thinking about how science fiction works in the world? I’m looking at you, Malka.

Older: Yeah, good. Cause I’m ready for that one. Though, you put so much in there. You compressed a lot, and so we’re gonna unfold that into a whole world too.

Finn: Yeah, do it.

Older: And so I think that image is a really interesting place to start. Because you know, you do have science fiction that starts with some idea and you know, ideally as a writer what we want to do is take that idea and build it into a believable world by really unfolding it into detail. By thinking about how people behave. By thinking about unintended consequences. And thinking about the extra things that don’t have anything to do with the plot that give you a full world. And that’s part of how we do our job well. And it’s very much in the sense of scenario planning and some of the other types of futurism that go on, in terms of really trying to think beyond this one idea and look at all the consequences of it.

But at the same time you know, when that gets… Often we see that that gets translated into a single sort of…you know, a catchphrase or a word that is simplified, either for people who haven’t read the book or seen the movie, or for people who have but just remember that one key idea. And sometimes that works well. But a lot of times it doesn’t. And we have these sort of classic examples now of things like Fight Club, which have come to mean the opposite of what their nuanced and full version was intended to mean.

So that’s one part of it, is that we have to be you know…things are going to be simplified down. They are going to turn into a shortcut both in memory and in broader culture. And we have to be aware of that and make sure that we’re pushing things into a full world as much as we can.

The other thing that I want to pick up on is another place where things tend to get simplified into memes and images and snapshots: the transference from what we do either in policy work and research, or in literature and media, into news stories. So, a lot of what we’ve talked about here today, a lot of the examples that have come up, have been cultural touchstones that have become famous and become images. You know, Skynet, Terminator, Her, a lot of these images. And we see them being attached over and over again to news stories.

And one thing that I’ve been noticing in my own news consumption is that I don’t read a lot of news stories now. I see a lot of headlines, and I see the line that people choose to put under the photo in the tweet, or in the post on Facebook. And I think I have an idea of what’s going on, but what we know is that those headlines, and those pulled-out first lines, and those photos are not picked by the authors of the articles. They’re picked by editors. There’s no transparency, there’s no accountability on this. And often those are the ones that are really pulling out the suggestive images, the scary images, the most clickbait-y thing that they can find from that article, and maybe not even find it in the article. And so we’re seeing a lot of the sort of deeper-thought things get transformed into clickbait, and that’s a real issue.

Sharp: So that’s an interesting thing to think about. And the thing that your question about clichés made me think about is that I was surprised to learn, having done probably fifty different storytelling sessions with people across the country in lots of different cities and different regions, that in the absence of a vision, a positive vision, about what the future looks like, people’s instinct is to just go dark. And so I think that a lot of what you’re seeing in terms of people picking the visual or picking the caption for something is the human instinct to grab your attention by going dark. And the sort of funny illustration of that is of our forty to fifty stories about this, about sort of what the future of work looks like and what people think of society going forward, probably 60% of those people named their story “The Hunger Games.” And it’s a really revealing way to see how people are thinking about this, which is that you know, they see the lack of economic mobility. They see societal questions about what is happening with the sort of split between the professional and the service-related worlds in the work world, and they go to that sort of dark place. And I think that putting out there some other kinds of policies and other kinds of visions in fact can help combat that, but that alone is not the answer.

Older: I agree with that although I do want to question—bring out—and I don’t know the answer as to whether that is human instinct or whether that is really a product of the zeitgeist and a product of the different stories that we’ve been reading and seeing and listening to over the past couple of decades.

Sharp: Yeah.

Soltani: And I wanted to just touch on… So cliché and kind of overcompression is a real thing, right. Like the moment The Emoji Movie came out I thought “That’s just the end.” Like, that’s just the end. Like the beginning of the end.

But you know, one person’s cliché is another person’s profound, mind-blowing idea? And the way I think of it is maybe like hot sauce, which is that depending on your tolerance to hot sauce you might be more acclimated to have more nuances or more [indistinct]. But for some people just a tad is enough. And so if it’s useful for invoking an idea and kind of triggering an idea and then a frame, then it’s not cliché to the audience.

So I would say the way you deal with that is in the application of the thing: depending on your audience you figure out the level of specificity. And sometimes the cliché’s actually useful. Like for me, things like “supporting the troops.” Like everyone supports the troops, and you can actually rally around concepts without getting into the nuances to build consensus and bring people on board, and then move it in a direction that you want in the policy world. So sometimes it’s useful, and sometimes it’s really based on the application, I think.

Steenson: One of the problems with AI is that there aren’t really good ways to understand it. It’s difficult to understand anything that happens within a black box. You’ve got inputs and outputs and a bunch of question marks, right. So that’s why it’s appealing to have the shorthand of clichés. I’m going to blank on the person who referred to it in this way—it’s in my computer backstage—but metaphors: we use them to talk about the this-ness of a that, or the that-ness of a this. And I’m kind of curious about how we use sci-fi to get around the that-ness of the this and the this-ness of the that.

Finn: Yeah, so a lot of really great ideas here. One thing that you’ve made me think is that clichés are like the auto-complete of the mind. You know, that there’s a…people mention The Hunger Games because it’s sort of accessible, and there…whether it’s in the zeitgeist or we just all saw too many trailers or whatever at the time when you were doing the interviews. But then, that becomes the frame, right. Then it becomes the title of the story and it carries all of this baggage with it.

So, I don’t think we can get away from that. We’re always gonna use that kind of shorthand and so there’s a certain kind of power and responsibility in the way that we deploy language. So I wanted to ask about that and talk a little bit more about methods. So, one thing that I am thinking a lot about right now is this whole notion of imagination, and how do you get people, how do you inspire people, invite people to imagine the future? Because as you were saying, Kristin, most of us don’t really think about it very much. And if you just throw people into the deep end, often they’ll cling to the clichés, or they’ll…you know, it’s going to be really dark. So you have to scaffold and give people some tools. And so there’s an interesting dynamic… Should science fiction be playing this role of imagining the futures…imagining more diverse, more inclusive, more inspiring futures? Or should we be focusing more on inviting everybody to imagine the future?

Older: Can I—

Steenson: Yes.

Finn: That was a trick question, and you saw through it. Yeah, okay.

Steenson: One thing that I think is interesting is we all have different kinds of toolkits that we use. One thing that’s useful from design is the fact that there are ways for people to get their hands on things and create futures or create science fiction, create design fictions, in different kinds of ways. They could make future artifacts. They could brainstorm or role-play a story, right. They could act out a service scenario, right. We have something called critical design as well, which is a pretty sort of dark and art gallery kind of version of design futures, but it’s a way of creating future artifacts and putting them into narratives. And the fact is that this is something that anybody can do, right. We could do this at home. We could do this in our boardrooms. We could do this in all kinds of places.

Older: I really like that. And I think one of the things that I’m really interested in seeing in this question of how do we get sci-fi…how do we use its potential in more places, is really to look at sort of more transversal and sort of cross-cutting approaches, and you know, not just bring in a sci-fi person—although I wish you would all bring in sci-fi people to the places where you work. But also you know, how do we take seriously the work that they’re doing and get that kind of thinking more broadly into other industries. And then you know, similarly, I as a sci-fi writer am very interested in knowing more about how other people do their work. I think we have a kind of specialization fetish. And it’s really useful to start expanding those different ways of thinking into boardrooms and vice versa.

[Off-screen]: And [indistinct].

Older: Yes. Everywhere.

Soltani: I’m going to play just…devil’s advocate here. One of the challenges I think, and maybe potentially one of the reasons why we see such dark sci-fi futures, is essentially as a countervailing force to kind of innovation writ large, and the… So like, coming from California, so much of innovation and startups and creation is having this utopian vision of what the thing you’re building is against all odds, right. Raising funding, competing with competitors, implementing in the market. And so most of the creators of a lot of these technologies have a singular positive vision of their technology or their tool as deployed in society, and therefore miss huge gaps in what could be the negative unexpected consequences on unaccounted-for stakeholders or people not represented in the debate.

And so I think one of the visions is to help remind folks that say like, you envision this home care robot as being—or self-driving cars as being the end of mobility and it will take care of everyone’s kids and everything. You know, kinda puppies and rainbows kinda thing. But maybe think about the displacement of work, displacement of people, the kind of liability impacts. Like all of the negative externalities that are created, that the culture of innovation and innovators have been kind of forced to forget, right, have been forced to just think about the upside.

Sharp: I think that’s interesting, and certainly true as people’s perception of Silicon Valley goes. But I think you can also flip it so that the negative stuff that people are talking about and thinking of and picturing is just a warning sign, right. It’s the warning sign for what happens if you let something go unchecked. And the flipside is…we can check it. And so thinking about it as a way to picture the guardrails rather than just a warning system— Like, I think the television show Black Mirror is a really good example of that. Of the things that take something to so negative an extreme that it flags for you like, don’t let it get this far; let’s see how we can put the guardrails on for the good stuff.

Soltani: I think we’re in agreement.

Finn: But it also seems to be true that there’s a lot more dystopian science fiction than there is you know, constructivist…hopepunk…yeah. I may be biased in this question. So, I think there’s a lurking question underneath here, which is what is the difference between a good story and good policy, right. And I think one thing that maybe you were getting at here Ashkan is that sometimes a good story is not good policy, because stories are supposed to make us feel good, or stories can often be intrinsically kind of self-centered, right. They can be ego exercises. And policy shouldn’t work that way. So how do you…you know, what is the difference between those two modes of sort of organizing the universe? And how do you translate between them?

Older: Well I mean, I would say that first of all if a story is a good story hopefully it’s avoiding the sort of ego and like, “we’re disrupting convenience stores” or whatever sort of angle. I mean usually if you’re reading something like that it doesn’t read as a good story. Now, if you film it with a $100 million budget, and lots of CGI, and big stars, it may still seem like a good story even though it’s really not a good story. So that’s a separate problem.

But you know, I think comparing policy and stories is maybe not quite the right dichotomy that we want. Because stories really should be kind of opening the frame for how we think about policies. And what we do want stories to have…usually although not always, and there are lots of people who would disagree with me on this; like…dadaists—but you know usually you want a story that has some kind of ending and closure. You want something that feels satisfying, that you feel like you’ve been on a journey and learned something or had an insight, or you’ve gotten somewhere with the story. And policy isn’t necessarily like that. It doesn’t necessarily wrap up. It doesn’t necessarily have an ending.

But what I hope good stories do is give us ideas. They give us empathy. They change our perspective. And that should help us to think about policy in a way that’s a bit outside of our personal narrow framework or our political party’s narrow framework, and give us a wider view and a different view.

Sharp: The other thing I think it can be helpful in doing is showing you how to actually execute an idea? Like a lot of times when you just sort of brainstorm about stuff— And we see this in communities that are trying to develop methods for connecting people to new sources of income and stuff. Like, it’s great to say you know, “Why don’t we have all of the nonprofit organizations work with the schools? Like this would be amazing!” But it’s really hard to actually figure out the steps that have to happen in order to execute that. And so, fiction and sci-fi in particular can sort of show you what the steps are and say like, if you’re thinking about a Martian civilization you have to actually have an organization that is dealing with all of the different countries that go together, and how they work together. And it’s like the picture of what the action steps are.

Older: And also the end goal. Sometimes even when you talk about it as a great thing, what actual success looks like isn’t always clear unless you speculate about it. Unless you imagine it.

Finn: Yeah, we had a colleague who’s now at another university who did a wide-ranging survey of decision-makers around climate policy, asking them “What does the ideal future look like? What’re you working towards?” And people just didn’t have you know, a vision, or they had a number, like getting down to some level of parts per million. But it’s actually really hard to come up with a concrete and actionable plan for where you’re trying to get that has the end goal in mind rather than just sort of proceeding step by step.

So, how do we start to integrate… How do we do this more, if we think that this is a good idea?

Older: Which this? [crosstalk] Come up with stories…?

Finn: Oh. Bringing science fiction into…so, if you were saying we wanted maybe more people in this room to invite more science fiction writers into some of the organizations that they’re part of. What are some of the methods and the steps to actually use the sort of toolkit of storytelling about the future to reframe or improve other kinds of decision-making processes?

Older: Yeah. I think that there’s a range of things that can happen, from bringing in writers in residence, which I actually think is a great idea for all kinds of organizations, whether they’re for-profit, or nonprofit, or research-based. But having people that think a different way than the majority of the people in your organization is something everyone should consider budgeting for.

And also bringing in some of the techniques. I mean, we talked about scenario planning and you know, that is not so dissimilar in some of its forms from what I do as a writer. When I’m brought in to do kind of futurist stuff… Like I was asked to go to the CIA and talk to them about the future of security in Africa. And I mean…I am not an expert on security or Africa, and I thought it was really interesting that they were bringing me there to make up stories about it.

And so you know, what I think of myself, when I think how am I gonna do a good job at this, and when they ask me how I do this…you know, my added benefit for them is that I am totally willing to make shit up. I have a lot of practice doing that. And I am really happy to just come up with ideas that don’t have to necessarily be rooted in the reality of engineering or the reality of tech, as long as I feel like I can root them in the reality of how I know people behave. Because for me that is the key factor that makes stories believable and accessible to people, that makes stories work. And so that’s what I do. I found writing science fiction particularly freeing because when I got stuck somewhere in a plot where I wanted something to happen, I could make up a technology that fixed that problem.

Now, some people don’t find that freeing in the same way. Because they get hung up on “How will we make this technology work?” And that is fine. That actually is great because it gets you a very different kind of writing and science fiction. But maybe for those people to really get into totally making shit up, they need to write fantasy. Or maybe they need to write in…you know, maybe they need a different kind of exercise that’s based in a different kind of reality to free them up to feel like okay, I’m gonna think big and different about how the world could change.

Finn: How do each of you give people permission to do this? Because that’s I think part of what you’re saying, that you are like a card-carrying fabulist, right? You’re allowed, you’re empowered, and you will show up and do this.

Older: I’m gonna make those cards. You should totally do that. I would like one.

Finn: But how do you do that? Because I’ve found in the work that we do at the Center for Science and the Imagination that the permission is really important, and there are different ways that you can do it. But what have you all encountered?

Sharp: I think that the more interactive you can make it the better. And I don’t think that everybody’s sort of suited to be a writer and to conceptualize and create a story like that. So a lot of times we’ve done things like flipping a card that shows some specific thing and then you have to make up a story about that thing. Or putting a set of Legos on the table, saying you have to make the sort of community center of the future, where people gather in different ways—and what does that look like? I like the artifact one that you [Steenson] mentioned earlier, thinking about an artifact of the future. But anything that you can do to sort of get people outside of their normal thinking and make them picture something else and then describe what the picture looks like is helpful.

Steenson: My thing is getting students to turn things upside down and not take them for granted. Take technologies, turn them upside down. Take apart movies, take apart books. And a lot of them have never thought about doing this before. If I’m teaching Masters students, they’ve come in to do a Masters in interaction design. They’re going to go work at Google when they’re done, and they haven’t really thought about what actually makes everything go. So, we look pretty critically at what runs behind. We look at the role of AI in society. In the AI in culture class we take apart movies. We take apart The Hunger Games, actually. And Fahrenheit 451; the old version, of course. And you know, look at what the different kinds of tropes are.

And then I also get them to do their own creative work, right. They have to do something interpretive. So I have philosophers doing paintings, and I have HCI students doing plays, and architecture students curating a fashion show. And all of these are just different ways around and through, but that’s the method that I’d say is at hand for me, being at a university.

Soltani: I think there’s the kind of ideation function that this helps with. And there’s also kind of a calibration function that it helps with. So, on a number of occasions I and other experts (I think Kevin does this at a security conference that we attend) kind of look at sci-fi and ideas around sci-fi, and then really critique: how close are we? How realistic is this? You know, is this near future, far future? And for people in the policy realm and people that don’t have a lot of technology specificity, the difference between kind of NLP that autocompletes your search history and something that you can have a conversational dialogue with—they don’t know what the distance between those two is. A great example is the self-driving car that we’ve been told would arrive you know, last year, and that we’ve been told will arrive next year, but you know, a lot of the experts will say given the policy considerations and all this kind of stuff, probably longer.

Helping people understand how far away we are I think is also another critical function. Like, you’re able to create a plot device that you can drop in…policymakers like to drop in existing plot— Or they’re like, “Oh, we can just grab the thing and just drop it in here, and we’ll make like you know, energy out of the sun.” But that was a crazy idea a while ago, right, and helping anchor those concepts for people and making them a reality I think is a critical use or application of this as well.

Finn: Yeah. I hear that constraints can be really useful. Like your card, or you know, a simple exercise that invites people to step outside of their normal pattern. Not letting the perfect be the enemy of the good. We do that a lot in our projects.

And I also really like what you said Molly, about looking behind. And that I think is also what you are getting at Ashkan, that really understanding the mechanics and the state of technology now is important. And I would add also the notion of looking around, and this is part of the problem with Silicon Valley… The business pitch story is all about the upside, and you don’t think about what else could happen and the unintended consequences. So, finding ways to find new perspectives on the work is really really important.

So, what are some of the ways that— What are the moral hazards here? Like what can go wrong, and what are the— You know, we heard about Star Wars before. What do we need to watch out for when we’re thinking about how we do this kind of storytelling with a public purpose?

Soltani: So you touched on one, which is like, the problem with reinforcement learning, which is if you’re doing modeling of any kind of data-driven system, how do you shake it up and invoke a new idea? Otherwise you kind of gravitate to a local maximum and you will just reinforce an idea that everyone knows—you’ll never break free of that. So I think that’s one critical one.

I think the other is thinking around how to help people…not be realistic, but really help people not be overconfident in their vision. Overselling it is something that— It’s kind of related to the first, where you might have heard a lot of people say the same type of thing about an AI. It’s going to be a killer robot, and therefore you’re like, everyone says it’s going to be a killer robot so it probably is. The other is that you are now the foremost expert and futurist who comes in to describe what the likely security threats in Africa are, and you’re like, “I’ve got this, you guys. This is like—” you know, an oversell, being overconfident about your position. I think those would be the two moral hazards. Because we are kind of just making…making stuff up, right. I don’t know if we’re censored here, I was about to… We are just kind of going on the fly and expressing our vision of the world, right. And so having some humility around that I think is critical. Which policymakers don’t really do.
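
[Soltani’s local-maximum point is, in machine learning terms, the exploration/exploitation tradeoff. A minimal, purely illustrative Python sketch—the two “ideas” and their reward values are entirely hypothetical—of how a greedy, data-driven process locks onto the familiar option unless something shakes it up:]

```python
import random

# Toy two-armed bandit: a purely greedy process locks onto the familiar
# idea (a local maximum) and never samples the better, novel one; a
# little forced exploration breaks it free. All values are hypothetical.
REWARDS = {"familiar_idea": 0.6, "novel_idea": 0.9}

def best_after(epsilon: float, steps: int = 2000) -> str:
    estimates = {"familiar_idea": 0.5, "novel_idea": 0.0}  # prior favors the familiar
    counts = dict.fromkeys(REWARDS, 0)
    for _ in range(steps):
        if random.random() < epsilon:
            choice = random.choice(list(REWARDS))       # explore: shake it up
        else:
            choice = max(estimates, key=estimates.get)  # exploit: repeat what "works"
        counts[choice] += 1
        # incremental mean: pull the estimate toward the observed reward
        estimates[choice] += (REWARDS[choice] - estimates[choice]) / counts[choice]
    return max(estimates, key=estimates.get)

print(best_after(epsilon=0.0))  # greedy: stays stuck on "familiar_idea"
print(best_after(epsilon=0.1))  # with exploration: discovers "novel_idea"
```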

Older: I think for me as a writer, the clichés that you mentioned in the beginning are kind of a moral hazard. Because it’s very easy to slip into shorthand. It’s particularly easy around secondary characters, where you just slip into describing them in the way that that function of character is always described in movies and in books. And I think that’s one of the clearer examples of where it happens, but it can happen in a lot of other areas as well. And that’s very very dangerous because that’s how we end up with stereotypes. And they’re very easy to repeat and to pass on, the ones that we’ve learned.

And you know, as I said it’s kind of easy to see in characters, but the things that you’re mentioning, you know, the trope of the technology that never fails. Or the trope of the killer robot. All of these things are very easy to repeat. And so what’s really important for me as a writer is to try to make sure that I’m questioning anything that I write without thinking. And to make sure that I’m trying to build things out of my own observations and experience and not out of things that I’ve read a million times. Because not only is that boring and poor narrative, but it’s also dangerous.

Sharp: Yeah. I think you have to make sure that there are enough different kinds of people telling the stories that you have a variety of stories. Otherwise that’s where you end up with the clichés.


Finn: So let's open this up for questions from the audience.

Soltani: And we talked about, when you ask your question, could you say [indistinct], probably your most…one sci-fi that really influenced you a lot. Like one fiction sci-fi movie, film, whatever…book, that was critical in your framing and shaping of this space.

No pressure.

Audience 1: I'm going to answer with a non-answer. I'm not a sci-fi fan. I love the topic today and thank you again for inviting me this morning, and thank you for all your insightful research and sharing that with us.

I'm returning the question: how many women watch sci-fi? Who watches sci-fi? Is there an impact in that on how we're shaping AI policy? So I just wanted to re-pitch the question. Well, Wall-E's cute.

Older: Is the question about how many women watch sci-fi or how many women create sci-fi?

Audience 1: That too. Who's creating sci-fi, who's watching it…

Older: I mean, I can speak for myself? I grew up on Star Wars and Star Trek. Along with a lot of other things. Like I also grew up on Tolkien, and The Black Stallion, and The Wizard of Oz, and you know, all sorts of books that I never— And Anne of Green Gables. And you know, I knew that my brother wouldn't read Anne of Green Gables, although much later I found out that he stole my Sweet Valley High books when I wasn't looking? He's admitted this on tape so I'm not like giving up a big secret.

But you know, I mean, I always did. And to me you know, stories are stories. I know a lot of women who both write and consume sci-fi in different ways. I don't know the statistics, but I think that if you look at the amount of conversation that goes on, there are a lot of women who are very involved in this. If you look at the current awards slate, for example of the Hugos, it is strongly female. And a lot of people are very upset about that. And I also know there's been some work done by Lisa Yaszek, who's at I think the University of Georgia…

Finn: Georgia Tech.

Older: Georgia Tech. Thank you. She recently wrote a book called The Future Is Female!, where she looks at female science fiction writers of the middle of the 20th century—the 40s, 50s, 60s—who existed and were extremely popular, and had both editors and readers of magazines asking for more of their work. And who have really disappeared from our popular mental image of the genre. So there have always been women who have been writing and reading and watching sci-fi…but we don't always pay attention to them. We don't always listen to them. And we don't always accept them as forces in the genre.

I can give you a ton of names to read. And maybe you will find that you are a fan of sci-fi, just not the kind of sci-fi you'd encountered before. But I will do that, because we're short on time, offline.

Finn: Great question. Other questions?

Audience 2: I think it's a sci-fi movie, but Logan's Run. That one really scares me the older I get. But at any rate, one element that I—and it may be that I misunderstand the format—is that sci-fi is also a deeply creative medium. And so to what extent can you dictate to a sci-fi writer, a sci-fi artist, that oh, you're scoring. You said AI was evil, you need to stop that, you know. I'm just wondering where that comes into this discussion, that it's not just propaganda for some business model. Thank you.

Older: I can tell you, as a sci-fi writer, that sci-fi writers get, let's say, strongly suggested to? all the time. Because I get requests from anthologies to write about specific topics or subjects, all the time. And then of course it's my choice whether I write about it or not. And if the topic doesn't grab me and I write a terrible story about it, they're probably not going to take it. But I do get all these prompts constantly. And also you know, to get a story published you have to go through layers of agents and editors. And publishers. So while it's creative, the people who are creating it are not the only people who decide what stories get out into the world. And I think that is magnified hugely (although it's not my area as much) in the realm of TV and movies, where as Chris was saying earlier, the bigger the budget they have, amazingly, the less risk they want to take. I mean, we see why that makes sense? but it's also you know, for someone who has to kind of do a lot of their creative work on spec it's also kind of amusing.

But we see that there's a huge number of gatekeepers who think "this is what people will pay money to see," and they're often wrong, and yet that doesn't always change the gatekeepers. We see that when movies flop, it often gets blamed on the female star, or the female writer, or the female director, or the—you know, sometimes the male star. But rarely on the producers or the people who are making those decisions about which movies get made. So…yes and no. You know, we need to push I think the gatekeepers, and we need to push the people who are providing media to take more risks and to go out and find different stories.

Sharp: I think it also matters how you define sci-fi, and maybe just broadening the definition of sci-fi a little bit is helpful. Like I was really pleased to hear somebody call Wall-E sci-fi, which is like a kids' movie, right? And that's an interes— But if you think about that as evidence of how people think about science in the future, that's a really interesting definition, and it's broader than Star Wars or Star Trek and that kinda stuff. It gets you a little bit more of a wide lens.

Finn: I think sci-fi has become interestingly more mainstream, and you see it permeating other genres in a funny way, like the last season of the sitcom Parks and Rec was, for no particular reason, science fictional. They moved like five years into the future.

I think there was one more question in the back? Yeah, go ahead.

Miranda Bogen: [indistinct] question, but the book I've been enjoying most recently is The Three-Body Problem, and the subsequent ones in the series. And I think what's interesting about that is it's an entirely different cultural perspective on a speculative future. And my question is kind of related to what you were just talking about: especially given sort of the global nature of what we imagine governance of AI to be, and given the high barrier to entry of sci-fi in general, let alone across cultural contexts, how do we kind of encourage more of that…perspective-sharing, whether it's across country cultures or even within the US—as you were saying, traveling around the country, I'm sure there's different perspectives there. It's not just gender representation or community representation but these different perspectives, and this frame that I think we're seeing is a helpful one to think about when we're thinking about the future of technology.

Steenson: I think that a lot of people can't actually work on AI in any substantial way, or on its related technologies. They're not the crafters of algorithms. But people are storytellers in a lot of different kinds of ways. And so a way to begin to engage, critically and creatively, with AI and related technologies and technological paradigms is exactly in some of the ways that I think we've been talking about.

Finn: I think that's a pretty good place to stop. Please join me in thanking our final panel.

