Kevin Bankston: Alright, it is one o'clock so we're going to get started. Hi, and welcome to New America for today's event, "What Sci-Fi Futures Can (and Can't) Teach Us About AI Policy." I'm Kevin Bankston. I'm the director of New America's Open Technology Institute and the co-lead of a project called AI Policy Futures that we're doing in conjunction with our friends at Arizona State University's Center for Science and the Imagination. My co-lead in this project is standing right beside me. He's Ed Finn, the Director of that center. And I'm going to let him tell you a little bit about our project before we get started with today's content.

Ed Finn: Thanks, Kevin. Thanks for joining us. So, AI Policy Futures is a research effort to explore the relationship between science fiction about AI and the social imaginaries of AI, and what those imaginaries can teach us about real technology policy today. We seem to tell the same few stories about AI, and they're not very helpful. They're stories about killer robots or superintelligence, and we're talking about that and missing the boat on things like airplanes falling out of the sky, and autonomous vehicles, and all sorts of things that are in the very near future going to impact our lives in very powerful ways.

So, this project is going to create a taxonomy of different versions of AI, visions of AI, in the global literature of science fiction, and see how we can apply that to commission original stories, to be published in Slate, that will explore real-world, useful fictions about the near future of AI. This is supported by the Hewlett Foundation and Google. We're really delighted to be able to have this event and to continue this work with all of you. Thank you.


Kevin Bankston: The Sci‐Fi Feedback Loop

Kevin Bankston, Miranda Bogen, Rumman Chowdhury, Elana Zeide, and Lindsey Sheppard: AI in Reality

Kanta Dihal: How Sci‐Fi Reflects Our AI Hopes and Fears

Madeline Ashby (recorded provocation), Andrew Hudson, Kanta Dihal, Chris Noessel, Lee Konstantinou, and Damien Williams: AI in Sci-Fi

Chris Noessel: Untold AI – What AI Stories Should We Be Telling Ourselves?

Stephanie Dinkins/Bina48 (recorded provocation), Ed Finn, Malka Older, Ashkan Soltani, Kristin Sharp, and Molly Wright Steenson: Bridging AI Fact and Fiction


Ed Finn: I have a couple of closing remarks, and I recognize that I'm the last thing standing between you and our reception.

So, the first thing I'm gonna do is share what one of the…we're figuring out this project as we go, this AI Policy Futures thing, and we're continuing to look for new directions to take it, new partners, and new ways to communicate. So, as part of our gathering today we came up with a bunch of ideas for original science fiction stories that we're going to be commissioning over the next year or so.

But another thing we did is we conducted a bunch of interviews at an event we had at South by Southwest a few months ago. And we have the raw materials for a podcast. And now all we need is for somebody to give us some more money so we can make the podcast. But we did make a teaser for the podcast, which I'm going to play for you, to entice you all to come up with brilliant ways for us to bring this thing to life. So I'm hoping we can play this podcast teaser. If it's… Maybe my magical powers— [recording starts playing]

So, if you are interested in talking more about that or getting involved in the project in any other way, please feel free to chat with me or Kevin.

And I want to close just very briefly. My provocation to you, since you've been promised a provocation, is that when we talk about AI we get hung up on this word "intelligence," right. We don't really know what intelligence is. We've never really known. And all of our anxieties about AI are bound up in the way that this opens up the deep, existential question of what it is to be human.

And so, the other related word is that word "imagination." And everything that we're talking about here is how we can use our imagination to build a better pathway, to chart a better course, around all of the ways that intelligent machines and learning machines are already changing the world. Already deeply implicated in the fabric of our everyday lives.

And so, if we're going to do anything about AI and developing a better set of approaches to our conversations around AI, policy around AI, we have to start with that word "imagination." We have to take it on as a question for ourselves: how do we imagine the future? A future where there's a new mirror. A new set of systems that reflect ourselves back to ourselves. That pose the question to us. That throw our anxieties about identity, and belonging, and personhood back at us in all sorts of different ways. Because we can't help but see ourselves in all of our tools and systems.

So, with that I will thank you once again for joining us, and turn things over to Kevin.

Kevin Bankston: And I will thank you, Ed and Andrew, for the trailblazing work y'all have done at the Center for Science and the Imagination to help catalyze and solidify a growing community of practice that is taking science fiction seriously as a tool for thinking about the future of technology and the future of policy. Applied sci-fi, you might call it, or practical sci-fi. Everything we've been doing, this event, this project, the Sci-Fi House at South by Southwest, has been all about trying to build a community around that idea. And I want to thank first off all of the panelists and speakers for being a part of that community. And I want to thank you, the audience, for being a part of that community in joining us today. So, thank you, and please enjoy the reception.

