Ed Finn

What Sci-Fi Futures Can (and Can’t) Teach Us About AI Policy, opening and closing comments

in What Sci-Fi Futures Can (and Can't) Teach Us About AI Policy

AI Policy Futures is a research effort to explore the relationship between science fiction about AI and the social imaginaries of AI, and what those imaginaries can teach us about real technology policy today. We seem to tell the same few stories about AI, and they’re not very helpful.

Bridging AI Fact and Fiction

in What Sci-Fi Futures Can (and Can't) Teach Us About AI Policy

This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool.

The Conversation #55 — Ed Finn

in The Conversation

One of the Center’s core goals, our mission statement, is to get people thinking more creatively and ambitiously about the future. What I mean when I talk about that is that we need to come up with better stories about the future. If you want to build a better world, you have to imagine that world first.

The Spawn of Frankenstein: It’s Alive

in The Spawn of Frankenstein

Mary Shelley’s novel has been an incredibly successful modern myth. And so this conversation today is not just about what happened 200 years ago, but the remarkable ways in which that moment and that set of ideas has continued to percolate and evolve and reform in culture, in technological research, in ethics, since then.

The Spawn of Frankenstein: Playing God

in The Spawn of Frankenstein

In Shelley’s vision, Frankenstein was the modern Prometheus: the hip, up-to-date, learned, vital god who chose to create human life and paid dire consequences. To Shelley, gods create, and for humans to do that is bad. Bad for others, but especially bad for the creator.

Who and What Will Get to Think the Future?

in Can We Imagine Our Way to a Better Future?

There’s already a kind of cognitive investment that we make, you know. At a certain point, you have years of your personal history living in somebody’s cloud. And that goes beyond merely being a memory bank; it’s also a cognitive bank in some way.

What Our Algorithms Will Know in 2100

in The Tyranny of Algorithms

A lot of the science fiction I love the most is not about these big questions. You read a book like The Diamond Age, and the most interesting thing in The Diamond Age is the mediatronic chopsticks, the small detail where Stephenson says: okay, if you have nanotechnology, people are going to use this technology in the most pedestrian, ordinary ways.

What Should We Know About Algorithms?

in The Tyranny of Algorithms

When I go talk about this, the thing that I tell people is that I’m not worried about algorithms taking over humanity, because they kind of suck at a lot of things, right. They’re really not that good at a lot of things they do. But there are things that they’re good at. And so the example that I like to give is Amazon recommender systems. You all run into this on Netflix or Amazon, where they recommend stuff to you. And those algorithms are actually very similar to a lot of the sophisticated artificial intelligence we see now. It’s the same underneath.

What Do Algorithms Know?

in Cybersecurity for a New America

The Tyranny of Algorithms is obviously a polemical title to start a conversation around computation and culture. But I think that it helps us get into the cultural, the political, the legal, the ethical dimensions of code. Because we so often think of code, and code is so often constructed, in a purely technical framework, by people who see themselves as solving technical problems.