Community is always part of a system that we sometimes can or cannot see or recognize. And in Gerard O’Neill’s proposals for these islands in space, those communities…were supposed to perform a very specific function in a larger system. They were supposed to be experiments.
New America
presented by Yuliya Panfil, Amanda Nguyen, Henry Hertzfeld, Erika Nesvold
I personally am not worried about settlements. I think they’re so far in the future that we can’t predict what they’ll look like. We can’t even keep human beings, particularly a lot of human beings, alive in space or have real settlements, the way we envision a colony or a settlement. I don’t think the lack of sovereignty is going to hurt any of this.
presented by Russell Shorto, Bina Venkataraman, Andrés Martinez, Armstrong Wiggins
I think we’re already moving into a very—uncomfortably for most of us, into a place where nation-states, governments, are being forced to cede authority to corporations. And that is going to, I assume, happen faster and faster. And if you throw in space, if you throw in the limitlessness of space, then I mean…the sky’s the limit so to speak. I don’t know what the…where that takes us.
presented by Kevin Bankston, Ed Finn
AI Policy Futures is a research effort to explore the relationship between science fiction around AI and the social imaginaries of AI, and what those stories can teach us about real technology policy today. We seem to tell the same few stories about AI, and they’re not very helpful.
presented by Malka Older, Molly Wright Steenson, Stephanie Dinkins, Kristin Sharp, Ed Finn
This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool.
How people think about AI depends largely on how they know AI. And to the point, how most people know AI is through science fiction, which sort of raises the question: what stories are we telling ourselves about AI in science fiction?
presented by Rumman Chowdhury, Lindsey Sheppard, Miranda Bogen, Kevin Bankston, Elana Zeide
When data scientists talk about bias, we talk about quantifiable bias that is a result of, let’s say, incomplete or incorrect data. And data scientists love living in that world—it’s very comfortable. Why? Because once it’s quantified, if you can point out the error, you just fix the error. What this does not ask is: should you have built the facial recognition technology in the first place?
presented by Madeline Ashby, Lee Konstantinou, Andrew Hudson, Chris Noessel, Damien Williams, Kanta Dihal
What I hope we can do in this panel is have a slightly more literary discussion, to try to answer: well, why were those the stories that we were telling, and what has been the point of telling those stories, even though they don’t now necessarily always align with the policy problems that we’re having?
We’re here because the imaginary futures of science fiction impact our real future much more than we probably realize. There is a powerful feedback loop between sci-fi and real-world technical and tech policy innovation, and if we don’t stop and pay attention to it, we can’t harness it to help create better futures, including better and more inclusive futures around AI.