Self-sovereign identity is what sits in the middle, enabling individuals to manage all these different relationships in a way that is significantly less complex than each of those institutions needing to have a business relationship with each other to see those credentials.
New America
presented by Amanda Nguyen, Erika Nesvold, Henry Hertzfeld, Yuliya Panfil
I personally am not worried about settlements. I think they’re so far in the future that we can’t predict what they’ll look like. We can’t even keep human beings, particularly a lot of human beings, alive in space or have real settlements, the way we envision a colony or a settlement. I don’t think the lack of sovereignty is going to hurt any of this.
presented by Andrés Martinez, Armstrong Wiggins, Bina Venkataraman, Russell Shorto
I think we’re already moving into a very—uncomfortably for most of us, into a place where nation-states, governments, are being forced to cede authority to corporations. And that is going to, I assume, happen faster and faster. And if you throw in space, if you throw in the limitlessness of space, then I mean…the sky’s the limit so to speak. I don’t know what the…where that takes us.
presented by Ed Finn, Kevin Bankston
AI Policy Futures is a research effort to explore the relationship between science fiction around AI and the social imaginaries of AI, and what those social imaginaries can teach us about real technology policy today. We seem to tell the same few stories about AI, and they’re not very helpful.
presented by Ed Finn, Kristin Sharp, Malka Older, Molly Wright Steenson, Stephanie Dinkins
This is going to be a conversation about science fiction not just as a cultural phenomenon, or a body of work of different kinds, but also as a kind of method or a tool.
How people think about AI depends largely on how they know AI. And to the point, how most people know AI is through science fiction, which sort of raises the question, yeah? What stories are we telling ourselves about AI in science fiction?
presented by Elana Zeide, Kevin Bankston, Lindsey Sheppard, Miranda Bogen, Rumman Chowdhury
When data scientists talk about bias, we talk about quantifiable bias that is a result of, let’s say, incomplete or incorrect data. And data scientists love living in that world—it’s very comfortable. Why? Because once it’s quantified, if you can point out the error, you just fix the error. What this does not ask is: should you have built the facial recognition technology in the first place?