We are here to talk about fucking machines. In London, on a foggy evening, on a Tuesday, for yet another debate about fucking machines. Another curated discussion underlined by our own human insecurity about versions of us in silica. Fucking anthropomorphic fucking machines. Machines that fuck us. And let’s face it, machines are already fucking us, or so we seem to be told.
Computers can tell stories, but they’re always stories that humans have input into a computer and that are then just regurgitated. They don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And that seems like a huge gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.
This is a moment to ask, as we make the planet digital, as we totally envelop ourselves in the computing environment that we’ve been building for the last hundred years: what kind of digital planet do we want? Because we are at a point where there is no turning back, and getting to ethical decisions, values decisions, decisions about democracy, is not something we have talked about enough, nor in a way that has had impact.
Victor’s sin wasn’t in being too ambitious, not necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.
Mary Shelley’s novel has been an incredibly successful modern myth. And so this conversation today is not just about what happened 200 years ago, but the remarkable ways in which that moment and that set of ideas has continued to percolate and evolve and reform in culture, in technological research, in ethics, since then.
Increasingly we’re using automated technology in ways that support humans in what they’re doing, rather than having algorithms work entirely on their own, because they’re not yet smart enough to do that or to deal with unexpected situations.
I teach my students that design is ongoing risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you’re making and that are underlying what it is that you’re creating—those fundamental premises.
If you have a system that can worry about stuff that you don’t have to worry about anymore, you can turn your attention to other possibly more interesting or important issues.