Archive (Page 2 of 3)

Virtual Futures Salon: Dawn of the New Everything, with Jaron Lanier

So here’s what happened. If you tell people you’re going to have this super-open, absolutely non-commercial, money-free thing, but it has to survive in this environment that’s based on money, where it has to make money, how does anybody square that circle? How does anybody do anything? And so companies like Google that came along, in my view, were backed into a corner. There was exactly one business plan available to them, which was advertising.

Data & Society Databite #102: Everybody Runs

I’ve been trying to get as many weird futures on the table as possible because the truth is there are these sort of ubiquitous futures, right. Ideas about how the world should or will be that have become this sort of mainstream, dominating vernacular that’s primarily kind of about a very white Western masculine vision of the future, and it kind of colonized the ability to think about and imagine technology in the future.

The Spawn of Frankenstein: Unintended Consequences

Victor’s sin wasn’t in being too ambitious, not necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.

The Spawn of Frankenstein: Playing God

In Shelley’s vision, Frankenstein was the modern Prometheus: the hip, up-to-date, learned, vital god who chose to create human life and paid the dire consequences. To Shelley, gods create, and for humans to do that is bad. Bad for others but especially bad for one’s creator.

Margaret Atwood on Fiction, the Future, and the Environment

We have already changed the world a lot, not always for the better. Some of it’s for the better, as far as we human beings are concerned. But every time we invent a new technology, we like to play with that technology, and we don’t always foresee the consequences.

What Our Algorithms Will Know in 2100

A lot of the science fiction I love the most is not about these big questions. You read a book like The Diamond Age, and the most interesting thing in The Diamond Age is the mediatronic chopsticks, the small detail where Stephenson says okay, well, if you have nanotechnology, people are going to use this technology in the most pedestrian, kind of ordinary ways.

AI Policy, Is It Possible? Is It Necessary?

When we talk about technologies such as AI, and policy, one of the main problems is that technological advancement is fast, and policy and democracy is a very, very slow process. And that could be potentially a very big problem if we think that AI could be potentially dangerous.

Mindful Cyborgs #54 — A Positive Vision of Transhumanism and AI with Damien Williams

I don’t think it’s necessarily going to be a problem within the next five to ten, fifteen, maybe even twenty years. But my perspective on it has always been, because I am more philosophically focused in these things, why not try to address the issues before they arrive? Why not try to think about these questions before they become problems that we have to fix?

Where to From Here?

Although we haven’t reached peak surveillance, we’ve reached peak indifference to surveillance. There will never be another day in which fewer people give a shit about this, because there’ll never be a day in which fewer people’s lives have been ruined by this.