Archive

Ethical Machines episode 1: Mark Riedl

Computers can tell stories, but they’re always stories that humans have input into a computer and that are then just being regurgitated. They don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And that seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.

Are We Living Inside an Ethical (and Kind) Machine?

This is a moment to ask, as we make the planet digital, as we totally envelop ourselves in the computing environment that we’ve been building for the last hundred years: what kind of digital planet do we want? Because we are at a point where there is no turning back, and getting to ethical decisions, values decisions, decisions about democracy is not something we have talked about enough, nor in a way that has had impact.

The Spawn of Frankenstein: Unintended Consequences

Victor’s sin wasn’t in being too ambitious, nor necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.

The Spawn of Frankenstein: It’s Alive

Mary Shelley’s novel has been an incredibly successful modern myth. And so this conversation today is not just about what happened 200 years ago, but about the remarkable ways in which that moment and that set of ideas have continued to percolate and evolve and reform in culture, in technological research, and in ethics since then.

The Spawn of Frankenstein: Playing God

In Shelley’s vision, Frankenstein was the modern Prometheus: the hip, up-to-date, learned, vital god who chose to create human life and paid the dire consequences. To Shelley, gods create, and for humans to do that is bad. Bad for others, but especially bad for one’s creator.

Sleepwalking into Surveillant Capitalism, Sliding into Authoritarianism

We have increasingly smart, surveillant persuasion architectures, architectures aimed at persuading us to do something. At the moment it’s clicking on an ad, and that seems like a waste, a waste of our energy, just clicking on an ad. But increasingly it is going to be persuading us to support something, to think of something, to imagine something.

AI and Human Development

Increasingly we’re using automated technology in ways that support humans in what they’re doing, rather than just having algorithms work on their own, because they’re not yet smart enough to do that or to deal with unexpected situations.

AI and Ethical Design

I teach my students that design is ongoing, risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you’re making and that underlie what it is you’re creating: those fundamental premises.

Openness and Oversight of Artificial Intelligence

If you have a system that can worry about stuff that you don’t have to worry about anymore, you can turn your attention to other, possibly more interesting or important, issues.

Social and Ethical Challenges of AI

One of the challenges of building new technologies is that we often want them to solve things that have been very socially difficult to solve: things that we don’t have answers to, problems that we don’t know how best to approach in a socially responsible way.
