Archive (Page 1 of 4)

Virtual Futures Salon: Radical Technologies, with Adam Greenfield

I am profoundly envious of people who get to write about settled domains or sort of settled states of affairs in human events. For me, I was dealing with a set of technologies which are either recently emerged or still in the process of emerging. And so it was a continual Red Queen's race to keep up with these things as they announce themselves to us and try and wrap my head around them, understand what it was that they were proposing, understand what their effects were when deployed in the world.

Defying Faith

The challenge for the Church and for the theologians was to say okay, perhaps that's what is written. But for example if you consider that God delivered the Creation in seven days, knowing that nowadays Amazon can deliver everything on Earth overnight, does it mean that Jeff Bezos has defeated God? Or does it mean something different? And I think it probably means something different.

Data & Society Databite #101: Machine Learning: What’s Fair and How Do We Decide?

The question is what are we doing in the industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and it's the good news that I wanted to focus on in my talk today.

Ethical Machines episode 3: Alex J. Champandard and Gene Kogan

For any artists that are working in this field now, if I was good at painting I'd probably be looking at how to find styles that work well with these kinds of representations and make them easily automatable or transferable, so that if I had fans as an artist they could say, "Hey, I would like to have a picture of my cat painted."

Ethical Machines episode 2: Jack Clark

If you think about it, what we're doing is turning very high-dimensional mathematical representations of a sort of large knowledge space into intellectual property. Which should be the most frightening idea in the world to anyone. This is the most abstract thing you could possibly try and turn into a capitalist object.

Ethical Machines episode 1: Mark Riedl

Computers can tell stories, but they're always stories that humans have input into a computer, which are then just being regurgitated. They don't make stories up on their own. They don't really understand the stories that we tell. They're not aware of the cultural importance of stories. They can't watch the same movies or read the same books we do. And this seems like a huge gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.

Are We Living Inside an Ethical (and Kind) Machine?

This is a moment to ask, as we make the planet digital, as we totally envelop ourselves in the computing environment that we've been building for the last hundred years, what kind of digital planet do we want? Because we are at a point where there is no turning back, and getting to ethical decisions, values decisions, decisions about democracy, is not something we have talked about enough nor in a way that has had impact.

The Spawn of Frankenstein: Unintended Consequences

Victor's sin wasn't in being too ambitious, nor necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.

AI and Human Development

Increasingly we're using automated technology in ways that support humans in what they're doing, rather than just having algorithms work on their own, because they're not smart enough to do that yet or to deal with unexpected situations.

AI and Ethical Design

I teach my students that design is ongoing risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you're making and that are underlying what it is that you're creating—those fundamental premises.
