Archive (Page 1 of 3)

Data & Society Databite #101: Machine Learning: What’s Fair and How Do We Decide?

The question is what are we doing in the industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and it's the good news that I wanted to focus on in my talk today.

Ethical Machines episode 3: Alex J. Champandard and Gene Kogan

For any artists who are working in this field now, if I was good at painting I'd probably be looking at how to find styles that work well with these kinds of representations and make them easily automatable or transferable, so that if I had fans as an artist they could say, "Hey, I would like to have a picture of my cat painted."

Ethical Machines episode 2: Jack Clark

If you think about it, what we're doing is we're turning very high-dimensional mathematical representations of a sort of large knowledge space into intellectual property. Which should be the most frightening idea in the world to anyone. This is the most abstract thing you could possibly try and turn into a capitalist object.

Ethical Machines episode 1: Mark Riedl

Computers can tell stories, but they're always stories that humans have input into a computer, which are then just being regurgitated. But they don't make stories up on their own. They don't really understand the stories that we tell. They're not aware of the cultural importance of stories. They can't watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and what humans can do, if you think about how important storytelling is to the human condition.

Are We Living Inside an Ethical (and Kind) Machine?

This is a moment to ask, as we make the planet digital, as we totally envelop ourselves in the computing environment that we've been building for the last hundred years, what kind of digital planet do we want? Because we are at a point where there is no turning back, and getting to ethical decisions, values decisions, decisions about democracy, is not something we have talked about enough nor in a way that has had impact.

The Spawn of Frankenstein: Unintended Consequences

Victor's sin wasn't in being too ambitious, not necessarily in playing God. It was in failing to care for the being he created, failing to take responsibility and to provide the creature what it needed to thrive, to reach its potential, to be a positive development for society instead of a disaster.

AI and Human Development

Increasingly we're using automated technology in ways that support humans in what they're doing, rather than just having algorithms work on their own, because they're not smart enough yet to do that or to deal with unexpected situations.

AI and Ethical Design

I teach my students that design is ongoing, risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you're making and that underlie what it is that you're creating: those fundamental premises.

Openness and Oversight of Artificial Intelligence

If you have a system that can worry about stuff that you don't have to worry about anymore, you can turn your attention to other, possibly more interesting or important, issues.

Social and Ethical Challenges of AI

One of the challenges of building new technologies is that we often want them to solve things that have been very socially difficult to solve. Things that we don't have answers to, problems that we don't know how we would best go about solving in a socially responsible way.

