Archive

AI Blindspot

AI Blindspot is a discovery process for spotting unconscious biases and structural inequalities in AI systems.

Virtual Futures Salon: Radical Technologies, with Adam Greenfield

I am profoundly envious of people who get to write about settled domains or sort of settled states of affairs in human events. For me, I was dealing with a set of technologies which are either recently emerged or still in the process of emerging. And so it was a continual Red Queen’s race to keep up with these things as they announce themselves to us and try and wrap my head around them, understand what it was that they were proposing, understand what their effects were when deployed in the world.

Ethical Machines episode 1: Mark Riedl

Computers can tell stories but they’re always stories that humans have input into a computer, which are then just being regurgitated. But they don’t make stories up on their own. They don’t really understand the stories that we tell. They’re not kind of aware of the cultural importance of stories. They can’t watch the same movies or read the same books we do. And this seems like a huge missing gap between what computers can do and humans can do if you think about how important storytelling is to the human condition.

AI and Human Development

Increasingly we’re using automated technology in ways that kind of support humans in what they’re doing rather than just having algorithms work on their own, because they’re not smart enough to do that yet or deal with unexpected situations.

AI and Ethical Design

I teach my students that design is ongoing risky decision-making. And what I mean by ongoing is that you never really get to stop questioning the assumptions that you’re making and that are underlying what it is that you’re creating: those fundamental premises.

Openness and Oversight of Artificial Intelligence

If you have a system that can worry about stuff that you don’t have to worry about anymore, you can turn your attention to other possibly more interesting or important issues.

Social and Ethical Challenges of AI

One of the challenges of building new technologies is that we often want them to solve things that have been very socially difficult to solve. Things that we don’t have answers to, problems that we don’t know how best to go about solving in a socially responsible way.

AI Threats to Civil Liberties and Democracy

In a world of conflicting values, it’s going to be difficult to develop values for AI that are not the lowest common denominator.

Designing AI to Complement Humanity

I think one of the things I want to say from the start is it’s not like AI is going to appear. It’s actually out there, in some instances in ways that we never even notice.

Artificial Intelligence: Challenges of Extended Intelligence

Machine learning systems that we have today have become so powerful and are being introduced into everything from self-driving cars, to predictive policing, to assisting judges, to producing your news feed on Facebook, deciding what you ought to see. And they have a lot of societal impacts. But they’re very difficult to audit.
