Archive (Page 1 of 2)

Watch Your Words

The premise of our project is really that we are surrounded by machines that are reading what we write, and judging us based on whatever they think we’re saying.

Kaleidoscope: Positionality-aware Machine Learning

Positionality is the specific position or perspective that an individual takes given their past experiences, their knowledge; their worldview is shaped by positionality. It’s a unique but partial view of the world. And when we’re designing machines we’re embedding positionality into those machines with all of the choices we’re making about what counts and what doesn’t count.

AI Blindspot

AI Blindspot is a discovery process for spotting unconscious biases and structural inequalities in AI systems.

Compassion through Computation: Fighting Algorithmic Bias

I think the question I’m trying to formulate is, how in this world of increasing optimization where the algorithms will be accurate… They’ll increasingly be accurate. But their application could lead to discrimination. How do we stop that?

How an Algorithmic World Can Be Undermined

All they have to do is write to journalists and ask questions. And what they do is they ask a journalist a question and be like, “What’s going on with this thing?” And journalists, under pressure to find stories to report, go looking around. They immediately search something in Google. And that becomes the tool of exploitation.

Algorithms of Oppression: How Search Engines Reinforce Racism

One of the things that I think is really important is that we’re paying attention to how we might be able to recuperate and recover from these kinds of practices. So rather than thinking of this as just a temporary kind of glitch, in fact I’m going to show you several of these glitches and maybe we might see a pattern.

Data & Society Databite #101: Machine Learning: What’s Fair and How Do We Decide?

The question is what are we doing in the industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and it’s the good news that I wanted to focus on in my talk today.

Sleepwalking into Surveillant Capitalism, Sliding into Authoritarianism

We have increasingly smart, surveillant persuasion architectures. Architectures aimed at persuading us to do something. At the moment it’s clicking on an ad. And that seems like a waste. We’re just clicking on an ad. You know. It’s kind of a waste of our energy. But increasingly it is going to be persuading us to support something, to think of something, to imagine something.

Forbidden Research: Why We Can’t Do That

Quite often when we’re asking these difficult questions, we’re asking questions where we might not even know how to ask where the line is. But in other cases, when researchers work to advance public knowledge, even on uncontroversial topics, we can still find ourselves forbidden from doing the research or disseminating the research.

Forbidden Research Welcome and Introduction: Ethan Zuckerman

As we dug into this topic, we realized research gets forbidden for all sorts of reasons. We’re going to talk about topics today that are forbidden in some sense because they’re so big, they’re so consequential, that it’s extremely difficult for anyone to think about who should actually have the right to make this decision. We’re going to talk about some topics that end up being off the table, that end up being forbidden, because they’re kind of icky. They’re really uncomfortable. And frankly, if you make it through this day without something making you uncomfortable, we did something wrong in planning this event.
