Archive

danah boyd: Algorithmic Accountability and Transparency

In the next ten years we will see data-driven technologies reconfigure systems in many different sectors, from autonomous vehicles and personalized learning to predictive policing and precision medicine. While the advances that we will see will create phenomenal new opportunities, they will also create new challenges and new worries, and it behooves us to start grappling with these issues now so that we can build healthy sociotechnical systems.

Watch Your Words

The premise of our project is really that we are surrounded by machines that are reading what we write, and judging us based on whatever they think we’re saying.

Kaleidoscope: Positionality-aware Machine Learning

Positionality is the specific position or perspective that an individual takes given their past experiences and their knowledge; their worldview is shaped by positionality. It’s a unique but partial view of the world. And when we’re designing machines we’re embedding positionality into those machines with all of the choices we’re making about what counts and what doesn’t count.

AI Blindspot

AI Blindspot is a discovery process for spotting unconscious biases and structural inequalities in AI systems.

Compassion through Computation: Fighting Algorithmic Bias

I think the question I’m trying to formulate is, how in this world of increasing optimization, where the algorithms will be accurate… They’ll increasingly be accurate. But their application could lead to discrimination. How do we stop that?

How an Algorithmic World Can Be Undermined

All they have to do is write to journalists and ask questions. And what they do is they ask a journalist a question and be like, “What’s going on with this thing?” And journalists, under pressure to find stories to report, go looking around. They immediately search something in Google. And that becomes the tool of exploitation.

Algorithms of Oppression: How Search Engines Reinforce Racism

One of the things that I think is really important is that we’re paying attention to how we might be able to recuperate and recover from these kinds of practices. So rather than thinking of this as just a temporary kind of glitch, in fact I’m going to show you several of these glitches and maybe we might see a pattern.

Data & Society Databite #101: Machine Learning: What’s Fair and How Do We Decide?

The question is what are we doing in the industry, or what is the machine learning research community doing, to combat instances of algorithmic bias? So I think there is a certain amount of good news, and it’s the good news that I wanted to focus on in my talk today.

Sleepwalking into Surveillant Capitalism, Sliding into Authoritarianism

We have increasingly smart, surveillant persuasion architectures. Architectures aimed at persuading us to do something. At the moment it’s clicking on an ad. And that seems like a waste. We’re just clicking on an ad. You know. It’s kind of a waste of our energy. But increasingly it is going to be persuading us to support something, to think of something, to imagine something.

Forbidden Research: Why We Can’t Do That

Quite often when we’re asking these difficult questions, we’re asking questions where we might not even know how to ask where the line is. But in other cases, when researchers work to advance public knowledge, even on uncontroversial topics, we can still find ourselves forbidden from doing the research or disseminating it.
