Amar Asher: So, welcome everybody to the Berkman Klein Tuesday Luncheon Series. So excited to have such an oversubscribed room for such an important topic and important book. We are thrilled to— I should mention, I'm Amar Asher. I'm the Assistant Research Director here at the Berkman Klein Center. We are thrilled to have Virginia Eubanks, who is the author of this phenomenal book Automating Inequality, here at the Berkman Klein Center to talk about some of the most salient issues of the day related to emerging technologies, AI and ethics, and more generally just how many of these issues are playing out across society, and how high-tech tools are affecting and impacting the poor. And it has so much relevance to work that's going on here at the Berkman Klein Center. In particular, over the past two years we've hosted a series of conversations around the public interest and emerging technologies under our Ethics and Governance of Artificial Intelligence Initiative. It's got a number of areas that it's doing research in with the MIT Media Lab, and so if you're interested in that effort and that series of conversations, I encourage you to check out the Berkman Klein website.

And let me just take a moment to mention a couple of different housekeeping things. One is that if you are new to the Berkman Klein Luncheon Series, these events are webcast for posterity, and also because this room was oversubscribed there's lots of folks watching on the webcast, so please just be aware of that.

Second is that if you are interested in this book and actually reading it, we have them for sale via the Harvard Coop over there for $25. Virginia has graciously offered to also sign copies of the book after the talk so please do make a purchase and stick around afterwards so that she will sign them.

And third, please be sure to ask questions at the end of this talk. Virginia will speak for about twenty-five to thirty minutes, but we really want this to be a discussion. There's a lot of rich material here and lots of salient questions that we'll be discussing. So please do ask questions, and you can do that in person here or over on Twitter. We'll keep an eye on that for folks that are not within the room.

So let me introduce Virginia. Virginia Eubanks is an Associate Professor of Political Science at the University at Albany, SUNY. She's the author of this tremendous book that you'll hear about in a minute. She has also authored Digital Dead End: Fighting for Social Justice in the Information Age. She's a co-editor, with Alethia Jones, of Ain't Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith. And her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper's and Wired. For two decades she's worked in community technology and economic justice movements, and she's a founding member of the Our Data Bodies project and a fellow at New America. So thrilled to have you here. Welcome, Virginia.


Virginia Eubanks: Hi. How's lunch? I put some aside because it looked like you people were gonna eat all the food before I got a chance to eat. So, I'm really excited to be here. Thank you so much for the invitation, and to all the folks who worked so hard to get me here on time and in one piece to have this conversation with you.

My goal today is to keep it a little bit on the short side, because we have a really great, smart room here and I'd really love to have a sort of broader conversation, particularly around solutions to the kinds of problems that I describe in the book.

So one thing that I think is a bit different about Automating Inequality from some of the other really smart and fine work that's happening around sort of algorithmic governance, or AI, or machine learning, or automated decisionmaking, or whatever name you want to call it by… there are really two things that are important to me about how Automating Inequality is a bit different.

So one is that I began all of my reporting from the point of view of folks in communities who feel like they're targets of these systems, rather than starting with administrators and designers. I did of course also talk to administrators and designers and data scientists and economists. But I started in each case with families and communities who feel like they are being targeted by these systems, and that really shaped the way I was able to tell the stories that I tell in the book.

I usually, when I have a little bit more time, I usually spend a lot of time introducing the families who spoke to me when I was reporting and getting their voices in the room. I'm going to do a little bit less of that today. So I just want to do two things. One is say what an incredible, generous act it was for people to share their experience with me. So these are folks who were in often really trying conditions. So they're currently on public assistance or have recently gotten kicked off public assistance. They're unhoused or homeless. Or their family is involved in a child welfare investigation. So anyone who under those conditions agrees to go on the record with their real name, their real location, the real details of their life, is doing an incredibly generous and courageous thing. So I just want to make sure I start by acknowledging that the book wouldn't exist without people who took that kind of risk and made themselves really vulnerable. So particularly since I'm not going to spend a lot of time putting their voices in the room, I just want to start by acknowledging that incredible contribution to the work.

And the other thing that's a bit different about the way I tell this story is that I start the story in 1819 rather than 1980. And that allows me to do some very specific work, which is to talk about what I think of as the deep social programming of the tools that we're now using in public services across the United States.

So, while I think that the new technologies we're seeing absolutely have the potential to lower barriers, to integrate services, and to really act to make social service systems more efficient and more navigable, what I found in my seven years of reporting on the book is that what we're actually doing is creating what I call a digital poorhouse, which is an invisible institution that profiles, polices, and punishes the poor when they come into contact with public services.

And in the book I talk about three different cases. I talk about an attempt to automate and privatize all of the eligibility processes for the welfare system in the state of Indiana. I talk about an electronic registry of the unhoused in Los Angeles County, what the designers call the match.com of homeless services, the Coordinated Entry System. And I talk about a statistical model that's supposed to be able to predict which children might be victims of abuse or neglect in the future, in Allegheny County, which is the county where Pittsburgh is in Pennsylvania.

But I start the book with a chapter about sort of the history of poverty policy and what role sort of the new waves of technology have played in that process and in those systems. And I start—and this is also always when I thank my editor, because the book originally started with a ninety-page history chapter that started in like 1600 rather than in 1819, and my editor Elisabeth Dyssegaard was like, "Virginia, no. No." Like, "You cannot do that to people."

And I was like, "Oh, but all the deep historical detail is so interesting!"

And she was like, "To you, honey. To you."

And so feel free to ask me about the historical rabbit holes I was not allowed to explore in this book. I have so much interesting information. But for our purposes today and for the purposes of the book, we'll start in 1819.

So the reason I start in 1819 is this is the moment where there's a really big economic dislocation in the United States. There's a depression. During the depression, poor and working people began to organize for their needs and for their survival. For their rights. And it makes economic elites really nervous. So economic elites do what economic elites always do when they're nervous, which is they commission a bunch of studies.

And right, maybe I shouldn’t say that at Harvard. Hi.

So, they commission a bunch of studies and they frame the question as like, what's the real problem we're facing right now? Is it poverty? Is it a lack of access to resources? Or is it what they called at the time "pauperism," which was dependence on public benefits.

And does anyone want to guess what the report said? Pauperism, that's right. So the reports came back. They said the problem is not poverty, the problem is a pauperism problem, the problem is a dependence on public benefits, and we need to create a system that raises barriers just high enough that it discourages those who should not be receiving benefits, but low enough that people who really need them will get them.

And the system they invented in the 1820s was a system of brick-and-mortar county poorhouses. These were physical institutions for incarcerating poor and working people who requested public assistance. And what it meant— So it's 1820, so not everybody had this right, but basically what it meant was you had to give up your right to vote and to hold office as part of the entry process to the poorhouse. You weren't allowed to marry. And often you had to give up your children, because it was understood at the time that sort of interaction with wealthier families could redeem poor children. And by interaction they generally meant sort of leasing children for agricultural or domestic labor under apprenticeship programs.

And something like a third of people who entered the poorhouse— Some poorhouses had death rates as high as 30% annually. So it's like a third of folks who entered them every year died.

And the reason I start the story of this book with the actual physical brick-and-mortar poorhouse is because I believe this is the moment where we decided as a political community that the front line of the public service system should be primarily focused on moral diagnosis. On deciding whether or not you were deserving enough to receive aid, rather than building universal floors under everyone. And that's part of the sort of deep social programming that we see at work within these systems that continues to produce bad outcomes for poor families, even when the intentions of the designers, the administrators, and other folks involved in the process of creating the systems are really good. Even when people are smart and their intentions are good.

So let me talk just very briefly about the three cases and about sort of three big ideas that I see sort of cross-cutting the three cases.

So the first case I want to talk about is Indiana. And what you need to know about Indiana is that in 2006, then-Governor Mitch Daniels signed what was eventually a $1.34 billion contract with a consortium of high-tech companies, including IBM and ACS, to automate all the eligibility processes for the welfare program. So that was cash assistance or TANF, food stamps (it was still called "food stamps" at the time), and Medicaid. And basically how the system worked is that they moved 1,500 public caseworkers from their local county offices to these regionalized and privatized call centers. There are several of them across the state. And they encouraged folks who were applying for public assistance to do so over online forms on the Internet.

So from the point of view of caseworkers, what this felt like, what this looked like, was moving from a place where you were responsible for a docket of families, for a caseload that was made up of families, to moving to a system where you were responding to a list of tasks as it dropped into a computerized queue in your workflow management system in these regional call centers.

It also meant that you never spoke to the same person twice, right. So if you got a call, once you hung up, the next call to come through would come from anywhere in the state and it would just be the next call in the queue.

From the point of view of applicants and recipients of public assistance in Indiana, it felt like no one was accountable for mistakes, because you never spoke to the same person twice and they didn't understand your context or the sort of process of your case.

So, it was really common for people to receive what were known as "failure to cooperate in establishing eligibility" notices, or failure to cooperate notices. And basically what failure to cooperate notices meant is a mistake had been made somewhere in the process, right. Somebody had forgotten to sign page seventeen of a thirty-four-page application. Or the document processing center had scanned in a piece of documentation upside down or dropped it behind a desk. Or a new caseworker at the regional call center maybe misapplied policy. But no matter whose mistake it was, the only notice you would get is a notice that said you'd failed to cooperate in establishing eligibility for the program, so you're denied.

What that meant is the system was so brittle that it confused, like, honest mistakes with possible fraud. And that was a really profound shift for the people who rely on public assistance in Indiana. It also meant that the burden of figuring out what had gone wrong and solving it fell almost entirely on the shoulders of poor and working families in Indiana, some of the most vulnerable families in Indiana.

The thing that I want to point out about the Indiana case is that it assumes and is aligned with a politics of austerity that I think is really worth talking about in the context of talking about these systems. So the idea here, the narrative is: we don't have enough resources; we have to make some really difficult decisions, including making systems more efficient and increasingly identifying fraud; because our resources are so limited and our problems are so great.

So one of the things that all of the designers and administrators told me across these three cases was that these systems are, you know, perhaps regrettable but necessary systems for doing a kind of digital triage: for deciding which families are most vulnerable to the worst outcomes of poverty, and who can wait.

And one of the things that I think it's really important to point out is that this idea that triage is necessary and inevitable is in fact a political choice. There are of course— We live in a world of abundance, and there is enough for everyone. This idea that there will never be enough resources actually creates a system that reproduces austerity. And so in the case of Indiana, for example, it was originally a $1.34 billion contract. It resulted in a million denials of applications over the first three years of the experiment, a 54% increase from the three years before the experiment. This caused huge suffering for people on the ground, for poor and working families but also for caseworkers, and I'm happy to talk about that more a little bit later.

One of the really sort of interesting moments in the Indiana case, though, is that the community members and just sort of normal Hoosiers (that's what you call people from Indiana, for people who don't know), just normal Hoosiers, became frustrated and annoyed enough with the system that they really organized and fought back against it. They pushed back against the state. And they were so successful that the Governor actually canceled the contract with IBM three years into the experiment.

And then IBM turned around and sued the state for breach of contract. And in the first round of the court case, IBM actually won. So they were allowed to keep the half-billion dollars they had already collected. And then they were awarded an extra $50 million in penalties because the state had breached the contract. That case stayed in the courts for about eight years, and in the end it did turn around and the Supreme Court found that IBM was in breach and gave $150 million back to the state.

But the reality is that this assumption that we had to trim already very lean rolls produced a system that denied so many people rights that it had to be canceled. And the cancellation actually cost the state a lot of money, both in the money they had already spent, and in the eight years of legal battles around whose fault it was that a million applications were denied.

So the irony here is that assuming austerity tends to reproduce austerity, right. It's actually very expensive to profile, police, and punish poor and working families. And we'll talk a bit more about that in a minute.

So, I'm going to talk now about the Allegheny County algorithm. And I hope we'll have time to talk about Los Angeles. But I'll do bits and pieces of this and we can reengage in conversation if you feel like there's anything I've missed.

So, the Allegheny Family Screening Tool is a statistical model that's built on top of a data warehouse that was built in 1999 in Allegheny County. So the data warehouse receives regular data extracts from twenty-nine different agencies across the county. As of the writing of the book, it held a billion records, more than 800 for every individual living in Allegheny County.

But it doesn't actually collect information equally on all people. So the agencies that it's receiving data extracts from are primarily agencies that interact with poor and working families. So it's juvenile and adult probation, the state Office of Income Maintenance, or Pennsylvania's welfare office, the county office of mental health services, the county office of addiction, drug and alcohol recovery, and now, I think, twenty public schools.

The limitations of that data set then have become a really important part of the tool that's built on top of the data warehouse, which is the Allegheny Family Screening Tool. And I'm not gonna go into great technical depth on how that system works, but I'm happy to talk about that a little bit later and get into the technical weeds, because I find them really interesting. But a couple of things that I think are really important to understand.

One is that it is not actually machine learning or artificial intelligence, though the county has recently moved to using some machine learning in their system. When I was reporting on the system, it was a simple statistical regression. For the quant nerds in the room, it's a stepwise probit regression, so a pretty standard regression that they ran against all the data that's available in the data warehouse to pull out variables they believe correlate with future abuse or neglect. So they were using historical validation data, not really training data, because it's not machine learning, but their historical validation data.
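To make the general shape of that concrete, here is a minimal, hypothetical sketch of a probit-style risk model: fit on historical administrative records, then used to score a new referral. The file name, the variables, the proxy outcome, and the 1-to-20 binning are all invented for illustration; this is a sketch of the technique, not the county's actual code or feature set.

```python
# Hypothetical sketch of a probit-style risk model fit on administrative
# records. All file names, columns, and the score binning are invented;
# this illustrates the technique, not the Allegheny tool itself.
import pandas as pd
import statsmodels.api as sm

# Historical records: one row per past referral, with features drawn from
# the data warehouse and a binary proxy outcome (e.g., re-referral).
history = pd.read_csv("historical_referrals.csv")            # hypothetical file
features = ["prior_referrals", "months_on_public_assistance",
            "parent_age_at_referral", "prior_juvenile_probation"]

X = sm.add_constant(history[features])
y = history["re_referred_within_2_years"]                     # proxy, not harm itself

probit_model = sm.Probit(y, X).fit()                          # simple probit regression
print(probit_model.summary())                                 # inspect which variables matter

# Score a new referral: the predicted probability is binned into a coarse
# risk score of the kind an intake screener would see.
new_case = pd.DataFrame([{
    "prior_referrals": 2,
    "months_on_public_assistance": 14,
    "parent_age_at_referral": 24,
    "prior_juvenile_probation": 0,
}])
new_case = sm.add_constant(new_case, has_constant="add")       # force the intercept column
probability = float(probit_model.predict(new_case)[0])
risk_score = min(20, int(probability * 20) + 1)                # crude 1-20 binning, assumed
```

The key point the sketch is meant to surface is that everything downstream depends on what sits in `history`: if the warehouse mostly holds records about poor and working families, the model can only learn from, and score, those families.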

The reality of experiencing this tool, though, from parents' point of view, is that they feel very much like, because of the limitations around the data set, because the data only collects information, or primarily collects information, on poor and working-class families, they are part of a system of poverty profiling. Because their data is in the system more than professional middle-class, or middle-class, families, they are identified for possible abuse or neglect more, risk-rated more highly. Which means they're investigated more often. Which means they're indicated more often. Which means that more of their data goes in the system, sort of creating this feedback loop that's very similar to the kind of feedback loop that people talk about around predictive policing.

So the families that I spoke to very often said they felt like the system confused parenting while poor with poor parenting. So it's a false positives problem, right. Seeing harm where no harm may actually exist.

Now, I also spent a lot of time with front-line caseworkers in this system, particularly with intake call center workers. And intake call center workers are the folks who receive reports of abuse or neglect from the community, either over their hotline or from mandated reporters in the community. And they make a really difficult decision. They make a decision about whether or not they should screen each case in for a full investigation, or whether they should screen it out as not rising to the level of abuse or neglect, or as not having high enough risk or low enough safety to the children to rationalize running a full investigation.

And intake call center workers, interestingly, were concerned about the opposite problem, but for the same reason. So they were concerned about false negatives problems. They were concerned about the system not seeing harm where harm might actually exist. So they explained to me that because the system doesn't really collect information on professional and middle-class families… And you know, professional middle-class families need as much help with their parenting as everyone else. The difference is that they tend to pay for it with private sources. So, if you need help with childcare, you get a nanny or a babysitter, you pay out of pocket. If you need help with addiction recovery or with a mental health issue and you have private insurance, that information's not going to end up in this data warehouse. Only the folks who go to county mental health services end up in the data warehouse, right.

So the intake call screeners were really concerned that some of the things that are really good indicators for abuse and neglect in professional middle-class families wouldn't be covered in the data warehouse, so they wouldn't be represented in the model. So for example, there's some good evidence that geographic isolation actually is highly correlated with abuse or neglect, but folks who live in the suburbs or in isolated housing won't show up in the data warehouse, because they're not the folks in Allegheny County who are getting county health services. So they won't end up in the data warehouse. So intake call screeners were also really concerned about the limitations of that data set, but they were concerned about it from the other side.

Also, I want to say another thing that's important about this system, which is that, you know, many of the administrators I spoke to spoke a lot about efficiency and cost savings as reasons for these tools. But that was only one reason. And another reason that was really important to them was to identify and mitigate bias in front-line decisionmaking, or in public service decisionmaking. I think it's really, really important to acknowledge that that bias exists. The human bias exists. Institutional bias exists in the system, and has for a really long time. So from the Social Security Act in the 1930s until the 1970s, black and Latino families were largely blocked from receiving public assistance by discriminatory eligibility rules that didn't fall until they were directly challenged by the National Welfare Rights movement in the late 60s and early 70s. And that's created all sorts of discretionary excesses in the system that are both human and institutional and really important to address.

It is also true in child welfare services, although the problem in child welfare services tends not to be exclusion from the system but overinclusion in the system. So in forty-seven states across the United States, African-American children are in foster care at rates that far exceed their actual proportion of the population. It's a problem called racial disproportionality. And Allegheny County, like most counties, has a problem with disproportionality. So at the time I was doing my reporting, 38% of children in foster care in Allegheny County were black or biracial, and they only made up 18% of the youth population, so that's, what, more than twice where they should be given their proportion of the population.

So the designers of this system were really excited to talk to me about the possibility of using the better data that they were gathering to identify where patterns of discriminatory decisionmaking might be entering the child welfare system. Now, the problem with that is that the county's own research shows that the intake call screening is not actually the point at which discrimination is entering the system. In fact it's entering much earlier. So it's entering at the point at which families are referred to the system. So it's entering at referral, not at screening. The community refers black and biracial families, either through mandated reports or through the hotline, 350% more often—three and a half times as often—as they refer white families.

Once that case gets in the system, there is a tiny bit of disproportionality that's added by the intake screening process. So intake screeners screen in 69% of black and biracial families, and only 65% of white families. But the difference there is like a four percentage point difference at screening, versus a 350% difference at referral.
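For the numerically inclined, a quick sketch of the arithmetic behind that comparison, using the figures above:

```python
# The two disparities being compared, in plain arithmetic (figures from the talk).
referral_ratio = 3.5                     # Black/biracial families referred 3.5x as often
screened_in_black = 0.69                 # screen-in rate for Black/biracial families
screened_in_white = 0.65                 # screen-in rate for white families

screening_gap = (screened_in_black - screened_in_white) * 100   # ~4 percentage points
screening_ratio = screened_in_black / screened_in_white         # ~1.06x at screening
print(f"{screening_gap:.0f} points at screening vs {referral_ratio}x at referral")
```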

And I think one of the really interesting questions this begs is, is the earlier problem a data-amenable problem? That referral bias, is that something we can attack or address or confront with automated systems? And my feeling is that that's really a cultural issue, not a data issue, although of course the two are deeply related. It's an issue about who we as a country… what we see a good family looking like. And in the United States we see a good family as looking white and wealthy. And that has a profound impact on the kinds of impacts that the system can have moving forward.

One of my real concerns about this system is that we're actually removing discretion from front-line call center workers at the point at which they may be pushing back against the discriminatory effects of referral bias. So we're actually removing a possible stop to the amplification of bias in that system.

And I just want to mention that one of the things that these systems are really good at is identifying bias when it is individual and the result of irrational thinking. They are less good at identifying and addressing bias that is structural, systemic, and rational, right. And that's something I want to talk a bit more about at the end. There's also some proxies that we're not gonna talk about.

Okay, the last system that I want to talk about is the Los Angeles system, which is called the Coordinated Entry System. Referred to by its designers as the match.com of homeless services. What coordinated entry is supposed to do is basically rate unhoused people on a scale of vulnerability and then match them with the most appropriate available resources based on their vulnerability.

This isn't unusual, at all. In fact Los Angeles County is just one of the many places that's using coordinated entry. It's become really standard across the country since I started the research. But one of the reasons to look at Los Angeles is because the scale of the housing crisis there is just so extraordinary. So as of the last point-in-time count, there are 58,000 unhoused people in Los Angeles County. I live in a small city in upstate New York called Troy. There are just fewer than 50,000 people in Troy. So my entire city, plus 10,000 people, is homeless in Los Angeles County, right, so just for a sense of the scale.

And something like 75% of the people who are unhoused in Los Angeles County are completely unsheltered. So they have no access to emergency shelter; they're living in tents, or in cars, or in encampments. And so this is an absolutely critical humanitarian crisis in the United States.

So it totally makes sense, it completely makes sense to me, that folks, particularly front-line caseworkers, want a little help making the incredibly difficult decision of who among the, like, hundred people they see every week gets access to the two or three resources they have at their disposal, right. It's an incredibly difficult decision, and I absolutely understand the impulse to try to create a more efficient and more rational and more objective system for matching need to resource.

Now, what I heard from folks who are interacting with the system, though, who are targets of the system, folks in the unhoused community, was a little different. So, as of the writing of the book, they had managed to match… Let me tell you a little bit about how it works first.

So, coordinated entry, there's basically four pieces. The first piece is a very intensive survey called the VI-SPDAT, which is the Vulnerability Index and Service Prioritization Decision Assistance Tool. (Yes. It's not my first time saying that out loud.) So there is this very intense survey called the VI-SPDAT that is given to unhoused folks either through street outreach or when they come in to organizations for help. That information gets input into their homeless management information system, which we're not going to go into depth with. Just think of it as a database. It's not quite true, but think of it as a database for now.

So that information goes into their HMIS, and there's an algorithm in the homeless management information system that then adds up folks' vulnerability score: how high they are on the scale of being likely to experience the worst outcomes of being homeless, including emergency room visits, death, mental health crisis, violence, right. Really awful outcomes of being unhoused.

From the other side, there's all this information about available resources entering the other side of the database. And the two meet in the middle, where there's supposed to be an algorithm that matches unhoused people, based on their vulnerability score, with the most appropriate available resource, based on what's available in the system.
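As a rough illustration of what that is supposed to look like, here is a minimal sketch of the two steps: adding up survey answers into a vulnerability score, and ranking people against whatever resources are currently available. The question names, the one-point-per-item scoring, and the ranking rule are all invented; the real VI-SPDAT and matching process are more involved, and, as noted below, the second step wasn't actually automated when I was reporting.

```python
# Hypothetical sketch of a coordinated-entry style score-and-prioritize flow.
# Question names, weights, and the ranking rule are invented for illustration.

def vulnerability_score(answers: dict) -> int:
    """Add one point for each risk indicator reported on the survey."""
    risk_items = [
        "unsheltered",
        "er_visits_past_6_months",
        "history_of_self_harm",
        "trading_sex_for_resources",
        "open_warrant",
    ]
    return sum(1 for item in risk_items if answers.get(item))

def prioritize(clients: list, available_units: int) -> list:
    """Rank surveyed clients by score and match the highest-scoring ones
    to the resources currently available."""
    ranked = sorted(clients, key=lambda c: c["score"], reverse=True)
    return ranked[:available_units]

clients = [
    {"name": "A", "score": vulnerability_score({"unsheltered": True,
                                                "er_visits_past_6_months": True})},
    {"name": "B", "score": vulnerability_score({"history_of_self_harm": True})},
]
matched_this_week = prioritize(clients, available_units=1)   # -> client "A"
```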

The reality is… this isn't even in the book, so shh. The reality is that, when I was reporting at least, there's no second algorithm. Actually it's like Mechanical Turk; there's like a guy in a room who's matching the two… But it doesn't actually really matter, overall, for the ways we need to be thinking about this system.

So, the unhoused folks that I talked to, some of them, I want to be clear, thought this was the—you know, the best thing since sliced bread. Were very clear to say, like, "I got housed through this system. It's the best. It's a gift from God. It's the best Christmas present I ever got, absolutely." And they have been able to match about 9,000 people with some kind of resource through this system. That doesn't necessarily mean housing, that just means any kind of resource. It could be like a little help avoiding an eviction, or moving costs, or finding a new rental. But they have, as of the writing of the book, surveyed thirty-nine thousand people with the VI-SPDAT.

So what I thought was a really important question was talking to the folks who have been surveyed but haven't gotten resources about their experience with the system. And what they told me is that they felt like they were being asked to potentially incriminate themselves in exchange for a slightly higher lottery number for housing. And why they believed that is because the VI-SPDAT actually asks some really intense and borderline invasive questions.

For example it asks: "Are you currently trading sex for drugs? Does someone think you owe them money? Have you thought about harming yourself or someone else? Are there open warrants out for you? Are you having unprotected sex? Where can you be found at different times of the day?" And, "Can we take your picture?"

And though folks fill out a really complete informed consent form that lasts for seven years, many of them didn't feel like they had truly free, voluntary consent in interacting with this process. Because coordinated entry has become the front door for pretty much all housing resources in Los Angeles County. So they felt like—particularly those folks who had taken the survey multiple times and never received any resources—they were beginning to view the system with some suspicion.

And it's actually not a terrible analysis of the system. So, though you sign this really sort of intense informed consent that lasts a really long time, if you have questions about how your data is being shared, you actually have to go through another step and request that information be sent to you…? (Right, unhoused.) …request that information about where your data goes be sent to you. If you do request that information, you get a list of 161 agencies who share this information, who share this data across their system.

And one of them, because of the federal data standards, is the Los Angeles Police Department. So, under current federal data standards, information that's stored in an HMIS can be accessed by law enforcement with no warrant at all, no oversight process, no written record. Just a line officer can walk into a social service office and ask for information about unhoused people. They can't get anything they want out of the system, and social service workers can say no (this is really important to know), but they are allowed to get it and there's no oversight process for that.

So what I want to do is talk about two things, and I'm gonna wrap up in about three minutes and then we're gonna have a larger conversation. Because I also want to point towards where the work has gone since the writing of the book. But I think one thing that's really important to think through is… you know, I hear from folks when I do these talks a lot, like there's a sense that, "Oh Virginia, you wrote the Frankenstein book." Like, you found the scariest systems you could and you wrote this really frightening book because scary stories sell books.

And the reality is that in Indiana that might be true. In Indiana it's… Though I don't know what was in Governor Daniels's heart when he made the decisions he made to create the system, I do know, as one of the sources said, that if they had built a system on purpose to deny people access to public assistance, it probably wouldn't have worked any better. So we might be able to put a black hat on that system. But in Los Angeles and in Allegheny County? All of the designers and the policymakers and the administrators I talked to were very smart, very well-intentioned people who cared deeply about the folks their agency served.

And I actually think that sets up a better set of questions. So I didn't write about the worst cases out there. In fact, if I wanted to write a worst-case book, it would've been a lot scarier than the one that I wrote. Because in the systems in Allegheny County and in Los Angeles, the designers are actually doing just about everything that progressive critics of algorithmic decision-making ask them to do. They've been largely—not entirely, but largely—transparent about how the systems work and what's inside them. They hold these tools in public agencies, or at least in public/private partnerships, so there is some kind of democratic accountability around them. And both of them actually even engage in some kind of process of participatory design, or like human-centered design of the tools. And that's really all the things we ever ask for in sort of progressive critiques of algorithmic decisionmaking.

So these are actually some of the best tools we have, not some of the worst. And I think that actually raises some really important questions. Which brings us all the way back to that story I told at the beginning about where the deep social programming of these tools comes from, and how we are often sort of invisibly carrying forward this decision we made 200 years ago that social service is more a moral thermometer than a universal floor.

And so I just want to point out that it's less important, I think, to talk about the intent of the designers, though of course that's interesting and important, than it is to talk about impacts on targets. And so that's one of the sort of big-picture things I'd like us to talk a little bit about: how we can move the conversation away from intent and towards impact.

And finally, I want to talk a little bit about solutions. So, I know that when I come and do talks like this, particularly for rooms that are technically sophisticated or policy sophisticated, often what people want is sort of a five-point plan for building better technology. And I get it. And I'm sorry, and you're welcome, that I'm gonna make you resist the urge for a simple solution to what is really a very, very complicated problem.

So I believe we need to be doing three kinds of work simultaneously in order to really move the way these systems are working. And the first is narrative or cultural work. And that's really about changing the story we tell about poverty. There's a story in the United States that poverty is an aberration. That it's something that happens only to a tiny minority of probably pathological people. And that's simply not true. So if you look at Mark Rank's really extraordinary life-cycle research around poverty in the United States, 51% of us will be below the poverty line during our adult lives, between the ages of 20 and 64. And almost two thirds of us, 64% of us, will access means-tested public assistance. So that's straight welfare; that's not reduced-price school lunches. That's not Social Security. That's not unemployment. That's straight welfare.

So the story we tell that poverty is an aberration, is a rare thing, is just simply untrue, empirically. Poverty is actually a majority experience in the United States. That doesn't mean we're all equally vulnerable to it. That's simply untrue as well. If you're a person of color, if you're born poor, if you're caring for other people, if you have a physical disability or mental health issues, you're more likely to be poor and it's harder to escape once you're there. But the reality is poverty is a majority experience in the US, not a minority experience.

I believe if we start to shift that narrative, if we start to shift that story, we'll be able to imagine a different kind of politics that is more about building universal floors under all of us and distributing our shared wealth more evenly and more fairly, and less about deciding whether or not you are desperate enough and deserving enough to receive help. Because many of the conditions I talk about in the book, whether it's living on the sidewalk for a decade or more, or losing a child to the foster care system because you can't afford prescription medication, in other places in the world people see these as human rights violations. And that we see them here increasingly as systems engineering problems actually says something very deep and troubling about the state of our national soul. And I think we need to get our souls right around that in order to really move the needle on these problems.

And finally, in the meantime, technology's not going to just stop and wait for us to do this incredibly complicated and difficult work. And so my sort of final bit of advice is to designers. And it's about not confusing designing a tool in neutral with designing it for justice and equity. And sort of to quote Paulo Freire, the radical educator: he says neutral education is education for the status quo. And it's the same around technologies. Neutral technologies just means technologies designed to protect and promote the status quo. If we want to actually address the very real landscape of inequality in the United States, we have to do it on purpose, from the beginning, every time.

So the metaphor I often use for folks is, you know, think about this tool we're using as a car. And think about the landscape of inequality we live in as being San Francisco, right. Very bumpy, very hilly, very valley-y, very full of twists and turns. Now, if you built your car with no gears, you should not then be surprised when it hurtles to the bottom of a hill and smashes to bits. You have to build in gears to actually engage with the hills and the turns that exist in your landscape. And we have to do that when we're building these systems as well. Equity and justice won't happen by accident. We have to design them into all of our political tools, so that's both our policies and our technologies, from the beginning, brick by brick and byte by byte.

Thank you so much for your time, for your attention. I'm really looking forward to this conversation. Thank you.


Amar Asher: Thank you so much, Virginia. So much to dig into here, and I'm eager to get to questions since we have limited time. I see a first hand.

Virginia Eubanks: Alright.

Audience 1: Hi. You had alluded to the work that you're doing now after the book. Could you talk more about that?

Eubanks: Yeah, so one of the things—thank you for letting me put up my last beautiful slide. So one of the things that's been happening a lot since the book came out is that… One is that I've realized that books are a moment in time and not a final answer on anything. And that my own thinking in some ways has shifted since the book was published. And one of the ways my thinking has shifted was around who I think the audience for the book is.

So originally I really saw two audiences. One was folks who have experienced these systems as targets. Because I think it's really important for those of us who are engaged in these systems to have confirmation of our stories. Because the way that stigma and poverty works in the United States makes us all feel like we're the only person this has ever happened to. So sharing those stories is a really important part of that larger narrative work of telling a different story about poverty.

And then I also thought the book's audience was mostly designers, and data scientists, and economists, and the folks who are building these models and these tools. And that's true; I do think that I've been able to engage in some really good conversations with folks who design these systems.

But the audience that I didn't think of explicitly when I was writing the book is folks who are on the ground in organizations who are seeing these tools roll out, who are actually often asked to consult about them by state agencies or local agencies, and who I'm now increasingly getting a lot of phone calls from, just because they've seen the book or read the book. So for example, the Bronx Defenders called me and said, "You know, in New York City the Administration for Children's Services is moving towards predictive analytics in child welfare. They want us to consult on the tool. We don't even know how to frame the questions. Can you help?"

And so one of the things that's happened since the book came out is that I think we've opened up this really interesting set of questions about like, how do organizations and advocates and you know, neighborhoods frame questions so that they sort of claim their space as experts at the table in this decisionmaking? Because I think too often these are exactly the people who aren't in the room when we make these decisions. And if my book is any indication, we then frame the problems in ways that are not in the long run going to help us create more just, more fair systems.

So what's come out of that is a set of questions that we've started to think about asking. And you know, the first one, sort of Step 0 for me, is those things that I talked about earlier. So transparency, accountability, and participatory decisionmaking. Or participatory design. So for me that's like bargain basement democracy? That's like Floor 0. That's like subbasement democracy. And everything should always be built on that foundation. But we need to be asking really different kinds of questions after that—and we're not quite there yet in this space.

One I think—I'll just share one or two. One that I think is really important is: is the use of analytics accompanied by increased resources, or is it being deployed as a response to decreasing resources? Because if it is being deployed as a response to decreasing resources, you can be pretty sure it's going to act as a barrier and not as a facilitator of services.

And that was certainly true across the cases I looked at, but the best example of this would be… So Georgia State University in 2012 moved to predictive analytics in their advising. Like many underresourced public universities that serve first-generation college students, they've had real issues retaining students. So they moved to predictive analytics in 2012, and they've been written about widely as this sort of huge success in using predictive analytics to do better advising to keep college students in school. Their retention rate went up something like 30%.

But the part of the story that gets buried over and over again every time that it's written about is that at the same time they moved to predictive analytics, they went from doing 1,000 advising appointments a year to doing 52,000 advising appointments a year. They hired forty-two new full-time advisors. And that always ends up in paragraph 17 of these stories. So it's like, "Predictive analytics wins! And [muffles voice with her hands] also huge amounts of resources."

And so it feels to me like that story is actually the story of "adequate resources solve real problems," not "predictive analytics wins." I'm sure the predictive analytics helped them like, figure out where to send the massive wave of new resources. But I think it is…misleading to talk about those two things as separate from one another. So that's a question you should ask. Like what's the resource situation when you're moving to analytics.

Another is, really, do we have a right as a community to stop one of these tools, or from the very beginning to say no? So the ACLU in Washington has made some real inroads specifically around police surveillance technology, having a sort of community accountability board: the police department has to run any use of new surveillance technologies through this community group, in order to get information about it, before they start deploying it. And I think one of the great questions they're asking is not just can we stop it but can we say no from the beginning? And can we say no for reasons that are non-technical, like if this doesn't match our values and we don't want it? Like, are there ways that we can say no?

Or is there remedy, right? I think we're just getting to this part in the conversation, which is if one of these tools harms you or harms your family, is there a way for you to get redress? That's also a really important question, I think.

So that's sort of where the work has been going, in collaboration with these organizations. It's been thinking about what kind of questions do we want to ask in order to exert some control and power, and to bring the real, full breadth of expertise into the room when we're making these kinds of decisions. Thank you for that question.

Audience 2: Hi Virginia. Back to the intent and impact, and also to the soul-searching comment. So what do we do about the groups whose intentions are to keep people off and who for them, their justification is that it's… They did the soul-search, and for them the justification is this is better for society and people shouldn't be on benefits, etc. I think there are folks in this room that have had that argument as well. So what do we… So do we just, like, not work with those groups, or what do we do with groups like that?

Eubanks: Yeah, so I think it's a really crucial question for this political moment, right. So if you look at the 2019 Trump administration budget, it identifies… I may not have this figure exactly right. But one of the things that budget promises is to save $188 billion over the next ten years by bringing these kinds of techniques to middle-class entitlement programs. To disability, to unemployment, to Social Security. And so one of the origin points for this book that I often share is a woman on public assistance I was working with in 2000. She and I were talking about our electronic benefits transfer cards—it's a long story, I won't go into the whole story here. But one of the things that she said was, "Oh, Virginia. You all should pay attention to what's happening to us," like, folks on public assistance, "because they're coming for you next."

And I think that was both very generous of her, to care about the fact that, as canaries in the coal mine, they have some responsibility to communicate to folks who are outside these systems. The other thing I think is really important is that she said that in 2000. She said that almost twenty years ago. And I think it's another reason to always be starting this work from the folks who are most directly affected. Because we're just going to learn more about these systems, and we're going to be working in coalition with folks who are really invested in creating smart solutions when we do that.

So how to deal with the political moment that we're having right now, around… you know, just to be honest, we are in a moment where the country is trying to dismantle the social safety net entirely, right. So work requirements for Medicaid. The state of Mississippi is denying 98.6% of cash welfare applications—a rounding error away from 100%. We're starting to create ways of tracking people who receive disability help, right. We're increasingly in a situation where just the basics of the social safety net are really under threat.

I think the possible good news here…? It's a real good news/bad news situation. But the possible good news here is that the very overreach of these systems, and the very speed and scale of them, really has the potential to touch a lot of people really quickly.

So in Indiana, part of what drove the pushback against that system was that because it was affecting Medicaid, it began to affect middle-class folks, like grandparents who are in nursing homes. And that was a sort of moment where public opinion changed really fast. And I think we're awfully close to that moment right now? But I do really believe we need to be doing this sort of deep work to build the coalition, and to build the connection, and to build an analysis that we'll have when one of these systems fails in a spectacular way that impacts non-poor people. And that will create a sort of window to start to really rethink our use of these systems and what it means for our democracy and for the health and safety of our people.

I mean, from a moral point of view we should do it earlier than that. Because what happens to anyone happens to us all. But strategically and politically I think that's going to be a moment that opens up a lot of possibility. Thank you for that.

Audience 3: On one of your slides, you listed a non-discriminatory data set. What is that and where is it?

Eubanks: Wait, which slide? Where do I have a non-discriminatory data set?

Audience 3: It had two curly…one curly up at the top, one curly at the bottom.

Eubanks: Ah!

Audience 3: Like, I want to know where the data set exists that's not discriminatory.

Eubanks: Yeah. So that's a fair question. So, the model inspection, that may just be a miscommunication between the lady who worked on my slides and me—Elvia Vasconcelos, by the way, who's a genius.

So, the idea here is that step one is to inspect the model for specific things. One is if the data set is… if and in what ways the data set is discriminatory. And then looking at whether the outcome variables are actual measures of the thing you're trying to affect or whether they're proxies. And the third is seeing if there are patterns of disproportionality among the predictive variables.

Audience 3: [inaudible]

Eubanks: A non-discriminatory data set. So, I have not, myself. I do know that there has been some experimentation with creating basically fake data sets to build machine learning on. I don't know a ton about how that actually works, though I think it's interesting. I believe there will probably be a different set of issues. Because you know, if you're building a fake data set you're still building a data set based on assumptions that, you know… And where does it come from, and can your predictions then be valid if it's based on fake data—right. But I don't understand enough about how those systems work to say that for sure.

I think your larger point is really true, which is that the data sets we have, which are produced by, say, gang databases, or produced by the child welfare system, or produced by public assistance, carry the legacy of the discriminatory data collection that we've engaged in in the past. And so it's very hard to imagine that there would be a non-discriminatory data set, yeah. But it might be a question for the folks who are more on the machine learning side than me about how that might work. It's a good question. But thanks. Appreciate it.

Audience 4: Thanks for this talk. I'm a data journalist from Germany, and I'm interested in the gears you were talking about. I'd love to hear more about that, because you already said that the algorithm you were talking about is a good example because it's already transparent, and it's held in a public/private partnership so you can control in some way how it works. So what else should you add to such a decisionmaking algorithm to make it more safe or more fair?

Eubanks: So I think the thing that's hard about that question is it's going to be different in every example. And it requires sort of knowing about how…how things actually happen on the ground around whatever agency you're interacting with. But I can give you a really good concrete example around Allegheny County. And they have actually done this.

So, originally the Allegheny Family Screening Tool, because thankfully there's not enough data on actual physical harm to children to predict that actual outcome, used two proxies for the outcome of actual maltreatment. And one of them was called "call re-referral." And that just meant that there was a call on a family, it was screened out as not being serious or severe enough to be fully investigated, and then there was a second call about the same family within two years. So call re-referral; that's one of the ways they defined that harm had actually happened, for the purposes of the model.

Now, the problem with that is that it's really really common for people to engage in vendetta calling inside the child welfare system. So you have a fight with your neighbor, your neighbor calls CPS on you. Like, you are going through a bad break up, your partner calls CPS on you. And this is really really really really common. It happens a lot.

And one of the things I asked the designers when we were talking about the system is like, well, you know, "If one of your proxies is call re-referral, and vendetta calling has happened, you see how that's going to lead to a bad outcome for folks." Because it basically means if you call two or three times on your neighbors because you're mad at them having a party, then it bumps up their risk score in CPS. And increases their likelihood of being investigated.

And so one of the equity gears in the Allegheny Family Screening Tool, if you were going to use that proxy, would then be a way to deal with vendetta calling, right. And it doesn't seem impossible to design that. It'd be like okay, if the calls come back to back for two weeks and there's an investigation and nothing happens, then like maybe that's a vendetta call. Or if it's X person or whatever. It doesn't seem like it would be impossible to build that in, though it's just as troubling as the other decisions that are made in that system.
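
Here is a minimal sketch of what such an equity gear might look like as a rule over a family's call history, using the two-week heuristic described above; the record fields (call_date, investigated, finding) are assumptions for illustration.

```python
from datetime import timedelta

# A sketch of one possible "equity gear": flag likely vendetta calls so they
# don't raise a family's risk score. Record fields (call_date, investigated,
# finding) are hypothetical, and the two-week window comes from the heuristic
# described above.

VENDETTA_WINDOW = timedelta(days=14)

def likely_vendetta(calls: list[dict]) -> bool:
    """calls: a family's call records, sorted by call_date (a datetime)."""
    for prev, curr in zip(calls, calls[1:]):
        back_to_back = curr["call_date"] - prev["call_date"] <= VENDETTA_WINDOW
        # The earlier call was investigated and nothing was substantiated.
        came_to_nothing = prev["investigated"] and prev["finding"] == "unsubstantiated"
        if back_to_back and came_to_nothing:
            return True
    return False
```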

I will say that they dropped that as a proxy since the book came out. I don't know if there's a direct relationship between those two things, but they're no longer using that proxy. So I think that's a concrete example of the sort of depth of knowledge you need about the domain in order to really build those equity gears in. That's an important part of the process. Does that help?

Audience 4: [inaudible]

Eubanks: No. I don't think so. So it's less about the data and more about how the system itself works, right. So, many of the folks I spoke to about these models were incredibly smart about modeling, incredibly smart about data, but not very smart about the policy domain in which they were working, right. So, people who were very well-intentioned and trying to do the best they could would make assumptions about how things worked inside the system without really knowing. Like for example, if you know anything about the child protective system, you know not to use multiple calls as a proxy for anything, because that's like, it's stan— It's like, I don't know how you could talk to even two families who have gone through this process and not know about vendetta calling. So that's surprising to me that they didn't have a way of dealing with it. And so those are the kinds of equity gears we need.

And the long-run answer, really, is that building these systems well is incredibly hard and incredibly resource-intensive. And building them poorly is only cheaper and faster at first. And so I think we have a tendency to think about these tools as sort of naturally creating these efficiencies because the speed of the technology is such that it creates the appearance? of faster and easier. But in fact you really have to know a lot about how these systems work in order to build the tools for them, and to interrupt the patterns of inequity that we're already seeing.

Audience 4: [inaudible]

Eubanks: Yeah. I think that's a good way to put it. Yeah.

Audience 5: Hi. I wanted to ask about a tension that I think runs through the book and the actual nature of the problem and also some of the questions. Which is where the source of some of these challenges lies. And so in some of the cases it's about the technology and it's about the data in particular. So, for example if your target variables are correlated with membership in a sensitive group, you've got a problem. Or if you have to try and state a problem that is very complicated very precisely, similarly, you've got a problem. So that's the AFST case.

But in other places it's really about the sort of social inequality, the context of social inequalities, such as in the LA case, where it's fundamentally that there aren't enough houses at a certain point.

So clearly it's both. And your argument is that it's both, and they intersect in complicated ways. But I want to ask about the ways in which the technology itself does actually matter, and it is like, different. So the sort of two questions are, what are the specific challenges that you think making public decisions using lots of data, possibly machine learning…what's different about those kind of challenges? (A.) And then sort of B, which of your solutions or the sort of approaches we should take specifically have to do, in your view, with that dimension of the challenge rather than the broader social context? Does that make sense?

Eubanks: Yeah, it makes perfect sense. Um…and I'm gonna give you one of those like, frustratingly big-picture answers? Because I think the fundamental difference in these systems from the kinds of tools that came before is that we pretend that these are just administrative changes. That we're not making like, deep-seated political decisions. And it obscures the fact that we're making really profound political decisions through these systems.

And I think that is the biggest challenge, actually: the impulse to keep trying to separate the technology and the politics. Because like, that's why I start with the poorhouse, is to say like you know, our politics have always been built into our tools, and they're built into our tools today. But they're built in in ways that are faster. That scale more quickly. That impact networks of people rather than individuals and families, in ways that can really profoundly impact communities. And also that don't provide the same kind of space for resistance, right.

So one of the things that's really interesting about poorhouses is that one of the reasons they did— So, we were supposed to have one in every county in the United States; we only ended up with about a thousand of them—that's still a lot. But we didn't get one in every county in the United States. Part of the reason is that they ended up being really expensive. And that's a lesson we should learn. They thought they were going to be cheaper, too. And it didn't work out that way.

And the other reason that they didn't spread across the country is that all of a sudden, people living in this…it's like a shared space, eating over a shared table, living in dorms, taking care of each other's kids, caring for each other when you die…like, started to care about each other and started to use poorhouses as a way of resistance, sort of building resistance in poorhouses.

And so one of my real concerns about these systems is that they seem to me profoundly isolating, right. That they reinforce this narrative that poverty's an aberration. That you've done something wrong. And that you should just shut up about it and not sort of push back against the system. So I'm really concerned about the ways it removes established rights from people. Like their rights to fair hearings. I tell a story about that in the book. And I'm really concerned about it removing a public space of gathering, where we can come together, talk about our experiences and realize we're not alone.

And I'll just say, as a welfare rights organizer for many years, we organized in the welfare office all the time. Because people had a lot of time. They were there with their whole family. And they were mad. And so it was a really great place to organize um…until you got thrown out. So I am really concerned about the sort of larger thread of this. Which I think it's true around prisons as well. The move from prisons to ankle shackles I think creates some similar issues of increasing isolation—no less punishment but more isolation. Or a different kind of punishment and isolation, to be more clear.

So I think the primary issue is this issue around not seeing these as political decisions. And the solution, I'd just take you back to that earlier stuff, is about telling stories in a different way. And this may be because I'm really invested right now in being a writer, so I'm really invested in storytelling and in learning how to do good storytelling. I think there's a zillion ways to actually address the story and the politics of poverty in the United States, and some of it's policy work and some of it's organizing work and some of it's storytelling. For me that's the one that I'm most invested in right now. And so it's the one that I'm taking on. But there's plenty of room. There's a lot of room to do work around economic and racial inequality in the United States. You'll have good company. You'll never be bored in that work.

Audience 5: Thank you so much for your presentation, it's been really fascinating. If I may, I would like to very kindly ask you to revisit a theme that came up also in other questions, the theme of neutrality. But this time with a focus on technology itself. The system itself, not the designers of the system. Because we heard earlier the question, or the notion, of a data set being discriminatory. But then that entails that the data set is unfair. So by having this narrative it means that we're kind of insinuating that there is a certain normativity to these systems themselves, whereas… I think there was an earlier event, I think a week ago, on public interest technology, and a lot of speakers had the shared opinion that technology in itself cannot be good or evil. But it's just a tool and then it depends on the intentions with which it's going to be applied.

I think also the example that you were mentioning, when you have a system that is designed to take into account some factors that will definitely create a biased outcome, then that's also a poorly-designed framework but not necessarily a system in itself.

So I was wondering how you see the paradigm under the third part, you were mentioning the idea of having good technology. How does that work in practice?

Eubanks: Yeah, so there's a couple of things I want to address. But keep me on track if I don't get right back to the how's that look in practice piece, because I hear that piece.

Okay, so I think it's really important to address this like…tools aren't for—you know, "tools are neutral" idea. So, part of the way that I make my living is as a brick mason. And I specialize in historic brick repair. And I'm very much an amateur. But I'm a talented amateur. And I always find it really funny when people say that tools are neutral, because it feels like you don't actually spend a lot of— Not you personally, but folks who say that don't spend a lot of time with tools, right. Because I am, like I said, an amateur at masonry but I have six different trowels. Because you can't use a quarter-inch repointing trowel to do what a carrying trowel does, right? So carrying trowels are big and flat and you use them to move material. A quarter-inch repointing trowel shoves mortar into quarter-inch cracks. I can't even use my quarter-inch repointing trowel for a three-eighths-inch gap. Like I actually need another tool for that.

So I think the lesson here is that tools are never neutral. Tools are designed over time for specific purposes. And yes, you can use a hammer to paint a barn…? But you're going to do a terrible job. Like a really bad job.

So, I think it's really really important to address that idea that tools are neutral or blank, because they're just not. I've never seen a blank tool in my life. I've never seen a tool that's not designed for a specific purpose. That doesn't mean you can't use it against its purpose, but it's hard to. They're valenced. They're directed in certain ways. They're not totally determined but they're directed.

And so I think this idea that the tool doesn't matter it's the intentions that matter is just false. I don't think that's true at all. I think the intentions are built into the tools, from the beginning, across time, over their development. So that's how you get a tuckpointing trowel and a triangle trowel and why they're different.

So, what I'd like to see us move from is this idea that neutrality is the same thing as fairness, to the idea that justice means choosing certain values over others. So the values that we're currently designing with like invisibly, are efficiency, cost savings, and sometimes anti-fraud. And all of those things should actually be part of our political system. I'm not saying like, throw efficiency out. But I do think there are other values that we need to design from that we're not acknowledging in as direct a way. So fairness, dignity, self-determination, equity. And we have to do that on purpose in the same way we're designing for efficiency and cost savings on purpose.

And sometimes those values will be directly in conflict with one another. And then we have to have political ways to make decisions over what values we care more about. And I think efficiency's important, but I think democracy is more important, right. I think cost savings is important, but I think people not dying from starvation in the United States is more important. I think fraud is important, but I think it's actually more important in the way that people escape paying taxes by moving their money offshore than it is in the welfare system where it's like literally pennies and less than 5% of the system.

So we have to start from a different set of values if we're going to get to systems that work better, based on the world we actually live in. So that I think is the best answer I can give to that, yeah. Thanks.

Audience 6: [Beginning of question is inaudible] …I think you already gave us an example of this. Things that you think should— Tasks or sub-tasks that you think just should not be automated. So that's a way of getting at this problem of automation and the problem of [indistinct] in this context.

And then zooming out a little bit from that, there are now all these initiatives. [Seemingly some examples here, but indistinct; mention of "ethics in AI."] So if you could recommend what you think should be in a curriculum for those sorts of things, I'd be very interested to hear [?] your recommendations.

Eubanks: So there's two different questions. One question is what should never be automated? And that's a super good question that I've never gotten before. In ten months I can't believe no one's asked me that. So I have to like, ponder that for a second.

And then the second thing is what should these new folks be looking at?

Audience 6: [inaudible]

Eubanks: [laughs] Oh my god what a great question. You know, I think that we have so many people who are really smart about their domain, working in the budding world of AI and ethics. I think the big piece that's missing is talking to people outside your domain. I really feel like this conversation is incredibly autopoietic, right, that we sort of turn back in on ourselves…in a way that's not gonna serve us or the expressed intent of increasing justice and fairness.

So I really think actually most of the work has to be methodological, has to be like how do we work with directly-impacted communities in ways that we can actually hear their questions and concerns, and not just be coming to them after the fact and be like, "We're going to do predictive analytics in child welfare. What do you think? We have ten days for public comment: go." So it has to really be built in from the very beginning. So maybe Paulo Freire and other people are good for helping people get to a place where they recognize the sort of extraordinary expertise of folks outside their professional lives? Maybe that's a place to start. But I really feel like the place to start is less in theory and less in framing and more in method, more in how do we work with other people in the world. That feels really important to me.

And in terms of the systems that should never be automated… I don't know, what do you guys think? Seriously, what do you think? Do you guys think there's anything that should not be automated?

[inaudible]

Yeah! That's a good one. Yeah, absolutely. [crosstalk] I'd buy that.

Audience Member: The domain of social services.

[indistinct]

Eubanks: In general. Yeah, I don't talk at all about military stuff. I just heard…oh, what's her name, Lucy Suchman is doing some of that work, talking about automated military technology—

Audience 7: Could I just come in on that, because you said at one point when something got automated that used to be done by a human being then a certain check on a type of bias was gone. A buffer was gone, you said at one point.

Eubanks: Yeah.

Audience 7: So is that a sort of systemic feature you could say well, here's a type of situation where if you interact with a human being that at least they could—of course, they could do the wrong thing, they could do the right thing.

Eubanks: Yeah.

So… And here's the challenge in that. So discretion… And I know we're out of time so I want to wrap up quickly. But there's two key tensions that go through the work that don't have easy answers.

One is the tension of integration. Integrating systems can lower the barriers for folks on public assistance who have to fill out 900 different applications for five different services and sit all day in an office and it takes forever. That can really be a step forward in making it easier to get the resources that you need and that you are entitled to and deserve. But, under a system that criminalizes poverty? integration also means that you can be tracked through all these different systems and criminalized, imprisoned, taken on for fraud, right. So that's an irreconcilable tension in some ways.

The other is discretion. So, front line case worker discretion can be the worst thing that happens to you in the public service system. It can also be the only thing that gets you out of that system successfully. And so it is also I think an irreconcilable tension, in that… The reality is, part of the intention of these systems…part of the built-in politics of these systems is the idea that fairness is applying the rules in the same way every time. And in unequal systems, applying the same rules in the same way every time doesn't actually produce equality. It produces more inequality. And so at this point, I'm willing to bet on the human decisionmaker having discretion that can be interrupted and pushed back on in ways these systems can't. But people of good faith disagree with me on that. And I can accept that. I think that's one of the central tensions of this work.

But the way to think about it I think is that, I have a smart political scientist friend named Joe Soss, and he says discretion is like energy. It's never created or destroyed; it's only moved. So when we say we're removing discretion from these systems, what we're actually doing is moving it from one group of people to another group of people. So in Allegheny County we're moving it from the intake call center workers and giving it to the economists and the data scientists who built that model. And that I think is a better kind of question to think about: who do we think is close enough to the problem to understand the problem? To have the kind of knowledge they need to make good decisions? And I'd say in that case it's the intake call center workers, who are the most diverse part of the social service workforce in that agency. They're the most working-class, they're the most female, they're the closest to the situations on the ground. And I trust them more to make those kinds of decisions.

But yeah, those are two really important tensions, and I think they're really hard and will continue to be really hard. Thank you so much for the question.

Asher: I know there's so much to still talk about but please join me in thanking Virginia.
