Amar Asher: So, welcome everybody to the Berkman Klein Tuesday Luncheon Series. So excited to have such an oversubscribed room for such an important topic and important book. We are thrilled to— I should mention, I’m Amar Asher. I’m the Assistant Research Director here at the Berkman Klein Center. We are thrilled to have Virginia Eubanks, who is the author of this phenomenal book Automating Inequality, here at the Berkman Klein Center to talk about some of the most salient issues of the day related to emerging technologies, AI and ethics, and more generally just how many of these issues are playing out across society, and how high-tech tools are affecting and impacting the poor. And it has so much relevance to work that’s going on here at the Berkman Klein Center. In particular, over the past two years we’ve hosted a series of conversations around the public interest and emerging technologies under our Ethics and Governance of Artificial Intelligence Initiative. It’s got a number of areas that it’s doing research in with the MIT Media Lab, and so if you’re interested in that effort and that series of conversations I encourage you to check out the Berkman Klein web site.

And let me just take a moment to mention a couple of different housekeeping things. One is that if you are new to the Berkman Klein Luncheon Series, these events are webcast for posterity, and also because this room was oversubscribed there’s lots of folks watching on the webcast, so please just be aware of that.

Second is that if you are interested in this book and actually reading it, we have them for sale via the Harvard Coop over there for $25. Virginia has graciously offered to also sign copies of the book after the talk, so please do make a purchase and stick around afterwards so that she can sign them.

And third, please be sure to ask questions at the end of this talk. Virginia will speak for about twenty-five to thirty minutes but we really want this to be a discussion. There’s a lot of rich material here and many salient questions that we’ll be discussing. So please do ask questions, and you can do that in person here or over on Twitter. We’ll keep an eye on that for folks that are not within the room.

So let me introduce Virginia. Virginia Eubanks is an Associate Professor of Political Science at the University at Albany, SUNY. She’s the author of this tremendous book that you’ll hear about in a minute. She has also authored Digital Dead End: Fighting for Social Justice in the Information Age. She’s a co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith. And her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper’s and Wired. For two decades she has worked in community technology and economic justice movements, and she’s a founding member of the Our Data Bodies project and a fellow at New America. So thrilled to have you here. Welcome, Virginia.


Virginia Eubanks: Hi. How’s lunch? I have, I put some aside because it looked like you people were gonna eat all the food before I got a chance to eat. So, I’m really excited to be here. Thank you so much for the invitation, and to all the folks who worked so hard to get me here on time and in one piece to have this conversation with you.

My goal today is to keep it a little bit on the short side because we have a really great, smart room here and I’d really love to have a sort of broader conversation, particularly around solutions to the kinds of problems that I describe in the book.

So one thing that I think is a bit different about Automating Inequality from some of the other really smart and fine work that’s happening around sort of algorithmic governance, or AI, or machine learning, or automated decisionmaking or whatever name you want to call it by… But there’s sort of two things that are important to me about how Automating Inequality’s a bit different.

So one is that I began all of my reporting from the point of view of folks in communities who feel like they’re targets of these systems rather than starting with administrators and designers. I did of course also talk to administrators and designers and data scientists and economists. But I started in each case with families and communities who feel like they are being targeted by these systems, and that really shaped the way I was able to tell the stories that I tell in the book.

I usually, when I have a little bit more time, I usually spend a lot of time introducing the families who spoke to me when I was reporting and getting their voices in the room. I’m going to do a little bit less of that today. So I just want to do two things. One is say what an incredible, generous act it was for people to share their experience with me. So these are folks who were in often really trying conditions. So they’re currently on public assistance or have recently gotten kicked off public assistance. They’re unhoused or homeless. Or their family is involved in a child welfare investigation. So anyone who under those conditions agrees to go on the record with their real name, their real location, the real details of their life, is doing an incredibly generous and courageous thing. So I just want to make sure I start by acknowledging that the book wouldn’t exist without people who took that kind of risk and made themselves really vulnerable. So particularly since I’m not going to spend a lot of time putting their voices in the room I just want to start by acknowledging that incredible contribution to the work.

And the other thing that’s a bit different about the way I tell this story is that I start the story in 1819 rather than 1980. And that allows me to do some very specific work, which is to talk about what I think of as the deep social programming of the tools that we’re now using in public services across the United States.

So, while I think that the new technologies we’re seeing absolutely have the potential to lower barriers, to integrate services, and to really act to make social service systems more efficient and more navigable, what I found in my seven years of reporting on the book is that what we’re actually doing is creating what I call a digital poorhouse, which is an invisible institution that profiles, polices, and punishes the poor when they come into contact with public services.

And in the book I talk about three different cases. I talk about an attempt to automate and privatize all of the eligibility processes for the welfare system in the state of Indiana. I talk about an electronic registry of the unhoused in Los Angeles County, what the designers call the match.com of homeless services, the Coordinated Entry System. And I talk about a statistical model that’s supposed to be able to predict which children might be victims of abuse or neglect in the future, in Allegheny County, which is the county where Pittsburgh is in Pennsylvania.

But I start the book with a chapter about sort of the history of poverty policy and what role sort of the new waves of technology have played in that process and in those systems. And I start—and this is also always when I thank my editor, because the book originally started with a ninety-page history chapter that started in like 1600 rather than in 1819, and my editor Elisabeth Dyssegaard was like, “Virginia, no. No.” Like, “You cannot do that to people.”

And I was like, “Oh, but all the deep historical detail is so interesting!”

And she was like, “To you, honey. To you.”

And so feel free to ask me about the historical rabbit holes I was not allowed to explore in this book. I have so much interesting information. But for our purposes today and for the purposes of the book we’ll start in 1819.

So the reason I start in 1819 is this is the moment where there’s a really big economic dislocation in the United States. There’s a depression. During the depression, poor and working people began to organize for their needs and for their survival. For their rights. And it makes economic elites really nervous. So economic elites do what economic elites always do when they’re nervous, which is they commission a bunch of studies.

And right, maybe I shouldn’t say that at Harvard. Hi.

So, they commission a bunch of studies and they frame the question as like, what’s the real problem we’re facing right now? Is it poverty? Is it a lack of access to resources? Or is it what they called at the time “pauperism,” which was dependence on public benefits?

And does anyone want to guess what the report said? Pauperism, that’s right. So the reports came back. They said the problem is not poverty; the problem is a pauperism problem, the problem is a dependence on public benefits, and we need to create a system that raises barriers just high enough that it discourages those who should not be receiving benefits, but low enough that people who really need them will get them.

And the system they invented in the 1820s was a system of brick and mortar county poorhouses. These were physical institutions for incarcerating poor and working people who requested public assistance. And what it meant— So it’s 1820, so not everybody had this right, but basically what it meant was you had to give up your right to vote and to hold office as part of the entry process to the poorhouse. You weren’t allowed to marry. And often you had to give up your children, because it was understood at the time that sort of interaction with wealthier families could redeem poor children. And by interaction they generally meant sort of leasing children for agricultural or domestic labor under apprenticeship programs.

And something like a third of people who entered the poorhouse— Some poorhouses had death rates as high as 30% annually. So it’s like a third of folks who entered them every year died.

And the reason I start the story of this book with the actual physical brick and mortar poorhouse is because I believe this is the moment where we decided as a political community that the front line of the public service system should be primarily focused on moral diagnosis. On deciding whether or not you were deserving enough to receive aid rather than building universal floors under everyone. And that’s part of the sort of deep social programming that we see at work within these systems that continues to produce bad outcomes for poor families, even when the intentions of the designers, the administrators, and other folks involved in the process of creating the systems are really good. Even when people are smart and their intentions are good.

So let me talk just very briefly about the three cases and about sort of three big ideas that I see sort of cross-cutting the three cases.

So the first case I want to talk about is Indiana. And what you need to know about Indiana is in 2006 then-Governor Mitch Daniels signed what was eventually a $1.34 billion contract with a consortium of high-tech companies including IBM and ACS to automate all the eligibility processes for the welfare program. So that was cash assistance or TANF, food stamps (it was still called “food stamps” at the time), and Medicaid. And basically how the system worked is that they moved 1,500 public caseworkers from their local county offices to these regionalized and privatized call centers. There were several of them across the state. And they encouraged folks who were applying for public assistance to do so over online forms on the Internet.

So from the point of view of caseworkers, what this felt like, what this looked like, was moving from a place where you were responsible for a docket of families, for a caseload that was made up of families, to a system where you were responding to a list of tasks as they dropped into a computerized queue in your workflow management system in these regional call centers.

It also meant that you never spoke to the same person twice, right. So if you got a call, once you hung up the next call to come through would come from anywhere in the state and it would just be the next call in the queue.

From the point of view of applicants and recipients of public assistance in Indiana, it felt like no one was accountable for mistakes, because you never spoke to the same person twice and they didn’t understand your context or the sort of process of your case.

So, it was really common for people to receive what were known as “failure to cooperate in establishing eligibility” notices, or “failure to cooperate” notices. And basically what failure to cooperate notices meant is a mistake had been made somewhere in the process, right. Somebody had forgotten to sign page seventeen of a thirty-four-page application. Or the document processing center had scanned in a piece of documentation upside down or dropped it behind a desk. Or a new caseworker at the regional call center maybe misapplied policy. But no matter whose mistake it was, the only notice you would get is a notice that said you’d failed to cooperate in establishing eligibility for the program, so you’re denied.

What that meant is the system was so brittle that it confused, like, honest mistakes with possible fraud. And that was a really profound shift for the people who rely on public assistance in Indiana. It also meant that the burden of figuring out what had gone wrong and solving it fell almost entirely on the shoulders of poor and working families in Indiana, some of the most vulnerable families in Indiana.

The thing that I want to point out about the Indiana case is that it assumes and is aligned with a politics of austerity that I think is really worth talking about in the context of talking about these systems. So the idea here, the narrative is, we don’t have enough resources; we have to make some really difficult decisions, including making systems more efficient and increasingly identifying fraud; because our resources are so limited and our problems are so great.

So one of the things that all of the designers and administrators told me across these three cases was that these systems are, you know, perhaps regrettable? but necessary systems for doing a kind of digital triage, for deciding which families are most vulnerable to the worst outcomes of poverty, and who can wait.

And one of the things that I think it’s really important to point out is that this idea that triage is necessary and inevitable is in fact a political choice. There are of course— We live in a world of abundance, and there is enough for everyone. This idea that there will never be enough resources actually creates a system that reproduces austerity. And so in the case of Indiana, for example, it was originally a $1.34 billion contract. It resulted in a million denials of applications over the first three years of the experiment, a 54% increase from the three years before the experiment. This caused huge suffering for people on the ground, for poor and working families but also for caseworkers, and I’m happy to talk about that more a little bit later.

One of the really sort of interesting moments in the Indiana case, though, is that the community members and just sort of normal Hoosiers (that’s what you call people from Indiana, for people who don’t know), just normal Hoosiers became frustrated and annoyed enough with the system that they really organized and fought back against it. They pushed back against the state. And they were so successful that the Governor actually canceled the contract with IBM three years into the experiment.

And then IBM turned around and sued the state for breach of contract. And in the first round of the court case IBM actually won. So they were allowed to keep the half-billion dollars they had already collected. And then they were awarded an extra $50 million in penalties because the state had breached the contract. That case stayed in the courts for about eight years, and in the end it did turn around and the Supreme Court found that IBM was in breach and gave $150 million back to the state.

But the reality is that this assumption that we had to trim already very lean rolls produced a system that denied so many people rights that it had to be canceled. And the cancellation actually cost the state a lot of money, both in the money they had already spent, and in the eight years of legal battles around whose fault it was that a million applications were denied.

So the irony here is that assuming austerity tends to reproduce austerity, right. It’s actually very expensive to profile, police, and punish poor and working families. And we’ll talk a bit more about that in a minute.

So, I’m going to talk now about the Allegheny County algorithm. And I hope we’ll have time to talk about Los Angeles. But I’ll do bits and pieces of this and we can reengage in conversation if you feel like there’s anything I’ve missed.

So, the Allegheny Family Screening Tool is a statistical model that’s built on top of a data warehouse that was built in 1999 in Allegheny County. So the data warehouse receives regular data extracts from twenty-nine different agencies across the county. As of the writing of the book it held a billion records, more than 800 for every individual living in Allegheny County.

But it doesn’t actually collect information equally on all people. So the agencies that it’s receiving data extracts from are primarily agencies that interact with poor and working families. So it’s juvenile and adult probation, the state Office of Income Maintenance or Pennsylvania’s welfare office, the county office of mental health services, the county office of addiction, drug and alcohol recovery, and now, I think, twenty public schools.

The limitations of that data set then have become a really important part of the tool that’s built on top of the data warehouse, which is the Allegheny Family Screening Tool. And I’m not gonna go into great technical depth on how that system works, but I’m happy to talk about that a little bit later and get into the technical weeds, because I find them really interesting. But a couple of things that I think are really important to understand.

One is that it is not actually machine learning or artificial intelligence, though the county has recently moved to using some machine learning in their system. When I was reporting on the system it was a simple statistical regression. For the quant nerds in the room, it’s a stepwise probit regression, so a pretty standard regression that they ran against all the data that’s available in the data warehouse to pull out variables they believe correlate with future abuse or neglect. So, using historical validation data (not really training data, because it’s not machine learning), but using their historical validation data.
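For readers who want to picture what that kind of model looks like in practice, here is a minimal sketch, assuming a made-up data extract and invented variable names: a probit regression fit on historical administrative records, with a crude forward-selection step standing in for “stepwise.” It is illustrative only, not the county’s actual feature set, selection procedure, or code.

```python
# Illustrative sketch only: a probit risk model of the general kind described
# above, fit on a hypothetical administrative extract. All names are invented.
import pandas as pd
import statsmodels.api as sm

# Hypothetical cross-agency extract: one row per referred child or family.
df = pd.read_csv("warehouse_extract.csv")

# Candidate predictors drawn from administrative records (names illustrative).
candidates = ["prior_referrals", "months_on_public_assistance",
              "county_mental_health_contact", "juvenile_probation_contact"]
outcome = "re_referred_within_2_years"   # a proxy outcome, not observed harm

# Crude forward "stepwise" selection: keep a variable only if it is
# statistically significant when added to the model.
selected = []
for var in candidates:
    exog = sm.add_constant(df[selected + [var]])
    fit = sm.Probit(df[outcome], exog).fit(disp=False)
    if fit.pvalues[var] < 0.05:
        selected.append(var)

final = sm.Probit(df[outcome], sm.add_constant(df[selected])).fit(disp=False)
print(final.summary())

# Predicted probabilities like these are then binned into the simple risk
# score that call screeners see.
df["risk_probability"] = final.predict(sm.add_constant(df[selected]))
```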

The reality of experiencing this tool, though, from parents’ point of view, they feel very much like because of the limitations around the data set, because the data only collects information or primarily collects information on poor and working-class families, they feel like they are part of a system of poverty profiling where because they are being… Because their data is in the system more than professional middle-class, or middle-class families, they are identified for possible abuse or neglect more, risk rated more highly. Which means they’re investigated more often. Which means they’re indicated more often. Which means that more of their data goes in the system, sort of creating this feedback loop that’s very similar to the kind of feedback loop that people talk about around predictive policing.
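The feedback loop she describes can be made concrete with a toy simulation; the probabilities and record counts below are invented purely to show the dynamic, not drawn from any real system.

```python
# Toy simulation of the feedback loop described above: families with more
# records in the warehouse get higher risk scores, get investigated more,
# and therefore accumulate even more records. All numbers are invented.
import random

random.seed(0)

def simulate(initial_records: int, rounds: int = 10) -> int:
    records = initial_records
    for _ in range(rounds):
        risk = min(1.0, records / 20)          # more records -> higher "risk"
        investigated = random.random() < risk  # higher risk -> more scrutiny
        if investigated:
            records += 2                       # each investigation adds records
    return records

print("started with 2 records  ->", simulate(2))    # little prior system contact
print("started with 10 records ->", simulate(10))   # already in many systems
```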

So the families that I spoke to very often said they felt like the system confused parenting while poor with poor parenting. So it’s a false positives problem, right. Seeing harm where no harm may actually exist.

Now, I also spent a lot of time with front line caseworkers in this system, particularly with intake call center workers. And intake call center workers are the folks who receive reports of abuse or neglect from the community, either over their hotline or from mandated reporters in the community. And they make a really difficult decision. They make a decision about whether or not they should screen each case in for a full investigation or whether they should screen it out as not rising to the level of abuse or neglect, or as not having high enough risk or low enough safety to the children to rationalize running a full investigation.

And intake call center workers, interestingly, were concerned about the opposite problem but for the same reason. So they were concerned about false negatives problems. They were concerned about the system not seeing harm where harm might actually exist. So they explained to me that because the system doesn’t really collect information on professional and middle-class families… And you know, professional middle-class families need as much help with their parenting as everyone else. The difference is that they tend to pay for it with private sources. So, if you need help with childcare, you get a nanny or a babysitter, you pay out of pocket. If you need help with addiction recovery or with a mental health issue and you have private insurance, that information’s not going to end up in this data warehouse. Only the folks who go to county mental health services end up in the data warehouse, right.

So the intake call screeners were really concerned that some of the things that are really good indicators for abuse and neglect in professional middle-class families wouldn’t be covered in the data warehouse, so it wouldn’t be represented in the model. So for example, there’s some good evidence that geographic isolation actually is highly correlated with abuse or neglect, but folks who live in the suburbs or in isolated housing won’t show up in the data warehouse because they’re not the folks in Allegheny County who are getting county health. So they won’t end up in the data warehouse. So intake call screeners were also really concerned about the limitations of that data set, but they were concerned about it from the other side.

Also, I want to say another thing that’s important about this system is that, you know, many of the administrators I spoke to spoke a lot about efficiency and cost savings as reasons for these tools. But that was only one reason. And another reason that was really important to them was to identify and mitigate bias in front line decisionmaking or in public service decisionmaking. I think it’s really, really important to acknowledge that that bias exists. The human bias exists. Institutional bias exists in the system, and has for a really long time. So from the Social Security Act in the 1930s until the 1970s, black and Latino families were largely blocked from receiving public assistance by discriminatory eligibility rules that didn’t fall until they were directly challenged by the National Welfare Rights movement in the late ’60s and early ’70s. And that’s created all sorts of discretionary excesses in the system that are both human and institutional and really important to address.

It is also true in child welfare services, although the problem in child welfare services tends not to be exclusion from the system but overinclusion in the system. So in forty-seven states across the United States, African-American children are in foster care at rates that far exceed their actual proportion of the population. It’s a problem called racial disproportionality. And Allegheny County, like most counties, has a problem with disproportionality. So at the time I was doing my reporting, 38% of children in foster care in Allegheny County were black or biracial, and they only made up 18% of the youth population, so that’s what, like, twice…more than twice where they should be given their proportion of the population.

So the designers of this system were really excited to talk to me about the possibility of using the better data that they were gathering to identify where patterns of discriminatory decisionmaking might be entering the child welfare system. Now, the problem with that is that the county’s own research shows that the intake call screening is not actually the point at which discrimination is entering the system. In fact it’s entering much earlier. So it’s entering at the point at which families are referred to the system. So it’s entering at referral, not at screening. The community refers black and biracial families, either through mandated reports or through the hotline, 350% as often—three and a half times as often—as they refer white families.

Once that case gets in the system, there is a tiny bit of disproportionality that’s added by the intake screening process. So intake screeners screen in 69% of black and biracial families, and only 65% of white families. But the difference there is like a four-percentage-point difference versus a 350% difference.
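To make the arithmetic behind those figures concrete, here is the back-of-the-envelope version using only the numbers quoted above; the calculation is a reading aid, not anything from the county’s own analysis.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.

# Foster care disproportionality: share in care vs. share of youth population.
share_in_foster_care = 0.38
share_of_youth_population = 0.18
print(share_in_foster_care / share_of_youth_population)  # ~2.1x overrepresented

# Where does the disparity enter? Referral vs. screening.
referral_ratio = 3.5      # black and biracial families referred 3.5x as often
screen_in_black = 0.69    # share of referrals screened in for investigation
screen_in_white = 0.65
print(screen_in_black - screen_in_white)  # 0.04: a four-percentage-point gap
print(screen_in_black / screen_in_white)  # ~1.06x, versus 3.5x at referral
```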

And I think one of the really interesting questions this begs is, is the earlier problem a data-amenable problem? That referral bias, is that something we can attack or address or confront with automated systems? And my feeling is that that’s really a cultural issue, not a data issue, although of course the two are deeply related. It’s an issue about who we as a country…what we see a good family looking like. And in the United States we see a good family as looking white and wealthy. And that has a profound impact on the kinds of impacts that the system can have moving forward.

One of my real concerns about this system is that we’re actually removing discretion from front line call center workers, at the point at which they may be pushing back against the discriminatory effects of referral bias. So we’re actually removing a possible stop to the amplification of bias in that system.

And I just want to mention that one of the things that these systems are really good at is identifying bias when it is individual and the result of irrational thinking. They are less good at identifying and addressing bias that is structural, systemic, and rational, right. And that’s something I want to talk a bit more about at the end. There’s also some proxies that we’re not gonna talk about.

Okay, last system that I want to talk about is the Los Angeles system, which is called the Coordinated Entry System. Referred to by its designers as the match.com of homeless services. What coordinated entry is supposed to do is basically rate unhoused people on a scale of vulnerability and then match them with the most appropriate available resources based on their vulnerability.

This isn’t unusual, at all. In fact Los Angeles County is just one of the many places that’s using coordinated entry. It’s become really standard across the country since I started the research. But one of the reasons to look at Los Angeles is because the scale of the housing crisis there is just so extraordinary. So as of the last point-in-time count, there are 58,000 unhoused people in Los Angeles County. I live in a small city in upstate New York called Troy. There are just fewer than 50,000 people in Troy. So my entire city, plus 10,000 people, is homeless in Los Angeles County, right, so just for a sense of the scale.

And something like 75% of the people who are unhoused in Los Angeles County are completely unsheltered. So they have no access to emergency shelter, living in tents or in cars, or in encampments. And so this is an absolutely critical humanitarian crisis in the United States.

So it totally makes sense, it completely makes sense to me that folks, particularly front line caseworkers, want a little help making the incredibly difficult decision of who among the like hundred people they see every week gets access to the two or three resources they have at their disposal, right. It’s an incredibly difficult decision, and I absolutely understand the impulse to try to create a more efficient and more rational and more objective system for matching need to resource.

Now, what I heard from folks who are interacting with the system, though, who are targets of the system, folks in the unhoused community, was a little different. So, as of the writing of the book, they had managed to match… Let me tell you a little bit about how it works first.

So, coordinated entry, there’s basically four pieces. The first piece is a very intensive survey called the VI-SPDAT, which is the Vulnerability Index and Service Prioritization Decision Assistance Tool. (Yes. It’s not my first time saying that out loud.) So there is this very intense survey called the VI-SPDAT that is given to unhoused folks either through street outreach or when they come in to organizations for help. That information gets input into their homeless management information system, which we’re not going to go into depth with. Just think of it as a database. It’s not quite true, but think of it as a database for now.

So that information goes into their HMIS, there’s an algorithm in the homeless management information system that then adds up folks’ vulnerability score, how high they are on the scale of being likely to experience the worst outcomes of being homeless, including emergency room visits, death, mental health crisis, violence, right. Really awful outcomes of being unhoused.

From the other side, there’s all this information about available resources entering the other side of the database. And the two meet in the middle, where there’s supposed to be an algorithm that matches unhoused people based on their vulnerability score with the most appropriate available resource, based on what’s available in the system.
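For a rough mental model of that pipeline (survey answers totaled into a vulnerability score, people then prioritized against whatever resources are available), here is a minimal sketch. The scoring rule, thresholds, and resource categories are invented for illustration; they are not the actual VI-SPDAT scoring or Los Angeles’s matching rules.

```python
# Illustrative sketch of a coordinated-entry-style pipeline: score, then match.
# All weights, thresholds, and resource types below are invented.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    survey_answers: dict  # question id -> flagged risk factor (True/False)

def vulnerability_score(answers: dict) -> int:
    # Toy scoring: one point per flagged risk factor, standing in for the
    # survey's additive scoring of health, safety, and housing-history items.
    return sum(1 for flagged in answers.values() if flagged)

def match(people: list, resources: dict) -> list:
    # Toy prioritization: highest score first, routed by invented thresholds.
    placements = []
    ranked = sorted(people,
                    key=lambda p: vulnerability_score(p.survey_answers),
                    reverse=True)
    for person in ranked:
        score = vulnerability_score(person.survey_answers)
        if score >= 8 and resources.get("permanent_supportive_housing", 0) > 0:
            resources["permanent_supportive_housing"] -= 1
            placements.append((person.name, "permanent_supportive_housing"))
        elif score >= 4 and resources.get("rapid_rehousing", 0) > 0:
            resources["rapid_rehousing"] -= 1
            placements.append((person.name, "rapid_rehousing"))
        else:
            placements.append((person.name, "no resource available"))
    return placements
```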

The reality is…this isn’t even in the book so shh. The reality is that there’s…when I was reporting, at least, there was no second algorithm. Actually it’s like Mechanical Turk; there’s like a guy in a room who’s matching the two… But it doesn’t actually really matter, overall, for the ways we need to be thinking about this system.

So, the unhoused folks that I talked to, some of them, I want to be clear, thought this was the—you know, the best thing since sliced bread. Were very clear to say like, “I got housed through this system. It’s the best. It’s a gift from God. It’s the best Christmas present I ever got, absolutely.” And they have been able to match about 9,000 people with some kind of resource through this system. That doesn’t necessarily mean housing, that just means any kind of resource. It could be like a little help avoiding an eviction, or moving costs, or finding a new rental. But they have as of the writing of the book surveyed thirty-nine thousand people with the VI-SPDAT.

So what I thought was a really important question was talking to the folks who have been surveyed but haven’t gotten resources about their experience with the system. And what they told me is that they felt like they were being asked to potentially incriminate themselves in exchange for a slightly higher lottery number for housing. And why they believed that is because the VI-SPDAT actually asks some really intense and borderline invasive questions.

For example it asks, “Are you currently trading sex for drugs? Does someone think you owe them money? Have you thought about harming yourself or someone else? Are there open warrants out for you? Are you having unprotected sex? Where can you be found at different times of the day?” And, “Can we take your picture?”

And though folks fill out a really complete, informed consent form that lasts for seven years, many of them didn’t feel like they had truly free, voluntary consent in interacting with this process. Because coordinated entry has become the front door for pretty much all housing resources in Los Angeles County. So particularly those folks who had taken the survey multiple times and never received any resources, they were beginning to view the system with some suspicion.

And it’s actually not a terrible analysis of the system. So, though you sign this really sort of intense informed consent that lasts a really long time, if you have questions about how your data is being shared, you actually have to go through another step and request that information be sent to you…? (Right, unhoused.) …request that information about where your data goes be sent to you. If you do request that information you get a list of 161 agencies who share this information, who share this data across their system.

And one of them, because of the federal data standards, is the Los Angeles Police Department. So, under current federal data standards, information that’s stored in an HMIS can be accessed by law enforcement with no warrant at all, no oversight process, no written record. Just a line officer can walk into a social service office and ask for information about unhoused people. They can’t get anything they want out of the system, and social service workers can say no (this is really important to know), but they are allowed to get it and there’s no oversight process for that.

So what I want to do is talk about two things and I’m gonna wrap up in about three minutes and then we’re gonna have a larger conversation. Because I also want to point towards where the work has gone since the writing of the book. But I think one thing that’s really important to think through is…you know, I hear from folks when I do these talks a lot, like there’s a sense that, “Oh Virginia, you wrote the Frankenstein book.” Like you found the scariest systems you could and you wrote this really frightening book because scary stories sell books.

And the reality is that in Indiana it might be true. In Indiana it’s… Though I don’t know what was in Governor Daniels’ heart when he made the decisions he made to create the system, I do know, as one of the sources said, that if they had built a system on purpose to deny people access to public assistance it probably wouldn’t have worked any better. So we might be able to put a black hat on that system. But in Los Angeles and in Allegheny County? All of the designers and the policy makers and the administrators I talked to were very smart, very well-intentioned people who cared deeply about the folks their agency served.

And I actually think that sets up a better set of questions. So I didn’t write about the worst cases out there. In fact if I wanted to write a worst-case book it would’ve been a lot scarier than the one that I wrote. Because the systems in Allegheny County and in Los Angeles, actually the designers are doing just about everything that progressive critics of algorithmic decision-making ask them to do. They’ve been largely (not entirely, but largely) transparent about how the systems work and what’s inside them. They hold these tools in public agencies or at least in public/private partnerships so there is some kind of democratic accountability around them. And both of them actually even engage in some kind of process of participatory design, or like human-centered design of the tools. And that’s really all the things we ever ask for in sort of progressive critiques of algorithmic decisionmaking.

So these are actually some of the best tools we have, not some of the worst. And I think that actually raises some really important questions. Which brings us all the way back to that story I told at the beginning about where the deep social programming of these tools comes from, and how we are often sort of invisibly carrying forward this decision we made 200 years ago that social service is more a moral thermometer than a universal floor.

And so I just want to point out that it’s less important, I think, to talk about the intent of the designers, though of course that’s interesting and important, than it is to talk about impacts on targets. And so that’s one of the sort of big picture things I’d like us to talk a little bit about, about how we can move the conversation away from intent and towards impact.

And finally I want to talk a little bit about solutions. So, I know that when I come and do talks like this, particularly for rooms that are technically sophisticated or policy sophisticated, that often what people want is sort of a five-point plan for building better technology. And I get it. And I’m sorry, and you’re welcome, that I’m gonna make you resist the urge for a simple solution to what is really a very, very complicated problem.

So I believe we need to be doing three kinds of work simultaneously in order to really move the way the systems are working. And the first is narrative or cultural work. And that’s really about changing the story we tell about poverty. There’s a story in the United States that poverty is an aberration. That it’s something that happens only to a tiny minority of probably pathological people. And that’s simply not true. So if you look at Mark Rank’s really extraordinary life cycle research around poverty in the United States, 51% of us will be below the poverty line during our adult lives, between the ages of 20 and 64. And almost two thirds of us, 64% of us, will access means-tested public assistance. So that’s straight welfare, that’s not reduced-price school lunches. That’s not Social Security. That’s not unemployment. That’s straight welfare.

So the story we tell that poverty is an aberration, is a rare thing, is just simply untrue, empirically. Poverty is actually a majority experience in the United States. That doesn’t mean we’re all equally vulnerable to it. That’s simply untrue as well. If you’re a person of color, if you’re born poor, if you’re caring for other people, if you have a physical disability or mental health issues, you’re more likely to be poor and it’s harder to escape once you’re there. But the reality is poverty is a majority experience in the US, not a minority experience.

I believe if we start to shift that narrative, if we start to shift that story, we’ll be able to imagine a different kind of politics that is more about building universal floors under all of us and distributing our shared wealth more evenly and more fairly, and less about deciding whether or not you are desperate enough and deserving enough to receive help. Because many of the conditions I talk about in the book, whether it’s living on the sidewalk for a decade or more or losing a child to the foster care system because you can’t afford prescription medication, in other places in the world people see these as human rights violations. And that we see them here increasingly as systems engineering problems actually says something very deep and troubling about the state of our national soul. And I think we need to get our souls right around that in order to really move the needle on these problems.

And finally, in the meantime technology’s not going to just stop and wait for us to do this incredibly complicated and difficult work. And so my sort of final bit of advice is to designers. And it’s about not confusing designing a tool in neutral with designing it for justice and equity. And sort of to quote Paulo Freire, the radical educator, he says neutral education is education for the status quo. And it’s the same around technologies. Neutral technologies just means technologies designed to protect and promote the status quo. If we want to actually address the very real landscape of inequality in the United States, we have to do it on purpose, from the beginning, every time.

So the metaphor I often use for folks is, you know, think about this tool we’re using as a car. And think about the landscape of inequality we live in as being San Francisco, right. Very bumpy, very hilly, very Valley-y, very full of twists and turns. Now, if you built your car with no gears, you should not then be surprised when it hurtles to the bottom of a hill and smashes to bits at the bottom. You have to build in gears to actually engage with the hills and the turns that exist in your landscape. And we have to do that when we’re building these systems as well. Equity and justice won’t happen by accident. We have to design it into all of our political tools, so that’s both our policies and our technologies, from the beginning, brick by brick and byte by byte.

Thank you so much for your time, for your attention. I’m really looking forward to this conversation. Thank you.


Amar Asher: Thank you so much Virginia. So much to dig into here, and I’m eager to get to questions since we have limited time. I see a first hand.

Virginia Eubanks: Alright.

Audience 1: Hi. You had alluded to the work that you’re doing now after the book. Could you talk more about that?

Eubanks: Yeah, so one of the things—thank you for letting me put up my last beautiful slide. So one of the things that’s been happening a lot since the book came out is that… One is that I’ve realized that books are a moment in time and not a final answer on anything. And that my own thinking in some ways has shifted since the book was published. And one of the ways my thinking has shifted was around who I think the audience for the book is.

So originally I really saw two audiences. One was folks who have experienced these systems as targets. Because I think it’s really important for those of us who are engaged in these systems to have confirmation of our stories. Because the way that stigma and poverty works in the United States makes us all feel like we’re the only person this has ever happened to. So sharing those stories is a really important part of that larger narrative work of telling a different story about poverty.

And then I also thought the book’s audience was mostly designers, and data scientists, and economists, and the folks who are building these models and these tools. And that’s true; I do think that I’ve been able to engage in some really good conversations with folks who design these systems.

But the audience that I didn’t think of when I was writing the book explicitly is folks who are on the ground in organizations who are seeing these tools roll out and who are actually often asked to consult about them by state agencies or local agencies, and who I’m now increasingly getting a lot of phone calls from just because they’ve seen the book or read the book. So, for example, in New York City the Bronx Defenders called me and said, “Hey, you know, the Administration for Children’s Services in New York City is moving towards predictive analytics in child welfare. They want us to consult on the tool. We don’t even know how to frame the questions. Can you help?”

And so one of the things that’s happened since the book came out is that I think we’ve opened up this really interesting set of questions about like, how do organizations and advocates and, you know, neighborhoods frame questions so that they sort of claim their space as experts at the table in this decisionmaking? Because I think too often these are exactly the people who aren’t in the room when we make these decisions. And if my book is any indication, we then frame the problems in ways that are not in the long run going to help us create more just, more fair systems.

So what’s come out of that is a set of questions that we’ve started to think about asking. And you know, the first one, sort of Step 0 for me, is those things that I talked about earlier. So transparency, accountability, and participatory decisionmaking. Or participatory design. So for me that’s like bargain basement democracy? That’s like Floor 0. That’s like subbasement democracy. And everything should always be built on that foundation. But we need to be asking really different kinds of questions after that—and we’re not quite there yet in this space.

One I think—I’ll just share one or two. One that I think is really important is, is the use of analytics accompanied by increased resources, or is it being deployed as a response to decreasing resources? Because if it is being deployed as a response to decreasing resources you can be pretty sure it’s going to act as a barrier and not as a facilitator of services.

And that was certainly true across the cases I looked at, but the best example of this would be… So Georgia State University in 2012 moved to predictive analytics in their advising. They have, like many underresourced public universities that serve first-generation college students, they’ve had real issues retaining students. So they moved to predictive analytics in 2012 and they’ve been written about widely as this sort of huge success in using predictive analytics to do better advising to keep college students in school. Their retention rate went up something like 30%.

But the part of the story that gets buried over and over again every time that it’s written about is that at the same time they moved to predictive analytics, they went from doing 1,000 advising appointments a year to doing 52,000 advising appointments a year. They hired forty-two new full-time advisors. And that always ends up in paragraph 17 of these stories. So it’s like, “Predictive analytics wins! And [muffles voice with her hands] also huge amounts of resources.”

And so it feels to me like that story is actually the story of “adequate resources solve real problems,” not “predictive analytics wins.” I’m sure the predictive analytics helped them, like, figure out where to send the massive wave of new resources. But I think it is…misleading to talk about those two things as separate from one another. So that’s a question you should ask. Like, what’s the resource situation when you’re moving to analytics?

Another is, really, do we have a right as a community to stop one of these tools, or from the very beginning to say no? So the ACLU in Washington has made some real inroads, specifically around police surveillance technology, in having sort of a community accountability board: the police department has to run any use of new surveillance technologies through this community group, in order to get information about it, before they start on deploying it. And I think one of the great questions they’re asking is not just can we stop it but can we say no from the beginning? And can we say no for reasons that are non-technical, like if this doesn’t match our values and we don’t want it? Like, are there ways that we can say no?

Or is there remedy, right? I think we’re just getting to this part in the conversation, which is if one of these tools harms you or harms your family, is there a way for you to get redress? That’s also a really important question, I think.

So that’s sort of where the work has been going, in collaboration with these organizations. It’s been thinking about what kind of questions do we want to ask in order to exert some control and power, and to bring the real, full breadth of expertise into the room when we’re making these kinds of decisions. Thank you for that question.

Audience 2: Hi Virginia. Back to the intent and impact and also to the soul-searching comment. So what do we do about the groups whose intentions are to keep people off and who for them, their justification’s that it’s… They did the soul-search and for them the justification is this is better for society and people shouldn’t be on benefits, etc. I think there are folks in this room that have had that argument as well. So what do we… So do we just, like, not work with those groups or what do we do with groups like that?

Eubanks: Yeah, so I think it’s a really crucial question for this political moment, right. So if you look at the 2019 Trump administration budget, it identifies… I may not have this figure exactly right. But one of the things that budget promises is to save $188 billion over the next ten years by bringing these kinds of techniques to middle-class entitlement programs. To disability, to unemployment, to Social Security. And so one of the origin points for this book that I often share is a woman on public assistance I was working with in 2000, who she and I were talking about our electronic benefits transfer cards—it’s a long story, I won’t go into the whole story there. But one of the things that she said was, “Oh, Virginia. You all should pay attention to what’s happening to us,” like, folks on public assistance, “because they’re coming for you next.”

And I think both that was very generous of her, to care about the fact that as canaries in the coalmine they have some responsibility to communicate to folks who are outside these systems. The other thing I think is really important is she said that in 2000. She said that almost twenty years ago. And I think it’s another reason to be always starting this work from the folks who are most directly affected. Because we’re just going to learn more about these systems, and we’re going to be working in coalition with folks who are really invested in creating smart solutions when we do that.

So how to deal with the political moment that we’re having right now, around…you know, just to be honest we are in a moment where the country is trying to dismantle the social safety net entirely, right. So work requirements for Medicaid. The state of Mississippi is denying 98.6% of cash welfare applications—a rounding error away from 100%. We’re starting to create ways of tracking people who receive disability help, right. We’re increasingly in the situation where just the basics of the social safety net are really under threat.

I think the possible good news here…? It’s a real good news/bad news situation. But the possible good news here is that the very overreach of these systems, and the very speed and scale of them, really has the potential to touch a lot of people really quickly.

So in Indiana, part of what drove the pushback against that system was that because it was affecting Medicaid it began to affect middle-class folks, like grandparents who are in nursing homes. And that was a sort of moment where public opinion changed really fast. And I think we’re awfully close to that moment right now? But I do really believe we need to be doing this sort of deep work to build the coalition, and to build the connection, and to build an analysis that we’ll have when one of these systems fails in a spectacular way that impacts non-poor people. And that will create a sort of window to start to really rethink our use of these systems and what it means for our democracy and for the health and safety of our people.

I mean, from a moral point of view we should do it earlier than that. Because what happens to anyone happens to us all. But strategically and politically I think that’s going to be a moment that opens up a lot of possibility. Thank you for that.

Audience 3: On one of your slides, you listed a non-discriminatory data set. What is that and where is it?

Eubanks: Wait, which slide? Where do I have a non-discriminatory data set?

Audience 3: It had two curly…one curly up at the top, one curly at the bottom.

Eubanks: Ah!

Audience 3: Like, I want to know where the data set exists that’s not discriminatory.

Eubanks: Yeah. So that’s a fair question. So, the model inspection, that may just be a miscommunication between the lady who worked on my slides and me—Elvia Vasconcelos, by the way, who’s a genius.

So, the idea here is that step one is to inspect the model for specific things. One is if the data set is…if and in what ways the data set is discriminatory. And then looking at whether the outcome variables are actual measures of the thing you’re trying to affect or whether they’re proxies. And the third is seeing if there are patterns of disproportionality among the predictive variables.

Audience 3: [inaudi­ble]

Eubanks: A non-discriminatory data set. So, I have not, myself. I do know that there has been some experimentation with creating basically fake data sets to build machine learning on. I don’t know a ton about how that actually works, though I think it’s interesting. I believe there will probably be a different set of issues. Because you know, if you’re building a fake data set you’re still building a data set based on assumptions that, you know… And where’s it come from, and can your predictions then be valid if it’s based on fake data—right. But I don’t understand enough about how those systems work to say that for sure.

I think your larger point is really true, which is that the data sets that we have, which are produced by, say, gang databases or produced by the child welfare system or produced by public assistance, carry the legacy of the discriminatory data collection that we’ve engaged in in the past. And so it’s very hard to imagine that there would be a non-discriminatory data set, yeah. But it might be a question for the folks who are more on the machine learning side than me about how that might work. It’s a good question. But thanks. Appreciate it.

Audience 4: Thanks for this talk. I’m a data journalist from Germany, and I’m interested in the gears you were talking about. I’d love to hear more about that, because you already said that the algorithm you were talking about is a good example because it’s already transparent, and it’s held in a public/private partnership so you can control in some way how it works. So what else should you add to such a decisionmaking algorithm to make it more safe or more fair?

Eubanks: So I think the thing that’s hard about that question is it’s going to be different in every example. And it requires sort of knowing about how…how things actually happen on the ground around whatever agency you’re interacting with. But I can give you a really good concrete example around Allegheny County. And they have actually done this.

So, originally the Allegheny Family Screening Tool, because thankfully there’s not enough data on actual physical harm to children to predict that actual outcome, they used two proxies for the outcome of actual maltreatment. And one of them was called “call re-referral.” And that just meant that there was a call on a family, it was screened out as not being serious or severe enough to be fully investigated, and then there was a second call about the same family within two years. So call re-referral; that’s one of the ways they defined that harm had actually happened, for the purposes of the model.

Now, the problem with that is that it’s really really common for people to engage in vendetta calling inside the child welfare system. So you have a fight with your neighbor, your neighbor calls CPS on you. Like, you are going through a bad breakup, your partner calls CPS on you. And this is really really really really common. It happens a lot.

And one of the things I asked the designers when we were talking about the system is like, well you know, “If one of your proxies is call re-referral, and vendetta calling has happened, you see how that’s going to be a bad outcome for folks.” Because it basically means if you call two or three times on your neighbors because you’re mad at them having a party, then it bumps up their risk score in CPS. And increases their likelihood of being investigated.

And so one of the equity gears in the Allegheny Family Screening Tool, if you were going to use that proxy, would be a way to deal with vendetta calling, right. And it doesn’t seem impossible to design that. It’d be like okay, if the calls come back to back for two weeks and there’s an investigation and nothing happens, then like maybe that’s a vendetta call. Or if it’s X person or whatever—it doesn’t seem like it would be impossible to build that in, though equally as troubling as the other decisions that are made in that system.

I will say that they dropped that as a proxy since the book came out. I don’t know if there’s a direct relationship between those two things, but they’re no longer using that proxy. So I think that’s a concrete example of the sort of depth of knowledge you need about the domain in order to really build those equity gears in. That’s an important part of the process. Does that help?
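
The back-to-back-calls rule Eubanks describes could be expressed as a simple filter like the one below; every threshold and field name here is an assumption for illustration, not a description of what Allegheny County actually does.

```python
# Hypothetical "equity gear": flag a cluster of calls as a possible vendetta
# pattern if several arrive within a short window and none is substantiated.
# The 14-day window, the two-call minimum, and the field names are assumptions.
from datetime import datetime, timedelta

def possible_vendetta(calls, window=timedelta(days=14), min_calls=2):
    """calls: dicts {"date": datetime, "substantiated": bool}, sorted by date."""
    for i, first in enumerate(calls):
        cluster = [c for c in calls[i:] if c["date"] - first["date"] <= window]
        if len(cluster) >= min_calls and not any(c["substantiated"] for c in cluster):
            return True
    return False

calls = [
    {"date": datetime(2017, 8, 1), "substantiated": False},
    {"date": datetime(2017, 8, 4), "substantiated": False},
]
print(possible_vendetta(calls))  # True: two unsubstantiated calls three days apart
```

Whether flagged calls are then excluded from the re-referral proxy, down-weighted, or reviewed by a person is exactly the kind of domain-informed, political choice the answer above is pointing at.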

Audience 4: [inaudible]

Eubanks: No. I don’t think so. So it’s less about the data and more about how the system itself works, right. So, many of the folks I spoke to about these models were incredibly smart about modeling, incredibly smart about data, but not very smart about the policy domain in which they were working, right. So, people who were very well-intentioned and trying to do the best they could would make assumptions about how things worked inside the system without really knowing. Like for example, if you know anything about the child protective system, you know not to use multiple calls as a proxy for anything, because that’s like, it’s stan— It’s like, I don’t know how you could talk to even two families who have gone through this process and not know about vendetta calling. So that’s surprising to me that they didn’t have a way of dealing with it. And so those are the kinds of equity gears we need.

And the long-run answer, really, is that building these systems well is incredibly hard and incredibly resource-intensive. And building them poorly is only cheaper and faster at first. And so I think we have a tendency to think about these tools as sort of naturally creating these efficiencies, because the speed of the technology is such that it creates the appearance? of faster and easier. But in fact you really have to know a lot about how these systems work in order to build the tools for them, and to interrupt the patterns of inequity that we’re already seeing.

Audience 4: [inaudible]

Eubanks: Yeah. I think that’s a good way to put it. Yeah. 

Audience 5: Hi. I wanted to ask about a tension that I think runs through the book and the actual nature of the problem and also some of the questions. Which is where the source of some of these challenges lies. And so in some of the cases it’s about the technology and it’s about the data in particular. So, for example if your target variables are correlated with membership in a sensitive group, you’ve got a problem. Or if you have to try and state a problem that is very complicated very precisely, similarly, you’ve got a problem. So that’s the AFST case.

But in other places it’s really about the sort of social inequality, the context of social inequalities, such as in the LA case, where fundamentally there aren’t enough houses at a certain point.

So clearly it’s both. And your argument is that it’s both, and they intersect in complicated ways. But I want to ask about the ways in which the technology itself does actually matter, and it is like, different. So the sort of two questions are: what are the specific challenges that you think making public decisions using lots of data, possibly machine learning…what’s different about those kinds of challenges? (A.) And then sort of B, which of your solutions or the sort of approaches we should take specifically have to do, in your view, with that dimension of the challenge rather than the broader social context? Does that make sense?

Eubanks: Yeah, it makes perfect sense. Um…and I’m gonna give you one of those like, frustratingly big-picture answers? Because I think the fundamental difference in these systems from the kinds of tools that came before is that we pretend that these are just administrative changes. That we’re not making like, deep-seated political decisions. And it obscures the fact that we’re making really profound political decisions through these systems.

And I think that is the biggest challenge, actually, is the impulse to keep trying to separate the technology and the politics. Because like, that’s why I start with the poorhouse, is to say like you know, our politics have always been built into our tools, and they’re built into our tools today. But they’re built in in ways that…are faster. That scale more quickly. That impact networks of people rather than individuals and families, in ways that can really profoundly impact communities. And also that don’t provide the same kind of space for resistance, right.

So one of the things that’s really interesting about poorhouses is that one of the reasons they did— So, we were supposed to have one in every county in the United States; we only ended up with about a thousand of them—that’s still a lot. But we didn’t get one in every county in the United States. Part of the reason is that they ended up being really expensive. And that’s a lesson we should learn. They thought they were going to be cheaper, too. And it didn’t work out that way.

And the other reason that they didn’t spread across the country is that all of a sudden, people living in this…it’s like a shared space, eating over a shared table, living in dorms, taking care of each other’s kids, caring for each other when you die…like, started to care about each other and started to use poorhouses as a way of resistance, sort of building resistance in poorhouses.

And so one of my real concerns about these systems is that they seem to me profoundly isolating, right. That they reinforce this narrative that poverty’s an aberration. That you’ve done something wrong. And that you should just shut up about it and not sort of push back against the system. So I’m really concerned about the ways it removes established rights from people. Like their rights to fair hearings. I tell a story about that in the book. And I’m really concerned about it removing a public space of gathering, where we can come together, talk about our experiences, and realize we’re not alone.

And I’ll just say, as a welfare rights organizer for many years, we organized in the welfare office all the time. Because people had a lot of time. They were there with their whole family. And they were mad. And so it was a really great place to organize um…until you got thrown out. So I am really concerned about the sort of larger thread of this. Which I think is true around prisons as well. The move from prisons to ankle shackles I think creates some similar issues of increasing isolation—no less punishment but more isolation. Or a different kind of punishment and isolation, to be more clear.

So I think the primary issue is this issue around not seeing these as political decisions. And the solution, I’d just take you back to that earlier stuff, is about telling stories in a different way. And this may be because I’m really invested right now in being a writer, so I’m really invested in storytelling and in learning how to do good storytelling. I think there’s a zillion ways to actually address the story and the politics of poverty in the United States, and some of it’s policy work and some of it’s organizing work and some of it’s storytelling. For me that’s the one that I’m most invested in right now. And so it’s the one that I’m taking on. But there’s plenty of room. There’s a lot of room to do work around economic and racial inequality in the United States. You’ll have good company. You’ll never be bored in that work.

Audience 5: Thank you so much for your presentation, it’s been really fascinating. If I may, I would like to very kindly ask you to revisit a theme that I heard around also in other questions, the theme of neutrality. But this time with a focus on technology itself. The system itself, not the designers of the system. Because we heard earlier the question, or the notion, of a data set being discriminatory. But then that entails that the data set is unfair. So by having this narrative it means that we’re kind of insinuating that there is a certain normativity to these systems themselves, whereas… I think there was an earlier event, I think a week ago, on public interest technology, and a lot of speakers had the shared opinion that technology in itself cannot be good or evil. But it’s just a tool and then it depends on the intentions with which it’s going to be applied.

I think that also the example that you were mentioning, when you have a system that is designed to take into account some factors that will definitely create a biased outcome, then that’s also a poorly-designed framework but not necessarily a system in itself.

So I was wondering how you see the paradigm under the third part, where you were mentioning the idea of having good technology. How does that work in practice?

Eubanks: Yeah, so there’s a couple of things I want to address. But keep me on track if I don’t get right back to the “how’s that look in practice” piece, because I hear that piece.

Okay, so I think it’s really important to address this like…“tools aren’t for—you know, tools are neutral” idea. So, part of the way that I make my living is as a brick mason. And I specialize in historic brick repair. And I’m very much an amateur. But I’m a talented amateur. And I always find it really funny when people say that tools are neutral, because it feels like you don’t actually spend a lot of— Not you personally, but folks who say that don’t spend a lot of time with tools, right. Because I am like I said an amateur at masonry but I have six different trowels. Because you can’t use a quarter-inch repointing trowel to do what a carrying trowel does, right? So the trowels are big and flat and you use them to move material. A quarter-inch repointing trowel shoves mortar into quarter-inch cracks. I can’t even use my quarter-inch repointing trowel for a three-eighths-inch gap. Like I actually need another tool for that.

So I think the lesson here is the tools are never neutral. Tools are designed over time for specific purposes. And yes, you can use a hammer to paint a barn…? But you’re going to do a terrible job. Like a really bad job.

So, I think it’s really really important to address that idea that tools are neutral or blank, because they’re just not. I’ve never seen a blank tool in my life. I’ve never seen a tool that’s not designed for a specific purpose. That doesn’t mean you can’t use it against its purpose, but it’s hard to. They’re valenced. They’re directed in certain ways. They’re not totally determined but they’re directed.

And so I think this idea that the tool doesn’t matter, it’s the intentions that matter, is just false. I don’t think that’s true at all. I think the intentions are built into the tools, from the beginning, across time, over their development. So that’s how you get a tuckpointing trowel and a triangle trowel and why they’re different.

So, what I’d like to see us move from is this idea that neutrality is the same thing as fairness, to the idea that justice means choosing certain values over others. So the values that we’re currently designing with, like, invisibly, are efficiency, cost savings, and sometimes anti-fraud. And all of those things should actually be part of our political system. I’m not saying like, throw efficiency out. But I do think there are other values that we need to design from that we’re not acknowledging in as direct a way. So fairness, dignity, self-determination, equity. And we have to do that on purpose, in the same way we’re designing for efficiency and cost savings on purpose.

And sometimes those values will be directly in conflict with one another. And then we have to have political ways to make decisions over which values we care more about. And I think efficiency’s important, but I think democracy is more important, right. I think cost savings is important, but I think people not dying from starvation in the United States is more important. I think fraud is important, but I think it’s actually more important in the way that people escape paying taxes by moving their money offshore than it is in the welfare system, where it’s like literally pennies and less than 5% of the system.

So we have to start from a different set of values if we’re going to get to systems that work better, based on the world we actually live in. So that I think is the best answer I can give to that, yeah. Thanks.

Audience 6: [Beginning of question is inaudible] …I think you already gave us an example of this. Things that you think should— Tasks or sub-tasks that you think just should not be automated. So that’s a way of getting at this problem of automation and the problem of [indistinct] in this context.

And then zooming out a little bit from that, there are now all these initiatives. [Seemingly some examples here, but indistinct; mention of “ethics in AI.”] So if you could recommend what you think should be in a curriculum for those sorts of things, I’d be very interested to hear [?] your recommendations.

Eubanks: So there’s two different questions. One question is what should never be automated? And that’s a super good question that I’ve never gotten before. In ten months I can’t believe no one’s asked me that. So I have to like, ponder that for a second.

And then the second thing is what should these new folks be looking at?

Audience 6: [inaudible]

Eubanks: [laughs] Oh my god, what a great question. You know, I think that we have so many people who are really smart about their domain, working in the budding world of AI and ethics. I think the big piece that’s missing is talking to people outside your domain. I really feel like this conversation is incredibly autopoietic, right, that we sort of turn back in on ourselves…in a way that’s not gonna serve us or the expressed intent of increasing justice and fairness.

So I really think actually most of the work has to be methodological, has to be like how do we work with directly-impacted communities in ways that we can actually hear their questions and concerns, and not just be coming to them after the fact and we’re like, “We’re going to do predictive analytics in child welfare. What do you think? We have ten days for public comment: go.” So it has to really be built in from the very beginning. So maybe Paulo Freire and other people are good for helping people get to a place where they recognize the sort of extraordinary expertise of folks outside their professional lives? Maybe that’s a place to start. But I really feel like the place to start is less in theory and less in framing and more in method, more in how do we work with other people in the world. That feels really important to me.

And in terms of the systems that should never be automated… I don’t know, what do you guys think? Seriously, what do you think? Do you guys think there’s anything that should not be automated?

[inaudible]

Yeah! That’s a good one. Yeah, absolutely. [crosstalk] I’d buy that.

Audience Member: The domain of social services.

[indistinct]

Eubanks: In general. Yeah, I don’t talk at all about military stuff. I just heard…oh, what’s her name, Lucy Suchman is doing some of that work, talking about automated military technology—

Audience 7: Could I just come in on that, because you said at one point when something got automated that used to be done by a human being, then a certain check on a type of bias was gone. A buffer was gone, you said at one point.

Eubanks: Yeah.

Audience 7: So is that a sort of systemic feature, where you could say well, here’s a type of situation where if you interact with a human being then at least they could—of course, they could do the wrong thing, they could do the right thing.

Eubanks: Yeah.

So… And here’s the challenge in that. So discretion… And I know we’re out of time so I want to wrap up quickly. But there’s two key tensions that go through the work that don’t have easy answers.

One is the tension of integration. Integrating systems can lower the barriers for folks on public assistance who have to fill out 900 different applications for five different services and sit all day in an office and it takes forever. That can really be a step forward in making it easier to get the resources that you need and that you are entitled to and deserve. But, under a system that criminalizes poverty? integration also means that you can be tracked through all these different systems and criminalized, imprisoned, taken on for fraud, right. So that’s an irreconcilable tension in some ways.

The other is discretion. So, frontline caseworker discretion can be the worst thing that happens to you in the public service system. It can also be the only thing that gets you out of that system successfully. And so it is also I think an irreconcilable tension, in that… The reality is, part of the intention of these systems…part of the built-in politics of these systems is the idea that fairness is applying the rules in the same way every time. And in unequal systems, applying the same rules in the same way every time doesn’t actually produce equality. It produces more inequality. And so at this point, I’m willing to bet on the human decisionmaker having discretion that can be interrupted and be pushed back on in ways these systems can’t. But people of good faith disagree with me on that. And I can accept that. I think that’s one of the central tensions of this work.

But the way to think about it, I think, is that I have a smart political scientist friend named Joe Soss, and he says discretion is like energy. It’s never created or destroyed; it’s only moved. So when we say we’re removing discretion from these systems, what we’re actually doing is moving it from one group of people to another group of people. So in Allegheny County we’re moving it from the intake call center workers and giving it to the economists and the data scientists who built that model. And that’s I think a better kind of question to think about: who do we think is close enough to the problem to understand the problem? To have the kind of knowledge they need to make good decisions? And I’d say in that case it’s the intake call center workers, who’re the most diverse part of the social service workforce in that agency. They’re the most working-class, they’re the most female, they’re the closest to the situations on the ground. And I trust them more to make those kinds of decisions.

But yeah, those are two really important tensions, and I think they’re really hard and will continue to be really hard. Thank you so much for the question.

Asher: I know there’s so much to still talk about, but please join me in thanking Virginia.

Further Reference

Event page