Amar Asher: So, welcome everybody to the Berkman Klein Tuesday Luncheon Series. So excited to have such an oversubscribed room for such an important topic and important book. We are thrilled to— I should mention, I’m Amar Asher. I’m the Assistant Research Director here at the Berkman Klein Center. We are thrilled to have Virginia Eubanks, who is the author of this phenomenal book Automating Inequality, here at the Berkman Klein Center to talk about some of the most salient issues of the day related to emerging technologies, AI and ethics, and more generally just how many of these issues are playing out across society, and how high-tech tools are affecting and impacting the poor. And it has so much relevance to work that’s going on here at the Berkman Klein Center. In particular over the past two years we’ve hosted a series of conversations around the public interest and emerging technologies under our Ethics and Governance of Artificial Intelligence Initiative. It’s got a number of areas that it’s doing research in with the MIT Media Lab, and so if you’re interested in that effort and that series of conversations I encourage you to check out the Berkman Klein web site.
And let me just take a moment to mention a couple of different housekeeping things. One is that if you are new to the Berkman Klein Luncheon Series, these events are webcast for posterity, and also because this room was oversubscribed there’s lots of folks watching on the webcast, so please just be aware of that.
Second is that if you are interested in this book and actually reading it, we have them for sale via the Harvard Coop over there for $25. Virginia has graciously offered to also sign copies of the book after the talk so please do make a purchase and stick around afterwards so that she will sign them.
And third please be sure to ask questions at the end of this talk. Virginia will speak for about twenty-five to thirty minutes but we really want this to be a discussion. There’s a lot of rich material here and lots of many salient questions that we’ll be discussing. So please do ask questions, and you can do that in person here or over on Twitter. We’ll keep an eye on that for folks that are not within the room.
So let me introduce Virginia. Virginia Eubanks is an Associate Professor of Political Science at the University at Albany, SUNY. She’s the author of this tremendous book that you’ll hear about in a minute. She has also authored Digital Dead End: Fighting for Social Justice in the Information Age. She’s a co-editor, with Alethia Jones, of Ain’t Gonna Let Nobody Turn Me Around: Forty Years of Movement Building with Barbara Smith. And her writing about technology and social justice has appeared in The American Prospect, The Nation, Harper’s and Wired. She has worked for two decades in community technology and economic justice movements, and she’s a founding member of the Our Data Bodies project and a fellow at New America. So thrilled to have you here. Welcome, Virginia.
Virginia Eubanks: Hi. How’s lunch? I have, I put some aside because it looked like you people were gonna eat all the food before I got a chance to eat. So, I’m really excited to be here. Thank you so much for the invitation, and to all the folks who worked so hard to get me here on time and in one piece to have this conversation with you.
My goal today is to keep it a little bit on the short side because we have a really great, smart room here and I’d really love to have a sort of broader conversation, particularly around solutions to the kinds of problems that I describe in the book.
So one thing that I think is a bit different about Automating Inequality from some of the other really smart and fine work that’s happening around sort of algorithmic governance, or AI, or machine learning, or automated decisionmaking or whatever name you want to call it by… But there’s sort of two things that are important to me about how Automating Inequality’s a bit different.
So one is that I began all of my reporting from the point of view of folks in communities who feel like they’re targets of these systems rather than starting with administrators and designers. I did of course also talk to administrators and designers and data scientists and economists. But I started in each case with families and communities who feel like they are being targeted by these systems, and that really shaped the way I was able to tell the stories that I tell in the book.
I usually, when I have a little bit more time, I usually spend a lot of time introducing the families who spoke to me when I was reporting and getting their voices in the room. I’m going to do a little bit less of that today. So I just want to do two things. One is say what an incredible, generous act it was for people to share their experience with me. So these are folks who were in often really trying conditions. So they’re currently on public assistance or have recently gotten kicked off public assistance. They’re unhoused or homeless. Or their family is involved in a child welfare investigation. So anyone who under those conditions agrees to go on the record with their real name, their real location, the real details of their life, is doing an incredibly generous and courageous thing. So I just want to make sure I start by acknowledging that the book wouldn’t exist without people who took that kind of risk and made themselves really vulnerable. So particularly since I’m not going to spend a lot of time putting their voices in the room I just want to start by acknowledging that incredible contribution to the work.
And the other thing that’s a bit different about the way I tell this story is that I start the story in 1819 rather than 1980. And that allows me to do some very specific work, which is to talk about what I think of as the deep social programming of the tools that we’re now using in public services across the United States.
So, while I think that the new technologies we’re seeing absolutely have the potential to lower barriers, to integrate services, and to really act to make social service systems more efficient and more navigable, what I found in my seven years of reporting on the book is that what we’re actually doing is creating what I call a digital poorhouse, which is an invisible institution that profiles, polices, and punishes the poor when they come into contact with public services.
And in the book I talk about three different cases. I talk about an attempt to automate and privatize all of the eligibility processes for the welfare system in the state of Indiana. I talk about an electronic registry of the unhoused in Los Angeles County, what the designers call the match.com of homeless services, the Coordinated Entry System. And I talk about a statistical model that’s supposed to be able to predict which children might be victims of abuse or neglect in the future, in Allegheny County, which is the county where Pittsburgh is in Pennsylvania.
But I start the book with a chapter about sort of the history of poverty policy and what role sort of the new waves of technology have played in that process and in those systems. And I start—and this is also always when I thank my editor, because the book originally started with a ninety-page history chapter that started in like 1600 rather than in 1819, and my editor Elisabeth Dyssegaard was like, “Virginia, no. No.” Like, “You cannot do that to people.”
And I was like, “Oh, but all the deep historical detail is so interesting!”
And she was like, “To you, honey. To you.”
And so feel free to ask me about the historical rabbit holes I was not allowed to explore in this book. I have so much interesting information. But for our purposes today and for the purposes of the book we’ll start in 1819.
So the reason I start in 1819 is this is the moment where there’s a really big economic dislocation in the United States. There’s a depression. During the depression, poor and working people began to organize for their needs and for their survival. For their rights. And it makes economic elites really nervous. So economic elites do what economic elites always do when they’re nervous, which is to commission a bunch of studies.
And right, maybe I shouldn’t say that at Harvard. Hi.
So, they commission a bunch of studies and they frame the question as like, what’s the real problem we’re facing right now? Is it poverty? Is it a lack of access to resources? Or is it what they called at the time “pauperism,” which was dependence on public benefits.
And does anyone want to guess what the report said? Pauperism, that’s right. So the reports came back. They said the problem is not poverty, the problem is pauperism, the problem is dependence on public benefits, and we need to create a system that raises barriers just high enough that it discourages those who should not be receiving benefits, but low enough that people who really need them will get them.
And the system they invented in the 1820s was a system of brick and mortar county poorhouses. These were physical institutions for incarcerating poor and working people who requested public assistance. And what it meant— So it’s 1820 so not everybody had this right, but basically what it meant was you had to give up your right to vote and to hold office as part of the entry process to the poorhouse. You weren’t allowed to marry. And often you had to give up your children because it was understood at the time that sort of interaction with wealthier families could redeem poor children. And by interaction they generally meant sort of leasing children for agricultural or domestic labor under apprenticeship programs.
And something like a third of people who entered the poorhouse— Some poorhouses had death rates as high as 30% annually. So it’s like a third of folks who entered them every year died.
And the reason I start the story of this book with the actual physical brick and mortar poorhouse is because I believe this is the moment where we decided as a political community that the front line of the public service system should be primarily focused on moral diagnosis. On deciding whether or not you were deserving enough to receive aid rather than building universal floors under everyone. And that’s part of the sort of deep social programming that we see at work within these systems that continues to produce bad outcomes for poor families, even when the intentions of the designers, the administrators, and other folks involved in the process of creating the systems are really good. Even when people are smart and their intentions are good.
So let me talk just very briefly about the three cases and about sort of three big ideas that I see sort of cross-cutting the three cases.
So the first case I want to talk about is Indiana. And what you need to know about Indiana is in 2006 then-Governor Mitch Daniels signed what was eventually a $1.34 billion contract with a consortium of high-tech companies including IBM and ACS to automate all the eligibility processes for the welfare program. So that was cash assistance or TANF, food stamps (it was still called “food stamps” at the time), and Medicaid. And basically how the system worked is that they moved 1,500 public case workers from their local county offices to these regionalized and privatized call centers. There were several of them across the state. And they encouraged folks who were applying for public assistance to do so over online forms on the Internet.
So from the point of view of case workers, what this felt like, what this looked like, was moving from a place where you were responsible for a docket of families, for a caseload that was made up of families, to moving to a system where you were responding to a list of tasks as it dropped into a computerized queue in your workflow management systems in these regional call centers.
It also meant that you never spoke to the same person twice, right. So if you got a call, once you hung up the next call to come through would come from anywhere in the state and it would just be the next call in the queue.
From the point of view of applicants and recipients of public assistance in Indiana, it felt like no one was accountable for mistakes, because you never spoke to the same person twice and they didn’t understand your context or the sort of process of your case.
So, it was really common for people to receive what were known as failure to cooperate in establishing eligibility notices, or failure to cooperate notices. And basically what failure to cooperate notices meant is a mistake had been made somewhere in the process, right. Somebody had forgotten to sign page seventeen of a thirty-four-page application. Or the document processing center had scanned in a piece of documentation upside down or dropped it behind a desk. Or a new caseworker at the regional call center maybe misapplied policy. But no matter whose mistake it was, the only notice you would get is a notice that said you’d failed to cooperate in establishing eligibility for the program so you’re denied.
What that meant is the system was so brittle that it confused like, honest mistakes, with possible fraud. And that was a really profound shift for the people who rely on public assistance in Indiana. It also meant that the burden of figuring out what had gone wrong and solving it fell almost entirely on the shoulders of poor and working families in Indiana, some of the most vulnerable families in Indiana.
The thing that I want to point out about the Indiana case is that it assumes and is aligned with a politics of austerity that I think is really worth talking about in the context of talking about these systems. So the idea here, the narrative is, we don’t have enough resources; we have to make some really difficult decisions, including making systems more efficient and increasingly identifying fraud; because our resources are so limited and our problems are so great.
So one of the things that all of the designers and administrators told me across these three cases was that these systems are you know, perhaps regrettable? but necessary systems for doing a kind of digital triage for deciding on which families are most vulnerable to the worst outcomes of poverty, and who can wait.
And one of the things that I think is really important to point out is that this idea that triage is necessary and inevitable is in fact a political choice. There are of course— We live in a world of abundance, and there is enough for everyone. This idea that there will never be enough resources actually creates a system that reproduces austerity. And so in the case of Indiana, for example, it was originally a $1.34 billion contract. It resulted in a million denials of applications over the first three years of the experiment, a 54% increase over the three years before the experiment. This caused huge suffering for people on the ground, for poor and working families but also for caseworkers, and I’m happy to talk about that more a little bit later.
One of the really sort of interesting moments in the Indiana case, though, is that the community members and just sort of normal Hoosiers (that’s what you call people from Indiana, for people who don’t know), just normal Hoosiers became frustrated and annoyed enough with the system that they really organized and fought back against it. They pushed back against the state. And they were so successful that the Governor actually canceled the contract with IBM three years into the experiment.
And then IBM turned around and sued the state for breach of contract. And in the first round of the court case IBM actually won. So they were allowed to keep the half-billion dollars they had already collected. And then they were awarded an extra $50 million in penalties because the state had breached the contract. That case stayed in the courts for about eight years, and in the end it did turn around and the Indiana Supreme Court found that IBM was in breach and gave $150 million back to the state.
But the reality is that this assumption that we had to trim already very lean rolls produced a system that denied so many people rights that it had to be canceled. And the cancellation actually cost the state a lot of money, both in the money they had already spent, and in the eight years of legal battles around whose fault it was that a million applications were denied.
So the irony here is that assuming austerity tends to reproduce austerity, right. It’s actually very expensive to profile, police, and punish poor and working families. And we’ll talk a bit more about that in a minute.
So, I’m going to talk now about the Allegheny county algorithm. And I hope we’ll have time to talk about Los Angeles. But I’ll do bits and pieces of this and we can reengage in conversation if you feel like there’s anything I’ve missed.
So, the Allegheny Family Screening Tool is a statistical model that’s built on top of a data warehouse that was built in 1999 in Allegheny County. So the data warehouse receives regular data extracts from twenty-nine different agencies across the county. As of the writing of the book it held a billion records, more than 800 records for every individual living in Allegheny County.
But it doesn’t actually collect information equally on all people. So the agencies that it’s receiving data extracts from are primarily agencies that interact with poor and working families. So it’s juvenile and adult probation, the state Office of Income Maintenance or Pennsylvania’s welfare office, the county office of mental health services, the county office of addiction, drug and alcohol recovery, and now, I think, twenty public schools.
The limitations of that data set then have become a really important part of the tool that’s built on top of the data warehouse, which is the Allegheny Family Screening Tool. And I’m not gonna go into great technical depth on how that system works but I’m happy to talk about that a little bit later and get into the technical weeds, because I find them really interesting. But a couple of things that I think are really important to understand.
One is that it is not actually machine learning or artificial intelligence, though the county has recently moved to using some machine learning in their system. When I was reporting on the system it was a simple statistical regression. For the quant nerds in the room, it’s a stepwise probit regression, so a pretty standard regression that they ran against all the data that’s available in the data warehouse to pull out variables they believe correlate with future abuse or neglect. So, using historical validation data, not really training data because it’s not machine learning, but using their historical validation data.
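To make the general shape of that concrete, here is a minimal sketch of the kind of model she describes: a probit regression fit to historical administrative records, with a simple forward-stepwise variable selection, whose predicted probability is then binned into a screening-style score. The data, column names, selection rule, and 1–20 scale here are hypothetical illustrations only, not the actual Allegheny Family Screening Tool.

```python
# Minimal sketch of a stepwise probit risk model over administrative data.
# All data, variable names, and thresholds are hypothetical illustrations.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "prior_referrals":      rng.poisson(1.0, n),
    "months_on_assistance": rng.integers(0, 60, n),
    "parent_mh_services":   rng.integers(0, 2, n),
    "juvenile_probation":   rng.integers(0, 2, n),
})
# Hypothetical binary outcome proxy (e.g. a later re-referral), generated
# here only so the example runs end to end.
latent = -1.5 + 0.4 * df["prior_referrals"] + 0.01 * df["months_on_assistance"]
df["outcome"] = (rng.random(n) < norm.cdf(latent)).astype(int)

def forward_stepwise_probit(data, outcome, candidates, p_enter=0.05):
    """Greedy forward selection: repeatedly add the candidate with the
    smallest p-value, stopping when nothing clears the entry threshold."""
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(data[selected + [var]])
            pvals[var] = sm.Probit(data[outcome], X).fit(disp=0).pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= p_enter:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

predictors = forward_stepwise_probit(
    df, "outcome", [c for c in df.columns if c != "outcome"])
model = sm.Probit(df["outcome"], sm.add_constant(df[predictors])).fit(disp=0)

# Score a new referral: the predicted probability binned onto a 1-20 scale,
# mimicking the kind of screening score a call worker might see.
new_case = pd.DataFrame([{"prior_referrals": 3, "months_on_assistance": 24,
                          "parent_mh_services": 1, "juvenile_probation": 0}])
p = model.predict(sm.add_constant(new_case[predictors], has_constant="add"))[0]
print(f"screening score: {max(1, int(np.ceil(p * 20)))} / 20")
```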
The reality of experiencing this tool, though, from parents’ point of view, they feel very much like because of the limitations around the data set, because the data only collects information or primarily collects information on poor and working class families, they feel like they are part of a system of poverty profiling where because they are being… Because their data is in the system more than professional middle-class, or middle-class families, they are identified for possible abuse or neglect more, risk rated more highly. Which means they’re investigated more often. Which means they’re indicated more often. Which means that more of their data goes in the system, sort of creating this feedback loop that’s very similar to the kind of feedback loop that people talk about around predictive policing.
So the families that I spoke to very often said they felt like the system confused parenting while poor with poor parenting. So it’s a false positives problem, right. Seeing harm where no harm may actually exist.
Now, I also spent a lot of time with front line case workers in this system, particularly with intake call center workers. And intake call center workers are the folks who receive reports of abuse or neglect from the community, either over their hotline or from mandated reporters in the community. And they make a really difficult decision. They make a decision about whether or not they should screen each case in for a full investigation or whether they should screen it out as not rising to the level of abuse or neglect, or as not having high enough risk or low enough safety to the children to rationalize running a full investigation.
And intake call center workers, interestingly, were concerned about the opposite problem but for the same reason. So they were concerned about a false negatives problem. They were concerned about the system not seeing harm where harm might actually exist. So they explained to me that because the system doesn’t really collect information on professional and middle-class families… And you know, professional middle-class families need as much help with their parenting as everyone else. The difference is that they tend to pay for it with private sources. So, if you need help with childcare, you get a nanny or a babysitter, you pay out of pocket. If you need help with addiction recovery or with a mental health issue and you have private insurance, that information’s not going to end up in this data warehouse. Only the folks who go to county mental health services end up in the data warehouse, right.
So the intake call screeners were really concerned that some of the things that are really good indicators for abuse and neglect in professional middle-class families wouldn’t be covered in the data warehouse, so it wouldn’t be represented in the model. So for example, there’s some good evidence that geographic isolation actually is highly correlated with abuse or neglect, but folks who live in the suburbs or in isolated housing won’t show up in the data warehouse because they’re not the folks in Allegheny County who are getting county services. So they won’t end up in the data warehouse. So intake call screeners were also really concerned about the limitations of that data set, but they were concerned about it from the other side.
Also, I want to say another thing that’s important about this system is that you know, many of the administrators I spoke to spoke a lot about efficiency and cost savings as reasons for these tools. But that was only one reason. And another reason that was really important to them was to identify and mitigate bias in front line decisionmaking or in public service decisionmaking. I think it’s really really important to acknowledge that that bias exists. The human bias exists. Institutional bias exists in the system, and has for a really long time. So from the Social Security Act in the 1930s until the 1970s, black and Latino families were largely blocked from receiving public assistance by discriminatory eligibility rules that didn’t fall until they were directly challenged by the National Welfare Rights movement in the late 60s and early 70s. And that’s created all sorts of discretionary excesses in the system that are both human and institutional and really important to address.
It is also true in the child welfare services, although the problem in child welfare services tends not to be exclusion from the system but overinclusion in the system. So in forty-seven states across the United States, African-American children are in foster care at rates that far exceed their actual proportion of the population. It’s a problem called racial disproportionality. And Allegheny County like most counties has a problem with disproportionality. So at the time I was doing my reporting, 38% of children in foster care in Allegheny County were black or biracial, and they only made up 18% of the youth population, so that’s what like, twice…more than twice where they should be given their proportion of the population.
So the designers of this system were really excited to talk to me about the possibility of using the better data that they were gathering to identify where patterns of discriminatory decisionmaking might be entering the child welfare system. Now, the problem with that is that the county’s own research shows that the intake call screening is not actually the point at which discrimination is entering the system. In fact it’s entering much earlier. So it’s entering at the point at which families are referred to the system. So it’s entering at referral not at screening. The community refers black and biracial families, either through mandated reports or through the hotline, at 350% the rate—three and a half times as often—that it refers white families.
Once that case gets in the system, there is a tiny bit of disproportionality that’s added by the intake screening process. So intake screeners screen in 69% of black and biracial families, and only 65% of white families. But compare those differences: that’s a four-percentage-point difference at screening versus the 350% difference at referral.
And I think one of the really interesting questions this begs is, is the earlier problem a data-amenable problem? That referral bias, is that something we can attack or address or confront with automated systems? And my feeling is that that’s really a cultural issue not a data issue, although of course the two are deeply related. It’s an issue about who we as a country…what we see a good family looking like. And in the United States we see a good family as looking white and wealthy. And that has a profound impact on the kinds of impacts that the system can have moving forward.
One of my real concerns about this system is that we’re actually removing discretion from front line call center workers, at the point at which they may be pushing back against the discriminatory effects of referral bias. So we’re actually removing a possible stop to the amplification of bias in that system.
And I just want to mention that one of the things that these systems are really good at is identifying bias when it is individual and the result of irrational thinking. They are less good at identifying and addressing bias that is structural, systemic, and rational, right. And that’s something I want to talk a bit more about at the end. There’s also some proxies that we’re not gonna talk about.
Okay, last system that I want to talk about is the Los Angeles system, which is called the Coordinated Entry System. Referred to by its designers as the match.com of homeless services. What coordinated entry is supposed to do is basically rate unhoused people on a scale of vulnerability and then match them with the most appropriate available resources based on their vulnerability.
This isn’t unusual, at all. In fact Los Angeles county is just one of the many places that’s using coordinated entry. It’s become really standard across the country since I started the research. But one of the reasons to look at Los Angeles is because the scale of the housing crisis there is just so extraordinary. So as of the last point in time count, there are 58,000 unhoused people in Los Angeles county. I live in a small city in upstate New York called Troy. There’s just fewer than 50,000 people in Troy. So my entire city, plus 10,000 people is homeless in Los Angeles county, right, so just for a sense of the scale.
And something like 75% of the people who are unhoused in Los Angeles county are completely unsheltered. So they have no access to emergency shelter, living in tents or in cars, or in encampments. And so this is an absolutely critical humanitarian crisis in the United States.
So it totally makes sense, it completely makes sense to me that folks, particularly front line case workers, want a little help making the incredibly difficult decision of who among the like hundred people they see every week gets access to the two or three resources they have at their disposal, right. It’s an incredibly difficult decision, and I absolutely understand the impulse to try to create a more efficient and more rational and more objective system for matching need to resource.
Now, what I heard from folks who are interacting with the system, though, who are targets of the system, folks in the unhoused community, was a little different. So, as of the writing of the book, they had managed to match… Let me tell you a little bit of how it works first.
So, coordinated entry, there’s basically four pieces. The first piece is a very intensive survey called the VI-SPDAT, which is the Vulnerability Index and Service Prioritization Decision Assistance Tool. (Yes. It’s not my first time saying that out loud.) So there is this very intense survey called the VI-SPDAT that is given to unhoused folks either through street outreach or when they come in to organizations for help. That information gets input into their homeless management information system, which we’re not going to go into depth with. Just think of it as a database. It’s not quite true, but think of it as a database for now.
So that information goes into their HMIS, there’s an algorithm in the homeless management information system that then adds up folks’ vulnerability score, how high they are on the scale of being likely to experience the worst outcomes of being homeless, including emergency room visits, death, mental health crisis, violence, right. Really awful outcomes of being unhoused.
From the other side, there’s all this information about available resources entering the other side of the database. And the two meet in the middle, where there’s supposed to be an algorithm that matches unhoused people based on their vulnerability score with the most appropriate available resource, based on what’s available in the system.
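As a rough illustration of that pipeline, here is a minimal sketch: survey answers are added up into a vulnerability score, and people are then matched, most vulnerable first, to the most intensive available resource they qualify for. The survey items, score tiers, and resources are hypothetical; this is not the real VI-SPDAT scoring rubric or the county’s actual matching logic.

```python
# Minimal sketch of a survey-score-then-match pipeline. All item names,
# weights, tiers, and resources are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    survey: dict   # VI-SPDAT-style yes/no answers
    score: int = 0

def vulnerability_score(survey: dict) -> int:
    """Add one point per risk flag; higher = more vulnerable."""
    return sum(1 for answered_yes in survey.values() if answered_yes)

# Resources entering from "the other side" of the database, keyed by the
# minimum score each tier is reserved for (hypothetical tiers).
resources = [
    {"name": "permanent supportive housing", "slots": 1, "min_score": 8},
    {"name": "rapid re-housing subsidy",     "slots": 2, "min_score": 4},
    {"name": "eviction-prevention grant",    "slots": 3, "min_score": 0},
]

def match(people: list, resources: list) -> dict:
    """Greedy match: most vulnerable people get the most intensive
    available resource they qualify for; everyone else waits."""
    for p in people:
        p.score = vulnerability_score(p.survey)
    assignments = {}
    for p in sorted(people, key=lambda p: p.score, reverse=True):
        for r in sorted(resources, key=lambda r: r["min_score"], reverse=True):
            if r["slots"] > 0 and p.score >= r["min_score"]:
                assignments[p.name] = r["name"]
                r["slots"] -= 1
                break
    return assignments

people = [
    Person("A", {"er_visit_last_6mo": True, "unsheltered": True, "chronic_condition": True,
                 "history_of_violence": True, "trimorbidity": True, "over_60": True,
                 "self_harm_risk": True, "legal_issues": True}),
    Person("B", {"er_visit_last_6mo": True, "unsheltered": True, "chronic_condition": False,
                 "history_of_violence": False, "trimorbidity": False, "over_60": False,
                 "self_harm_risk": True, "legal_issues": True}),
]
print(match(people, resources))
# e.g. {'A': 'permanent supportive housing', 'B': 'rapid re-housing subsidy'}
```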
The reality is…this isn’t even in the book so shh. The reality is that there’s…when I was reporting, at least, there’s no second algorithm. Actually it’s like Mechanical Turk; there’s like a guy in a room who’s matching the two… But it doesn’t actually really matter, overall, for the ways we need to be thinking about this system.
So, the unhoused folks that I talked to, some of them I want to be clear thought this was the—you know, the best thing since sliced bread. Were very clear to say like, “I got housed through this system. It’s the best. It’s a gift from God. It’s the best Christmas present I ever got, absolutely.” And they have been able to match about 9,000 people with some kind of resource through this system. That doesn’t necessarily mean housing, that just means any kind of resource. It could be like a little help avoiding an eviction, or moving costs, or finding a new rental. But they have as of the writing of the book surveyed thirty-nine thousand people with the VI-SPDAT.
So what I thought was a really important question was talking to the folks who have been surveyed but haven’t gotten resources about their experience with the system. And what they told me is that they felt like they were being asked to potentially incriminate themselves in exchange for a slightly higher lottery number for housing. And why they believed that is because the VI-SPDAT actually asks some really intense and borderline invasive questions.
For example it asks “Are you currently trading sex for drugs? Does someone think you owe them money? Have you thought about harming yourself or someone else? Are there open warrants out for you? Are you having unprotected sex? Where can you be found at different times of the day?” And, “Can we take your picture?”
And though folks fill out a really complete, informed consent form that lasts for seven years, many of them didn’t feel like they had truly free, voluntary consent in interacting with this process. Because coordinated entry has become the front door for pretty much all housing resources in Los Angeles County. So they felt like—particularly those folks who had taken the survey multiple times and never received any resources, they were beginning to view the system with some suspicion.
And it’s actually not a terrible analysis of the system. So, though you sign this really sort of intense informed consent that lasts a really long time, if you have questions about how your data is being shared, you actually have to go through another step and request that information be sent to you…? (Right—sent to you; you’re unhoused.) …request that information about where your data goes be sent to you. If you do request that information you get a list of 161 agencies who share this information, who share this data across their system.
And one of them, because of the federal data standards, is the Los Angeles Police Department. So, under current federal data standards, information that’s stored in an HMIS can be accessed by law enforcement with no warrant at all, no oversight process, no written record. Just a line officer can walk into a social service office and ask for information about unhoused people. They can’t just take anything they want out of the system, and social service workers can say no (this is really important to know), but law enforcement is allowed to ask for it and there’s no oversight process for that.
So what I want to do is talk about two things and I’m gonna wrap up in about three minutes and then we’re gonna have a larger conversation. Because I also want to point towards where the work has gone since the writing of the book. But I think one thing that’s really important to think through is…you know, I hear from folks when I do these talks a lot, like there’s a sense that, “Oh Virginia, you wrote the Frankenstein book.” Like you found the scariest systems you could and you wrote this really frightening book because scary stories sell books.
And the reality is that in Indiana it might be true. In Indiana it’s… Though I don’t know what was in Governor Daniels’ heart when he made the decisions he made to create the system, I do know, as one of the sources said, that if they had built a system on purpose to deny people access to public assistance it probably wouldn’t have worked any better. So we might be able to put a black hat on that system. But in Los Angeles and in Allegheny County? all of the designers and the policy makers and the administrators I talked to were very smart, very well-intentioned people who cared deeply about the folks their agency served.
And I actually think that sets up a better set of questions. So I didn’t write about the worst cases out there. In fact if I wanted to write a worst-case book it would’ve been a lot scarier than the one that I wrote. Because the systems in Allegheny County and in Los Angeles, actually the designers are doing just about everything that progressive critics of algorithmic decision-making ask them to do. They’ve been largely—not entirely, but largely transparent about how the systems work and what’s inside them. They hold these tools in public agencies or at least in public/private partnerships so there is some kind of democratic accountability around them. And both of them actually even engage in some kind of process of participatory design, or like human-centered design of the tools. And that’s really all the things we ever ask for in sort of progressive critiques of algorithmic decisionmaking.
So these are actually some of the best tools we have, not some of the worst. And I think that actually raises some really important questions. Which brings us all the way back to that story I told at the beginning about where the deep social programming of these tools comes from, and how we are often sort of invisibly carrying forward this decision we made 200 years ago that social service is more a moral thermometer than a universal floor.
And so I just want to point out that it’s less important I think to talk about the intent of the designers, though of course that’s interesting and important, than it is to talk about impacts on targets. And so that’s one of the sort of big picture things I’d like us to talk a little bit about, about how we can move the conversation away from intent and towards impact.
And finally I want to talk a little bit about solutions. So, I know that when I come and do talks like this, particularly for rooms that are technically sophisticated or policy sophisticated, that often what people want is sort of a five-point plan for building better technology. And I get it. And I’m sorry and you’re welcome that I’m gonna make you resist the urge for a simple solution to what is really a very very complicated problem.
So I believe we need to be doing three kinds of work simultaneously in order to really move the way the systems are working. And the first is narrative or cultural work. And that’s really about changing the story we tell about poverty. There’s a story in the United States that poverty is an aberration. That it’s something that happens only to a tiny minority of probably pathological people. And that’s simply not true. So if you look at Mark Rank’s really extraordinary life cycle research around poverty in the United States, 51% of us will be below the poverty line during our adult lives, between the ages of 20 and 64. And almost two thirds of us, 64% of us, will access means-tested public assistance. So that’s straight welfare, that’s not reduced-price school lunches. That’s not Social Security. That’s not unemployment. That’s straight welfare.
So the story we tell that poverty is an aberration, is a rare thing, is just simply untrue, empirically. Poverty is actually a majority experience in the United States. That doesn’t mean we’re all equally vulnerable to it. That’s simply untrue as well. If you’re a person of color, if you’re born poor, if you’re caring for other people, if you have a physical disability or mental health issues, you’re more likely to be poor and it’s harder to escape once you’re there. But the reality is poverty is a majority experience in the US, not a minority experience.
I believe if we start to shift that narrative, if we start to shift that story, we’ll be able to imagine a different kind of politics that is more about building universal floors under all of us and distributing our shared wealth more evenly and more fairly, and less about deciding whether or not you are desperate enough and deserving enough to receive help. Because many of the conditions I talk about in the book, whether it’s living on the sidewalk for a decade or more or losing a child to the foster care system because you can’t afford prescription medication, in other places in the world people see these as human rights violations. And that we see them here increasingly as systems engineering problems actually says something very deep and troubling about the state of our national soul. And I think we need to get our souls right around that in order to really move the needle on these problems.
And finally, in the meantime technology’s not going to just stop and wait for us to do this incredibly complicated and difficult work. And so my sort of final bit of advice is to designers. And it’s about not confusing designing a tool in neutral with designing it for justice and equity. And sort of to quote Paulo Freire the radical educator, he says neutral education is education for the status quo. And it’s the same around technologies. Neutral technologies just means technologies designed to protect and promote the status quo. If we want to actually address the very real landscape of inequality in the United States, we have to do it on purpose, from the beginning, every time.
So the metaphor I often use for folks is you know, think about this tool we’re using as a car. And think about the landscape of inequality we live in as being San Francisco, right. Very bumpy, very hilly, very Valley‑y, very full of twists and turns. Now, if you built your car with no gears, you should not then be surprised when it hurtles to the bottom of a hill and smashes to bits at the bottom. You have to build in gears to actually engage with the hills and the turns that exist in your landscape. And we have to do that when we’re building these systems as well. Equity and justice won’t happen by accident. We have to design it into all of our political tools, so that’s both our policies and our technologies, from the beginning, brick by brick and byte by byte.
Thank you so much for your time, for your attention. I’m really looking forward to this conversation. Thank you.
Amar Asher: Thank you so much Virginia. So much to dig into here, and I’m eager to get to questions since we have limited time. I see a first hand.
Virginia Eubanks: Alright.
Audience 1: Hi. You had alluded to the work that you’re doing now after the book. Could you talk more about that?
Eubanks: Yeah, so one of the things—thank you for letting me put up my last beautiful slide. So one of the things that’s been happening a lot since the book came out is that… One is that I’ve realized that books are a moment in time and not a final answer on anything. And that my own thinking in some ways has shifted since the book was published. And one of the ways my thinking has shifted was around who I think the audience for the book is.
So originally I really saw two audiences. One was folks who have experienced these systems as targets. Because I think it’s really important for those of us who are engaged in these systems to have confirmation of our stories. Because the way that stigma and poverty works in the United States makes us all feel like we’re the only person this has ever happened to. So sharing those stories is a really important part of that larger narrative work of telling a different story about poverty.
And then I also thought the book’s audience was mostly designers, and data scientists, and economists, and the folks who are building these models and these tools. And that’s true; I do think that I’ve been able to engage in some really good conversations with folks who design these systems.
But the audience that I didn’t think of explicitly when I was writing the book is folks who are on the ground in organizations who are seeing these tools roll out, who are actually often asked to consult about them by state agencies or local agencies, and who I’m now increasingly getting a lot of phone calls from just because they’ve seen the book or read the book. So for example, the Bronx Defenders in New York City called me and said, “The Administration for Children’s Services in New York City is moving towards predictive analytics in child welfare. They want us to consult on the tool. We don’t even know how to frame the questions. Can you help?”
And so one of the things that’s happened since the book came out is that I think we’ve opened up this really interesting set of questions about like, how do organizations and advocates and you know, neighborhoods frame questions so that they sort of claim their space as experts at the table in this decisionmaking? Because I think too often these are exactly the people who aren’t in the room when we make these decisions. And if my book is any indication, we then frame the problems in ways that are not in the long run going to help us create more just, more fair systems.
So what’s come out of that is a set of questions that we’ve started to think about asking. And you know, the first one, sort of Step 0 for me, is those things that I talked about earlier. So transparency, accountability, and participatory decisionmaking. Or participatory design. So for me that’s like bargain basement democracy? That’s like Floor 0. That’s like subbasement democracy. And everything should always be built on that foundation. But we need to be asking really different kinds of questions after that—and we’re not quite there yet in this space.
I’ll just share one or two. One that I think is really important is: is the use of analytics accompanied by increased resources, or is it being deployed as a response to decreasing resources? Because if it is being deployed as a response to decreasing resources, you can be pretty sure it’s going to act as a barrier and not as a facilitator of services.
And that was certainly true across the cases I looked at, but the best example of this would be… So Georgia State University in 2012 moved to predictive analytics in their advising. Like many underresourced public universities that serve first-generation college students, they’ve had real issues retaining students. So they moved to predictive analytics in 2012 and they’ve been written about widely as this sort of huge success in using predictive analytics to do better advising to keep college students in school. Their retention rate went up something like 30%.
But the part of the story that gets buried over and over again every time that it’s written about is that at the same time they moved to predictive analytics, they went from doing 1,000 advising appointments a year to doing 52,000 advising appointments a year. They hired forty-two new full-time advisors. And that always ends up in paragraph 17 of these stories. So it’s like, “Predictive analytics wins! And [muffles voice with her hands] also huge amounts of resources.”
And so it feels to me like that story is actually the story of “adequate resources solve real problems,” not “predictive analytics wins.” I’m sure the predictive analytics helped them like, figure out where to send the massive wave of new resources. But I think it is…misleading to talk about those two things as separate from one another. So that’s a question you should ask. Like what’s the resource situation when you’re moving to analytics.
Another is, really, do we have a right as a community to stop one of these tools, or from the very beginning to say no? So the ACLU in Washington has made some real inroads specifically around police surveillance technology, around having sort of a community accountability board, so that the police department has to run any use of new surveillance technologies through this community group in order to get information about it before they start deploying it. And I think one of the great questions they’re asking is not just can we stop it but can we say no from the beginning? And can we say no for reasons that are non-technical, like if this doesn’t match our values and we don’t want it? Like, are there ways that we can say no?
Or is there remedy, right? I think we’re just getting to this part in the conversation, which is if one of these tools harms you or harms your family, is there a way for you to get redress? That’s also a really important question, I think.
So that’s sort of where the work has been going, in collaboration with these organizations. It’s been thinking about what kind of questions do we want to ask in order to exert some control and power, and to bring the real, full breadth of expertise into the room when we’re making these kinds of decisions. Thank you for that question.
Audience 2: Hi Virginia. Back to the intent and impact and also to the soul-searching comment. So what do we do about the groups whose intentions are to keep people off and who for them, their justification’s that it’s… They did the soul-search and for them the justification is this is better for society and people shouldn’t be on benefits, etc. I think there are folks in this room that have had that argument as well. So what do we… So do we just like, not work with those groups or what do we do with groups like that?
Eubanks: Yeah, so I think it’s a really crucial question for this political moment, right. So if you look at the 2019 Trump administration budget, it identifies… I may not have this figure exactly right. But one of the things that budget promises is to save $188 billion over the next ten years by bringing these kinds of techniques to middle-class entitlement programs. To disability, to unemployment, to Social Security. And so one of the origin points for this book that I often share is a woman on public assistance I was working with in 2000. She and I were talking about our electronic benefits transfer cards—it’s a long story, I won’t go into the whole story there. But one of the things that she said was, “Oh, Virginia. You all should pay attention to what’s happening to us,” like, folks on public assistance, “because they’re coming for you next.”
And I think that was both very generous of her—to feel that, as canaries in the coal mine, they have some responsibility to communicate to folks who are outside these systems. The other thing I think is really important is that she said that in 2000. She said that almost twenty years ago. And I think it’s another reason to be always starting this work from the folks who are most directly affected. Because we’re just going to learn more about these systems, and we’re going to be working in coalition with folks who are really invested in creating smart solutions when we do that.
So how to deal with the political moment that we’re having right now, around…you know, just to be honest we are in a moment where the country is trying to dismantle the social safety net entirely, right. So work requirements for Medicaid. The state of Mississippi is denying 98.6% of cash welfare applications—a rounding error away from 100%. We’re starting to create ways of tracking people who receive disability help, right. We’re increasingly in the situation where just basics of the social safety net are really under threat.
I think the possible good news here…? It’s a real good news/bad news situation. But the possible good news here is that the very overreach of these systems, and the very speed and scale of them, really has the potential to touch a lot of people really quickly.
So in Indiana, part of what drove the pushback against that system was that because it was affecting Medicaid it began to affect middle-class folks, like grandparents who are in nursing homes. And that was a sort of moment where public opinion changed really fast. And I think we’re awfully close to that moment right now? But I do really believe we need to be doing this sort of deep work to build the coalition, and to build the connection, and to build an analysis that we’ll have when one of these systems fails in a spectacular way that impacts non-poor people. And that will create a sort of window to start to really rethink our use of these systems and what it means for our democracy and for the health and safety of our people.
I mean, from a moral point of view we should do it earlier than that. Because what happens to anyone happens to us all. But strategically and politically I think that’s going to be a moment that opens up a lot of possibility. Thank you for that.
Audience 3: On one of your slides, you listed a non-discriminatory data set. What is that and where is it?
Eubanks: Wait, which slide? Where do I have a non-discriminatory data set?
Audience 3: It had two curly…one curly up at the top, one curly at the bottom.
Eubanks: Ah!
Audience 3: Like, I want to know where the data set exists that’s not discriminatory.
Eubanks: Yeah. So that’s a fair question. So, the model inspection, that may just be a miscommunication between me and the lady who worked on my slides—Elvia Vasconcelos, by the way, who’s a genius.
So, the idea here is that step one is to inspect the model for specific things. One is if and in what ways the data set is discriminatory. And then looking at whether the outcome variables are actual measures of the thing you’re trying to affect or whether they’re proxies. And the third is seeing if there’s patterns of disproportionality among the predictive variables.
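For that third check, here is a minimal sketch of what looking for disproportionality among the predictive variables could mean in practice: compare how often each predictor fires for different demographic groups and flag large rate ratios. The column names, groups, and the 1.5x flag threshold are hypothetical, not drawn from any real data warehouse.

```python
# Minimal sketch: check predictive variables for disproportionality by
# comparing group-level rates. Names, groups, and threshold are hypothetical.
import pandas as pd

def disproportionality_report(df, group_col, predictors, flag_ratio=1.5):
    """For each binary predictor, report the rate per group and the ratio
    of the highest to lowest group rate; flag ratios above the threshold."""
    rows = []
    for var in predictors:
        rates = df.groupby(group_col)[var].mean()
        ratio = rates.max() / max(rates.min(), 1e-9)
        rows.append({"predictor": var, **rates.to_dict(),
                     "max/min ratio": round(ratio, 2),
                     "flag": ratio > flag_ratio})
    return pd.DataFrame(rows)

# Hypothetical extract from an administrative data warehouse
df = pd.DataFrame({
    "group":             ["A", "A", "A", "B", "B", "B", "B", "B"],
    "prior_referral":    [1,    0,   1,   0,   0,   1,   0,   0 ],
    "public_assistance": [1,    1,   1,   0,   1,   0,   0,   0 ],
})
print(disproportionality_report(df, "group", ["prior_referral", "public_assistance"]))
```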
Audience 3: [inaudible]
Eubanks: A non-discriminatory data set. So, I have not, myself. I do know that there has been some experimentation with creating basically fake data sets to build machine learning on. I don’t know a ton about how that actually works, though I think it’s interesting. I believe there will probably be a different set of issues. Because you know, if you’re building a fake data set you’re still building a data set based on assumptions that, you know… And where’s it come from, and can your predictions then be valid if it’s based on fake data—right. But I don’t understand enough about how those systems work to say that for sure.
I think what your larger point is is really true, which is the data sets that we have, which are produced by say gang databases or produced by the child welfare system or produced by public assistance carry the legacy of the discriminatory data collection that we’ve engaged in in the past. And so it’s very hard to imagine that there would be a non-discriminatory data set, yeah. But it might be a question for the folks who are more on the machine learning side than me about how that might work. It’s a good question. But thanks. Appreciate it.
Audience 4: Thanks for this talk. I’m a data journalist from Germany, and I’m interested in the gears you were talking about. I’d love to hear more about that, because you already said that the algorithm you were talking about is a good example because it’s already transparent, and it’s held in a public/private partnership so you can control in some way how it works. So what else should you add to such a decisionmaking algorithm to make it more safe or more fair?
Eubanks: So I think the thing that’s hard about that question is it’s going to be different in every example. And it requires sort of knowing about how…how things actually happen on the ground around whatever agency you’re interacting with. But I can give you a really good concrete example around Allegheny County. And they have actually done this.
So, originally the Allegheny Family Screening Tool, because thankfully there’s not enough data on actual physical harm to children to predict that actual outcome, they used two proxies for the outcome of actual maltreatment. And one of them was called “call re-referral.” And that just meant that there was a call on a family, it was screened out as not being serious or severe enough to be fully investigated, and then there was a second call about the same family within two years. So call re-referral; that’s one of the ways they defined that harm had actually happened, for the purposes of the model.
Now, the problem with that is that it’s really really common for people to engage in vendetta calling inside the child welfare system. So you have a fight with your neighbor, your neighbor calls CPS on you. Like, you are going through a bad break up, your partner calls CPS on you. And this is really really really really common. It happens a lot.
And one of the things I asked the designers when we were talking about the system is like, well you know, “If one of your proxies is call re-referral, and vendetta calling has happened, you see how that’s going to lead to a bad outcome for folks.” Because it basically means if you call two or three times on your neighbors because you’re mad at them having a party, then it bumps up their risk score in CPS. And increases their likelihood of being investigated.
And so one of the equity gears in the Allegheny Family Screening Tool, if you were going to use that proxy, would then be a way to deal with vendetta calling, right. And it doesn’t seem impossible to design that. It’d be like okay, if the calls come back to back for two weeks and there’s an investigation and nothing happens, then like maybe that’s a vendetta call. Or if it’s X person or whatever—it doesn’t seem like it would be impossible to build that in, though it’s equally troubling as the other decisions that are made in that system.
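Here is a minimal sketch of the kind of equity gear she describes: before counting a re-referral as an outcome proxy, check whether the earlier calls look like a vendetta pattern—a short burst of calls that was investigated with nothing substantiated. The field names, the two-week window, and the exact rule are hypothetical illustrations of her example, not the county’s actual logic.

```python
# Minimal sketch of a "vendetta call" filter before using re-referral as a
# proxy. Field names, the 14-day window, and the rule are hypothetical.
from datetime import date, timedelta

def looks_like_vendetta(calls, window_days=14):
    """True if two calls land within `window_days` of each other and both
    were investigated with nothing substantiated."""
    calls = sorted(calls, key=lambda c: c["date"])
    for a, b in zip(calls, calls[1:]):
        close_together = (b["date"] - a["date"]) <= timedelta(days=window_days)
        both_unsubstantiated = all(
            c.get("investigated") and not c.get("substantiated") for c in (a, b))
        if close_together and both_unsubstantiated:
            return True
    return False

def count_re_referral(calls):
    """Only count a re-referral as an outcome proxy if it does not look
    like back-to-back vendetta calling."""
    return len(calls) >= 2 and not looks_like_vendetta(calls)

calls = [
    {"date": date(2017, 3, 1), "investigated": True, "substantiated": False},
    {"date": date(2017, 3, 9), "investigated": True, "substantiated": False},
]
print(count_re_referral(calls))  # False: treated as a possible vendetta, not as harm
```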
I will say that they dropped that as a proxy since the book came out. I don’t know if there’s a direct relationship between those two things, but they’re no longer using that proxy. So I think that’s a concrete example of the sort of depth of knowledge you need to know about the domain in order to really build those equity gears in. That’s an important part of the process. Does that help?
Audience 4: [inaudible]
Eubanks: No. I don’t think so. So it’s less about the data and more about how the system itself works, right. So, many of the folks I spoke to about these models were incredibly smart about modeling, incredibly smart about data, but not very smart about the policy domain in which they were working, right. So, people who were very well-intentioned and trying to do the best they could would make assumptions about how things worked inside the system without really knowing. Like for example, if you know anything about the child protective system, you know not to use multiple calls as a proxy for anything, because that’s like, it’s stan— It’s like, I don’t know how you could talk to even two families who have gone through this process and not know about vendetta calling. So that’s surprising to me that they didn’t have a way of dealing with it. And so those are the kinds of equity gears we need.
And the long-run answer, really, is that building these systems well is incredibly hard and incredibly resource-intensive. And building them poorly is only cheaper and faster at first. And so I think we have a tendency to think about these tools as sort of naturally creating these efficiencies because the speed of the technology is such that it creates the appearance? of faster and easier. But in fact you really have to know a lot about how these systems work in order to build the tools for them, and to interrupt the patterns of inequity that we’re already seeing.
Audience 4: [inaudible]
Eubanks: Yeah. I think that’s a good way to put it. Yeah.
Audience 5: Hi. I wanted to ask about a tension that I think runs through the book, and through the actual nature of the problem, and also through some of the questions, which is where the source of some of these challenges lies. In some of the cases it's about the technology, and about the data in particular. So, for example, if your target variables are correlated with membership in a sensitive group, you've got a problem. Or if you have to state a problem that is very complicated very precisely, similarly, you've got a problem. So that's the AFST case.
But in other places it’s really about the sort of social inequality, the context of social inequalities, so such as in like the LA case it’s fundamentally there aren’t enough houses at a certain point
So clearly it’s both. And your argument is that it’s both, and they intersect in complicated ways. But I want to ask about the ways in which the technology itself does actually matter, and it is like, different. So the sort of two questions are, what are the specific challenges that you think making public decisions using lots of data, possibly machine learning…what’s different about those kind of challenges? (A.) And then sort of B, which of your solutions or the sort of approaches we should take specifically have to do, in your view, with that dimension of the challenge rather than the broader social context? Does that make sense?
Eubanks: Yeah, it makes perfect sense. Um…and I’m gonna give you one of those like, frustratingly big-picture answers? Because I think the fundamental difference in these systems from the kinds of tools that came before is that we pretend that these are just administrative changes. That we’re not making like, deep-seated political decisions. And it obscures the fact that we’re making really profound political decisions through these systems.
And I think that is the biggest challenge, actually: the impulse to keep trying to separate the technology and the politics. Because like, that's why I start with the poorhouse, to say, you know, our politics have always been built into our tools, and they're built into our tools today. But they're built in in ways that are faster. That scale more quickly. That impact networks of people rather than individuals and families, in ways that can really profoundly impact communities. And also that don't provide the same kind of space for resistance, right.
So one of things that’s really interesting about poorhouses is that one of the reasons they did— So, we were supposed to have one in every county in the United States; we only ended up with about a thousand of them—that’s still a lot. But we didn’t get one in every county in the United States. Part of the reason is that they ended up being really expensive. And that’s a lesson we should learn. They thought they were going to be cheaper, too. And it didn’t work out that way.
And the other reason they didn't spread across the country is that all of a sudden, people living in this shared space, eating at a shared table, living in dorms, taking care of each other's kids, caring for each other when they died, started to care about each other and started to use poorhouses as a way of building resistance.
And so one of my real concerns about these systems is that they seem to me profoundly isolating, right. That they reinforce this narrative that poverty's an aberration. That you've done something wrong. And that you should just shut up about it and not push back against the system. So I'm really concerned about the ways it removes established rights from people. Like their rights to fair hearings. I tell a story about that in the book. And I'm really concerned about it removing a public space of gathering, where we can come together, talk about our experiences and realize we're not alone.
And I’ll just say, as a welfare rights organizer for many years, we organized in the welfare office all the time. Because people had a lot of time. They were there with their whole family. And they were mad. And so it was a really great place to organize um…until you got thrown out. So I am really concerned about the sort of larger thread of this. Which I think it’s true around prisons as well. The move from prisons to ankle shackles I think creates some similar issues of increasing isolation—no less punishment but more isolation. Or a different kind of punishment and isolation, to be more clear.
So I think the primary issue is this issue around not seeing these as political decisions. And the solution, I’d just take you back to that earlier stuff, is about telling stories in a different way. And this may be because I’m really invested right now in being a writer, so I’m really invested in storytelling and in learning how to do good storytelling. I think there’s a zillion ways to actually address the story and the politics of poverty in the United States, and some of it’s policy work and some of it’s organizing work and some of it’s storytelling. For me that’s the one that I’m most invested in right now. And so it’s the one that I’m taking on. But there’s plenty of room. There’s a lot of room to do work around economic and racial inequality in the United States. You’ll have good company. You’ll never be bored in that work.
Audience 5: Thank you so much for your presentation, it's been really fascinating. If I may, I would like to ask you to revisit a theme that I heard in other questions as well, the theme of neutrality, but this time with a focus on the technology itself. The system itself, not the designers of the system. Because we heard earlier the notion of a data set being discriminatory. But that entails that the data set is unfair. So by having this narrative we're kind of insinuating that there is a certain normativity to these systems themselves, whereas… There was an earlier event, I think a week ago, on public interest technology, and a lot of speakers shared the opinion that technology in itself cannot be good or evil. It's just a tool, and then it depends on the intentions with which it's going to be applied.
I think that's also the example you were mentioning: when you have a system that is designed to take into account factors that will definitely create a biased outcome, then that's also a poorly designed framework, but not necessarily the system in itself.
So I was wondering how you see the paradigm under the third part, where you were mentioning the idea of having good technology. How does that work in practice?
Eubanks: Yeah, so there’s a couple of things I want to address. But keep me on track if I don’t get right back to the how’s that look in practice piece, because I hear that piece.
Okay, so I think it’s really important to address this like…tools aren’t for—you know, “tools are neutral” idea. So, part of the way that I make my living is as a brick mason. And I specialize in historic brick repair. And I’m very much an amateur. But I’m a talented amateur. And I always find it really funny when people say that tools are neutral, because it feels like you don’t actually spend a lot of— Not you personally, but folks who say that don’t spend a lot of time with tools, right. Because I am like I said an amateur at masonry but I have six different trowels. Because you can’t use a quarter-inch repointing trowel to do what a carrying trowel does, right? So the trowels are big and flat and you use them to move material. A quarter-inch reporting trowel shoves mortar into quarter-inch cracks. I can’t even use my quarter-inch reporting trowel for a three-eights-inch gap. Like I actually need another tool for that.
So I think the lesson here is that tools are never neutral. Tools are designed over time for specific purposes. And yes, you can use a hammer to paint a barn…? But you're going to do a terrible job. Like a really bad job.
So, I think it’s really really important to address that idea that tools are neutral or blank, because they’re just not. I’ve never seen a blank tool in my life. I’ve never seen a tool that’s not designed for a specific purpose. That doesn’t mean you can’t use it against its purpose, but it’s hard to. Their valenced. They’re directed in certain ways. They’re not totally determined but they’re directed.
And so I think this idea that the tool doesn't matter, that it's the intentions that matter, is just false. I don't think that's true at all. I think the intentions are built into the tools, from the beginning, across time, over their development. So that's how you get a tuckpointing trowel and a triangle trowel, and why they're different.
So, what I’d like to see us move from is this idea that neutrality is the same thing as fairness, to the idea that justice means choosing certain values over others. So the values that we’re currently designing with like invisibly, are efficiency, cost savings, and sometimes anti-fraud. And all of those things should actually be part of our political system. I’m not saying like, throw efficiency out. But I do think there are other values that we need to design from that we’re not acknowledging in as direct a way. So fairness, dignity, self-determination, equity. And we have to do that on purpose in the same way we’re designing for efficiency and cost savings on purpose.
And sometimes those values will be directly in conflict with one another. And then we have to have political ways to make decisions about which values we care more about. And I think efficiency's important, but I think democracy is more important, right. I think cost savings is important, but I think people not dying from starvation in the United States is more important. I think fraud is important, but I think it's actually a bigger problem in the way people escape paying taxes by moving their money offshore than it is in the welfare system, where it's literally pennies, less than 5% of the system.
So we have to start from a different set of values if we’re going to get to systems that work better, based on the world we actually live in. So that I think is the best answer I can give to that, yeah. Thanks.
Audience 6: [Beginning of question is inaudible] …I think you already gave us an example of this. Things that you think should— Tasks or sub-tasks that you think just should not be automated. So that’s a way of getting at this problem of automation and the problem of [indistinct] in this context.
And then zooming out a little bit from that, there are now all these initiatives. [Seemingly some examples here, but indistinct; mention of “ethics in AI.”] So if you could recommend what you think should be a in curriculum for those sorts of things, I’d be very interested to hear [?] your recommendations.
Eubanks: So there’s two different questions. One question is what should never be automated? And that’s a super good question that I’ve never gotten before. In ten months I can’t believe no one’s asked me that. So I have to like, ponder that for a second.
And then the second thing is what should these new folks be looking at?
Audience 6: [inaudible]
Eubanks: [laughs] Oh my god, what a great question. You know, I think that we have so many people who are really smart about their domain working in the budding world of AI and ethics. I think the big piece that's missing is talking to people outside your domain. I really feel like this conversation is incredibly autopoietic, right, that we sort of turn back in on ourselves…in a way that's not gonna serve us or the expressed intent of increasing justice and fairness.
So I really think actually most of the work has to be methodological, has to be like, how do we work with directly-impacted communities in ways where we can actually hear their questions and concerns, and not just be coming to them after the fact like, "We're going to do predictive analytics in child welfare. What do you think? We have ten days for public comment: go." So it has to really be built in from the very beginning. So maybe Paulo Freire and other people are good for helping people get to a place where they recognize the sort of extraordinary expertise of folks outside their professional lives? Maybe that's a place to start. But I really feel like the place to start is less in theory and less in framing and more in method, more in how do we work with other people in the world. That feels really important to me.
And in terms of the systems that should never be automated… I don’t know, what do you guys think? Seriously, what do you think? Do you guys think there’s anything that should not be automated?
[inaudible]
Yeah! That’s a good one. Yeah, absolutely. [crosstalk] I’d buy that.
Audience Member: The domain of social services.
[indistinct]
Eubanks: In general. Yeah, I don’t talk at all about military stuff. I just heard…oh, what’s her name, Lucy Suchman is doing some of that work, talking about automated military technology—
Audience 7: Could I just come in on that, because you said at one point when something got automated that used to be done by a human being then a certain check on a type of bias was gone. A buffer was gone, you said at one point.
Eubanks: Yeah.
Audience 7: So is that a sort of systemic feature, where you could say, here's a type of situation where if you interact with a human being, at least they could do the right thing? Of course, they could also do the wrong thing.
Eubanks: Yeah.
So… And here’s the challenge in that. So discretion… And I know we’re out of time so I want to wrap up quickly. But there’s two key tensions that go through the work that don’t have the easy answers.
One is the tension of integration. Integrating systems can lower the barriers for folks on public assistance who have to fill out 900 different applications for five different services and sit all day in an office, and it takes forever. That can really be a step forward in making it easier to get the resources that you need and that you are entitled to and deserve. But under a system that criminalizes poverty, integration also means that you can be tracked through all these different systems and criminalized, imprisoned, taken on for fraud, right. So that's an irreconcilable tension in some ways.
The other is discretion. So, front-line caseworker discretion can be the worst thing that happens to you in the public service system. It can also be the only thing that gets you out of that system successfully. And so it is also, I think, an irreconcilable tension, in that… The reality is, part of the intention of these systems, part of the built-in politics of these systems, is the idea that fairness is applying the rules in the same way every time. And in unequal systems, applying the same rules in the same way every time doesn't actually produce equality. It produces more inequality. And so at this point, I'm willing to bet on the human decisionmaker having discretion that can be interrupted and pushed back on in ways these systems can't. But people of good faith disagree with me on that. And I can accept that. I think that's one of the central tensions of this work.
But the way to think about it, I think, is this: I have a smart political scientist friend named Joe Soss, and he says discretion is like energy. It's never created or destroyed; it's only moved. So when we say we're removing discretion from these systems, what we're actually doing is moving it from one group of people to another group of people. So in Allegheny County we're moving it from the intake call center workers and giving it to the economists and the data scientists who built that model. And that, I think, is a better kind of question to think about: who do we think is close enough to the problem to understand the problem? To have the kind of knowledge they need to make good decisions? And I'd say in that case it's the intake call center workers, who are the most diverse part of the social service workforce in that agency. They're the most working-class, they're the most female, they're the closest to the situations on the ground. And I trust them more to make those kinds of decisions.
But yeah, those are two really important tensions, and I think they’re really hard and will continue to be really hard. Thank you so much for the question.
Asher: I know there’s so much to still talk about but please join me in thanking Virginia.