Ravi Shroff: So, here at Data & Society, my fellowship project is to understand the development and implementation of predictive models for decision-making in city and state government. So specifically I’m interested in applying and developing statistical and computational methods to improve decision-making in police departments, in the courts, and in child welfare agencies.

Now, the questions that I work on generally have a large technical component. But I want to mention that other aspects, like policy concerns, ethical concerns, and legal and practical constraints, are just as challenging to deal with. So in this Databite I’m going to briefly describe two examples, one from criminal justice and one from child welfare, along with some questions that’ve been inspired by my interactions with people at Data & Society over the past year.

So before I jump in I just want to talk about why an empirical approach to questions in these areas is hard. First of all, there are often complicated ethical issues at play: tensions between fairness and effectiveness, tensions between parents’ rights and a child’s best interests, racially discriminatory practices.

Moreover, tackling these issues generally involves extensive domain knowledge, in particular to understand the intricate processes by which data is generated. And so in practice this means that I generally collaborate with domain experts. The data that’s available is often observational, and simultaneously highly sensitive and of poor quality. And so this can make evaluating solutions particularly challenging. And in the context of working with city and state agencies, there are often external pressures, like requests from the Mayor’s office, and practical constraints that can make implementation difficult.

So I work with a variety of city, state, and nonprofit agencies. Here are some of them. And most of them are headquartered in or around New York City.

So I’m going to jump into the first example, which is pretrial detention decisions made by judges. So in the US, shortly after arrest suspects are usually arraigned in court, where a prosecutor reads a list of charges against them. And so at arraignment, judges have to decide which defendants awaiting trial are going to be released (or RoR’d, released on their own recognizance) and which are subject to monetary bail.

Now in practice, if bail is set for defendants they often await trial in jail because they can’t afford bail, or end up paying hefty fees to bail bondsmen. But judges, on the other hand, are legally obligated to secure a defendant’s appearance at trial. So judges’ pretrial release decisions have to simultaneously balance the burden of bail on the defendant with the risk that he or she fails to appear for trial.

Now, two other things I just want to quickly mention. First, there are other conditions of bail besides money bail, but in the jurisdiction that I consider, money bail and RoR are the two most common outcomes. And the other thing is that in many jurisdictions judges are legally obligated to consider the public safety risk of releasing a defendant pretrial, but in the jurisdiction that I’m going to focus on judges are legally obligated to consider only the likelihood that a defendant fails to appear.

So, judges can be inconsistent. They’re humans. They get hungry and tired. They get upset when their favorite football team loses a game. And there’s research that in fact suggests that judges make harsher decisions in those circumstances. And judges are also like us. They can be biased, implicitly or explicitly. And also, when a judge looks at a defendant and hears what the defense counsel has to say and what the prosecutor says, they take all these factors into account and then in their head they make some decision, and then we see that decision. So a judge’s head is a black box. It’s opaque.

And I should mention that the private sector has also attempted to aid judges’ decision-making by producing tools, but these have their own issues. And so I want to read an excerpt from an op-ed that appeared in The New York Times yesterday by Rebecca Wexler, who is a fellow here at Data & Society and who also gave a Databite talk last week. And she writes,

The root of the problem is that automated criminal justice technologies are largely privately owned and sold for profit. The developers tend to view their technologies as trade secrets. As a result, they often refuse to disclose details about how their tools work, even to criminal defendants and their attorneys, even under a protective order, even in the controlled context of a criminal proceeding or parole hearing.
Rebecca Wexler, “When a Computer Program Keeps You in Jail” [originally “How Computers are Harming Criminal Justice”]

So this raises the question: can we design a consistent, transparent rule for releasing (nonviolent, misdemeanor) pretrial defendants? I should also mention that some work funded by foundations is going on in this area.

And we can. And this is the rule that my collaborators and I came up with. It’s a simple two-item checklist. It just takes into account two attributes of a defendant: the defendant’s age, and the defendant’s prior history of failing to appear for court.

So the way this could work is suppose you have some threshold, let’s say 10. Then you take a defendant; maybe the defendant’s 50 years old and has one prior failure to appear. So the defendant’s score would be 2 for their age, and 6 for their prior history of failing to appear. The total would be 8. That’s less than the threshold of 10, so they would be recommended for release. And if the score exceeded 10, the recommendation would be that you set bail.
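
To make the arithmetic concrete, here is a minimal sketch of how a two-item checklist like this could be scored in code. Only the worked example from the talk is taken as given (a 50-year-old scores 2 points for age, one prior failure to appear scores 6, and the threshold is 10); the other point values, the age bucket boundaries, and the handling of a score exactly at the threshold are hypothetical placeholders, not the published rule.

```python
# Illustrative sketch of a two-item pretrial checklist.
# Only the talk's example is grounded: age 50 -> 2 points, one prior
# failure to appear (FTA) -> 6 points, threshold 10. All other values
# below are placeholders for illustration, not the actual rule.

def age_points(age: int) -> int:
    """Hypothetical age buckets; only the oldest bucket matches the talk's example."""
    if age <= 25:
        return 8   # placeholder
    if age <= 35:
        return 6   # placeholder
    if age <= 45:
        return 4   # placeholder
    return 2       # matches the talk: a 50-year-old scores 2

def fta_points(prior_ftas: int) -> int:
    """Hypothetical prior-FTA scoring; only the one-prior value matches the talk."""
    if prior_ftas == 0:
        return 0   # placeholder
    if prior_ftas == 1:
        return 6   # matches the talk: one prior failure to appear scores 6
    return 8       # placeholder

def checklist_recommendation(age: int, prior_ftas: int, threshold: int = 10) -> str:
    """Recommend release if the total score is below the threshold, else bail.
    (How a score exactly at the threshold is treated is an assumption here.)"""
    score = age_points(age) + fta_points(prior_ftas)
    return "release (RoR)" if score < threshold else "set bail"

# The worked example from the talk: 50 years old, one prior FTA -> 2 + 6 = 8 < 10.
print(checklist_recommendation(age=50, prior_ftas=1))  # -> "release (RoR)"
```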

And what we find is that if you follow this rule with a threshold of 10, we estimate that you would set bail for half as many defendants without increasing the proportion that fail to appear in court. And that’s relative to current judge practice. Moreover, and this is I think maybe the more surprising aspect, this rule performs comparably to more complicated machine learning approaches. Which begs the question, if a super-simple checklist which only uses two attributes of a defendant can perform the same as something much much more complicated, well why not use the simple approach?
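
To give a rough sense of how a claim like “half as many defendants get bail, with no increase in failures to appear” might be checked against historical records, here is a heavily simplified sketch. It reuses the scoring functions from the sketch above; the file name and column names are assumptions for illustration, and the naive comparison at the end glosses over the fact that failure to appear is only observed for defendants who were actually released, which is part of why evaluation on observational data is hard, as noted earlier.

```python
import pandas as pd

# Hypothetical historical arraignment records; columns are assumptions:
#   age, prior_ftas, judge_set_bail (1 = bail set, 0 = RoR'd),
#   failed_to_appear (observed only for released defendants, NaN otherwise).
df = pd.read_csv("arraignments.csv")

# Score every defendant with the checklist sketched above.
df["rule_score"] = df["age"].map(age_points) + df["prior_ftas"].map(fta_points)
df["rule_sets_bail"] = df["rule_score"] >= 10

# Compare how often bail is set under the rule vs. by judges.
print("bail rate, judges:", df["judge_set_bail"].mean())
print("bail rate, rule:  ", df["rule_sets_bail"].mean())

# Naive FTA rate among defendants the rule would release. In reality,
# FTA is unobserved for defendants judges detained, so a careful analysis
# has to account for that selection; this line ignores it for brevity.
released_by_rule = df[~df["rule_sets_bail"] & df["failed_to_appear"].notna()]
print("observed FTA rate among rule-released:", released_by_rule["failed_to_appear"].mean())
```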

So, in 2014 the Attorney General at the time, Eric Holder, referred to these risk assessment tools in a speech he gave, where he said, “Equal justice can only mean individualized justice, with charges, convictions, and sentences befitting the conduct of each defendant and the particular crime he or she commits.”

So this raises another question, which is how can you balance this idea of individualized justice with consistency? So think about the checklist, for example. It’s certainly consistent, right? If you’re in a particular age bucket and you have a particular number of previous failures to appear, it’s going to recommend either that you’re released or that bail is set for you. And it’s certainly not individualized, because beyond age and your prior history of failure to appear, it doesn’t take into account anything else.

So the usual answer is you say well, I’m going to use a checklist or a statistical rule to aid a judge’s decision, not to replace it. So a judge would see the recommendation and then the judge could choose to follow it or to do something else. And so in practice, balancing individualized justice and consistency is tough, but I think a good first step is to focus on transparency.

Now another question is well, why not just release all (nonviolent, misdemeanor) defendants before trial? So this is sort of thinking outside the box, right. I mean, this checklist is sort of a statistically designed procedure to optimize who you release and who you set bail for. But let’s ask a different question, which is why don’t you just let everybody out? And if you did, what would happen?

And so we estimate that in fact if you were to release all nonviolent, misdemeanor defendants in our jurisdiction you would see a modest increase in the percentage that fail to appear, but not very much. It would go from 13% to 18%. And so I feel like this is a question that as a society we need to ask, which is: are the burdens on all those people who are not RoR’d outweighed by having a slightly lower failure to appear rate?

Okay. So I’m going to go to my second example, which has to do with children’s services in New York City. So, New York City’s Administration for Children’s Services handles about 55,000 investigations a year of abuse or neglect, and is responsible for roughly 9,000 children, and it’s a big agency. It has an annual budget of about $3 billion.

So ACS recently had an initiative to use the data that they collect on children and families to improve the level of service that they provide. And a part of this initiative, which I was involved with, is to use data to understand which children currently in an investigation of abuse or neglect are likely to be involved in another investigation of abuse or neglect within six months’ time.
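
As a rough illustration of the kind of model this involves, here is a minimal sketch of a binary classifier that predicts re-involvement within six months. The file name, feature names, and choice of logistic regression are all assumptions made for illustration; this is not ACS’s actual data or system.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical case-level data; column names are placeholders, not real ACS fields.
cases = pd.read_csv("investigations.csv")
features = ["child_age", "num_prior_investigations", "num_children_in_home"]
X = cases[features]
y = cases["reinvestigated_within_6_months"]  # 1 if another investigation opens within 6 months

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A simple, transparent baseline model; a real deployment would need far more care
# (feature auditing, fairness checks, calibration, and the oversight discussed below).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", roc_auc_score(y_test, scores))
```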

And I should mention that this is happening at a sort of highly charged time for ACS. There were a number of high-profile fatalities in the last year of children who were under investigation by ACS. And that could add urgency to the desire to use data and analytic tools to improve services for children.

So I want to raise a question, which is: what is the intervention? And what I mean by this is suppose that you could accurately, or reasonably accurately, predict the likelihood, the probability, that a child is going to be involved in another investigation within six months. Suppose you see it’s like 99%. Well, what would you do?

So, you could remove the child from the parent. You could also say, “These are challenging cases. I’m going to prioritize their review. I’m going to have managers or more experienced caseworkers deal with these cases.”

Or you could say, “Instead of allocating sanctions to a family, like removing a child, I’m going to allocate benefits, like preventive services. I’m going to flood the family with services to try and reduce that likelihood.”

And I’m going to mention that ACS has been very clear in saying that they will not use these algorithms to make removal decisions but instead to prioritize case review and to match children and families to the services that they need.

Another question is how can ACS actually ensure that these analytic methods are going to be used appropriately? So, children’s services is in the process of building an external ethics advisory board composed of racially and professionally diverse stakeholders, who are supposed to oversee the way that predictive techniques are being used to inform decision-making.

And danah boyd, founder of Data & Society, has this saying that I like where she says, “Ethics is a process.” And I feel like that sort of suggests that sometimes a solution to these problems is to create an institution which is in charge of supervising that process.

And so finally I just want to pose the question of, how will caseworkers actually use the output of these predictive algorithms? You could also ask the same question in the pretrial release context: how will judges actually use the output of risk assessment tools? So, will they generally follow the recommendations of the tool, or will they make different decisions in a manner which has unexpected consequences? It’s an active area of research. And I’ll say that a former Data & Society fellow has also investigated this specific question in the context of criminal justice.

So I just want to wrap up by saying that understanding this interaction between human decision-makers and algorithmic recommendations is really essential to ensure that these tools function as intended. So thank you very much, and thanks to everybody at Data & Society, and my collaborators:

Bail: Jongbin Jung, Connor Concannon, Sharad Goel, Daniel G. Goldstein
ACS: Diane Depanfilis, Teresa De Candia, Allon Yaroni
[presentation slide]
