Wednesday, November 15, 2017

Equality Under the Law, via Algorithm?

This summer a bill was introduced in the U.S. Senate that had a very noble goal - to reduce bias and discrimination in the criminal justice system by reforming how bail is determined.

Sponsored by Senators Kamala Harris (D-CA) and Rand Paul (R-KY), the Pretrial Integrity and Safety Act attempts to address the problem, as described in the New York Times:

  • 450,000 Americans sit in jail today awaiting trial because they cannot afford to pay bail. 
  • Nine out of 10 defendants who are detained cannot afford to post bail, which can exceed $20,000 even for minor crimes like stealing $105 in clothing.
  • Whether someone stays in jail or not is far too often determined by wealth or social connections, even though just a few days behind bars can cost people their job, home, custody of their children — or their life.
  • Excessive bail disproportionately harms people from low-income communities and communities of color.
  • Black and Latino men respectively are asked to pay 35 percent and 19 percent higher bail than white men.
  • Bail is supposed to ensure that the accused appear at trial and don’t commit other offenses in the meantime. But research has shown that low-risk defendants who are detained more than 24 hours and then released are actually less likely to show up in court than those who are detained less than a day.
  • Our bail system does not keep us safer. In a study of two large jurisdictions, nearly half of the defendants considered “high risk” were released simply because they could afford to post bail.

Clearly, the current bail system has major problems.  However, the Harris-Paul bill seeks to remedy them by replacing the money bail system with risk-assessment scores - and that raises a host of new issues in its stead.

A system that determines whether a defendant should be released before trial based on an individualized risk-assessment score works much like the system banks use to decide who gets a loan based on credit scores.  An algorithm weighs inputs like criminal history, substance abuse, and so on to "predict" the likelihood of flight and the threat to the community.
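To make the analogy concrete, here is a minimal, hypothetical sketch of how such a scoring tool might work.  The factors, weights, and release threshold below are invented for illustration only - they do not reflect any actual pretrial tool or the bill's text.

```python
# Hypothetical pretrial risk scorer: weighted factors -> 0-100 score.
# All factors, weights, and the threshold are invented for illustration.

def risk_score(defendant):
    """Combine weighted inputs into a capped 0-100 'risk' score."""
    weights = {
        "prior_arrests": 8,             # points per prior arrest
        "prior_failures_to_appear": 15, # points per failure to appear
        "substance_abuse": 10,          # 1 if flagged, else 0
        "pending_charges": 12,          # points per pending charge
    }
    score = sum(weights[k] * defendant.get(k, 0) for k in weights)
    return min(score, 100)

def release_decision(defendant, threshold=40):
    """Release pretrial if the predicted score falls under the threshold."""
    return "release" if risk_score(defendant) < threshold else "detain"

defendant = {"prior_arrests": 2, "prior_failures_to_appear": 1}
print(risk_score(defendant))        # 31
print(release_decision(defendant))  # release
```

Note that nothing in this sketch involves money - the decision turns entirely on which factors go into the model and how heavily each is weighted, which is precisely where the questions below come in.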

Ideally, such a system would replace judicial bias in determining bail amounts by using agnostic data instead.

But data is not agnostic!  It draws on decades of historical records in which bias was already at work.  And since algorithms are only as good as the data they're built upon, the danger is that, rather than reducing or even eliminating bias in the criminal justice system, such a system would actually further entrench it.

And this doesn't even get to the question of whether or not the algorithms will be transparent to the public.  Nor does it raise any flags about the likelihood that such algorithms are usually, at least in the private sector, also influenced by aggregate population inputs that have nothing to do with the individual.  For instance, would it be acceptable for someone's "predicted" risk-assessment score to be higher based on what neighborhood they lived in, whether their parents were single, married, or divorced, or what educational institutions they attended?

Don't get me wrong.  Both Sens. Harris and Paul should absolutely be commended for trying to fix an unfair system, and for doing so in a nonpartisan manner.  But the real question is whether an algorithmically driven system would be better than the current one.

If your livelihood was on the line, would you feel more comfortable with your fate decided by a judge or an algorithm?

It's an uncomfortable question, but it's not hypothetical or futuristic.  Again, this is a bill that's already been sent to committee in the U.S. Senate.

Someone I met from the Brennan Center for Justice, Vienna Thompkins, emailed me these insights as well:

"The press that I've seen on the bill so far barely mentions the issues with bias in risk assessment tools, or the proprietary nature of algorithms when third parties are involved in the process. I've had trouble in the past digging up data that confirms the current prevalence of "human" or interview-based risk assessments versus algorithmic assessments across states, so it's difficult to gauge what the impact of this incentivization might be. I imagine, though, that private organizations that provide risk assessment services would jump at the opportunity to promote the bill and expand their customer base."

It's wise of the bill's co-sponsors to roll this out first at the state level - a federalism-based "policy laboratory" approach.  My takeaway is that the algorithms producing the individualized risk-assessment scores are going to be imperfect, to say the least, and this proposed system is likely to create as many problems as it solves.  The bottom line is that caution should be exercised, and if the bill becomes law, we should scrutinize the results with a healthy dose of skepticism.  On the other hand, if viewed simply as a first draft, it's a step in the right direction.


