Algorithms and Bias in the Criminal Justice System
As technologists, we’re all aware of the pervasive use of algorithms in the modern world.
Most of these uses, while not directly observable, are well known. Our ability to obtain credit and loans is determined by algorithms run by banks and lenders; the insurance premiums on our cars, homes, and life insurance policies are set by algorithms; the advertisements we see when we’re online are determined by algorithms.
Virtually all aspects of our lives are determined by what an algorithm says about our financial and medical health, our trustworthiness, our propensity to buy.
We have come to accept this “algorithmic world” in part because we have no choice, and in part because it appears to work well enough that we are willing to overlook the lack of transparency often accompanying its use.
And perhaps that’s as it should be, though I would argue (and have in other postings) that the bias inherent in most algorithmic implementations should give us pause.
(A good introduction to how bias can be unknowingly introduced into algorithms can be found here.)
Certainly, if bias is inherent in most of the systems we interact with on a regular basis, that isn’t good news. We may be denied a credit card we should have been able to obtain. The interest rate on our mortgage may be higher than it would be if the data were evaluated more fairly.
These examples, while of some importance to our lives, are minor compared to one area in which algorithmic bias can have life-changing consequences: the criminal justice system.
In the United States, courts have begun to use algorithms to help determine sentences for a range of crimes. One system in particular, COMPAS, is widely used by state courts and provides a suggested “risk score” for the defendant. This “risk score” is intended to provide guidance on the likelihood of re-offending, the seriousness of such a re-offense, and the “value” to society of keeping the offender behind bars.
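COMPAS’s internals are proprietary, so no one outside its manufacturer knows the actual model. But to make the idea of a “risk score” concrete, here is a purely hypothetical sketch of how such a score could be produced: a simple logistic model over defendant features, bucketed into a 1–10 scale. Every feature name and weight below is invented for illustration; nothing here reflects how COMPAS actually works.

```python
import math

# Hypothetical feature weights -- COMPAS's real model is proprietary,
# so these names and values are purely illustrative.
WEIGHTS = {
    "prior_arrests": 0.30,
    "age_at_first_arrest": -0.05,
    "failed_to_appear": 0.40,
}
BIAS = -1.0

def risk_score(features):
    """Map defendant features to a 1-10 'risk score' via a logistic model."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    p = 1.0 / (1.0 + math.exp(-z))        # probability-like value in (0, 1)
    return max(1, min(10, round(p * 10))) # bucket into a 1-10 score

score = risk_score({"prior_arrests": 3, "age_at_first_arrest": 22, "failed_to_appear": 1})
```

The point of the sketch is what it hides: which features go in, how they are weighted, and what training data set the weights were fit on are exactly the things a proprietary system never discloses.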
In courts where COMPAS is used, the judge has the flexibility to decide whether to use the “risk score” in the sentencing phase. Many judges do base their decisions on COMPAS recommendations, believing (incorrectly, as it turns out) that the algorithm is fair.
An excellent article in the New York Times speaks well to this issue, noting that even the US Supreme Court has refused to rule against the use of such systems.
The fundamental issue with COMPAS–and with the algorithms at work in almost every aspect of our lives–is a lack of transparency. From the same NYT article:
No one knows exactly how COMPAS works; its manufacturer refuses to disclose the proprietary algorithm. We only know the final risk assessment score it spits out, which judges may consider at sentencing.
The belief that computers are inherently more fair than individuals is ingrained in our society. It’s certainly believed by those states that require use of systems like COMPAS by judges.
States trust that even if they cannot themselves unpack proprietary algorithms, computers will be less biased than even the most well-meaning humans.
But that confidence is clearly misplaced.
A ProPublica study found that COMPAS predicts black defendants will have higher risks of recidivism than they actually do, while white defendants are predicted to have lower rates than they actually do. (Northpointe Inc., the company that produces the algorithm, disputes this analysis.) The computer is worse than the human. It is not simply parroting back to us our own biases, it is exacerbating them.
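ProPublica’s analysis hinged on comparing error rates across groups: among defendants who did not in fact reoffend, what fraction were nonetheless labeled high risk? When that false-positive rate differs by race, the system is over-predicting risk for one group. Here is a minimal sketch of that metric, computed on made-up numbers chosen only to illustrate the calculation (these are not ProPublica’s data):

```python
# Toy dataset, hypothetical numbers for illustration only.
# Each record: (group, labeled_high_risk, reoffended)
records = [
    ("A", True,  False), ("A", True,  False), ("A", False, False), ("A", True, True),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", True, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

fpr_a = false_positive_rate(records, "A")  # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(records, "B")  # 1 of 3 non-reoffenders flagged
```

In this toy data, group A’s non-reoffenders are flagged twice as often as group B’s–the same shape of disparity ProPublica reported, and one that can exist even when the overall accuracy of the tool looks similar for both groups.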
When someone’s potential prison time (its length, or whether it’s imposed at all) depends on black-box algorithms that may have been trained on irrelevant data, and that assume one-directional causation, that should be cause for concern for all of us. Whether or not we ever find ourselves subject to such sentencing, if we care about justice we should be very concerned.
As a society we must be willing to acknowledge that computer algorithms are only as good as what we “feed” them, and that it is in our interest (as citizens who care about a fair society, and to promote confidence in the criminal justice system) to lean towards algorithmic transparency through regulatory controls.
If we do not undertake to understand and deal with this problem as a society, we risk giving up on our vision of a society that behaves fairly to all its citizens.
About Richard Fall
I am currently the National Solution Architect, Digital Platforms and IoT for Sogeti, working from the Des Moines, Iowa office. My interests lie in the areas of micro-services, SaaS, and IoT systems.
More on Richard Fall.