AI In Prison Sentencing
- Nikita Silaech

There is a piece of software used in courts across the United States called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). It is supposed to predict whether someone will commit a crime again. A judge uses this risk score to decide bail, probation, or how long someone serves in prison.
The reasoning behind the tool is straightforward. Human judges carry biases. They make decisions based on race, class, and other factors that should not matter in law. A mathematical model trained on data could be more objective.
In 2016, journalists at ProPublica analyzed over 10,000 criminal defendants in Florida to see how accurate COMPAS was.
The overall accuracy was about 61 percent. But when they looked at the errors, something unexpected emerged. The algorithm made mistakes in very different ways depending on the race of the defendant.
Black defendants who did not re-offend were labeled high risk almost twice as often as white defendants who did not re-offend. Meanwhile, white defendants who did re-offend were labeled low risk almost twice as often as Black defendants who did re-offend.
When researchers controlled for prior crimes, future recidivism, age, and gender, Black defendants were still 45 percent more likely to be assigned higher risk scores than white defendants (ProPublica, 2016).
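The asymmetry described above is a difference in false positive and false negative rates between groups, even when overall accuracy is the same. Here is a minimal sketch of that calculation, using hypothetical record tuples rather than ProPublica's actual dataset:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group error rates from (group, labeled_high_risk, reoffended) tuples.

    fpr: share of people who did NOT re-offend but were labeled high risk.
    fnr: share of people who DID re-offend but were labeled low risk.
    """
    counts = defaultdict(lambda: {"fp": 0, "neg": 0, "fn": 0, "pos": 0})
    for group, high_risk, reoffended in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not high_risk:
                c["fn"] += 1  # missed a real re-offender
        else:
            c["neg"] += 1
            if high_risk:
                c["fp"] += 1  # flagged someone who never re-offended
    return {g: {"fpr": c["fp"] / c["neg"], "fnr": c["fn"] / c["pos"]}
            for g, c in counts.items()}
```

Two groups can produce the same overall accuracy while one group absorbs most of the false positives and the other most of the false negatives, which is exactly the pattern ProPublica reported.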
The algorithm was not directly considering race. It did not have access to that information. But it was using other variables as proxies. ZIP code. Socioeconomic background. Employment history. These variables carry the historical patterns of how the criminal justice system has treated different groups of people (Technology Review, 2020).
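The proxy effect is easy to demonstrate. The toy simulation below, with entirely synthetic data and invented neighborhood numbers, shows that when residence is segregated, a ZIP code column predicts group membership quite well even though race appears nowhere in the data:

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Synthetic population: group membership is never stored alongside the
# model's features, but neighborhoods are segregated, so ZIP encodes it.
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group A lives mostly in ZIPs 0-4, group B mostly in ZIPs 5-9.
    if random.random() < 0.9:
        zip_code = random.choice(range(5) if group == "A" else range(5, 10))
    else:
        zip_code = random.choice(range(5, 10) if group == "A" else range(5))
    population.append((group, zip_code))

# "Predict" group from ZIP alone using the majority group in each ZIP.
by_zip = defaultdict(Counter)
for group, z in population:
    by_zip[z][group] += 1
majority = {z: c.most_common(1)[0][0] for z, c in by_zip.items()}

correct = sum(majority[z] == g for g, z in population)
print(f"ZIP alone recovers group membership for {correct / len(population):.0%} of people")
```

Removing the race column from the training data does not remove the information; any feature correlated with race carries it back in.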
What happened next reveals an even bigger issue. A 2024 study from Tulane University examined over 50,000 sentencing decisions in Virginia where judges used AI to assess risk.
The AI recommendations did help reduce sentences for low-risk offenders. People received on average one month less jail time when judges followed the algorithm (Tulane, 2024).
But judges made a specific kind of mistake. When the AI recommended probation for low-risk offenders, judges disproportionately rejected that recommendation for Black defendants.
Black defendants with identical risk scores received longer sentences and fewer alternatives to incarceration than white defendants.
The algorithm did not fix the bias; it transformed it. Because the bias now arrived as a number, it looked like it came from somewhere other than human judgment.
The company that owns COMPAS also does not release the full source code. Neither judges nor defendants can see exactly how the algorithm reaches its conclusions (ProPublica, 2016). The system operates as a black box that nobody can fully understand or challenge.
Some people argue that we need more AI in criminal justice to move beyond human bias. Others say we should not use AI at all because it will automate the existing inequities faster and with more legitimacy.
The core issue is simpler than either side acknowledges. The algorithm was trained on decades of biased policing, biased prosecution, and biased sentencing. It learned exactly what it was taught and made that pattern repeatable at scale.
The algorithm succeeded at what it was asked to do. It learned the bias and made it feel like something other than a choice.
