Cast your vote

Posted 8/7/2025

In several U.S. states, judges are using algorithmic instruments such as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to aid them in sentencing.


These artificial intelligence systems generate risk scores that estimate how likely a defendant is to re-offend, based on the defendant's criminal history, age, gender, and other factors. Advocates assert that these tools produce more consistent outcomes, reduce human error and bias, and relieve backlogged court systems.

Yet civil rights organizations, data scientists, and defense attorneys contend that these algorithms are not neutral instruments. Research on COMPAS and similar tools shows that they can reproduce and even amplify racial and socioeconomic biases, particularly against Black and Latino defendants. Critics point out that these systems rely on problematic data, much of it rooted in decades of biased policing and social disparities. A further concern is transparency: many court-approved tools are proprietary, so the algorithmic code is hidden from view. When a defense attorney challenges the validity of such a tool in court, there is often no code or training data to examine for reliability. And an algorithm cannot weigh the nuance, remorse, or personal change in a defendant, the things a judge can.

Supporters argue that blaming AI for replicating an already biased system is misplaced. If designed and monitored adequately, these tools could reduce subjective judicial error, narrow disparities in sentencing, and support more data-driven bail and parole decisions.

The ethical tension lies in who writes the algorithm, what data it is trained on, and how much weight it carries in the final decision. Used as a guide rather than a judge, AI can aid human decision-making; relied on too heavily, it risks turning the justice system into a vending machine. As courts evolve, AI will ultimately be part of the system; whether it helps or hinders justice depends heavily on how it is used and regulated.
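COMPAS's actual model is proprietary and unpublished, so the following is purely a hypothetical sketch of the general shape such risk-scoring tools take: a weighted combination of defendant features passed through a logistic function, then bucketed into the low/medium/high bands a judge might see. Every feature name, weight, and cutoff here is invented for illustration, not drawn from any real tool.

```python
import math

# Hypothetical illustration only. COMPAS's real model is proprietary; these
# features, weights, and cutoffs are made up to show the general mechanism.
WEIGHTS = {
    "prior_offenses": 0.35,  # assumed: more priors pushes the score up
    "age": -0.04,            # assumed: older defendants score lower
}
BIAS = -0.5

def risk_score(features):
    """Return a re-offense 'risk' in [0, 1] via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score):
    """Bucket the probability into the coarse bands courts are shown."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"
```

The sketch also makes the transparency critique concrete: the output a court sees is only the band, while the weights and cutoffs that produce it, the part a defense attorney would want to challenge, stay hidden inside the vendor's code.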


Should we allow algorithms to help decide someone's fate in court?

  • Yes
  • No
