What if Algorithms Worked For Accused People, Instead of Against Them?

Throughout the U.S., judges, prosecutors, and parole boards are given algorithms to guide life-altering decisions about the liberty of the people before them, based mainly on perceived risks to “public safety.” At the same time, people accused and convicted of crimes are given little support. With underfunded public defense in most of these contexts, and no right to counsel in others (e.g., in parole decisions), the system is stacked against them. We wanted to find out what would happen if we flipped the script and used algorithms to benefit people entangled in the legal system, rather than those who wield power against them.

In a recent peer-reviewed study, the ACLU and collaborators at Carnegie Mellon and the University of Pennsylvania asked a simple question: Can one predict the risks the criminal justice system poses to the people it accuses, rather than the risks those people are said to pose?

The answer seems to be yes, and the process of building such a tool helps lay bare broader problems in the logic of existing risk assessment tools. While traditional risk assessment tools consider risks to the public, such as the likelihood of reoffending, the criminal legal system itself poses a host of risks to the people ensnared in it, many of which extend to their families and communities and carry long-term repercussions. These include being denied pretrial release, receiving a sentence disproportionately long for the conviction, being wrongfully convicted, being saddled with a record that makes it nearly impossible to obtain housing or employment, and more.

Read the full story at ACLU.