Uncertainty in Criminal Justice Algorithms: simulation studies of the
Pennsylvania Additive Classification Tool
- URL: http://arxiv.org/abs/2112.00301v1
- Date: Wed, 1 Dec 2021 06:27:24 GMT
- Title: Uncertainty in Criminal Justice Algorithms: simulation studies of the
Pennsylvania Additive Classification Tool
- Authors: Swarup Dhar, Vanessa Massaro, Darakhshan Mir, Nathan C. Ryan
- Abstract summary: We study the Pennsylvania Additive Classification Tool (PACT) that assigns custody levels to incarcerated individuals.
We analyze the PACT in ways that criminal justice algorithms are often analyzed.
We propose and carry out some new ways to study such algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Much attention has been paid to algorithms related to sentencing, the
setting of bail, parole decisions, and recidivism, while less attention has been
paid to carceral algorithms: those algorithms used to determine an incarcerated
individual's lived experience. In this paper we study one such algorithm, the
Pennsylvania Additive Classification Tool (PACT) that assigns custody levels to
incarcerated individuals. We analyze the PACT in ways that criminal justice
algorithms are often analyzed: namely, we train an accurate machine learning
model for the PACT; we study its fairness across sex, age and race; and we
determine which features are most important. In addition to these conventional
computations, we propose and carry out some new ways to study such algorithms.
Instead of focusing on the outcomes themselves, we propose shifting our
attention to the variability in the outcomes, especially because many carceral
algorithms are used repeatedly and there can be a propagation of uncertainty.
By carrying out several simulations of assigning custody levels, we shine light
on problematic aspects of tools like the PACT.
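The variability analysis proposed above can be illustrated with a small Monte Carlo sketch. The weights, cutoffs, and noise model below are hypothetical stand-ins, not the actual PACT items; the point is only to show how small perturbations in the input items can flip the assigned custody level across repeated scorings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical additive tool: a weighted sum of item scores cut into
# custody levels 1-4. These weights and cutoffs are illustrative only,
# not the actual PACT instrument.
WEIGHTS = np.array([2.0, 1.0, 3.0, 1.5])
CUTOFFS = [4.0, 8.0, 12.0]

def custody_level(items):
    score = float(WEIGHTS @ items)
    return 1 + sum(score >= c for c in CUTOFFS)

def simulate(items, n_runs=1000, noise_sd=0.5):
    """Re-score one person many times with noisy item values and
    return how often each custody level (1-4) was assigned."""
    levels = [custody_level(items + rng.normal(0.0, noise_sd, size=items.shape))
              for _ in range(n_runs)]
    return np.bincount(levels, minlength=5)[1:]

counts = simulate(np.array([1.0, 2.0, 1.0, 1.0]))
```

For a person whose noise-free score (8.5 here) sits just above a cutoff, the simulated assignments spread over several levels; that spread, rather than the single point outcome, is what the paper proposes to examine.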
Related papers
- Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then look at the effect of democratizing the rewards and punishments, or stakes, to the algorithmic classification to consider how a society can potentially stem (or facilitate!) predatory classification.
arXiv Detail & Related papers (2023-07-05T14:22:01Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Selective Credit Assignment [57.41789233550586]
We describe a unified view on temporal-difference algorithms for selective credit assignment.
We present insights into applying weightings to value-based learning and planning algorithms.
arXiv Detail & Related papers (2022-02-20T00:07:57Z)
- Analyzing a Carceral Algorithm used by the Pennsylvania Department of Corrections [0.0]
This paper is focused on the Pennsylvania Additive Classification Tool (PACT) used to classify prisoners' custody levels while they are incarcerated.
The algorithm in this case estimates the likelihood that a person will incur additional disciplinary actions, complete required programming, and gain experiences that, among other things, are distilled into variables feeding into the parole algorithm.
arXiv Detail & Related papers (2021-12-06T18:47:31Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
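As a rough illustration of the runtime-oriented, censoring-aware setup described above (not the authors' actual algorithm), here is a minimal UCB-style selector that only observes runtimes capped at a cutoff; the two candidate "algorithms" and their lognormal runtime distributions are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

C = 5.0  # cutoff: runs longer than this are censored at C

# Hypothetical candidate algorithms with different runtime profiles.
def run_algorithm(k):
    mean = [1.0, 3.0][k]  # illustrative typical runtimes
    return rng.lognormal(np.log(mean), 0.5)

n_arms, horizon = 2, 2000
pulls = np.zeros(n_arms)
loss_sum = np.zeros(n_arms)

for t in range(1, horizon + 1):
    if t <= n_arms:
        k = t - 1  # pull each arm once to initialize
    else:
        # lower-confidence bound on the capped runtime (we minimize loss)
        lcb = loss_sum / pulls - np.sqrt(2 * np.log(t) / pulls)
        k = int(np.argmin(lcb))
    loss_sum[k] += min(run_algorithm(k), C)  # censored observation
    pulls[k] += 1
```

Capping the observed loss at the cutoff is one simple way to keep censored runs usable; the selector concentrates on the faster algorithm without ever knowing the true runtimes of timed-out runs.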
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Deep Interpretable Criminal Charge Prediction and Algorithmic Bias [2.3347476425292717]
This paper addresses bias issues with post-hoc explanations to provide a trustable prediction of whether a person will receive future criminal charges.
Our approach shows consistent and reliable prediction precision and recall on a real-life dataset.
arXiv Detail & Related papers (2021-06-25T07:00:13Z)
- Pursuing Open-Source Development of Predictive Algorithms: The Case of Criminal Sentencing Algorithms [0.0]
We argue that open-source algorithm development should be the standard in highly consequential contexts.
We suggest these issues are exacerbated by the proprietary and expensive nature of virtually all widely used criminal sentencing algorithms.
arXiv Detail & Related papers (2020-11-12T14:53:43Z)
- Efficient Computation of Expectations under Spanning Tree Distributions [67.71280539312536]
We propose unified algorithms for the important cases of first-order expectations and second-order expectations in edge-factored, non-projective spanning-tree models.
Our algorithms exploit a fundamental connection between gradients and expectations, which allows us to derive efficient algorithms.
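The gradient-expectation connection mentioned above can be made concrete with the classic Matrix-Tree identity: the marginal probability of an edge under a spanning-tree distribution is its weight times the derivative of the log-partition (a log-determinant) with respect to that weight. The toy triangle graph below is our own example, not from the paper.

```python
import numpy as np

# Weighted triangle graph: edge -> weight (illustrative example).
edges = {(0, 1): 1.0, (1, 2): 2.0, (0, 2): 3.0}
n = 3

# Graph Laplacian L = D - W.
L = np.zeros((n, n))
for (u, v), w in edges.items():
    L[u, u] += w; L[v, v] += w
    L[u, v] -= w; L[v, u] -= w

# Matrix-Tree theorem: Z = det of L with one row/column deleted.
Lhat = L[1:, 1:]
Linv = np.linalg.inv(Lhat)

# First-order expectation: P(e in tree) = w_e * d(log det Lhat)/d(w_e)
#                                       = w_e * tr(Linv @ dLhat/dw_e).
marginals = {}
for (u, v), w in edges.items():
    dL = np.zeros((n, n))
    dL[u, u] += 1; dL[v, v] += 1
    dL[u, v] -= 1; dL[v, u] -= 1
    marginals[(u, v)] = w * np.trace(Linv @ dL[1:, 1:])
```

On this graph Z = 11 and, for instance, edge (0, 1) lies in spanning trees of total weight 5, so its marginal is 5/11; the marginals also sum to n - 1 = 2, since every spanning tree has exactly two edges.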
arXiv Detail & Related papers (2020-08-29T14:58:26Z)
- Run2Survive: A Decision-theoretic Approach to Algorithm Selection based on Survival Analysis [75.64261155172856]
Survival analysis (SA) naturally supports censored data and offers appropriate ways to use such data for learning distributional models of algorithm runtime.
We leverage such models as a basis of a sophisticated decision-theoretic approach to algorithm selection, which we dub Run2Survive.
In an extensive experimental study with the standard benchmark ASlib, our approach is shown to be highly competitive and in many cases even superior to state-of-the-art AS approaches.
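A minimal sketch of the survival-analysis ingredient: the Kaplan-Meier estimator handles runs that hit a timeout (right-censored runtimes) without discarding them or pretending they finished. The runtimes below are made up for illustration.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve; events[i] is True if run i actually
    finished at times[i], False if it was cut off (censored) there."""
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        at_risk = sum(1 for x in times if x >= t)
        finished = sum(1 for x, e in zip(times, events) if x == t and e)
        if finished:
            surv *= 1 - finished / at_risk
            curve.append((t, surv))
    return curve

# Five runs; two hit the timeout (censored at 2.0 and 5.0 seconds).
curve = kaplan_meier([1.0, 2.0, 2.0, 3.0, 5.0],
                     [True, True, False, True, False])
```

The run censored at t = 2.0 still counts toward the at-risk set at that time but never contributes a "finish", which is what makes the estimate usable under a cutoff.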
arXiv Detail & Related papers (2020-07-06T15:20:17Z)
- Meta-learning with Stochastic Linear Bandits [120.43000970418939]
We consider a class of bandit algorithms that implement a regularized version of the well-known OFUL algorithm, where the regularization is the squared Euclidean distance to a bias vector.
We show both theoretically and experimentally, that when the number of tasks grows and the variance of the task-distribution is small, our strategies have a significant advantage over learning the tasks in isolation.
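The biased regularization described above has a simple closed form; this sketch (our own minimal version, not the paper's full bandit strategy) shows how the estimate is pulled toward the bias vector b as the regularization strength grows.

```python
import numpy as np

def biased_ridge(A, y, b, lam):
    """argmin_theta ||A @ theta - y||^2 + lam * ||theta - b||^2,
    i.e. ridge regression shrunk toward b instead of toward zero."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y + lam * b)

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = A @ np.array([1.0, 2.0])          # noiseless data from theta = (1, 2)
b = np.array([5.0, 5.0])              # hypothetical bias shared across tasks

theta_weak = biased_ridge(A, y, b, 1e-8)   # ~ ordinary least squares
theta_strong = biased_ridge(A, y, b, 1e8)  # ~ pulled onto b
```

When tasks are similar (small task-distribution variance), a well-chosen b lets each new task start near the shared solution instead of learning from scratch.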
arXiv Detail & Related papers (2020-05-18T08:41:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.