Analyzing a Carceral Algorithm used by the Pennsylvania Department of
Corrections
- URL: http://arxiv.org/abs/2112.03240v1
- Date: Mon, 6 Dec 2021 18:47:31 GMT
- Title: Analyzing a Carceral Algorithm used by the Pennsylvania Department of
Corrections
- Authors: Vanessa Massaro, Swarup Dhar, Darakhshan Mir, and Nathan C. Ryan
- Abstract summary: This paper focuses on the Pennsylvania Additive Classification Tool (PACT), used to classify prisoners' custody levels while they are incarcerated.
The algorithm determines the likelihood that a person will face additional disciplinary action, complete required programming, and gain experiences that, among other things, are distilled into variables feeding into the parole algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Scholars have focused on algorithms used during sentencing, bail, and parole,
but little work explores what we call carceral algorithms that are used during
incarceration. This paper is focused on the Pennsylvania Additive
Classification Tool (PACT) used to classify prisoners' custody levels while
they are incarcerated. Algorithms that are used during incarceration warrant
deeper attention by scholars because they have the power to enact the lived
reality of the prisoner. The algorithm in this case determines the likelihood
that a person will face additional disciplinary action, complete required
programming, and gain experiences that, among other things, are distilled into
variables feeding into the parole algorithm. Given such power, examining
algorithms used on people currently incarcerated offers a unique analytic view
to think about the dialectic relationship between data and algorithms. Our
examination of the PACT is two-fold and complementary. First, our qualitative
overview of the historical context surrounding PACT reveals that it is designed
to prioritize incapacitation and control over rehabilitation. While it closely
informs prisoner rehabilitation plans and parole considerations, it is rooted
in population management for prison securitization. Second, on analyzing data
for 146,793 incarcerated people in PA, along with associated metadata related
to the PACT, we find it is replete with racial bias as well as errors,
omissions, and inaccuracies. Our findings to date further caution against
data-driven criminal justice reforms that rely on pre-existing data
infrastructures and expansive, uncritical, data-collection routines.
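The "additive" design the abstract describes can be illustrated schematically: weighted factors are summed into a score, and fixed cut-offs map the score onto an ordinal custody level. The factor names, weights, and thresholds below are hypothetical and invented for illustration; they are not the actual PACT instrument.

```python
# Hypothetical sketch of an additive classification tool.
# Factor names, weights, and cut-offs are invented for illustration;
# they are NOT the actual PACT instrument.

def additive_score(factors: dict[str, int], weights: dict[str, int]) -> int:
    """Sum weighted factor values into a single score."""
    return sum(weights[name] * value for name, value in factors.items())

def custody_level(score: int) -> int:
    """Map a score onto an ordinal custody level via fixed cut-offs."""
    level = 1
    for threshold, lvl in ((5, 2), (10, 3), (15, 4)):  # hypothetical cut-offs
        if score >= threshold:
            level = lvl
    return level

weights = {"prior_misconducts": 3, "escape_history": 5, "sentence_length": 1}
person = {"prior_misconducts": 2, "escape_history": 0, "sentence_length": 4}
score = additive_score(person, weights)
print(score, custody_level(score))  # 10 -> level 3
```

A consequence of this structure, relevant to the paper's critique, is that a one-point change in a single input (for example, a recorded misconduct) can push a score across a cut-off and change the assigned custody level outright.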
Related papers
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
- Evidence of What, for Whom? The Socially Contested Role of Algorithmic Bias in a Predictive Policing Tool [0.9821874476902969]
We show that stakeholders from different groups articulate diverse problem diagnoses of the tool's algorithmic bias.
We find that stakeholders use evidence of algorithmic bias to reform the policies around police patrol allocation.
We identify the implicit assumptions and scope of these varied uses of algorithmic bias as evidence.
arXiv Detail & Related papers (2024-05-13T13:03:33Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12% to 30% in easy cases and from 36% to 43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Algorithmic Fairness Datasets: the Story so Far [68.45921483094705]
Data-driven algorithms are studied in diverse domains to support critical decisions, directly impacting people's well-being.
A growing community of researchers has been investigating the equity of existing algorithms and proposing novel ones, advancing the understanding of risks and opportunities of automated decision-making for historically disadvantaged populations.
Progress in fair Machine Learning hinges on data, which can be appropriately used only if adequately documented.
Unfortunately, the algorithmic fairness community suffers from a collective data documentation debt caused by a lack of information on specific resources (opacity) and scatteredness of available information (sparsity).
arXiv Detail & Related papers (2022-02-03T17:25:46Z)
- Analysis of the Pennsylvania Additive Classification Tool: Biases and Important Features [0.0]
The Pennsylvania Additive Classification Tool (PACT) is used to determine the security level for an incarcerated person in the state's prison system.
For a newly incarcerated person, it is used in their initial classification.
An incarcerated person is reclassified annually using a variant of the PACT and this reclassification can be overridden.
arXiv Detail & Related papers (2021-12-10T23:00:10Z)
- Uncertainty in Criminal Justice Algorithms: simulation studies of the Pennsylvania Additive Classification Tool [0.0]
We study the Pennsylvania Additive Classification Tool (PACT) that assigns custody levels to incarcerated individuals.
We analyze the PACT in ways that criminal justice algorithms are often analyzed.
We propose and carry out some new ways to study such algorithms.
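One way a simulation study of such a tool can proceed (a hedged sketch of the general idea, not the authors' actual methodology) is to perturb a single input factor across a simulated population and measure how often the assigned custody level flips. All weights, thresholds, and factor names below are hypothetical.

```python
import random

# Hypothetical sensitivity simulation for an additive scoring tool.
# Weights, thresholds, and factor names are invented; this is NOT the PACT.
WEIGHTS = {"prior_misconducts": 3, "escape_history": 5, "sentence_length": 1}

def score(factors: dict) -> int:
    return sum(WEIGHTS[k] * v for k, v in factors.items())

def level(s: int) -> int:
    for threshold, lvl in ((15, 4), (10, 3), (5, 2)):
        if s >= threshold:
            return lvl
    return 1

def flip_rate(population: list, factor: str, delta: int = 1) -> float:
    """Fraction of people whose custody level changes when one
    input factor is perturbed by `delta` (a simple sensitivity probe)."""
    flips = 0
    for person in population:
        perturbed = dict(person)
        perturbed[factor] = max(0, perturbed[factor] + delta)
        if level(score(perturbed)) != level(score(person)):
            flips += 1
    return flips / len(population)

random.seed(0)
population = [
    {"prior_misconducts": random.randint(0, 4),
     "escape_history": random.randint(0, 1),
     "sentence_length": random.randint(0, 10)}
    for _ in range(1000)
]
print(flip_rate(population, "prior_misconducts"))
```

A high flip rate for a single factor would indicate that classification outcomes are sensitive to small changes, or errors, in that input, which is one reason the data quality issues documented above matter.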
arXiv Detail & Related papers (2021-12-01T06:27:24Z)
- The effect of differential victim crime reporting on predictive policing systems [84.86615754515252]
We show how differential victim crime reporting rates can lead to outcome disparities in common crime hot spot prediction models.
Our results suggest that differential crime reporting rates can lead to a displacement of predicted hotspots from high crime but low reporting areas to high or medium crime and high reporting areas.
arXiv Detail & Related papers (2021-01-30T01:57:22Z)
- Rethinking recidivism through a causal lens [0.0]
We look at the effect of incarceration (prison time) on recidivism using a well-known dataset from North Carolina.
We find that incarceration has a detrimental effect on recidivism, i.e., longer prison sentences make it more likely that individuals will re-offend after release.
arXiv Detail & Related papers (2020-11-19T04:15:41Z)
- Pursuing Open-Source Development of Predictive Algorithms: The Case of Criminal Sentencing Algorithms [0.0]
We argue that open-source algorithm development should be the standard in highly consequential contexts.
We suggest these issues are exacerbated by the proprietary and expensive nature of virtually all widely used criminal sentencing algorithms.
arXiv Detail & Related papers (2020-11-12T14:53:43Z)
- The Paradigm Discovery Problem [121.79963594279893]
We formalize the paradigm discovery problem and develop metrics for judging systems.
We report empirical results on five diverse languages.
Our code and data are available for public use.
arXiv Detail & Related papers (2020-05-04T16:38:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.