Flipping the Script on Criminal Justice Risk Assessment: An actuarial
model for assessing the risk the federal sentencing system poses to
defendants
- URL: http://arxiv.org/abs/2205.13505v2
- Date: Wed, 13 Jul 2022 21:56:07 GMT
- Authors: Mikaela Meyer, Aaron Horowitz, Erica Marshall, and Kristian Lum
- Abstract summary: Algorithmic risk assessment instruments are used to predict the risk a defendant poses to society. We develop a risk assessment instrument that "flips the script" and instead assesses the risk the system poses to the defendant. Our instrument achieves predictive accuracy comparable to that of risk assessment instruments used in pretrial and parole contexts.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the criminal justice system, algorithmic risk assessment instruments are
used to predict the risk a defendant poses to society; examples include the
risk of recidivating or the risk of failing to appear at future court dates.
However, defendants are also at risk of harm from the criminal justice system.
To date, there exists no risk assessment instrument that considers the risk the
system poses to the individual. We develop a risk assessment instrument that
"flips the script." Using data about U.S. federal sentencing decisions, we
build a risk assessment instrument that predicts the likelihood an individual
will receive an especially lengthy sentence given factors that should be
legally irrelevant to the sentencing decision. To do this, we develop a
two-stage modeling approach. Our first-stage model is used to determine which
sentences were "especially lengthy." We then use a second-stage model to
predict the defendant's risk of receiving a sentence that is flagged as
especially lengthy given factors that should be legally irrelevant. The factors
that should be legally irrelevant include, for example, race, court location,
and other socio-demographic information about the defendant. Our instrument
achieves comparable predictive accuracy to risk assessment instruments used in
pretrial and parole contexts. We discuss the limitations of our modeling
approach and use the opportunity to highlight how traditional risk assessment
instruments in various criminal justice settings suffer from many of the same
limitations and likewise embed the value systems of their creators.
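The two-stage approach described in the abstract can be sketched in code. The synthetic data, feature choices, and the 90th-percentile flagging threshold below are illustrative assumptions, not the authors' exact specification:

```python
# A minimal sketch of the two-stage idea, using scikit-learn.
# All data and thresholds are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins: legally relevant factors (e.g., offense severity,
# criminal history) and legally irrelevant ones (e.g., race, court location).
relevant = rng.normal(size=(n, 2))
irrelevant = rng.integers(0, 3, size=(n, 2)).astype(float)
# Sentence length deliberately depends on an irrelevant factor here,
# so the second stage has a signal to detect.
sentence_months = (24 + 10 * relevant[:, 0] + 5 * irrelevant[:, 0]
                   + rng.normal(scale=4, size=n))

# Stage 1: model expected sentence length from legally relevant factors,
# then flag sentences far above the prediction as "especially lengthy".
stage1 = GradientBoostingRegressor().fit(relevant, sentence_months)
residual = sentence_months - stage1.predict(relevant)
flagged = (residual > np.quantile(residual, 0.9)).astype(int)

# Stage 2: predict the flag from factors that should be legally irrelevant;
# predictive power here signals risk the system poses to the defendant.
stage2 = LogisticRegression().fit(irrelevant, flagged)
risk = stage2.predict_proba(irrelevant)[:, 1]
```

Any predictive accuracy in the second stage indicates that legally irrelevant factors help explain who receives an especially lengthy sentence, which is the quantity the instrument reports.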
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA [52.557503571760215]
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
(arXiv, 2024-08-16)
- Towards Probing Speech-Specific Risks in Large Multimodal Models: A Taxonomy, Benchmark, and Insights [50.89022445197919]
We propose a speech-specific risk taxonomy covering 8 risk categories under hostility (malicious sarcasm and threats), malicious imitation (age, gender, ethnicity), and stereotypical biases (age, gender, ethnicity).
Based on the taxonomy, we create a small-scale dataset for evaluating current LMMs' capability in detecting these categories of risk.
(arXiv, 2024-06-25)
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
(arXiv, 2024-03-18)
- Diagnosis Uncertain Models For Medical Risk Prediction [80.07192791931533]
We consider a patient risk model which has access to vital signs, lab values, and prior history but does not have access to a patient's diagnosis.
We show that such 'all-cause' risk models generalize well across diagnoses but have a predictable failure mode.
We propose a fix for this problem by explicitly modeling the uncertainty in risk prediction coming from uncertainty in patient diagnoses.
(arXiv, 2023-06-29)
- The Progression of Disparities within the Criminal Justice System: Differential Enforcement and Risk Assessment Instruments [26.018802058292614]
Algorithmic risk assessment instruments (RAIs) increasingly inform decision-making in criminal justice.
Problematically, the extent to which arrests reflect overall offending can vary with the person's characteristics.
We examine how the disconnect between crime and arrest rates impacts RAIs and their evaluation.
(arXiv, 2023-05-12)
- On (assessing) the fairness of risk score models [2.0646127669654826]
Risk models are of interest for a number of reasons, including the fact that they communicate uncertainty about the potential outcomes to users.
We identify the provision of similar value to different groups as a key desideratum for risk score fairness.
We introduce a novel calibration error metric that is less sample size-biased than previously proposed metrics.
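For context, the conventional binned calibration error that such work builds on can be computed as below; this is the standard metric, not the paper's less sample-size-biased variant:

```python
# Standard binned calibration error (often called ECE): the weighted
# average gap between mean predicted risk and observed outcome rate
# within equal-width score bins. Illustrative, not the paper's metric.
import numpy as np

def expected_calibration_error(scores, outcomes, n_bins=10):
    """Weighted average of |mean score - observed rate| per score bin."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.clip((scores * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(scores[mask].mean() - outcomes[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

# Scores calibrated by construction: outcome drawn with probability = score.
rng = np.random.default_rng(1)
s = rng.uniform(size=5000)
y = (rng.uniform(size=5000) < s).astype(float)
calibrated_ece = expected_calibration_error(s, y)
```

Computing this per demographic group is a common first step in auditing whether a risk score provides similar value to different groups, which is the desideratum the paper formalizes.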
(arXiv, 2023-02-17)
- Boosting the interpretability of clinical risk scores with intervention predictions [59.22442473992704]
We propose a joint model of intervention policy and adverse event risk as a means to explicitly communicate the model's assumptions about future interventions.
We show how combining typical risk scores, such as the likelihood of mortality, with future intervention probability scores leads to more interpretable clinical predictions.
(arXiv, 2022-07-06)
- Improving Fairness in Criminal Justice Algorithmic Risk Assessments Using Conformal Prediction Sets [0.0]
We adopt a framework from conformal prediction sets to remove unfairness from risk algorithms.
From a sample of 300,000 offenders at their arraignments, we construct a confusion table and its derived measures of fairness.
We see our work as a demonstration of concept for application in a wide variety of criminal justice decisions.
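The conformal-prediction-set idea can be illustrated with a minimal split-conformal sketch on synthetic data; the model, 90% coverage level, and data below are assumptions for illustration, not the paper's construction:

```python
# Split-conformal prediction sets for a binary risk classifier.
# Synthetic data, not the paper's sample of 300,000 arraignments.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)
X_tr, y_tr = X[:1000], y[:1000]        # model-fitting split
X_cal, y_cal = X[1000:1500], y[1000:1500]  # calibration split
X_new, y_new = X[1500:], y[1500:]      # new cases

clf = LogisticRegression().fit(X_tr, y_tr)

# Nonconformity score: 1 - model probability of the true class.
p_cal = clf.predict_proba(X_cal)
cal_scores = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]

# Threshold for 90% marginal coverage (finite-sample corrected quantile).
alpha = 0.1
n = len(cal_scores)
qhat = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Prediction set: every class whose nonconformity is below the threshold.
p_new = clf.predict_proba(X_new)
pred_sets = [{c for c in (0, 1) if 1.0 - p[c] <= qhat} for p in p_new]
```

Because the set can contain both classes (or neither), it makes the classifier's uncertainty explicit, which is the property the paper exploits to mitigate unfairness.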
(arXiv, 2020-08-26)
- Compounding Injustice: History and Prediction in Carceral Decision-Making [0.0]
This thesis explores how algorithmic decision-making in criminal policy can exhibit feedback effects.
We find evidence of a criminogenic effect of incarceration, even controlling for existing determinants of 'criminal risk'.
We explore the theoretical implications of compounding effects in repeated carceral decisions.
(arXiv, 2020-05-18)
- A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles, Mitigation Strategies, and Inherent Limits [0.0]
We perform a risk assessment of the Public Safety Assessment (PSA), software used in San Francisco and other jurisdictions to assist judges in deciding whether defendants need to be detained before trial.
We articulate benefits and limitations of the PSA solution, as well as suggest mitigation strategies.
We then draft the Handoff Tree, a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design.
(arXiv, 2020-05-14)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
(arXiv, 2020-03-30)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.