A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles,
Mitigation Strategies, and Inherent Limits
- URL: http://arxiv.org/abs/2005.07299v1
- Date: Thu, 14 May 2020 23:56:57 GMT
- Title: A Risk Assessment of a Pretrial Risk Assessment Tool: Tussles,
Mitigation Strategies, and Inherent Limits
- Authors: Marc Faddoul, Henriette Ruhrmann and Joyce Lee
- Abstract summary: We perform a risk assessment of the Public Safety Assessment (PSA), a software used in San Francisco and other jurisdictions to assist judges in deciding whether defendants need to be detained before their trial.
We articulate benefits and limitations of the PSA solution, as well as suggest mitigation strategies.
We then draft the Handoff Tree, a novel algorithmic approach to pretrial justice that accommodates some of the inherent limitations of risk assessment tools by design.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We perform a risk assessment of the Public Safety Assessment (PSA), a
software used in San Francisco and other jurisdictions to assist judges in
deciding whether defendants need to be detained before their trial. With a
mixed-methods approach including stakeholder interviews and the use of
theoretical frameworks, we lay out the values at play as pretrial justice is
automated. After identifying value implications of delegating decision making
to technology, we articulate benefits and limitations of the PSA solution, as
well as suggest mitigation strategies. We then draft the Handoff Tree, a novel
algorithmic approach to pretrial justice that accommodates some of the inherent
limitations of risk assessment tools by design. The model pairs every
prediction with an associated error rate, and hands off the decision to the
judge if the uncertainty is too high. By explicitly stating error rates, the
Handoff Tree aims both to limit the impact of predictive disparities across
race and gender and to prompt judges to be more critical of retention
recommendations, given the high rate of false positives such recommendations often entail.
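The handoff mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual model: the features, thresholds, and error rates are invented, and the only idea taken from the abstract is that each leaf carries an empirical error rate and the model abstains when that rate is too high.

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Leaf:
    prediction: str     # e.g. "release" or "detain"
    error_rate: float   # empirical error rate observed at this leaf

@dataclass
class Node:
    feature: str
    threshold: float
    left: Union["Node", Leaf]    # taken when feature value <= threshold
    right: Union["Node", Leaf]   # taken when feature value > threshold

def decide(node, case: dict, max_error: float = 0.2) -> Optional[str]:
    """Walk the tree; return a recommendation, or None to hand off to the judge."""
    while isinstance(node, Node):
        node = node.left if case[node.feature] <= node.threshold else node.right
    # Abstain when the leaf's observed error rate exceeds the tolerance.
    return node.prediction if node.error_rate <= max_error else None

# Toy tree splitting on prior failures to appear (illustrative only).
tree = Node("prior_fta", 1,
            Leaf("release", 0.08),   # confident leaf: recommendation is issued
            Leaf("detain", 0.35))    # uncertain leaf: decision is handed off
```

Under this sketch, a case routed to the high-error leaf yields no recommendation at all, forcing a human decision rather than an overconfident one.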
Related papers
- Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
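The definition of true criticality above can be estimated by Monte-Carlo rollouts. The toy environment, policy, and rollout interface below are illustrative assumptions, not that paper's implementation; only the definition (expected return drop from n consecutive random actions) comes from the summary.

```python
import random

class LineEnv:
    """Toy 1-D walk: reward 1.0 for stepping right, 0.0 otherwise."""
    actions = (-1, 1)
    def reset(self):
        return 0
    def step(self, state, action):
        return state + action, (1.0 if action == 1 else 0.0)

def rollout(env, policy, deviate_at=None, n_random=0, horizon=20):
    """Total reward of one episode, acting randomly for n_random steps if asked."""
    state, total = env.reset(), 0.0
    for t in range(horizon):
        if deviate_at is not None and deviate_at <= t < deviate_at + n_random:
            action = random.choice(env.actions)   # forced random deviation
        else:
            action = policy(state)
        state, reward = env.step(state, action)
        total += reward
    return total

def true_criticality(env, policy, deviate_at, n_random, samples=500):
    """Expected return drop caused by n_random random actions at deviate_at."""
    base = sum(rollout(env, policy) for _ in range(samples)) / samples
    dev = sum(rollout(env, policy, deviate_at, n_random)
              for _ in range(samples)) / samples
    return base - dev

random.seed(0)
crit = true_criticality(LineEnv(), lambda s: 1, deviate_at=0, n_random=4)
# Each random step forfeits the +1 reward half the time, so crit is near 2.0.
```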
arXiv Detail & Related papers (2024-09-26T21:00:45Z)
- Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement [49.15348173246146]
We present a principled approach to provide LLM-based evaluation with a rigorous guarantee of human agreement.
We first propose that a reliable evaluation method should not uncritically rely on model preferences for pairwise evaluation.
We then show that under this selective evaluation framework, human agreement can be provably guaranteed.
arXiv Detail & Related papers (2024-07-25T20:04:59Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts [1.252149409594807]
This work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals.
To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information.
arXiv Detail & Related papers (2022-10-25T08:37:25Z)
- Algorithmic Assistance with Recommendation-Dependent Preferences [2.864550757598007]
We consider the effect and design of algorithmic recommendations when they affect choices.
We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation.
arXiv Detail & Related papers (2022-08-16T09:24:47Z)
- Uncertainty-Driven Action Quality Assessment [67.20617610820857]
We propose a novel probabilistic model, named Uncertainty-Driven AQA (UD-AQA), to capture the diversity among multiple judge scores.
We estimate the uncertainty of each prediction and use it to re-weight the AQA regression loss.
Our proposed method achieves competitive results on three benchmarks: the Olympic-event datasets MTL-AQA and FineDiving, and the surgical-skill dataset JIGSAWS.
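Re-weighting a regression loss by estimated uncertainty, as in the blurb above, is commonly done with the standard heteroscedastic form: the squared error is down-weighted by a predicted variance, with a log-variance penalty to prevent the model from claiming infinite uncertainty. This is a generic sketch of that idea, not necessarily the exact UD-AQA objective.

```python
import math

def uncertainty_weighted_loss(preds, targets, log_vars):
    """Mean of exp(-log_var) * (y - pred)^2 + log_var over the batch."""
    total = 0.0
    for pred, y, lv in zip(preds, targets, log_vars):
        total += math.exp(-lv) * (y - pred) ** 2 + lv
    return total / len(preds)

# A confident wrong prediction is penalized more than an uncertain one:
confident = uncertainty_weighted_loss([3.0], [5.0], [0.0])  # exp(0)*4 + 0 = 4.0
uncertain = uncertainty_weighted_loss([3.0], [5.0], [2.0])  # exp(-2)*4 + 2 ≈ 2.54
```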
arXiv Detail & Related papers (2022-07-29T07:21:15Z)
- Flipping the Script on Criminal Justice Risk Assessment: An actuarial model for assessing the risk the federal sentencing system poses to defendants [0.0]
Algorithmic risk assessment instruments are typically used to predict the risk a defendant poses to society.
We develop a risk assessment instrument that "flips the script": it assesses the risk the sentencing system poses to the defendant.
Our instrument achieves comparable predictive accuracy to risk assessment instruments used in pretrial and parole contexts.
arXiv Detail & Related papers (2022-05-26T17:17:13Z)
- Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment [0.8749675983608171]
We develop a statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions.
We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial.
We find that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behavior.
arXiv Detail & Related papers (2020-12-04T20:48:44Z)
- Improving Fairness in Criminal Justice Algorithmic Risk Assessments Using Conformal Prediction Sets [0.0]
We adopt a framework from conformal prediction sets to remove unfairness from risk algorithms.
From a sample of 300,000 offenders at their arraignments, we construct a confusion table and its derived measures of fairness.
We see our work as a demonstration of concept for application in a wide variety of criminal justice decisions.
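The core conformal-prediction machinery the blurb above builds on can be sketched with split conformal prediction: a calibration set turns a model's scores into prediction *sets* with a target coverage of 1 - alpha. The model outputs, labels, and calibration scores below are toy assumptions; the paper's fairness construction on top of conformal sets is not reproduced here.

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Quantile of calibration nonconformity scores giving 1-alpha coverage."""
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conservative finite-sample rank
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, threshold):
    """All labels whose nonconformity score (1 - prob) is within threshold."""
    return {label for label, p in probs.items() if 1 - p <= threshold}

# Calibration: nonconformity = 1 - probability assigned to the true label.
cal_scores = [0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.45, 0.50, 0.60, 0.70]
q = conformal_threshold(cal_scores, alpha=0.2)

# For an uncertain case, the set may contain one label, both, or neither.
pred_set = prediction_set({"low_risk": 0.7, "high_risk": 0.3}, q)
```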
arXiv Detail & Related papers (2020-08-26T16:47:02Z)
- Learning Bounds for Risk-sensitive Learning [86.50262971918276]
In risk-sensitive learning, one aims to find a hypothesis that minimizes a risk-averse (or risk-seeking) measure of loss.
We study the generalization properties of risk-sensitive learning schemes whose optimand is described via optimized certainty equivalents.
arXiv Detail & Related papers (2020-06-15T05:25:02Z)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
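The effect described above can be demonstrated with a toy sensitivity check: corrupt observed labels at group-dependent rates and watch a simple fairness measure (here, the gap in observed positive rates) open up even when the ground truth is identical. The data, noise rates, and metric are illustrative assumptions, not that paper's framework.

```python
import random

random.seed(0)

def add_label_noise(labels, flip_rate):
    """Flip each binary label with probability flip_rate."""
    return [1 - y if random.random() < flip_rate else y for y in labels]

def positive_rate(labels):
    return sum(labels) / len(labels)

# Two groups with identical true positive rates (0.3 each).
group_a = [1] * 300 + [0] * 700
group_b = [1] * 300 + [0] * 700

# Biased noise: group B's labels are corrupted more often than group A's.
noisy_a = add_label_noise(group_a, flip_rate=0.05)
noisy_b = add_label_noise(group_b, flip_rate=0.25)

# Even with identical ground truth, biased noise opens an apparent gap.
gap = abs(positive_rate(noisy_a) - positive_rate(noisy_b))
```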
arXiv Detail & Related papers (2020-03-30T20:47:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.