A Comparative User Study of Human Predictions in Algorithm-Supported
Recidivism Risk Assessment
- URL: http://arxiv.org/abs/2201.11080v2
- Date: Thu, 27 Jan 2022 08:12:58 GMT
- Title: A Comparative User Study of Human Predictions in Algorithm-Supported
Recidivism Risk Assessment
- Authors: Manuel Portela, Carlos Castillo, Songül Tolan, Marzieh
Karimi-Haghighi, Antonio Andrés Pueyo
- Abstract summary: We study the effects of using an algorithm-based risk assessment instrument to support the prediction of risk of criminal recidivism.
The task is to predict whether a person who has been released from prison will commit a new crime, leading to re-incarceration.
- Score: 2.097880645003119
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we study the effects of using an algorithm-based risk assessment instrument to support the prediction of risk of criminal recidivism. The instrument we use in our experiments is a machine learning version of RiskEval (name changed for double-blind review), which is the main risk assessment instrument used by the Justice Department of Country (omitted for double-blind review). The task is to predict whether a person who has been released from prison will commit a new crime, leading to re-incarceration, within the next two years. We measure, among other variables, the accuracy of human predictions with and without algorithmic support. This user study is done with (1) general participants from diverse backgrounds recruited through a crowdsourcing platform, and (2) targeted participants who are students and practitioners of data science, criminology, or social work, as well as professionals who work with RiskEval. Among other findings, we observe that algorithmic support systematically leads to more accurate predictions from all participants, but that statistically significant gains are only seen in the performance of targeted participants with respect to that of crowdsourced participants. We also run focus groups with participants of the targeted study, including people who use RiskEval in a professional capacity, to interpret the quantitative results. Among other comments, professional participants indicate that they would not foresee using a fully automated system in criminal risk assessment, but do consider it valuable for training, standardization, and for fine-tuning or double-checking their predictions on particularly difficult cases.
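The study's central quantitative comparison, prediction accuracy with versus without algorithmic support, comes down to comparing two proportions. As a rough illustration, here is a two-proportion z-test of the kind commonly used for such comparisons; all counts below are hypothetical and are not taken from the paper.

```python
import math

def two_proportion_z_test(correct_a, n_a, correct_b, n_b):
    """z-test for the difference between two accuracies, e.g. predictions
    made with vs. without algorithmic support."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical counts: 310/500 correct with support, 270/500 without.
acc_sup, acc_no, z, p = two_proportion_z_test(310, 500, 270, 500)
print(f"accuracy with support={acc_sup:.2f}, without={acc_no:.2f}, "
      f"z={z:.2f}, p={p:.4f}")
```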
Related papers
- Leveraging Human Feedback to Scale Educational Datasets: Combining
Crowdworkers and Comparative Judgement [0.0]
This paper reports on two experiments investigating the use of non-expert crowdworkers and comparative judgement to evaluate student data.
We found that using comparative judgement substantially improved inter-rater reliability on both tasks.
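Comparative judgement converts pairwise "which is better?" decisions into a quality scale, most commonly through a Bradley-Terry model. Below is a minimal sketch of the standard MM fitting procedure on toy comparison data; the paper's actual pipeline is not reproduced, and the counts are invented.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.
    wins[i, j] = number of times item i was judged better than item j.
    Uses the classic MM (minorization-maximization) update."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        w = wins.sum(axis=1)          # total wins of each item
        denom = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    n_ij = wins[i, j] + wins[j, i]
                    denom[i] += n_ij / (p[i] + p[j])
        p = w / denom
        p /= p.sum()                  # fix the arbitrary scale
    return p

# Toy data: 4 student responses compared pairwise by crowdworkers.
wins = np.array([[0, 3, 4, 5],
                 [1, 0, 3, 4],
                 [0, 1, 0, 3],
                 [0, 0, 1, 0]])
print(bradley_terry(wins))  # higher value = judged better overall
```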
arXiv Detail & Related papers (2023-05-22T10:22:14Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
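The interaction described above, weakening or deleting a suspect causal edge and then simulating a debiased dataset, can be imitated with a toy linear structural equation model. This is only a schematic stand-in for the paper's method; the variables, weights, and generative form below are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy linear SEM: gender -> salary (direct, potentially biased edge)
# and gender -> experience -> salary (indirect path).
gender = rng.integers(0, 2, n)                       # protected attribute
experience = 5 + 2.0 * gender + rng.normal(0, 1, n)

def regenerate(edge_weight):
    """Re-simulate the outcome after the user weakens the biased edge."""
    return 30 + edge_weight * gender + 3.0 * experience + rng.normal(0, 2, n)

for w in (4.0, 2.0, 0.0):                            # original, weakened, deleted
    salary = regenerate(w)
    gap = salary[gender == 1].mean() - salary[gender == 0].mean()
    print(f"edge weight {w:.1f}: mean salary gap = {gap:.2f}")
```

Deleting the direct edge removes only the unfair direct effect; the gap flowing through experience remains, which is why edge-level edits on a causal graph are more surgical than dropping the protected attribute.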
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Individually Fair Learning with One-Sided Feedback [15.713330010191092]
We consider an online learning problem with one-sided feedback, in which the learner is able to observe the true label only for positively predicted instances.
On each round, $k$ instances arrive and receive classification outcomes according to a randomized policy deployed by the learner.
We then construct an efficient reduction from our problem of online learning with one-sided feedback and a panel reporting fairness violations to the contextual semi-bandit problem.
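The censoring at the heart of this setting is easy to see in simulation: the learner observes true labels only for the instances it accepts, so mistakes on rejected instances stay invisible. A minimal sketch under an assumed logistic label model; the paper's panel mechanism and semi-bandit reduction are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def play_round(k, threshold):
    """One round: k instances arrive and are classified; the true label
    is revealed only for positively predicted (accepted) instances."""
    x = rng.normal(0, 1, k)                         # one feature per instance
    y = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))   # true labels (hidden)
    accept = x > threshold                          # learner's current policy
    observed = [(xi, yi) for xi, yi, a in zip(x, y, accept) if a]
    return accept, y, observed

accept, y, observed = play_round(k=10, threshold=0.0)
print(f"accepted {accept.sum()} of 10; observed {len(observed)} labels")
# False negatives (y == 1 among the rejected) produce no feedback at all,
# which is exactly what makes auditing fairness hard in this setting.
print(f"unobserved positives among rejected: {int(y[~accept].sum())}")
```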
arXiv Detail & Related papers (2022-06-09T12:59:03Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
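Ski rental is the canonical example of the robustness-consistency trade-off such algorithms tune: given a possibly wrong prediction of how many days you will ski, a trust parameter lambda interpolates between following the prediction and hedging against it. Below is the standard deterministic scheme of Purohit et al. (2018), shown as orientation; it is not the learned variant this paper develops.

```python
import math

def ski_rental_cost(true_days, buy_price, predicted_days, lam):
    """Deterministic ski rental with a prediction (Purohit et al. 2018).
    lam in (0, 1]: small lam = trust the prediction (consistency 1+lam),
    large lam = hedge against it being wrong (robustness 1+1/lam)."""
    if predicted_days >= buy_price:
        buy_day = math.ceil(lam * buy_price)      # prediction says: buy early
    else:
        buy_day = math.ceil(buy_price / lam)      # prediction says: keep renting
    if true_days < buy_day:
        return true_days                          # rented every day
    return (buy_day - 1) + buy_price              # rented, then bought

opt = lambda d, b: min(d, b)                      # offline optimum
for pred in (5, 40):                              # wrong vs. right prediction
    cost = ski_rental_cost(true_days=40, buy_price=10,
                           predicted_days=pred, lam=0.5)
    print(f"prediction={pred}: cost={cost}, optimal={opt(40, 10)}")
```

With lam=0.5, a correct prediction yields cost 14 against an optimum of 10 (within the 1+lam bound), while a badly wrong prediction still costs at most 29 (within the 1+1/lam bound).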
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- The Impact of Algorithmic Risk Assessments on Human Predictions and its Analysis via Crowdsourcing Studies [79.66833203975729]
We conduct a vignette study in which laypersons are tasked with predicting future re-arrests.
Our key findings are as follows: Participants often predict that an offender will be rearrested even when they deem the likelihood of re-arrest to be well below 50%.
Judicial decisions, unlike participants' predictions, depend in part on factors that are orthogonal to the likelihood of re-arrest.
arXiv Detail & Related papers (2021-09-03T11:09:10Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
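A standard way to realize such a "discrimination module against bias" is adversarial training through a gradient-reversal layer: a second head tries to predict the sensitive attribute from the shared representation, while reversed gradients push the encoder to make that impossible. A generic PyTorch sketch of this pattern; the paper's actual architecture, losses, and data are not reproduced.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the
    backward pass, so the encoder learns to *fool* the bias discriminator."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
task_head = nn.Linear(32, 2)          # main classification (e.g., diagnosis)
bias_head = nn.Linear(32, 2)          # adversary: predicts sensitive attribute
opt = torch.optim.Adam([*encoder.parameters(), *task_head.parameters(),
                        *bias_head.parameters()], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 16)               # toy batch of features
y = torch.randint(0, 2, (64,))        # task labels
a = torch.randint(0, 2, (64,))        # sensitive attribute

z = encoder(x)
loss = (loss_fn(task_head(z), y)
        + loss_fn(bias_head(GradReverse.apply(z, 1.0)), a))
opt.zero_grad(); loss.backward(); opt.step()
```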
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Decision-makers Processing of AI Algorithmic Advice: Automation Bias versus Selective Adherence [0.0]
A key concern is that human overreliance on algorithms introduces new biases in the human-algorithm interaction.
A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
We assess these via two studies simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands.
Our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
arXiv Detail & Related papers (2021-03-03T13:10:50Z)
- Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
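The feedback effect is straightforward to simulate: when a high score triggers a decision that itself worsens the inputs to the next assessment, errors compound across repeat uses in a way a one-shot evaluation never sees. A toy illustration under assumed dynamics, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rounds = 1000, 5
risk = rng.uniform(0, 1, n)                 # initial risk-relevant inputs
flagged_ever = np.zeros(n, dtype=bool)

for t in range(rounds):
    score = risk + rng.normal(0, 0.1, n)    # noisy assessment
    flagged = score > 0.7                   # high-risk decision
    # Path dependence: being flagged (e.g., detained) worsens the inputs
    # (priors, employment, housing) seen by the next assessment.
    risk = np.clip(risk + 0.15 * flagged, 0, 1)
    flagged_ever |= flagged
    print(f"round {t + 1}: {flagged.mean():.1%} flagged "
          f"({flagged_ever.mean():.1%} flagged at least once)")
```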
arXiv Detail & Related papers (2020-11-28T06:40:05Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real-world and synthetic data.
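"Double robustness" refers to estimators like AIPW that combine an outcome model with a propensity model and remain consistent if either one is correctly specified. A minimal sketch of AIPW for the average treatment effect on synthetic data; the paper's representation-learning approach is considerably more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(0, 1, (n, 3))
e = 1 / (1 + np.exp(-x[:, 0]))                    # true propensity
t = rng.binomial(1, e)                            # treatment assignment
y = 2 * t + x @ np.array([1.0, -1.0, 0.5]) + rng.normal(0, 1, n)  # true ATE = 2

# Nuisance models: propensity e(x) and outcome regressions m1(x), m0(x).
e_hat = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
e_hat = np.clip(e_hat, 0.01, 0.99)                # avoid extreme weights
m1 = LinearRegression().fit(x[t == 1], y[t == 1]).predict(x)
m0 = LinearRegression().fit(x[t == 0], y[t == 0]).predict(x)

# AIPW: consistent if either the propensity or the outcome model is right.
ate = np.mean(m1 - m0
              + t * (y - m1) / e_hat
              - (1 - t) * (y - m0) / (1 - e_hat))
print(f"AIPW ATE estimate: {ate:.3f} (true effect: 2)")
```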
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real-world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
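The sensitivity-analysis idea can be illustrated by injecting group-dependent label noise and watching a fairness metric move: if recorded re-arrests under-count true re-offense differently across groups, the measured bias changes even though the model is fixed. A toy illustration under assumed noise rates, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
group = rng.integers(0, 2, n)
y_true = rng.binomial(1, 0.3, n)               # true (unobserved) outcome
score = 0.6 * y_true + rng.normal(0, 0.4, n)   # a fixed risk model's score
pred = score > 0.3

def fpr_gap(y, pred, group):
    """False-positive-rate difference between groups, given labels y."""
    fpr = [pred[(group == g) & (y == 0)].mean() for g in (0, 1)]
    return fpr[1] - fpr[0]

print(f"gap with true labels: {fpr_gap(y_true, pred, group):+.3f}")

# Sensitivity analysis: true positives are recorded as negatives (e.g.,
# re-offense without re-arrest) at a rate that differs across groups.
for miss_rate_g1 in (0.0, 0.2, 0.4):
    flip = (y_true == 1) & (rng.random(n) < np.where(group == 1,
                                                     miss_rate_g1, 0.0))
    y_obs = np.where(flip, 0, y_true)
    print(f"group-1 miss rate {miss_rate_g1:.1f}: "
          f"apparent gap {fpr_gap(y_obs, pred, group):+.3f}")
```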
arXiv Detail & Related papers (2020-03-30T20:47:00Z)