The Impact of Algorithmic Risk Assessments on Human Predictions and its
Analysis via Crowdsourcing Studies
- URL: http://arxiv.org/abs/2109.01443v1
- Date: Fri, 3 Sep 2021 11:09:10 GMT
- Authors: Riccardo Fogliato, Alexandra Chouldechova, Zachary Lipton
- Abstract summary: We conduct a vignette study in which laypersons are tasked with predicting future re-arrests.
Our key findings are as follows: Participants often predict that an offender will be rearrested even when they deem the likelihood of re-arrest to be well below 50%.
Judicial decisions, unlike participants' predictions, depend in part on factors that are orthogonal to the likelihood of re-arrest.
- Score: 79.66833203975729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As algorithmic risk assessment instruments (RAIs) are increasingly adopted to
assist decision makers, their predictive performance and potential to promote
inequity have come under scrutiny. However, while most studies examine these
tools in isolation, researchers have come to recognize that assessing their
impact requires understanding the behavior of their human interactants. In this
paper, building off of several recent crowdsourcing works focused on criminal
justice, we conduct a vignette study in which laypersons are tasked with
predicting future re-arrests. Our key findings are as follows: (1) Participants
often predict that an offender will be rearrested even when they deem the
likelihood of re-arrest to be well below 50%; (2) Participants do not anchor on
the RAI's predictions; (3) The time spent on the survey varies widely across
participants and most cases are assessed in less than 10 seconds; (4) Judicial
decisions, unlike participants' predictions, depend in part on factors that are
orthogonal to the likelihood of re-arrest. These results highlight the
influence of several crucial but often overlooked design decisions and concerns
around generalizability when constructing crowdsourcing studies to analyze the
impacts of RAIs.
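Finding (1) can be illustrated with a minimal analysis sketch. The data below are entirely hypothetical (the paper's actual dataset and variable names differ); the sketch simply computes how often participants predict re-arrest even though they rate its likelihood below 50%.

```python
# Hypothetical vignette-study responses: each record holds a participant's
# stated likelihood of re-arrest (0-100) and their binary prediction.
responses = [
    {"likelihood": 30, "predicts_rearrest": True},
    {"likelihood": 70, "predicts_rearrest": True},
    {"likelihood": 20, "predicts_rearrest": False},
    {"likelihood": 40, "predicts_rearrest": True},
]

# Among cases rated below 50%, what fraction still predicts re-arrest?
below_50 = [r for r in responses if r["likelihood"] < 50]
rate = sum(r["predicts_rearrest"] for r in below_50) / len(below_50)
print(f"{rate:.2f}")
```

In this toy sample, two of the three below-50% cases still carry a positive prediction, mirroring the dissociation between stated likelihood and binary prediction that the study reports.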
Related papers
- Controlling Risk of Retrieval-augmented Generation: A Counterfactual Prompting Framework [77.45983464131977]
We focus on how likely it is that a RAG model's prediction is incorrect, which can result in uncontrollable risks in real-world applications.
Our research identifies two critical latent factors affecting RAG's confidence in its predictions.
We develop a counterfactual prompting framework that induces the models to alter these factors and analyzes the effect on their answers.
arXiv Detail & Related papers (2024-09-24T14:52:14Z)
- (De)Noise: Moderating the Inconsistency Between Human Decision-Makers [15.291993233528526]
We study whether algorithmic decision aids can be used to moderate the degree of inconsistency in human decision-making in the context of real estate appraisal.
We find that both (i) asking respondents to review their estimates in a series of algorithmically chosen pairwise comparisons and (ii) providing respondents with traditional machine advice are effective strategies for influencing human responses.
arXiv Detail & Related papers (2024-07-15T20:24:36Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Ground(less) Truth: A Causal Framework for Proxy Labels in Human-Algorithm Decision-Making [29.071173441651734]
We identify five sources of target variable bias that can impact the validity of proxy labels in human-AI decision-making tasks.
We develop a causal framework to disentangle the relationship between each bias.
We conclude by discussing opportunities to better address target variable bias in future research.
arXiv Detail & Related papers (2023-02-13T16:29:11Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Homophily and Incentive Effects in Use of Algorithms [17.55279695774825]
We present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making.
First, we examine homophily -- do people defer more to models that tend to agree with them?
Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting?
arXiv Detail & Related papers (2022-05-19T17:11:04Z)
- A Comparative User Study of Human Predictions in Algorithm-Supported Recidivism Risk Assessment [2.097880645003119]
We study the effects of using an algorithm-based risk assessment instrument to support the prediction of risk of criminal recidivism.
The task is to predict whether a person who has been released from prison will commit a new crime, leading to re-incarceration.
arXiv Detail & Related papers (2022-01-26T17:40:35Z)
- Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
arXiv Detail & Related papers (2020-11-28T06:40:05Z)
- Counterfactual Predictions under Runtime Confounding [74.90756694584839]
We study the counterfactual prediction task in the setting where all relevant factors are captured in the historical data.
We propose a doubly-robust procedure for learning counterfactual prediction models in this setting.
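The "doubly-robust" idea in this entry can be sketched with a generic AIPW-style estimate of a counterfactual mean on made-up data; the paper's runtime-confounding procedure is more involved, and all names and numbers below are illustrative assumptions.

```python
def doubly_robust_mean(data, outcome_model, propensity_model):
    """AIPW estimate of E[Y(1)]: the outcome-model prediction plus an
    inverse-propensity-weighted residual correction for treated units."""
    total = 0.0
    for x, a, y in data:
        mu = outcome_model(x)      # predicted outcome under treatment
        e = propensity_model(x)    # P(A = 1 | X = x)
        total += mu + (a / e) * (y - mu)
    return total / len(data)

# Toy data: (covariate, treatment indicator, observed outcome).
data = [(0.2, 1, 1.0), (0.8, 0, 0.0), (0.5, 1, 0.5), (0.9, 1, 1.2)]
est = doubly_robust_mean(
    data,
    outcome_model=lambda x: x,       # assumed outcome regression
    propensity_model=lambda x: 0.5,  # assumed treatment probability
)
print(round(est, 3))
```

The estimator is "doubly robust" in the sense that it remains consistent if either the outcome model or the propensity model is correctly specified, though not necessarily both.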
arXiv Detail & Related papers (2020-06-30T15:49:05Z)
- Fairness Evaluation in Presence of Biased Noisy Labels [84.12514975093826]
We propose a sensitivity analysis framework for assessing how assumptions on the noise across groups affect the predictive bias properties of the risk assessment model.
Our experimental results on two real world criminal justice data sets demonstrate how even small biases in the observed labels may call into question the conclusions of an analysis based on the noisy outcome.
arXiv Detail & Related papers (2020-03-30T20:47:00Z)
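The kind of sensitivity analysis the last entry describes can be sketched as follows (illustrative numbers, not the paper's model): perturb the assumed label-noise rate for one group and observe how an apparent disparity in observed outcome rates opens up even when the true rates are identical.

```python
def observed_positive_rate(true_rate, flip_0_to_1, flip_1_to_0):
    """Rate of observed positive labels when true positives flip to
    negative with prob flip_1_to_0 and true negatives flip with flip_0_to_1."""
    return true_rate * (1 - flip_1_to_0) + (1 - true_rate) * flip_0_to_1

# Two groups with identical true re-arrest rates...
true_rate_a = true_rate_b = 0.30
# ...but group-dependent label noise (e.g., differential under-reporting).
obs_a = observed_positive_rate(true_rate_a, flip_0_to_1=0.0, flip_1_to_0=0.0)
obs_b = observed_positive_rate(true_rate_b, flip_0_to_1=0.0, flip_1_to_0=0.1)

# A small, one-sided noise rate already produces an apparent disparity.
print(round(obs_a - obs_b, 3))
```

Here a 10% one-sided flip rate in one group's labels manufactures a three-percentage-point gap in observed rates, echoing the entry's point that even small label biases can call a fairness analysis into question.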
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.