Decision-makers Processing of AI Algorithmic Advice: Automation Bias
versus Selective Adherence
- URL: http://arxiv.org/abs/2103.02381v1
- Date: Wed, 3 Mar 2021 13:10:50 GMT
- Title: Decision-makers Processing of AI Algorithmic Advice: Automation Bias
versus Selective Adherence
- Authors: Saar Alon-Barkat and Madalina Busuioc
- Abstract summary: A key concern is that human overreliance on algorithms introduces new biases into the human-algorithm interaction.
A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes.
We assess these via two studies simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands.
Our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Artificial intelligence algorithms are increasingly adopted as decisional
aides by public organisations, with the promise of overcoming biases of human
decision-makers. At the same time, the use of algorithms may introduce new
biases in the human-algorithm interaction. A key concern emerging from
psychology studies regards human overreliance on algorithmic advice even in the
face of warning signals and contradictory information from other sources
(automation bias). A second concern regards decision-makers' inclination to
selectively adopt algorithmic advice when it matches their pre-existing beliefs
and stereotypes (selective adherence). To date, we lack rigorous empirical
evidence about the prevalence of these biases in a public sector context. We
assess these via two pre-registered experimental studies (N=1,509), simulating
the use of algorithmic advice in decisions pertaining to the employment of
school teachers in the Netherlands. In study 1, we test automation bias by
exploring participants' adherence to a prediction of teachers' performance, which
contradicts additional evidence, while comparing between two types of
predictions: algorithmic vs. human-expert. We do not find evidence for
automation bias. In study 2, we replicate these findings, and we also test
selective adherence by manipulating the teacher's ethnic background. We find a
propensity for adherence when the advice predicts low performance for a teacher
of a negatively stereotyped ethnic minority, with no significant differences
between algorithmic and human advice. Overall, our findings of selective,
biased adherence belie the promise of neutrality that has propelled algorithm
use in the public sector.
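To make the experimental design concrete, below is a minimal, hypothetical analysis sketch. It is not the authors' code or data: it simulates adherence decisions under the two manipulated factors described in the abstract (advice source and the teacher's ethnic background) and fits a logistic regression of adherence on both factors and their interaction. All variable names, probabilities, and effect sizes other than the reported sample size are invented for illustration.

```python
# Hypothetical sketch only -- not the authors' analysis code or data.
# Simulates adherence decisions under the two manipulated factors (advice source,
# teacher's ethnic background) and fits a logistic regression of adherence on
# both factors and their interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1509  # matches the reported total N; everything else below is invented

advice_source = rng.choice(["algorithm", "human_expert"], size=n)
teacher_group = rng.choice(["majority", "minority"], size=n)

# Invented probabilities: higher adherence when the negative advice concerns a
# minority teacher, with no effect of advice source -- the pattern the paper reports.
p_adhere = 0.40 + 0.15 * (teacher_group == "minority")
adhered = rng.binomial(1, p_adhere)

df = pd.DataFrame({
    "adhered": adhered,              # 1 = participant followed the advice
    "advice_source": advice_source,  # algorithmic vs. human-expert advice
    "teacher_group": teacher_group,  # manipulated ethnic background
})

# Main effects plus interaction: selective adherence would show up as a
# teacher_group effect, automation bias as an advice_source effect.
model = smf.logit("adhered ~ C(advice_source) * C(teacher_group)", data=df).fit()
print(model.summary())
```

The logistic specification is only one plausible way to frame the comparison; the paper pre-registered its own hypotheses and analyses, which may differ.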
Related papers
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems--human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Homophily and Incentive Effects in Use of Algorithms [17.55279695774825]
We present a crowdsourcing vignette study designed to assess the impacts of two plausible factors on AI-informed decision-making.
First, we examine homophily -- do people defer more to models that tend to agree with them?
Second, we consider incentives -- how do people incorporate a (known) cost structure in the hybrid decision-making setting?
arXiv Detail & Related papers (2022-05-19T17:11:04Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics [6.946103498518291]
We evaluate 8.2 million algorithmic predictions of math performance from approximately 400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
arXiv Detail & Related papers (2020-12-04T04:12:33Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating "synthetic transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)