Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine
Similarity on Machine-Assisted Decision-Making
- URL: http://arxiv.org/abs/2209.03821v1
- Date: Thu, 8 Sep 2022 13:50:35 GMT
- Title: Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine
Similarity on Machine-Assisted Decision-Making
- Authors: Nina Grgić-Hlača, Claude Castelluccia, Krishna P. Gummadi
- Abstract summary: We study how the similarity of human and machine errors influences human perceptions of and interactions with algorithmic decision aids.
We find that (i) people perceive more similar decision aids as more useful, accurate, and predictable, and that (ii) people are more likely to take opposing advice from more similar decision aids.
- Score: 11.143223527623821
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning algorithms are increasingly used to assist human
decision-making. When the goal of machine assistance is to improve the accuracy
of human decisions, it might seem appealing to design ML algorithms that
complement human knowledge. While neither the algorithm nor the human is
perfectly accurate, one could expect their complementary expertise to lead to
improved outcomes. In this study, we demonstrate that, in practice, decision
aids that are not complementary but instead make errors similar to human ones
may have benefits of their own.
In a series of human-subject experiments with a total of 901 participants, we
study how the similarity of human and machine errors influences human
perceptions of and interactions with algorithmic decision aids. We find that
(i) people perceive more similar decision aids as more useful, accurate, and
predictable, and that (ii) people are more likely to take opposing advice from
more similar decision aids, while (iii) decision aids that are less similar to
humans have more opportunities to provide opposing advice, resulting in a
higher influence on people's decisions overall.
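The interplay of findings (ii) and (iii) can be made concrete with a small simulation; this is a minimal sketch under assumed error and follow rates, not the paper's experimental setup:

```python
# Sketch: both human and aid err on 30% of cases; "similarity" is the
# fraction of the aid's errors that fall on cases the human also gets
# wrong (shared errors -> shared wrong answer). All rates are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, err_rate = 100_000, 0.30
truth = rng.integers(0, 2, n)

human = truth.copy()
human_err = rng.choice(n, int(err_rate * n), replace=False)
human[human_err] = 1 - human[human_err]      # human errs on these cases

def make_aid(similarity):
    """Aid with the same error rate whose errors overlap the human's
    with probability `similarity`."""
    n_err = int(err_rate * n)
    n_shared = int(similarity * n_err)
    human_ok = np.setdiff1d(np.arange(n), human_err)
    idx = np.concatenate([
        rng.choice(human_err, n_shared, replace=False),
        rng.choice(human_ok, n_err - n_shared, replace=False),
    ])
    aid = truth.copy()
    aid[idx] = 1 - aid[idx]
    return aid

# Hypothetical follow rates in the direction of finding (ii): opposing
# advice from the more similar aid is taken more often.
for similarity, p_follow in [(0.9, 0.5), (0.1, 0.3)]:
    aid = make_aid(similarity)
    opposing = np.mean(aid != human)   # opportunities for opposing advice
    print(f"similarity={similarity}: opposing advice on {opposing:.2f} of "
          f"cases; overall influence ~ {opposing * p_follow:.3f}")
```

Under these assumed numbers the dissimilar aid disagrees on roughly 54% of cases versus 6% for the similar one, so even with a lower follow rate it changes more decisions overall, matching the direction of finding (iii).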
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary [19.884253335528317]
Recent advances in AI models have increased the integration of AI-based decision aids into the human decision making process.
To fully unlock the potential of AI-assisted decision making, researchers have computationally modeled how humans incorporate AI recommendations into their final decisions.
Providing AI explanations to human decision makers to help them rely on AI recommendations more appropriately has become a common practice.
arXiv Detail & Related papers (2024-11-02T18:33:28Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Does AI help humans make better decisions? A statistical evaluation framework for experimental and observational studies [0.43981305860983716]
We show how to compare the performance of three alternative decision-making systems--human-alone, human-with-AI, and AI-alone.
We find that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail.
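As a minimal sketch of such a three-way comparison (all numbers below are placeholders, not the paper's framework or estimates): score each system's decisions against the later-observed outcomes on the same cases.

```python
# Hypothetical evaluation: every case has an observed binary outcome and a
# decision from each of the three systems; accuracy is agreement with the
# outcome. The accuracy levels are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
outcomes = rng.integers(0, 2, n)

def decisions_with_accuracy(acc):
    """Simulate a system that matches the outcome with probability `acc`."""
    return np.where(rng.random(n) < acc, outcomes, 1 - outcomes)

systems = {
    "human-alone":   decisions_with_accuracy(0.66),
    "human-with-AI": decisions_with_accuracy(0.67),
    "AI-alone":      decisions_with_accuracy(0.70),
}
for name, d in systems.items():
    print(f"{name}: accuracy = {np.mean(d == outcomes):.3f}")
```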
arXiv Detail & Related papers (2024-03-18T01:04:52Z)
- Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making [24.258056813524167]
We propose a computational framework that can provide an interpretable characterization of the influence of different forms of AI assistance on decision makers.
By conceptualizing AI assistance as a "nudge" in human decision-making processes, our approach centers on modelling how different forms of AI assistance modify humans' strategy for weighing different pieces of information when making their decisions.
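A toy rendering of that idea, with cue values and weights assumed for illustration rather than taken from the paper's model: the decision maker takes a weighted vote over case cues, and each form of assistance shifts the weights.

```python
import numpy as np

# Signed evidence each cue provides for the positive decision (assumed).
cues = np.array([0.8, -0.9, -0.2])

# Hypothetical weighing strategies: each form of AI assistance nudges how
# much attention the decision maker pays to each cue.
strategies = {
    "unassisted":              np.array([0.5, 0.3, 0.2]),
    "AI recommendation shown": np.array([0.2, 0.6, 0.2]),  # pulled to cue 2
    "AI explanation shown":    np.array([0.4, 0.4, 0.2]),
}
for form, w in strategies.items():
    evidence = w @ cues
    print(f"{form}: weighted evidence {evidence:+.2f} "
          f"-> decision {int(evidence > 0)}")
```

Here the nudged weightings flip the decision from positive to negative, which is the kind of strategy shift the framework sets out to predict.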
arXiv Detail & Related papers (2024-01-11T11:22:36Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
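A minimal sketch of the selection step, with the experts and disagreement probabilities invented for illustration: route the case to the expert most likely to disagree with the algorithm.

```python
# Predicted probability that each expert disagrees with the algorithmic
# assessment on this case (values are hypothetical).
p_disagree = {"expert_a": 0.12, "expert_b": 0.55, "expert_c": 0.31}

second_opinion = max(p_disagree, key=p_disagree.get)
print(f"request a second opinion from {second_opinion} "
      f"(predicted disagreement {p_disagree[second_opinion]:.0%})")
```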
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
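One common way to instantiate complementarity is a deferral policy; the sketch below uses assumed numbers and a simple confidence rule, not the paper's learned objective: the model hands a case to the human whenever its own confidence falls below the human's expected accuracy, so the quantity being optimized is team accuracy rather than model accuracy alone.

```python
import numpy as np

def team_predict(model_probs, human_labels, human_accuracy=0.85):
    """Defer to the human on cases where the model's confidence is below
    the human's assumed accuracy (a simple confidence-based policy)."""
    confidence = model_probs.max(axis=1)
    model_labels = model_probs.argmax(axis=1)
    return np.where(confidence >= human_accuracy, model_labels, human_labels)

# Toy usage: two cases; the model is confident about the first one only.
probs = np.array([[0.95, 0.05], [0.60, 0.40]])
print(team_predict(probs, human_labels=np.array([1, 1])))  # -> [0 1]
```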
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We show that humans are less likely to adhere to the machine's recommendation when the score displayed is an incorrect estimate of risk.
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
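As a minimal sketch of confidence-based reliance (threshold and behavior assumed, not the paper's design): follow the AI only when its stated confidence clears a threshold, and keep the human's own judgment otherwise. The paper's point is that this kind of calibration alone does not guarantee a better joint decision.

```python
def final_decision(human_label, ai_label, ai_confidence, threshold=0.75):
    """Rely on the AI only when its stated confidence is high enough."""
    return ai_label if ai_confidence >= threshold else human_label

print(final_decision(human_label=1, ai_label=0, ai_confidence=0.92))  # -> 0
print(final_decision(human_label=1, ai_label=0, ai_confidence=0.60))  # -> 1
```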
arXiv Detail & Related papers (2020-01-07T15:33:48Z)