Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted
Decision-making
- URL: http://arxiv.org/abs/2010.07938v2
- Date: Mon, 4 Apr 2022 22:42:04 GMT
- Title: Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted
Decision-making
- Authors: Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit
Dhurandhar, Richard Tomsett
- Abstract summary: We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
- Score: 46.625616262738404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several strands of research have aimed to bridge the gap between artificial
intelligence (AI) and human decision-makers in AI-assisted decision-making,
where humans are the consumers of AI model predictions and the ultimate
decision-makers in high-stakes applications. However, people's perception and
understanding are often distorted by their cognitive biases, such as
confirmation bias, anchoring bias, and availability bias, to name a few. In this
work, we use knowledge from the field of cognitive science to account for
cognitive biases in the human-AI collaborative decision-making setting, and
mitigate their negative effects on collaborative performance. To this end, we
mathematically model cognitive biases and provide a general framework through
which researchers and practitioners can understand the interplay between
cognitive biases and human-AI accuracy. We then focus specifically on anchoring
bias, a bias commonly encountered in human-AI collaboration. We implement a
time-based de-anchoring strategy and conduct our first user experiment that
validates its effectiveness in human-AI collaborative decision-making. With
this result, we design a time allocation strategy for a resource-constrained
setting that achieves optimal human-AI collaboration under some assumptions.
We then conduct a second user experiment, which shows that our time allocation
strategy with explanation can effectively de-anchor the human and improve
collaborative performance when the AI model has low confidence and is
incorrect.
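As a rough illustration of the modeling idea described in the abstract, the sketch below assumes a simple exponential de-anchoring curve (the probability that the human adjusts away from the AI's anchor grows with deliberation time) and a greedy heuristic that spends a fixed time budget where the marginal gain in expected accuracy is largest. The function names (`p_adjust`, `allocate_time`), the de-anchoring rate, and the assumed human-alone accuracy are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def p_adjust(t, rate=0.5):
    """Assumed probability of de-anchoring after t units of deliberation time."""
    return 1.0 - np.exp(-rate * t)

def expected_accuracy(t, ai_correct_prob, human_alone_acc):
    """Expected accuracy on one instance: with probability p_adjust the human
    relies on their own judgment; otherwise they stay anchored to the AI."""
    p = p_adjust(t)
    return p * human_alone_acc + (1.0 - p) * ai_correct_prob

def allocate_time(ai_conf, budget, step=0.1, human_acc=0.7):
    """Greedily allocate a fixed time budget across instances: repeatedly give
    the next time increment to the instance with the largest marginal gain in
    expected accuracy (hypothetical heuristic, not the paper's strategy)."""
    n = len(ai_conf)
    t = np.zeros(n)
    remaining = budget
    while remaining > step / 2:
        gains = [expected_accuracy(t[i] + step, ai_conf[i], human_acc)
                 - expected_accuracy(t[i], ai_conf[i], human_acc)
                 for i in range(n)]
        i_best = int(np.argmax(gains))
        t[i_best] += step
        remaining -= step
    return t

# Example: instances where the AI's confidence is below the assumed
# human-alone accuracy receive the deliberation time; high-confidence
# AI predictions receive none.
ai_conf = np.array([0.95, 0.60, 0.55, 0.90])
print(allocate_time(ai_conf, budget=8.0))
```

Because the de-anchoring curve is concave in time, the greedy step-by-step allocation is a natural fit here: it concentrates time on low-confidence AI predictions, mirroring the abstract's finding that de-anchoring helps most when the AI has low confidence and is incorrect.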
Related papers
- Aligning Generalisation Between Humans and Machines [74.120848518198]
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
Responsible use of AI increasingly highlights the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Unexploited Information Value in Human-AI Collaboration [23.353778024330165]
How to improve the performance of a human-AI team is often unclear without knowing what information and strategies each agent employs.
We propose a model based on statistical decision theory to analyze human-AI collaboration.
arXiv Detail & Related papers (2024-11-03T01:34:45Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Towards Effective Human-AI Decision-Making: The Role of Human Learning in Appropriate Reliance on AI Advice [3.595471754135419]
We show the relationship between learning and appropriate reliance in an experiment with 100 participants.
This work provides fundamental concepts for analyzing reliance and derives implications for the effective design of human-AI decision-making.
arXiv Detail & Related papers (2023-10-03T14:51:53Z)
- The Impact of Imperfect XAI on Human-AI Decision-Making [8.305869611846775]
We evaluate how incorrect explanations influence humans' decision-making behavior in a bird species identification task.
Our findings reveal the influence of imperfect XAI and humans' level of expertise on their reliance on AI and human-AI team performance.
arXiv Detail & Related papers (2023-07-25T15:19:36Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- Learning Complementary Policies for Human-AI Teams [22.13683008398939]
We propose a novel human-AI collaboration framework for selecting an advantageous course of action.
Our solution aims to exploit human-AI complementarity to maximize decision rewards.
arXiv Detail & Related papers (2023-02-06T17:22:18Z)
- Advancing Human-AI Complementarity: The Impact of User Expertise and Algorithmic Tuning on Joint Decision Making [10.890854857970488]
Many factors can impact the success of human-AI teams, including a user's domain expertise, mental models of an AI system, trust in recommendations, and more.
Our study examined user performance in a non-trivial blood vessel labeling task where participants indicated whether a given blood vessel was flowing or stalled.
Our results show that while recommendations from an AI-Assistant can aid user decision making, factors such as users' baseline performance relative to the AI and complementary tuning of AI error types significantly impact overall team performance.
arXiv Detail & Related papers (2022-08-16T21:39:58Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision-making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)