Should I Follow AI-based Advice? Measuring Appropriate Reliance in
Human-AI Decision-Making
- URL: http://arxiv.org/abs/2204.06916v1
- Date: Thu, 14 Apr 2022 12:18:51 GMT
- Title: Should I Follow AI-based Advice? Measuring Appropriate Reliance in
Human-AI Decision-Making
- Authors: Max Schemmer, Patrick Hemmer, Niklas Kühl, Carina Benz, Gerhard Satzger
- Abstract summary: We aim to enable humans not to rely on AI advice blindly but rather to distinguish its quality and act upon it to make better decisions.
Current research lacks a metric for appropriate reliance (AR) on AI advice on a case-by-case basis.
We propose to view AR as a two-dimensional construct that measures the ability to discriminate advice quality and behave accordingly.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Many important decisions in daily life are made with the help of advisors,
e.g., decisions about medical treatments or financial investments. Whereas in
the past, advice has often been received from human experts, friends, or
family, advisors based on artificial intelligence (AI) have become
increasingly common. Typically, the advice generated by AI is judged by a
human and either deemed reliable or rejected. However, recent work has shown
that AI advice is not always beneficial, as humans have been shown to be
unable to ignore incorrect AI advice, which amounts to over-reliance on AI.
The goal should therefore be to enable humans not to rely on AI advice
blindly but rather to distinguish its quality and act upon it to make better
decisions. Specifically, that means that humans should rely on the AI in the
presence of correct advice and self-rely when confronted with incorrect advice,
i.e., establish appropriate reliance (AR) on AI advice on a case-by-case basis.
Current research lacks a metric for AR. This prevents a rigorous evaluation of
factors impacting AR and hinders further development of human-AI
decision-making. Therefore, based on the literature, we derive a measurement
concept of AR. We propose to view AR as a two-dimensional construct that
measures the ability to discriminate advice quality and behave accordingly. In
this article, we derive the measurement concept, illustrate its application and
outline potential future research.
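To make the measurement concept concrete, here is a minimal Python sketch of the two dimensions under one common study design: the human gives an initial decision, sees the AI's advice, and then gives a final decision. The schema (Case), the function name, and the exact operationalization below are illustrative assumptions for this setting, not the paper's definitive formalization.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Case:
    # Hypothetical record of one decision case in a two-stage study design.
    human_initial_correct: bool  # was the human's initial decision correct?
    ai_correct: bool             # was the AI's advice correct?
    human_final_correct: bool    # was the final decision (after advice) correct?

def appropriateness_of_reliance(cases: List[Case]) -> Tuple[float, float]:
    """Return the two reliance dimensions as (rely_on_ai, rely_on_self).

    rely_on_ai:   among cases where the human was initially wrong and the AI
                  was right, the share in which the human switched to the
                  correct answer (relying on the AI when it helps).
    rely_on_self: among cases where the human was initially right and the AI
                  was wrong, the share in which the human kept the correct
                  answer (self-relying when the advice is bad).
    """
    switch_chances = [c for c in cases if not c.human_initial_correct and c.ai_correct]
    keep_chances = [c for c in cases if c.human_initial_correct and not c.ai_correct]
    rely_on_ai = (sum(c.human_final_correct for c in switch_chances) / len(switch_chances)
                  if switch_chances else float("nan"))
    rely_on_self = (sum(c.human_final_correct for c in keep_chances) / len(keep_chances)
                    if keep_chances else float("nan"))
    return rely_on_ai, rely_on_self

# Usage: perfect discrimination of advice quality yields (1.0, 1.0);
# blind over-reliance drives the second dimension toward 0.0.
cases = [
    Case(False, True, True),   # wrong -> followed correct advice (good)
    Case(True, False, True),   # right -> kept own correct answer (good)
    Case(True, False, False),  # right -> followed incorrect advice (over-reliance)
]
print(appropriateness_of_reliance(cases))  # (1.0, 0.5)
```

Reporting the two numbers separately, rather than a single joint accuracy, is what lets such a metric distinguish under-reliance (a low first dimension) from over-reliance (a low second dimension) on a case-by-case basis.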
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - A Survey of AI Reliance [1.6124402884077915]
Current shortcomings in the literature include unclear influences on AI reliance, a lack of external validity, conflicting approaches to measuring reliance, and disregard for changes in reliance over time.
In conclusion, we present a morphological box that serves as a guide for research on AI reliance.
arXiv Detail & Related papers (2024-07-22T09:34:58Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Beyond Recommender: An Exploratory Study of the Effects of Different AI
Roles in AI-Assisted Decision Making [48.179458030691286]
We examine three AI roles: Recommender, Analyzer, and Devil's Advocate.
Our results show each role's distinct strengths and limitations in task performance, reliance appropriateness, and user experience.
These insights offer valuable implications for designing AI assistants with adaptive functional roles according to different situations.
arXiv Detail & Related papers (2024-03-04T07:32:28Z) - Learning to Make Adherence-Aware Advice [8.419688203654948]
This paper presents a sequential decision-making model that takes into account the human's adherence level.
We provide learning algorithms that learn the optimal advice policy and make advice only at critical time stamps.
arXiv Detail & Related papers (2023-10-01T23:15:55Z) - Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance.
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
arXiv Detail & Related papers (2022-02-12T04:51:00Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI
Interactions [8.785345834486057]
We characterize how humans use AI suggestions relative to equivalent suggestions from a group of peer humans.
We find that participants' beliefs about human versus AI performance on a given task affect whether or not they heed the advice.
arXiv Detail & Related papers (2021-07-14T21:33:14Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - To Trust or to Think: Cognitive Forcing Functions Can Reduce
Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)