Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy
- URL: http://arxiv.org/abs/2402.07632v4
- Date: Fri, 26 Sep 2025 07:48:29 GMT
- Title: Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy
- Authors: Jingshu Li, Yitian Yang, Renwen Zhang, Q. Vera Liao, Tianqi Song, Zhengtao Xu, Yi-chieh Lee
- Abstract summary: Miscalibrated AI confidence impairs users' appropriate reliance and reduces AI-assisted decision-making efficacy. We find that communicating AI confidence calibration levels helps users detect AI miscalibration. However, since such communication decreases users' trust in uncalibrated AI, leading to high under-reliance, it does not improve decision efficacy.
- Score: 38.39755953750018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing well-calibrated AI confidence can help promote users' appropriate trust in and reliance on AI, both of which are essential for AI-assisted decision-making. However, calibrating AI confidence -- providing a confidence score that accurately reflects the true likelihood of the AI being correct -- is known to be challenging. To understand the effects of AI confidence miscalibration, we conducted our first experiment. The results indicate that miscalibrated AI confidence impairs users' appropriate reliance, reduces AI-assisted decision-making efficacy, and is difficult for users to detect. In our second experiment, we examined whether communicating AI confidence calibration levels could mitigate these issues. We find that it helps users detect AI miscalibration. Nevertheless, because such communication decreases users' trust in uncalibrated AI, leading to high under-reliance, it does not improve decision efficacy. We discuss design implications based on these findings and future directions for addressing the risks and ethical concerns associated with AI miscalibration.
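For concreteness, the notion of calibration used in this paper (a confidence score matching the true likelihood of being correct) is commonly quantified by the expected calibration error (ECE). The sketch below is a minimal illustration of that metric, not code from the study; the binning scheme, variable names, and the simulated overconfident model are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the size-weighted
    average gap between mean confidence and empirical accuracy per bin.
    A perfectly calibrated model (confidence == P(correct)) scores ~0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # right-inclusive bins
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by the bin's share of samples
    return ece

# Simulated overconfident AI: reports ~90% confidence but is right only ~60%.
rng = np.random.default_rng(0)
conf = rng.uniform(0.85, 0.95, size=1000)
hits = rng.random(1000) < 0.60
print(f"ECE of the overconfident model: {expected_calibration_error(conf, hits):.3f}")
```

On this simulated data the score lands near the overconfidence margin of roughly 0.9 - 0.6 = 0.3; a well-calibrated model would score near zero.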
Related papers
- Beyond Awareness: Investigating How AI and Psychological Factors Shape Human Self-Confidence Calibration [0.2578242050187029]
We show the importance of human self-confidence calibration and psychological traits when designing AI-assisted decision systems.
We propose design recommendations to address the challenge of calibrating self-confidence and supporting tailored, user-centric AI.
arXiv Detail & Related papers (2025-10-04T08:42:57Z)
- The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making [6.852960508141108]
This paper presents an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level.
We show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI.
The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence.
arXiv Detail & Related papers (2025-02-20T06:55:41Z)
- Human-Alignment Influences the Utility of AI-assisted Decision Making [16.732483972136418]
We investigate to what extent the degree of alignment actually influences the utility of AI-assisted decision making.
Our results show a positive association between the degree of alignment and the utility of AI-assisted decision making.
arXiv Detail & Related papers (2025-01-23T19:01:47Z)
- As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making [37.192236418976265]
In human-AI decision-making, users' self-confidence aligns with AI confidence, and this alignment can persist even after the AI is no longer involved.
Real-time correctness feedback on decisions reduced the degree of alignment.
arXiv Detail & Related papers (2025-01-22T13:25:14Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (a minimal ERM sketch appears after this list).
We hope to provide actionable guidance for building AI systems that meet emerging trustworthiness standards.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias in which less competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
- Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making [36.50604819969994]
In AI-assisted decision-making, it is critical for human decision-makers to know when to trust AI and when to trust themselves.
We modeled humans' correctness likelihood (CL) by approximating their decision-making models and computing their potential performance on similar instances.
We proposed three CL exploitation strategies to calibrate users' trust, explicitly or implicitly, in the AI-assisted decision-making process.
arXiv Detail & Related papers (2023-01-14T02:51:01Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
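The "Engineering Trustworthy AI" entry above frames trustworthiness requirements as design choices among the components of empirical risk minimization (ERM): the model class, the loss, the regularizer, and the data. The sketch below is a minimal, illustrative ERM loop (logistic regression with an L2 penalty); the names and choices are assumptions for illustration, not code from that paper. The slot held by the L2 term is where, for example, a robustness or fairness penalty could enter.

```python
import numpy as np

def erm_fit(X, y, l2=1.0, lr=0.1, steps=500):
    """ERM for logistic regression: minimize average log-loss plus an
    L2 penalty by gradient descent. The penalty is one of the 'design
    choice' slots where trustworthiness requirements can be encoded."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(y = 1)
        grad = X.T @ (p - y) / len(y)      # gradient of the empirical risk
        grad += l2 * w                     # gradient of (l2/2) * ||w||^2
        w -= lr * grad
    return w

# Toy data: the label depends on the first feature plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)
print("learned weights:", np.round(erm_fit(X, y), 2))
```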