The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making
- URL: http://arxiv.org/abs/2502.14311v1
- Date: Thu, 20 Feb 2025 06:55:41 GMT
- Title: The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making
- Authors: Takehiro Takayanagi, Ryuji Hashimoto, Chung-Chi Chen, Kiyoshi Izumi
- Abstract summary: This paper presents an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level.
We show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI.
The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence.
- Abstract: In AI-assisted decision-making, it is crucial but challenging for humans to appropriately rely on AI, especially in high-stakes domains such as finance and healthcare. This paper addresses this problem from a human-centered perspective by presenting an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level. We first demonstrate the impact of self-confidence shaping by quantifying the upper-bound improvement in human-AI team performance. Our behavioral experiments with 121 participants show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI. We then introduce a self-confidence prediction task to identify when our intervention is needed. Our results show that simple machine-learning models achieve 67% accuracy in predicting self-confidence. We further illustrate the feasibility of such interventions. The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence. Finally, we outline future research directions to support the deployment of self-confidence shaping in a real-world scenario for effective human-AI collaboration.
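The abstract's self-confidence prediction task — simple machine-learning models classifying a participant's self-confidence from behavioral signals — can be illustrated with a minimal sketch. Everything below is hypothetical: the feature names (response time, prior stated confidence, agreement rate with AI advice), the synthetic data, and the plain logistic-regression model are invented for illustration and are not the paper's actual features or models.

```python
# Hypothetical sketch of a self-confidence prediction task: a logistic
# regression over synthetic behavioral features, trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
n = 600

# Synthetic behavioral features (all invented for illustration):
# decision response time, self-reported confidence on earlier trials,
# and agreement rate with AI advice.
response_time = rng.normal(5.0, 1.5, n)
prior_conf = rng.uniform(0.0, 1.0, n)
agree_rate = rng.uniform(0.0, 1.0, n)
X = np.column_stack([np.ones(n), response_time, prior_conf, agree_rate])

# Synthetic binary label "high self-confidence", loosely driven by prior
# confidence and faster responses, plus noise.
score = 2.5 * prior_conf - 0.3 * (response_time - 5.0) + rng.normal(0, 0.6, n)
y = (score > 1.25).astype(float)

# Simple train/test split.
split = int(0.7 * n)
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the average logistic loss.
w = np.zeros(X.shape[1])
for _ in range(2000):
    grad = X_tr.T @ (sigmoid(X_tr @ w) - y_tr) / len(y_tr)
    w -= 0.5 * grad

pred = (sigmoid(X_te @ w) > 0.5).astype(float)
accuracy = float((pred == y_te).mean())
print(f"held-out accuracy: {accuracy:.2f}")
```

On data like this, held-out accuracy well above chance would indicate that the intervention-targeting idea is workable in principle: a cheap classifier flags when a participant's self-confidence is likely miscalibrated, so the shaping intervention can be applied selectively.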
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Human-Alignment Influences the Utility of AI-assisted Decision Making [16.732483972136418]
We investigate to what extent the degree of alignment influences the utility of AI-assisted decision making.
Our results show a positive association between the degree of alignment and the utility of AI-assisted decision making.
arXiv Detail & Related papers (2025-01-23T19:01:47Z)
- As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making [37.192236418976265]
In human-AI decision-making, users' self-confidence aligns with AI confidence and such alignment can persist even after AI ceases to be involved.
The presence of real-time correctness feedback of decisions reduced the degree of alignment.
arXiv Detail & Related papers (2025-01-22T13:25:14Z)
- Overconfident and Unconfident AI Hinder Human-AI Collaboration [5.480154202794587]
This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes.
A lack of trust-calibration support exacerbates this issue by making uncalibrated confidence harder to detect.
Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration.
arXiv Detail & Related papers (2024-02-12T13:16:30Z)
- A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z)
- Knowing About Knowing: An Illusion of Human Competence Can Hinder Appropriate Reliance on AI Systems [13.484359389266864]
This paper addresses whether the Dunning-Kruger Effect (DKE) can hinder appropriate reliance on AI systems.
DKE is a metacognitive bias due to which less-competent individuals overestimate their own skill and performance.
We found that participants who overestimate their performance tend to exhibit under-reliance on AI systems.
arXiv Detail & Related papers (2023-01-25T14:26:10Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's notion of interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- AvE: Assistance via Empowerment [77.08882807208461]
We propose a new paradigm for assistance by instead increasing the human's ability to control their environment.
This task-agnostic objective preserves the person's autonomy and ability to achieve any eventual state.
arXiv Detail & Related papers (2020-06-26T04:40:11Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
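Several of the related papers above turn on confidence calibration: whether an AI's (or a person's) stated confidence matches its actual accuracy. A standard way to quantify miscalibration is expected calibration error (ECE). The sketch below is illustrative only, using synthetic data for a deliberately overconfident model; the binning scheme and the 0.15 overconfidence gap are assumptions made for the example.

```python
# Minimal sketch of expected calibration error (ECE) on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Synthetic predictions: the model's stated confidence is uniform on
# [0.5, 1.0], but its true probability of being correct is 0.15 lower,
# i.e. the model is systematically overconfident.
confidence = rng.uniform(0.5, 1.0, n)
correct = (rng.uniform(0.0, 1.0, n) < confidence - 0.15).astype(float)

def expected_calibration_error(conf, corr, n_bins=10):
    """ECE: bin-weighted average gap between mean confidence and accuracy."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - corr[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return float(ece)

ece = expected_calibration_error(confidence, correct)
print(f"ECE of the overconfident model: {ece:.3f}")
```

Since the synthetic model is overconfident by a constant 0.15, the measured ECE should land near that value; a well-calibrated model would yield an ECE near zero.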
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.