Beyond Awareness: Investigating How AI and Psychological Factors Shape Human Self-Confidence Calibration
- URL: http://arxiv.org/abs/2511.17509v1
- Date: Sat, 04 Oct 2025 08:42:57 GMT
- Title: Beyond Awareness: Investigating How AI and Psychological Factors Shape Human Self-Confidence Calibration
- Authors: Federico Maria Cau, Lucio Davide Spano
- Abstract summary: We show the importance of human self-confidence calibration and psychological traits when designing AI-assisted decision systems. We propose design recommendations to address the challenge of calibrating self-confidence and supporting tailored, user-centric AI.
- Score: 0.2578242050187029
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human-AI collaboration outcomes depend strongly on human self-confidence calibration, which drives reliance on or resistance toward the AI's suggestions. This work presents two studies examining whether calibration of self-confidence before decision tasks, together with low versus high levels of Need for Cognition (NFC) and Actively Open-Minded Thinking (AOT), leads to differences in decision accuracy, self-confidence appropriateness during the tasks, and metacognitive perceptions (global and affective). The first study presents strategies to identify well-calibrated users, also comparing decision accuracy and the appropriateness of self-confidence across NFC and AOT levels. The second study investigates the effects of calibrated self-confidence in AI-assisted decision-making (no AI, two-stage AI, and personalized AI), also considering different NFC and AOT levels. Our results show the importance of human self-confidence calibration and psychological traits when designing AI-assisted decision systems. We further propose design recommendations to address the challenge of calibrating self-confidence and supporting tailored, user-centric AI that accounts for individual traits.
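The "appropriateness of self-confidence" studied above can be illustrated with a minimal sketch: one common way to quantify calibration is the gap between a person's mean stated confidence and their actual accuracy on the same decisions. The function names and the 0.05 tolerance below are illustrative assumptions, not the paper's actual measure.

```python
def calibration_gap(confidences, correct):
    """Mean stated confidence (0..1) minus proportion of correct decisions.
    Positive values indicate over-confidence; negative, under-confidence."""
    assert confidences and len(confidences) == len(correct)
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

def calibration_label(gap, tol=0.05):
    """Classify a participant by their calibration gap (tolerance assumed)."""
    if gap > tol:
        return "over-confident"
    if gap < -tol:
        return "under-confident"
    return "well-calibrated"

# A participant who reports ~90% confidence but answers 60% correctly:
gap = calibration_gap([0.9, 0.95, 0.85, 0.9, 0.9], [1, 1, 0, 0, 1])
```

A gap of +0.30 here would mark the participant as over-confident; study designs like the one above can use such a pre-task measure to identify well-calibrated users.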
Related papers
- Beyond Accuracy: How AI Metacognitive Sensitivity improves AI-assisted Decision Making [3.0493183668102293]
In settings where human decision-making relies on AI input, both the predictive accuracy of the AI system and the reliability of its confidence estimates influence decision quality. We highlight the role of AI metacognitive sensitivity -- its ability to assign confidence scores that accurately distinguish correct from incorrect predictions.
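The metacognitive sensitivity described above can be sketched as a discrimination measure: how well confidence scores separate correct from incorrect predictions. The pairwise AUC below is one illustrative formalization; the paper may use a different measure (e.g. meta-d'), so treat this as an assumption-laden sketch.

```python
def confidence_auc(confidences, correct):
    """Probability that a randomly chosen correct prediction received a
    higher confidence score than a randomly chosen incorrect one
    (ties count as 0.5). Returns None if one class is absent."""
    pos = [c for c, ok in zip(confidences, correct) if ok]
    neg = [c for c, ok in zip(confidences, correct) if not ok]
    if not pos or not neg:
        return None  # undefined without both correct and incorrect cases
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Perfectly sensitive confidence: high when right, low when wrong.
auc = confidence_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])  # -> 1.0
```

An AUC of 0.5 means confidence carries no information about correctness; values near 1.0 indicate high metacognitive sensitivity.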
arXiv Detail & Related papers (2025-07-30T04:05:50Z) - The Impact and Feasibility of Self-Confidence Shaping for AI-Assisted Decision-Making [6.852960508141108]
This paper presents an intervention for self-confidence shaping, designed to calibrate self-confidence at a targeted level. We show that self-confidence shaping can improve human-AI team performance by nearly 50% by mitigating both over- and under-reliance on AI. The observed relationship between sentiment and self-confidence suggests that modifying sentiment could be a viable strategy for shaping self-confidence.
arXiv Detail & Related papers (2025-02-20T06:55:41Z) - Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z) - As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making [37.192236418976265]
In human-AI decision-making, users' self-confidence aligns with AI confidence, and such alignment can persist even after the AI ceases to be involved. The presence of real-time correctness feedback on decisions reduced the degree of alignment.
arXiv Detail & Related papers (2025-01-22T13:25:14Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Understanding the Effects of Miscalibrated AI Confidence on User Trust, Reliance, and Decision Efficacy [38.39755953750018]
Miscalibrated AI confidence impairs users' appropriate reliance and reduces AI-assisted decision-making efficacy. We find that communicating AI confidence calibration levels helps users detect AI miscalibration. However, since such communication decreases users' trust in uncalibrated AI, leading to high under-reliance, it does not improve decision efficacy.
arXiv Detail & Related papers (2024-02-12T13:16:30Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's notion of interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.