Beyond Accuracy: How AI Metacognitive Sensitivity improves AI-assisted Decision Making
- URL: http://arxiv.org/abs/2507.22365v1
- Date: Wed, 30 Jul 2025 04:05:50 GMT
- Title: Beyond Accuracy: How AI Metacognitive Sensitivity improves AI-assisted Decision Making
- Authors: ZhaoBin Li, Mark Steyvers
- Abstract summary: In settings where human decision-making relies on AI input, both the predictive accuracy of the AI system and the reliability of its confidence estimates influence decision quality. We highlight the role of AI metacognitive sensitivity -- its ability to assign confidence scores that accurately distinguish correct from incorrect predictions.
- Score: 3.0493183668102293
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In settings where human decision-making relies on AI input, both the predictive accuracy of the AI system and the reliability of its confidence estimates influence decision quality. We highlight the role of AI metacognitive sensitivity -- its ability to assign confidence scores that accurately distinguish correct from incorrect predictions -- and introduce a theoretical framework for assessing the joint impact of AI's predictive accuracy and metacognitive sensitivity in hybrid decision-making settings. Our analysis identifies conditions under which an AI with lower predictive accuracy but higher metacognitive sensitivity can enhance the overall accuracy of human decision making. Finally, a behavioral experiment confirms that greater AI metacognitive sensitivity improves human decision performance. Together, these findings underscore the importance of evaluating AI assistance not only by accuracy but also by metacognitive sensitivity, and of optimizing both to achieve superior decision outcomes.
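The abstract defines metacognitive sensitivity as the AI's ability to assign confidence scores that discriminate correct from incorrect predictions. One common way to quantify such discrimination (a sketch, not necessarily the measure used in the paper) is the area under the ROC curve when confidence is treated as a classifier of correctness; the function name and example values below are illustrative assumptions:

```python
def metacognitive_auroc(confidences, correct):
    """AUROC of confidence as a discriminator of prediction correctness.

    0.5 means confidence carries no information about correctness;
    1.0 means confidence perfectly separates correct from incorrect.
    """
    pos = [c for c, ok in zip(confidences, correct) if ok]      # confidences on correct predictions
    neg = [c for c, ok in zip(confidences, correct) if not ok]  # confidences on incorrect predictions
    if not pos or not neg:
        raise ValueError("need both correct and incorrect predictions")
    # Normalized Mann-Whitney U statistic: P(conf_correct > conf_incorrect),
    # counting ties as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: a model that is more confident when right than when wrong.
conf = [0.9, 0.8, 0.7, 0.6, 0.55]
corr = [True, True, True, False, False]
print(metacognitive_auroc(conf, corr))  # 1.0: every correct prediction outranks every incorrect one
```

On this measure, a lower-accuracy model can still score higher than a more accurate one, which is the trade-off the paper's framework analyzes.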
Related papers
- When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Exploring the Impact of Explainable AI and Cognitive Capabilities on Users' Decisions [1.1049608786515839]
Personality traits like the Need for Cognition (NFC) can lead to different decision-making outcomes among low and high NFC individuals. We investigated how presenting AI information affects accuracy, reliance on AI, and cognitive load in a loan application scenario. We found no significant differences between low and high NFC groups in accuracy or cognitive load, raising questions about the role of personality traits in AI-assisted decision-making.
arXiv Detail & Related papers (2025-05-02T11:30:53Z) - On Benchmarking Human-Like Intelligence in Machines [77.55118048492021]
We argue that current AI evaluation paradigms are insufficient for assessing human-like cognitive capabilities. We identify a set of key shortcomings: a lack of human-validated labels, inadequate representation of human response variability and uncertainty, and reliance on simplified and ecologically-invalid tasks.
arXiv Detail & Related papers (2025-02-27T20:21:36Z) - Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z) - Human-Alignment Influences the Utility of AI-assisted Decision Making [16.732483972136418]
We investigate to what extent the degree of alignment actually influences the utility of AI-assisted decision making. Our results show a positive association between the degree of alignment and the utility of AI-assisted decision making.
arXiv Detail & Related papers (2025-01-23T19:01:47Z) - As Confidence Aligns: Exploring the Effect of AI Confidence on Human Self-confidence in Human-AI Decision Making [37.192236418976265]
In human-AI decision-making, users' self-confidence aligns with AI confidence, and such alignment can persist even after AI ceases to be involved. The presence of real-time correctness feedback on decisions reduced the degree of alignment.
arXiv Detail & Related papers (2025-01-22T13:25:14Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Overconfident and Unconfident AI Hinder Human-AI Collaboration [5.480154202794587]
This study examines the effects of uncalibrated AI confidence on users' trust in AI, AI advice adoption, and collaboration outcomes.
A lack of trust calibration support exacerbates this issue by making uncalibrated confidence harder to detect.
Our findings highlight the importance of AI confidence calibration for enhancing human-AI collaboration.
arXiv Detail & Related papers (2024-02-12T13:16:30Z) - AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions [6.356355538824237]
We argue that reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making.<n>Our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
arXiv Detail & Related papers (2023-04-18T08:08:05Z) - Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making [46.625616262738404]
We use knowledge from the field of cognitive science to account for cognitive biases in the human-AI collaborative decision-making setting.
We focus specifically on anchoring bias, a bias commonly encountered in human-AI collaboration.
arXiv Detail & Related papers (2020-10-15T22:25:41Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.