Uncertainty Awareness and Trust in Explainable AI - On Trust Calibration using Local and Global Explanations
- URL: http://arxiv.org/abs/2509.08989v1
- Date: Wed, 10 Sep 2025 20:37:39 GMT
- Title: Uncertainty Awareness and Trust in Explainable AI - On Trust Calibration using Local and Global Explanations
- Authors: Carina Newen, Daniel Bodemer, Sonja Glantz, Emmanuel Müller, Magdalena Wischnewski, Lenka Schnaubert
- Abstract summary: We focus on uncertainty explanations and consider global explanations, which are often left out of general guidelines for XAI schemes. We chose an algorithm that covers several concepts simultaneously, such as uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then examined whether an algorithm that aims for an intuitive visual understanding, despite itself being complicated to understand, can yield higher user satisfaction and human interpretability.
- Score: 6.420315506131893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI has become a common term in the literature, scrutinized by computer scientists and statisticians and highlighted by psychological and philosophical researchers. One major effort many researchers tackle is constructing general guidelines for XAI schemes; we derive such guidelines from our study. While some areas of XAI are well studied, we focus on uncertainty explanations and consider global explanations, which are often left out. We chose an algorithm that covers several concepts simultaneously, such as uncertainty, robustness, and global XAI, and tested its ability to calibrate trust. We then examined whether an algorithm that aims for an intuitive visual understanding, despite itself being complicated to understand, can yield higher user satisfaction and human interpretability.
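For intuition about the kind of uncertainty signal such explanations surface, here is a minimal sketch (not the algorithm studied in the paper; the model interface, names, and entropy threshold are illustrative assumptions) that derives a predictive-uncertainty estimate from ensemble disagreement and pairs it with the prediction shown to the user:

```python
import numpy as np

def predict_with_uncertainty(models, x):
    # Each model is assumed to expose a sklearn-style predict_proba;
    # x is a single feature vector. The interface is illustrative.
    probs = np.stack([m.predict_proba(x[None])[0] for m in models])  # (n_models, n_classes)
    mean_probs = probs.mean(axis=0)
    # Predictive entropy: grows when the ensemble is unsure or disagrees.
    entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12))
    return int(mean_probs.argmax()), float(mean_probs.max()), float(entropy)

def trust_message(confidence, entropy, high_entropy=1.0):
    # Surface uncertainty to the user instead of a bare label, so they
    # can calibrate how much to rely on the model.
    if entropy > high_entropy:
        return f"Low-reliability prediction (confidence {confidence:.2f}); please verify."
    return f"Prediction with confidence {confidence:.2f}."
```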
Related papers
- Beyond Explainable AI (XAI): An Overdue Paradigm Shift and Post-XAI Research Directions [95.59915390053588]
This study focuses on Explainable Artificial Intelligence (XAI) approaches for deep neural networks (DNNs) and large language models (LLMs). We discuss critical symptoms that stem from deeper root causes (i.e., two paradoxes, two conceptual confusions, and five false assumptions). To move beyond XAI's limitations, we propose a four-pronged paradigm shift toward reliable and certified AI development.
arXiv Detail & Related papers (2026-02-27T16:58:27Z)
- A Comprehensive Perspective on Explainable AI across the Machine Learning Workflow [1.269939585263915]
Holistic Explainable Artificial Intelligence (HXAI) is a user-centric framework that embeds explanation into every stage of the data-analysis workflow. HXAI unifies six components (data, analysis set-up, learning process, model output, model quality, communication channel) into a single taxonomy. A 112-item question bank covers these needs; our survey of contemporary tools highlights critical coverage gaps.
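As a rough sketch of how the six HXAI components might be organized programmatically (the component list is from the abstract; the class structure and example questions are assumptions for illustration):

```python
from enum import Enum

class HXAIComponent(Enum):
    # The six components named in the HXAI taxonomy.
    DATA = "data"
    ANALYSIS_SETUP = "analysis set-up"
    LEARNING_PROCESS = "learning process"
    MODEL_OUTPUT = "model output"
    MODEL_QUALITY = "model quality"
    COMMUNICATION_CHANNEL = "communication channel"

# A question bank could be indexed by component; the questions below
# are invented examples, not items from the paper's 112-item bank.
question_bank: dict[HXAIComponent, list[str]] = {
    HXAIComponent.DATA: [
        "Which data sources were used?",
        "How was missing data handled?",
    ],
    # ... entries for the remaining five components
}
```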
arXiv Detail & Related papers (2025-08-15T15:15:25Z)
- Beyond Statistical Learning: Exact Learning Is Essential for General Intelligence [59.07578850674114]
Sound deductive reasoning is an indisputably desirable aspect of general intelligence. It is well-documented that even the most advanced frontier systems regularly and consistently falter on easily-solvable reasoning tasks. We argue that their unsound behavior is a consequence of the statistical learning approach powering their development.
arXiv Detail & Related papers (2025-06-30T14:37:50Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Why is plausibility surprisingly problematic as an XAI criterion? [38.0428570713717]
Plausibility is usually quantified by metrics of feature localization or feature correlation. Our examination shows that plausibility is not a valid measure of explainability, and human explanations are not the ground truth for XAI. Due to the invalidity of these measurements and the ethical issues involved, this paper argues that the community should stop using plausibility as a criterion for the evaluation and optimization of XAI algorithms.
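For context, a typical feature-localization plausibility score of the kind the paper critiques can be computed as the overlap (IoU) between the most-salient region of an attribution map and a human-annotated region. The sketch below is an illustration under assumed conventions (the function name and the quantile thresholding are not from the paper):

```python
import numpy as np

def localization_plausibility(saliency: np.ndarray, human_mask: np.ndarray,
                              quantile: float = 0.9) -> float:
    # Binarize the saliency map at a quantile threshold, then compare
    # against the human annotation with intersection-over-union.
    threshold = np.quantile(saliency, quantile)
    model_mask = saliency >= threshold               # most-salient pixels
    intersection = np.logical_and(model_mask, human_mask).sum()
    union = np.logical_or(model_mask, human_mask).sum()
    return float(intersection / union) if union else 0.0
```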
arXiv Detail & Related papers (2023-03-30T20:59:44Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
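A minimal sketch of one way user confidence in AI recommendations could be quantified, in the spirit of what AI2 envisions (the metric below is an illustrative assumption, not the paper's proposal):

```python
import numpy as np

def trust_calibration_gap(user_confidence: np.ndarray, model_correct: np.ndarray) -> float:
    # user_confidence: users' stated reliance on the AI per case, in [0, 1].
    # model_correct: 1/0 per case, whether the AI recommendation was right.
    # A positive gap hints at over-trust, a negative gap at under-trust.
    return float(np.mean(user_confidence) - np.mean(model_correct))
```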
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Explainable AI and Adoption of Algorithmic Advisors: an Experimental Study [0.6875312133832077]
We develop an experimental methodology where participants play a web-based game, during which they receive advice from either a human or an algorithmic advisor.
We evaluate whether the different types of explanations affect the readiness to adopt, willingness to pay and trust a financial AI consultant.
We find that the types of explanations that promote adoption during first encounter differ from those that are most successful following failure or when cost is involved.
arXiv Detail & Related papers (2021-01-05T09:34:38Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
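The paper alters explanations generated by GradCAM for text classification; the exact manipulation is not given in this summary, but a minimal sketch of one way a token-attribution vector could be made deceptive (the function, parameters, and redistribution scheme are illustrative assumptions) is:

```python
import numpy as np

def make_deceptive(attributions: np.ndarray, decoy_idx: int, mass: float = 0.6) -> np.ndarray:
    # Redistribute attribution mass toward a chosen (irrelevant) token so
    # the explanation points away from the model's true evidence.
    attr = np.clip(attributions, 0, None).astype(float)
    attr /= attr.sum() + 1e-12          # normalize to a distribution
    deceptive = attr * (1.0 - mass)     # shrink the honest attributions
    deceptive[decoy_idx] += mass        # inflate the decoy token
    return deceptive
```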
arXiv Detail & Related papers (2020-01-21T16:41:22Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
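As a minimal sketch of how a case-specific confidence score might be surfaced in AI-assisted decision making (the threshold and message wording are illustrative assumptions, not values from the study):

```python
def assisted_decision(model_label: str, confidence: float,
                      accept_threshold: float = 0.8) -> str:
    # Show the confidence score and suggest when the human should fall
    # back on their own judgment, supporting trust calibration.
    if confidence >= accept_threshold:
        return f"AI suggests '{model_label}' (confidence {confidence:.0%})."
    return f"AI suggests '{model_label}' (confidence {confidence:.0%}); low confidence, please double-check."
```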
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.