Robust Explanations Through Uncertainty Decomposition: A Path to Trustworthier AI
- URL: http://arxiv.org/abs/2507.12913v1
- Date: Thu, 17 Jul 2025 09:00:05 GMT
- Title: Robust Explanations Through Uncertainty Decomposition: A Path to Trustworthier AI
- Authors: Chenrui Zhu, Louenas Bounia, Vu Linh Nguyen, Sébastien Destercke, Arthur Hoarau
- Abstract summary: We propose leveraging prediction uncertainty as a complementary approach to classical explainability methods. Epistemic uncertainty serves as a rejection criterion for unreliable explanations. Our experiments demonstrate the impact of this uncertainty-aware approach on the robustness and attainability of explanations.
- Score: 4.1942958779358674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in machine learning have emphasized the need for transparency in model predictions, particularly as interpretability diminishes when using increasingly complex architectures. In this paper, we propose leveraging prediction uncertainty as a complementary approach to classical explainability methods. Specifically, we distinguish between aleatoric (data-related) and epistemic (model-related) uncertainty to guide the selection of appropriate explanations. Epistemic uncertainty serves as a rejection criterion for unreliable explanations and, in itself, provides insight into insufficient training (a new form of explanation). Aleatoric uncertainty informs the choice between feature-importance explanations and counterfactual explanations. This leverages a framework of explainability methods driven by uncertainty quantification and disentanglement. Our experiments demonstrate the impact of this uncertainty-aware approach on the robustness and attainability of explanations in both traditional machine learning and deep learning scenarios.
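The selection logic the abstract describes can be sketched with a standard entropy-based uncertainty split over an ensemble's predictions. This is a minimal illustration, not the authors' implementation; the function names and thresholds are invented for the example:

```python
import numpy as np

def decompose_uncertainty(member_probs):
    """Entropy-based decomposition over an ensemble's predictive distributions.

    member_probs: array of shape (n_members, n_classes), each row a
    probability vector from one ensemble member.
    Returns (total, aleatoric, epistemic) uncertainty in nats.
    """
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged predictive distribution.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric: average entropy of the individual members' predictions.
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    # Epistemic: the gap (mutual information between prediction and model).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

def select_explanation(member_probs, epi_threshold=0.2, alea_threshold=0.5):
    """Route to an explanation type as in the paper's framework (sketch)."""
    _, alea, epi = decompose_uncertainty(np.asarray(member_probs))
    if epi > epi_threshold:
        # High epistemic uncertainty: reject the explanation as unreliable;
        # the rejection itself signals insufficient training data here.
        return "reject: model undertrained in this region"
    # Low epistemic uncertainty: aleatoric level picks the explanation style.
    return "counterfactual" if alea > alea_threshold else "feature-importance"
```

For instance, an ensemble whose members agree confidently yields a feature-importance explanation, while members that contradict each other (high epistemic uncertainty) trigger rejection.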
Related papers
- Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical Estimators [1.0855602842179624]
Understanding uncertainty in Explainable AI (XAI) is crucial for building trust. This paper introduces a unified framework for quantifying and interpreting uncertainty in XAI. By using both analytical and empirical estimates of explanation variance, we provide a systematic means of assessing the impact of uncertainty on explanations.
arXiv Detail & Related papers (2025-04-01T07:06:31Z)
- Conceptualizing Uncertainty [45.370565359867534]
Uncertainty in machine learning refers to the degree of confidence or lack thereof in a model's predictions. We propose to explain uncertainty in high-dimensional data classification settings by means of concept activation vectors. We demonstrate the utility of the generated explanations by leveraging them to refine and improve our model.
arXiv Detail & Related papers (2025-03-05T12:24:12Z)
- Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework. We adopt Bayes risk as the measurement of uncertainty and use it to separately estimate data and model uncertainty.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- Ensured: Explanations for Decreasing the Epistemic Uncertainty in Predictions [1.2289361708127877]
Epistemic uncertainty adds a crucial dimension to explanation quality.
We introduce new types of explanations that specifically target this uncertainty.
We introduce a new metric, ensured ranking, designed to help users identify the most reliable explanations.
arXiv Detail & Related papers (2024-10-07T20:21:51Z)
- On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios [46.752418052725126]
We propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations.
For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum.
For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models.
arXiv Detail & Related papers (2024-05-29T16:07:31Z)
- Investigating the Impact of Model Instability on Explanations and Uncertainty [43.254616360807496]
We simulate uncertainty in text input by introducing noise at inference time.
We find that high uncertainty doesn't necessarily imply low explanation plausibility.
This suggests that noise-augmented models may be better at identifying salient tokens when uncertain.
arXiv Detail & Related papers (2024-02-20T13:41:21Z)
- Explaining Predictive Uncertainty by Exposing Second-Order Effects [13.83164409095901]
We present a new method for explaining predictive uncertainty based on second-order effects.
Our method is generally applicable, allowing for turning common attribution techniques into powerful second-order uncertainty explainers.
arXiv Detail & Related papers (2024-01-30T21:02:21Z)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We propose a straightforward approach to explain predictive aleatoric uncertainties. We estimate uncertainty in regression as predictive variance by adapting a neural network with a Gaussian output distribution. This approach can explain uncertainty influences more reliably than complex published approaches.
arXiv Detail & Related papers (2023-12-12T13:28:53Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression yields a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
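The DEUP decomposition described above reduces to simple arithmetic: epistemic uncertainty is the excess of predicted out-of-sample error over an estimate of the irreducible noise. The values below are purely illustrative, not from the paper:

```python
import numpy as np

# Illustrative values only: per-input predictions of the learner's
# out-of-sample error (from a secondary error-predictor model) and
# estimates of the irreducible (aleatoric) label noise at those inputs.
predicted_error = np.array([0.30, 0.12, 0.05])
aleatoric_noise = np.array([0.10, 0.10, 0.04])

# DEUP-style estimate: epistemic = predicted total error - aleatoric noise,
# clipped at zero since a noisy error predictor can undershoot the noise floor.
epistemic = np.maximum(predicted_error - aleatoric_noise, 0.0)
```

Inputs with a large residual (here the first one, 0.20) are those where the model's error is not explained by noise, i.e. where more training data would help most.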
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.