Gradient-based Uncertainty Attribution for Explainable Bayesian Deep
Learning
- URL: http://arxiv.org/abs/2304.04824v1
- Date: Mon, 10 Apr 2023 19:14:15 GMT
- Title: Gradient-based Uncertainty Attribution for Explainable Bayesian Deep
Learning
- Authors: Hanjing Wang, Dhiraj Joshi, Shiqiang Wang, Qiang Ji
- Abstract summary: Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
We propose to develop explainable and actionable Bayesian deep learning methods to perform accurate uncertainty quantification.
- Score: 38.34033824352067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictions made by deep learning models are prone to data perturbations,
adversarial attacks, and out-of-distribution inputs. To build a trusted AI
system, it is therefore critical to accurately quantify the prediction
uncertainties. While current efforts focus on improving uncertainty
quantification accuracy and efficiency, there is a need to identify uncertainty
sources and take actions to mitigate their effects on predictions. Therefore,
we propose to develop explainable and actionable Bayesian deep learning methods
to not only perform accurate uncertainty quantification but also explain the
uncertainties, identify their sources, and propose strategies to mitigate the
uncertainty impacts. Specifically, we introduce a gradient-based uncertainty
attribution method to identify the most problematic regions of the input that
contribute to the prediction uncertainty. Compared to existing methods, the
proposed UA-Backprop has competitive accuracy, relaxed assumptions, and high
efficiency. Moreover, we propose an uncertainty mitigation strategy that
leverages the attribution results as attention to further improve the model
performance. Both qualitative and quantitative evaluations are conducted to
demonstrate the effectiveness of our proposed methods.
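The gradient-based attribution idea can be illustrated with a minimal, self-contained sketch. This is not the authors' UA-Backprop, only a generic analogue under simplifying assumptions: for a plain softmax classifier (a single linear layer `W`, an illustrative stand-in for a Bayesian network), the predictive entropy is differentiated with respect to the input, and large-magnitude gradient entries flag the input dimensions that contribute most to the prediction uncertainty. All function names here are hypothetical.

```python
import math

def softmax(z):
    # Numerically stable softmax.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(p):
    # H(p) = -sum_k p_k log p_k
    return -sum(pk * math.log(pk) for pk in p)

def entropy_input_gradient(W, x):
    """Gradient of H(softmax(W x)) with respect to the input x.

    Uses the closed form dH/dz_j = -p_j * (log p_j + H) for the
    logits z = W x, then the chain rule dH/dx = W^T (dH/dz).
    The absolute values of the result serve as an uncertainty
    attribution map over input dimensions.
    """
    z = [sum(row[i] * x[i] for i in range(len(x))) for row in W]
    p = softmax(z)
    H = predictive_entropy(p)
    dH_dz = [-pj * (math.log(pj) + H) for pj in p]
    return [sum(W[j][i] * dH_dz[j] for j in range(len(W)))
            for i in range(len(x))]

# Example: a 3-class toy classifier on a 2-dimensional input.
W = [[1.0, -0.5], [-0.2, 0.8], [0.3, 0.1]]
x = [0.7, -0.4]
attribution = [abs(g) for g in entropy_input_gradient(W, x)]
```

In a real Bayesian network the entropy would be estimated from Monte-Carlo samples of the posterior predictive and the gradient obtained by automatic differentiation; the mitigation strategy described above would then reuse such attribution maps as attention over the input.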
Related papers
- On Information-Theoretic Measures of Predictive Uncertainty [5.8034373350518775]
Despite its significance, a consensus on the correct measurement of predictive uncertainty remains elusive.
Our proposed framework categorizes predictive uncertainty measures according to two factors: (I) the predicting model and (II) the approximation of the true predictive distribution.
We empirically evaluate these measures in typical uncertainty estimation settings, such as misclassification detection, selective prediction, and out-of-distribution detection.
arXiv Detail & Related papers (2024-10-14T17:52:18Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- On Attacking Out-Domain Uncertainty Estimation in Deep Neural Networks [11.929914721626849]
We show that state-of-the-art uncertainty estimation algorithms can fail catastrophically under our proposed adversarial attack.
In particular, we target out-domain uncertainty estimation.
arXiv Detail & Related papers (2022-10-03T23:33:38Z)
- Adversarial Attack for Uncertainty Estimation: Identifying Critical Regions in Neural Networks [0.0]
We propose a novel method to capture data points near the decision boundary of a neural network, which are often associated with a specific type of uncertainty.
Uncertainty estimates are derived from perturbations of the input, unlike previous studies that perturb the model's parameters.
We show that the proposed method significantly outperforms other methods and captures model uncertainty with less risk.
arXiv Detail & Related papers (2021-07-15T21:30:26Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods correlates poorly with the actual prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
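The two-step idea in the summary above (first estimate the target, then separately estimate the magnitude of the prediction error) can be sketched generically. This is not the paper's MRI method; the linear models, toy data, and function names are illustrative assumptions:

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b on 1-D data.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def two_step_error_model(xs, ys):
    """Step 1: fit a predictor for the target.
    Step 2: fit a second predictor for |error| of step 1."""
    a, b = fit_linear(xs, ys)
    residuals = [abs(y - (a * x + b)) for x, y in zip(xs, ys)]
    c, d = fit_linear(xs, residuals)

    def predict(x):
        return a * x + b

    def predict_error(x):
        # Predicted error magnitude is clamped to be non-negative.
        return max(0.0, c * x + d)

    return predict, predict_error

# Heteroscedastic toy data: the scatter around y = x grows with x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.9, 3.3, 2.6, 5.5, 3.9]
predict, predict_error = two_step_error_model(xs, ys)
```

Because the noise grows with `x`, the second-stage model learns a larger predicted error magnitude at large `x` than at small `x`, which is the kind of error map the summary describes.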
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.