How Much Can I Trust You? -- Quantifying Uncertainties in Explaining
Neural Networks
- URL: http://arxiv.org/abs/2006.09000v1
- Date: Tue, 16 Jun 2020 08:54:42 GMT
- Title: How Much Can I Trust You? -- Quantifying Uncertainties in Explaining
Neural Networks
- Authors: Kirill Bykov, Marina M.-C. H\"ohne, Klaus-Robert M\"uller, Shinichi
Nakajima, Marius Kloft
- Abstract summary: Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks.
We propose a new framework that converts any explanation method for neural networks into an explanation method for Bayesian neural networks.
We demonstrate the effectiveness and usefulness of our approach extensively in various experiments.
- Score: 19.648814035399013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable AI (XAI) aims to provide interpretations for predictions made by
learning machines, such as deep neural networks, in order to make the machines
more transparent to the user and, furthermore, trustworthy for applications
in, e.g., safety-critical areas. So far, however, no methods for quantifying
the uncertainty of explanations have been conceived, which is problematic in
domains where high confidence in explanations is a prerequisite. We therefore
contribute a new framework that converts any explanation method for neural
networks into an explanation method for Bayesian neural networks, with
built-in modeling of uncertainties. Within the Bayesian framework a network's
weights follow a distribution, which extends standard single explanation
scores and heatmaps to distributions thereof, thereby translating the
network's intrinsic model uncertainties into a quantification of explanation
uncertainties. This allows us, for the first time, to carve out the
uncertainties associated with a model explanation and subsequently gauge the
appropriate level of explanation confidence for a user (using percentiles). We
demonstrate the effectiveness and usefulness of our approach extensively in
various experiments, both qualitatively and quantitatively.
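The core recipe in the abstract can be sketched in a few lines: draw weight samples from the (assumed) posterior of a Bayesian network, compute one explanation per weight sample, and summarize the resulting distribution of explanations with its mean and percentiles. The toy linear model, its Gaussian posterior, and all parameter values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear "network" f(x) = w @ x whose gradient-based
# saliency with respect to x is simply w. In the Bayesian setting the
# weights follow a posterior distribution; here we assume (purely for
# illustration) an independent Gaussian posterior over w.
n_features, n_samples = 4, 1000
w_mean = np.array([1.0, -0.5, 2.0, 0.0])
w_std = np.array([0.1, 0.3, 0.2, 0.5])

def saliency(w, x):
    """Gradient of f(x) = w @ x with respect to the input x."""
    return w

x = np.ones(n_features)

# Sample weights from the posterior and collect one explanation per
# sample -> a distribution over explanations instead of a single map.
w_draws = rng.normal(w_mean, w_std, size=(n_samples, n_features))
expl = np.stack([saliency(w, x) for w in w_draws])

# Summarize: mean heatmap plus a percentile-based confidence band,
# as the abstract suggests.
mean_expl = expl.mean(axis=0)
lo, hi = np.percentile(expl, [5, 95], axis=0)
print(mean_expl.round(2))
print((hi - lo).round(2))  # a wide interval = low explanation confidence
```

Note how the last feature, whose posterior is widest, yields the least confident attribution even though its mean attribution is near zero: the uncertainty band distinguishes "confidently irrelevant" from "unknown".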
Related papers
- Towards Modeling Uncertainties of Self-explaining Neural Networks via
Conformal Prediction [34.87646720253128]
We propose a novel uncertainty modeling framework for self-explaining neural networks.
We show it provides strong distribution-free uncertainty modeling performance for the generated explanations.
It also excels in producing efficient and effective prediction sets for the final predictions.
arXiv Detail & Related papers (2024-01-03T05:51:49Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Identifying Drivers of Predictive Aleatoric Uncertainty [2.5311562666866494]
We present a simple approach to explain predictive aleatoric uncertainties.
We estimate uncertainty as predictive variance by adapting a neural network with a Gaussian output distribution.
We quantify our findings with a nuanced benchmark analysis that includes real-world datasets.
arXiv Detail & Related papers (2023-12-12T13:28:53Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Analytic Mutual Information in Bayesian Neural Networks [0.8122270502556371]
Mutual information is an example of a measure used to quantify uncertainty in a Bayesian neural network.
We derive the analytical formula of the mutual information between model parameters and the predictive output by leveraging the notion of the point process entropy.
As an application, we discuss the estimation of the Dirichlet parameters and show its practical application in the active learning uncertainty measures.
arXiv Detail & Related papers (2022-01-24T17:30:54Z)
- Bayesian Attention Belief Networks [59.183311769616466]
Attention-based neural networks have achieved state-of-the-art results on a wide range of tasks.
This paper introduces Bayesian attention belief networks, which construct a decoder network by modeling unnormalized attention weights.
We show that our method outperforms deterministic attention and state-of-the-art attention in accuracy, uncertainty estimation, generalization across domains, and adversarial attacks.
arXiv Detail & Related papers (2021-06-09T17:46:22Z)
- Multivariate Deep Evidential Regression [77.34726150561087]
A new approach with uncertainty-aware neural networks shows promise over traditional deterministic methods.
We discuss three issues with, and propose a solution for, extracting aleatoric and epistemic uncertainties from regression-based neural networks.
arXiv Detail & Related papers (2021-04-13T12:20:18Z)
- Bayesian Neural Networks [0.0]
We show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
arXiv Detail & Related papers (2020-06-02T09:43:00Z)
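Several of the entries above (e.g. Variational Neural Networks, Bayesian Neural Networks) use Monte Carlo Dropout as a baseline for predictive uncertainty: keep dropout active at test time, run many stochastic forward passes, and take the spread of the predictions as an uncertainty estimate. A minimal sketch follows; the two-layer NumPy network, its random weights, and the dropout rate are all illustrative assumptions, not any paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network with random (untrained) weights, used only
# to illustrate the Monte Carlo Dropout procedure.
W1 = rng.normal(size=(8, 3))
W2 = rng.normal(size=(1, 8))

def forward(x, p_drop=0.5):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(W1 @ x, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return (W2 @ h).item()

x = np.array([0.5, -1.0, 2.0])
T = 500  # number of stochastic forward passes

preds = np.array([forward(x) for _ in range(T)])
pred_mean = preds.mean()  # predictive mean
pred_std = preds.std()    # spread across passes: epistemic uncertainty proxy
print(round(pred_mean, 2), round(pred_std, 2))
```

A deterministic network would return the same output on every pass (zero spread); the nonzero standard deviation here is exactly the quantity the comparisons above evaluate.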
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.