Explaining Bayesian Neural Networks
- URL: http://arxiv.org/abs/2108.10346v1
- Date: Mon, 23 Aug 2021 18:09:41 GMT
- Title: Explaining Bayesian Neural Networks
- Authors: Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft
- Abstract summary: Explainable AI (XAI) aims to make advanced learning machines such as Deep Neural Networks (DNNs) more transparent in decision making.
Bayesian Neural Networks (BNNs) so far have a limited form of transparency (model transparency) already built in through their prior weight distribution.
In this work, we bring together these two perspectives of transparency into a holistic explanation framework for explaining BNNs.
- Score: 11.296451806040796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To make advanced learning machines such as Deep Neural Networks (DNNs) more
transparent in decision making, explainable AI (XAI) aims to provide
interpretations of DNNs' predictions. These interpretations are usually given
in the form of heatmaps, each one illustrating relevant patterns regarding the
prediction for a given instance. Bayesian approaches such as Bayesian Neural
Networks (BNNs) so far have a limited form of transparency (model transparency)
already built-in through their prior weight distribution, but notably, they
lack explanations of their predictions for given instances. In this work, we
bring together these two perspectives of transparency into a holistic
explanation framework for explaining BNNs. Within the Bayesian framework, the
network weights follow a probability distribution. Hence, the standard
(deterministic) prediction strategy of DNNs extends in BNNs to a predictive
distribution, and thus the standard explanation extends to an explanation
distribution. Exploiting this view, we uncover that BNNs implicitly employ
multiple heterogeneous prediction strategies. While some of these are inherited
from standard DNNs, others are revealed to us by considering the inherent
uncertainty in BNNs. Our quantitative and qualitative experiments on
toy/benchmark data and real-world data from pathology show that the proposed
approach of explaining BNNs can lead to more effective and insightful
explanations.
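The key move in the abstract, from a single explanation to an explanation distribution, has a natural Monte Carlo reading: draw weight samples from the (approximate) posterior, compute a standard attribution for each sample, and inspect the distribution of the resulting heatmaps. The sketch below illustrates this with plain gradient saliency on a toy MLP; the helper names (`make_mlp`, `saliency`) and the choice of saliency as the base attribution are illustrative assumptions, not the paper's specific method.

```python
import torch

def make_mlp(weights):
    """Build a deterministic two-layer MLP from one posterior weight sample."""
    w1, b1, w2, b2 = weights
    def f(x):
        return torch.tanh(x @ w1 + b1) @ w2 + b2
    return f

def saliency(f, x, class_idx):
    """Plain gradient saliency for one weight sample (any attribution
    method could be plugged in here instead)."""
    x = x.clone().requires_grad_(True)
    f(x)[0, class_idx].backward()
    return x.grad.squeeze(0)

# Stand-in posterior: S random weight samples for a 4-input, 3-class toy model.
torch.manual_seed(0)
S, d_in, d_h, d_out = 50, 4, 8, 3
posterior = [(torch.randn(d_in, d_h), torch.randn(d_h),
              torch.randn(d_h, d_out), torch.randn(d_out)) for _ in range(S)]

x = torch.randn(1, d_in)  # the instance to be explained
expl = torch.stack([saliency(make_mlp(w), x, class_idx=0) for w in posterior])

# The explanation distribution: the mean is an "average" heatmap, while the
# standard deviation shows where the posterior's prediction strategies disagree.
print("mean explanation:", expl.mean(0))
print("explanation std :", expl.std(0))
```

The per-feature standard deviation is where the "multiple heterogeneous prediction strategies" of the abstract would surface: features on which posterior samples disagree get high spread even when the mean heatmap looks unremarkable.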
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper, we demonstrate that these explanations cannot, unfortunately, be trusted: common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Towards Modeling Uncertainties of Self-explaining Neural Networks via Conformal Prediction [34.87646720253128]
We propose a novel uncertainty modeling framework for self-explaining neural networks.
We show it provides strong distribution-free uncertainty modeling performance for the generated explanations.
It also excels at producing efficient and effective prediction sets for the final predictions; a generic split-conformal sketch follows below.
arXiv Detail & Related papers (2024-01-03T05:51:49Z)
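For readers unfamiliar with the conformal machinery in the entry above, here is a minimal sketch of generic split conformal prediction for classification. It is a textbook construction under an exchangeability assumption, not the uncertainty framework that paper proposes.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Generic split conformal prediction for classification (sketch).

    cal_probs: (n, K) predicted class probabilities on a held-out
    calibration set; cal_labels: (n,) true labels. Returns, per test
    point, the classes whose score clears the calibrated threshold,
    giving marginal coverage >= 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity scores
    level = np.ceil((n + 1) * (1 - alpha)) / n           # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage with random "probabilities" (illustrative only).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=100)
cal_labels = rng.integers(0, 3, size=100)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(split_conformal_sets(cal_probs, cal_labels, test_probs))
```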
- Incorporating Unlabelled Data into Bayesian Neural Networks [48.25555899636015]
We introduce Self-Supervised Bayesian Neural Networks, which use unlabelled data to learn models with suitable prior predictive distributions.
We show that the prior predictive distributions of self-supervised BNNs capture problem semantics better than conventional BNN priors.
Our approach offers improved predictive performance over conventional BNNs, especially in low-budget regimes; a toy prior-predictive sampler is sketched below.
arXiv Detail & Related papers (2023-04-04T12:51:35Z)
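The prior predictive distribution, the object the entry above targets, can be sampled in a few lines: draw weights from the prior, push inputs through the network, and inspect the induced outputs. The toy two-layer MLP below is a hypothetical illustration, not the paper's self-supervised construction.

```python
import torch

def prior_predictive(x, prior_std=1.0, n_samples=100):
    """Sample a BNN's prior predictive for inputs x: draw weights from
    a Gaussian prior, run the network, collect outputs (toy 2-layer MLP)."""
    outs = []
    for _ in range(n_samples):
        w1 = torch.randn(x.shape[1], 16) * prior_std
        w2 = torch.randn(16, 2) * prior_std
        outs.append(torch.tanh(x @ w1) @ w2)
    return torch.stack(outs)          # (n_samples, batch, outputs)

# Under a good prior, semantically similar inputs should receive correlated
# prior-predictive outputs -- the property self-supervised priors target.
x = torch.randn(4, 8)
samples = prior_predictive(x)
print(samples.mean(0))                # prior-predictive mean per input
print(samples.var(0))                 # prior-predictive variance per input
```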
- On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfactory performance in various graph analytical problems.
However, GNNs can yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
arXiv Detail & Related papers (2022-06-24T06:49:21Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics [0.0]
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill.
This can be problematic given the increasing use of neural networks in high stakes decision-making such as in climate change applications.
We address both issues by successfully implementing a Bayesian Neural Network (BNN), where parameters are distributions rather than deterministic values, and applying novel implementations of explainable AI (XAI) techniques.
arXiv Detail & Related papers (2022-04-30T08:35:57Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture; a generic sketch of such a variational layer follows below.
An adversarially trained BNN outperforms its non-Bayesian, adversarially trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
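The "variational Bayes" ingredient mentioned for BNN-DenseNet can be illustrated with a mean-field Gaussian linear layer trained via the reparameterization trick (Bayes-by-backprop style). This is a generic sketch with assumed shapes and a standard-normal prior, not the BNN-DenseNet implementation itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Mean-field Gaussian linear layer: weights are distributions, and
    each forward pass draws a fresh sample (reparameterization trick)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -5.0))  # softplus(rho) = sigma

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)  # sampled weights
        return F.linear(x, w)

    def kl(self):
        """Closed-form KL divergence to a standard-normal prior."""
        sigma = F.softplus(self.rho)
        return (0.5 * (sigma**2 + self.mu**2 - 1) - sigma.log()).sum()

# Usage: train on NLL + kl() (the evidence lower bound, up to scaling);
# at test time, averaging several stochastic forward passes approximates
# the predictive distribution.
layer = VariationalLinear(4, 2)
x = torch.randn(8, 4)
logits = torch.stack([layer(x) for _ in range(16)]).mean(0)
```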
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? [17.88784870849724]
Deep Neural Networks (DNNs) behave as black boxes, hindering user trust in Artificial Intelligence (AI) systems.
This paper provides a review of existing methods to develop DNNs with intrinsic interpretability.
arXiv Detail & Related papers (2021-06-24T16:37:34Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that a large proportion of edges can be dropped without deteriorating the model's performance; a generic edge-masking sketch follows below.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
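Differentiable edge masking of the kind described above can be prototyped by attaching a learnable logit to every edge and gating messages by its sigmoid; an L1 penalty then pushes unnecessary edges toward zero. The single-layer GCN and the objective below are simplifying assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def gcn_forward(x, edge_index, edge_weight, w):
    """Minimal one-layer GCN: weighted message aggregation + linear map."""
    src, dst = edge_index                       # (2, E) COO edge list
    msgs = x[src] * edge_weight.unsqueeze(1)    # gate each message by its edge
    agg = torch.zeros_like(x).index_add_(0, dst, msgs)
    return agg @ w

def fit_edge_mask(x, edge_index, w, node, target, steps=200, lam=0.05):
    """Learn a soft mask over edges that preserves the prediction for
    `node` while an L1 penalty drops unnecessary edges (generic sketch)."""
    logits = torch.zeros(edge_index.shape[1], requires_grad=True)
    opt = torch.optim.Adam([logits], lr=0.05)
    for _ in range(steps):
        mask = torch.sigmoid(logits)
        out = gcn_forward(x, edge_index, mask, w)
        loss = F.cross_entropy(out[node:node + 1], target) + lam * mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits).detach()       # edges near 0 can be removed

# Toy usage: 5 nodes, 6 directed edges, 3 classes (all values illustrative).
torch.manual_seed(0)
x = torch.randn(5, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 1], [1, 0, 1, 4, 3, 2]])
w = torch.randn(4, 3)
print(fit_edge_mask(x, edge_index, w, node=1, target=torch.tensor([2])))
```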
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.