Explainable Artificial Intelligence for Bayesian Neural Networks:
Towards trustworthy predictions of ocean dynamics
- URL: http://arxiv.org/abs/2205.00202v1
- Date: Sat, 30 Apr 2022 08:35:57 GMT
- Title: Explainable Artificial Intelligence for Bayesian Neural Networks:
Towards trustworthy predictions of ocean dynamics
- Authors: Mariana C. A. Clare and Maike Sonnewald and Redouane Lguensat and
Julie Deshayes and Venkatramani Balaji
- Abstract summary: The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill.
This can be problematic given the increasing use of neural networks in high stakes decision-making such as in climate change applications.
We address both issues by successfully implementing a Bayesian Neural Network (BNN), whose parameters are distributions rather than deterministic values, and by applying novel implementations of explainable AI (XAI) techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The trustworthiness of neural networks is often challenged because they lack
the ability to express uncertainty and explain their skill. This can be
problematic given the increasing use of neural networks in high stakes
decision-making such as in climate change applications. We address both issues
by successfully implementing a Bayesian Neural Network (BNN), whose parameters
are distributions rather than deterministic values, and by applying novel implementations
of explainable AI (XAI) techniques. The uncertainty analysis from the BNN
provides a comprehensive overview of the prediction more suited to
practitioners' needs than predictions from a classical neural network. Using a
BNN means we can calculate the entropy (i.e. uncertainty) of the predictions
and determine if the probability of an outcome is statistically significant. To
enhance trustworthiness, we also spatially apply the two XAI techniques of
Layer-wise Relevance Propagation (LRP) and SHapley Additive exPlanation (SHAP)
values. These XAI methods reveal the extent to which the BNN is suitable and/or
trustworthy. Using two techniques gives a more holistic view of BNN skill and
its uncertainty, as LRP considers neural network parameters, whereas SHAP
considers changes to outputs. We verify these techniques by comparing their
explanations with intuition from physical theory. Differences between the two
explanations identify potential areas where new studies guided by physical theory are needed.
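
To make the recipe concrete, the sketch below builds a small BNN whose weights are distributions, draws Monte Carlo predictions, and computes the predictive entropy described above. This is a minimal, hypothetical illustration using TensorFlow Probability's DenseFlipout layers; the toy data, layer sizes, and sample counts are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy stand-ins for the ocean-dynamics predictors/labels (shapes are made up).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype("float32")
y = rng.integers(0, 3, size=500)

# Scale each layer's KL term by 1/N so the ELBO is averaged per example.
kl_fn = lambda q, p, _: tfd.kl_divergence(q, p) / X.shape[0]

# A BNN: every weight is a distribution, not a point estimate.
model = tf.keras.Sequential([
    tfp.layers.DenseFlipout(32, activation="relu", kernel_divergence_fn=kl_fn),
    tfp.layers.DenseFlipout(3, activation="softmax", kernel_divergence_fn=kl_fn),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=10, verbose=0)

# Each forward pass samples fresh weights, yielding a distribution of predictions.
mc_probs = np.stack([model(X, training=True).numpy() for _ in range(100)])
mean_probs = mc_probs.mean(axis=0)                     # (N, classes)

# Predictive entropy: 0 = fully certain, log(n_classes) = maximally uncertain.
entropy = -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)
```

High-entropy predictions can then be flagged or withheld, which is the practitioner-facing benefit the abstract emphasises; the paper's specific test for statistical significance of an outcome is not reproduced here.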
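The SHAP side can be sketched in the same spirit, continuing from the model above. KernelExplainer is used here because it treats the predictor as a black box; this is an illustrative stand-in rather than the paper's exact XAI pipeline, and the LRP counterpart (which typically requires a dedicated toolbox such as iNNvestigate) is omitted.

```python
import shap

# Explain the Monte Carlo mean so SHAP sees the BNN's average behaviour.
def mean_predict(x):
    x = np.asarray(x, dtype="float32")
    return np.mean([model(x, training=True).numpy() for _ in range(25)], axis=0)

background = X[:50]                       # reference sample for the explainer
explainer = shap.KernelExplainer(mean_predict, background)

# Per-feature attributions for each output class, for a few test inputs.
shap_values = explainer.shap_values(X[:5], nsamples=100)
```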
Related papers
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but cannot predict well.
We show that the proposed BNSP-SFM model achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We will consider two extensions to the latent binary Bayesian neural networks (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
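
The local reparametrization trick mentioned in the entry above is easy to illustrate: for a Gaussian weight posterior, the pre-activations of a linear layer are themselves Gaussian, so they can be sampled directly instead of materialising a full weight sample. A minimal PyTorch sketch follows; the tensor shapes are illustrative assumptions, not the paper's setup.

```python
import torch

def lrt_linear(x, w_mu, w_logvar):
    """Local reparametrization trick: sample pre-activations, not weights.

    For a factorised Gaussian posterior N(w_mu, exp(w_logvar)) over weights,
    h = x @ W is Gaussian with mean x @ w_mu and variance (x**2) @ var(W),
    so we sample h directly. This lowers gradient variance and is cheaper.
    """
    act_mu = x @ w_mu                              # mean of the pre-activation
    act_var = (x ** 2) @ torch.exp(w_logvar)       # variance of the pre-activation
    eps = torch.randn_like(act_mu)
    return act_mu + torch.sqrt(act_var + 1e-8) * eps

# Toy usage (hypothetical sizes): 4 inputs -> 3 hidden units.
x = torch.randn(10, 4)
w_mu = torch.zeros(4, 3, requires_grad=True)
w_logvar = torch.full((4, 3), -3.0, requires_grad=True)
h = torch.relu(lrt_linear(x, w_mu, w_logvar))
```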
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- An out-of-distribution discriminator based on Bayesian neural network epistemic uncertainty [0.19573380763700712]
Bayesian neural networks (BNNs) are an important type of neural network with built-in capability for quantifying uncertainty.
This paper discusses aleatoric and epistemic uncertainty in BNNs and how they can be calculated.
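
A standard way to compute the aleatoric/epistemic split this entry refers to is the entropy decomposition of Monte Carlo predictions: the entropy of the mean prediction (total uncertainty) minus the mean of the per-sample entropies (aleatoric) leaves the mutual information (epistemic). A minimal NumPy sketch, not the paper's exact implementation:

```python
import numpy as np

def uncertainty_decomposition(mc_probs):
    """Split BNN predictive uncertainty into aleatoric and epistemic parts.

    mc_probs: array of shape (S, N, C) -- S Monte Carlo weight samples,
    N inputs, C class probabilities per sample.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                               # (N, C)
    total = -np.sum(mean_p * np.log(mean_p + eps), axis=1)       # entropy of mean
    aleatoric = -np.sum(mc_probs * np.log(mc_probs + eps), axis=2).mean(axis=0)
    epistemic = total - aleatoric                                # mutual information
    return total, aleatoric, epistemic
```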
arXiv Detail & Related papers (2022-10-18T21:15:33Z)
- Linear Leaky-Integrate-and-Fire Neuron Model Based Spiking Neural Networks and Its Mapping Relationship to Deep Neural Networks [7.840247953745616]
Spiking neural networks (SNNs) are brain-inspired machine learning algorithms with merits such as biological plausibility and unsupervised learning capability.
This paper establishes a precise mathematical mapping between the biological parameters of the Linear Leaky-Integrate-and-Fire (LIF) model/SNNs and the parameters of ReLU-AN/Deep Neural Networks (DNNs).
arXiv Detail & Related papers (2022-05-31T17:02:26Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Explaining Bayesian Neural Networks [11.296451806040796]
XAI aims to make advanced learning machines such as Deep Neural Networks (DNNs) more transparent in decision making.
BNNs so far have a limited form of transparency (model transparency) already built-in through their prior weight distribution.
In this work, we bring together these two perspectives of transparency into a holistic explanation framework for explaining BNNs.
arXiv Detail & Related papers (2021-08-23T18:09:41Z)
- Bayesian Neural Networks [0.0]
We show how errors in prediction by neural networks can be obtained in principle, and provide the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
arXiv Detail & Related papers (2020-06-02T09:43:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.