Investigating the fidelity of explainable artificial intelligence
methods for applications of convolutional neural networks in geoscience
- URL: http://arxiv.org/abs/2202.03407v1
- Date: Mon, 7 Feb 2022 18:47:15 GMT
- Title: Investigating the fidelity of explainable artificial intelligence
methods for applications of convolutional neural networks in geoscience
- Authors: Antonios Mamalakis, Elizabeth A. Barnes and Imme Ebert-Uphoff
- Abstract summary: Methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain CNN decision-making strategy.
Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications.
- Score: 0.02578242050187029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have recently attracted great attention
in geoscience due to their ability to capture non-linear system behavior and
extract predictive spatiotemporal patterns. Given their black-box nature
however, and the importance of prediction explainability, methods of
explainable artificial intelligence (XAI) are gaining popularity as a means to
explain the CNN decision-making strategy. Here, we establish an intercomparison
of some of the most popular XAI methods and investigate their fidelity in
explaining CNN decisions for geoscientific applications. Our goal is to raise
awareness of the theoretical limitations of these methods and gain insight into
the relative strengths and weaknesses to help guide best practices. The
considered XAI methods are first applied to an idealized attribution benchmark,
where the ground truth of explanation of the network is known a priori, to help
objectively assess their performance. Secondly, we apply XAI to a
climate-related prediction setting, namely to explain a CNN that is trained to
predict the number of atmospheric rivers in daily snapshots of climate
simulations. Our results highlight several important issues of XAI methods
(e.g., gradient shattering, inability to distinguish the sign of attribution,
ignorance to zero input) that have previously been overlooked in our field and,
if not considered cautiously, may lead to a distorted picture of the CNN
decision-making strategy. We envision that our analysis will motivate further
investigation into XAI fidelity and will help towards a cautious implementation
of XAI in geoscience, which can lead to further exploitation of CNNs and deep
learning for prediction problems.
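To make the attribution-benchmark idea in the abstract concrete, below is a minimal, hedged sketch of a gradient-based (saliency) attribution for a toy CNN, compared against a synthetic ground-truth map. It assumes PyTorch; the toy network, the random ground-truth attribution, and the correlation score are illustrative assumptions, not the authors' actual benchmark or set of XAI methods.

```python
# Minimal sketch (assumption: PyTorch). The toy CNN, the synthetic "ground truth"
# attribution, and the correlation score are illustrative stand-ins, not the
# paper's actual benchmark setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy CNN mapping a 1x16x16 input field to a scalar prediction.
model = nn.Sequential(
    nn.Conv2d(1, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(4 * 16 * 16, 1),
)

x = torch.randn(1, 1, 16, 16, requires_grad=True)

# Gradient ("saliency") attribution: d(prediction)/d(input).
model(x).backward()
saliency = x.grad.squeeze()

# Stand-in for an attribution ground truth known a priori in an idealized benchmark.
ground_truth = torch.randn(16, 16)

# One simple fidelity proxy: correlation between estimated and true attributions.
corr = torch.corrcoef(torch.stack([saliency.flatten(), ground_truth.flatten()]))[0, 1]
print(f"correlation with ground truth: {corr.item():.3f}")
```

In the paper, the same kind of comparison is carried out for a range of popular XAI methods on a benchmark where the true attribution is known a priori, which is what allows their fidelity to be assessed objectively.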
Related papers
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
arXiv Detail & Related papers (2023-10-02T17:59:07Z) - Deep Learning-based Analysis of Basins of Attraction [49.812879456944984]
This research addresses the challenge of characterizing the complexity and unpredictability of basins within various dynamical systems.
The main focus is on demonstrating the efficiency of convolutional neural networks (CNNs) in this field.
arXiv Detail & Related papers (2023-09-27T15:41:12Z) - Adversarial Attacks on the Interpretation of Neuron Activation
Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory
Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - Carefully choose the baseline: Lessons learned from applying XAI
attribution methods for regression tasks in geoscience [0.02578242050187029]
Methods of eXplainable Artificial Intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of Neural Networks (NNs).
Here, we discuss the lesson we learned that the task of attributing a prediction to the input does not have a single solution.
We show that attributions differ substantially when considering different baselines, as they correspond to answering different science questions (a minimal sketch of this baseline sensitivity is given after this list).
arXiv Detail & Related papers (2022-08-19T17:54:24Z) - Explainable Artificial Intelligence for Bayesian Neural Networks:
Towards trustworthy predictions of ocean dynamics [0.0]
The trustworthiness of neural networks is often challenged because they lack the ability to express uncertainty and explain their skill.
This can be problematic given the increasing use of neural networks in high stakes decision-making such as in climate change applications.
We address both issues by successfully implementing a Bayesian Neural Network (BNN), where parameters are distributions rather than deterministic, and applying novel implementations of explainable AI (XAI) techniques.
arXiv Detail & Related papers (2022-04-30T08:35:57Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Neural Network Attribution Methods for Problems in Geoscience: A Novel
Synthetic Benchmark Dataset [0.05156484100374058]
We provide a framework to generate attribution benchmark datasets for regression problems in the geosciences.
We train a fully-connected network to learn the underlying function that was used for simulation.
We compare estimated attribution heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly.
arXiv Detail & Related papers (2021-03-18T03:39:17Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Opportunities and Challenges in Explainable Artificial Intelligence
(XAI): A Survey [2.7086321720578623]
The black-box nature of deep neural networks challenges their use in mission-critical applications.
XAI promotes a set of tools, techniques, and algorithms that can generate high-quality interpretable, intuitive, human-understandable explanations of AI decisions.
arXiv Detail & Related papers (2020-06-16T02:58:10Z) - Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI [12.680653816836541]
We propose a ground truth based evaluation framework for XAI methods based on the CLEVR visual question answering task.
Our framework provides a (1) selective, (2) controlled and (3) realistic testbed for the evaluation of neural network explanations.
arXiv Detail & Related papers (2020-03-16T14:43:33Z)