Carefully choose the baseline: Lessons learned from applying XAI
attribution methods for regression tasks in geoscience
- URL: http://arxiv.org/abs/2208.09473v1
- Date: Fri, 19 Aug 2022 17:54:24 GMT
- Title: Carefully choose the baseline: Lessons learned from applying XAI
attribution methods for regression tasks in geoscience
- Authors: Antonios Mamalakis, Elizabeth A. Barnes, Imme Ebert-Uphoff
- Abstract summary: Methods of eXplainable Artificial Intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of Neural Networks (NNs)
Here, we discuss our lesson learned that the task of attributing a prediction to the input does not have a single solution.
We show that attributions differ substantially when considering different baselines, as they correspond to answering different science questions.
- Score: 0.02578242050187029
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods of eXplainable Artificial Intelligence (XAI) are used in
geoscientific applications to gain insights into the decision-making strategy
of Neural Networks (NNs) highlighting which features in the input contribute
the most to a NN prediction. Here, we discuss our lesson learned that the task
of attributing a prediction to the input does not have a single solution.
Instead, the attribution results and their interpretation depend greatly on the
considered baseline (sometimes referred to as reference point) that the XAI
method utilizes; a fact that has been overlooked so far in the literature. This
baseline can be chosen by the user or is set by construction in the method's
algorithm, often without the user being aware of that choice. We highlight that
different baselines can lead to different insights for different science
questions and, thus, should be chosen accordingly. To illustrate the impact of
the baseline, we use a large ensemble of historical and future climate
simulations forced with the SSP3-7.0 scenario and train a fully connected NN to
predict the ensemble- and global-mean temperature (i.e., the forced global
warming signal) given an annual temperature map from an individual ensemble
member. We then use various XAI methods and different baselines to attribute
the network predictions to the input. We show that attributions differ
substantially when considering different baselines, as they correspond to
answering different science questions. We conclude by discussing some important
implications and considerations about the use of baselines in XAI research.
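The baseline dependence described above can be illustrated with a minimal toy sketch (a hypothetical linear model, not the paper's fully connected NN): for f(x) = w . x, Integrated Gradients reduces to (x - baseline) * w, so swapping a zero baseline for a climatological-mean baseline changes every feature's attribution even though the model and the input are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)            # weights of a toy linear "network"
x = rng.normal(loc=1.0, size=4)   # one input sample (e.g., a flattened temperature map)

def integrated_gradients(x, baseline, w, steps=50):
    # Riemann approximation of the IG path integral; for a linear model the
    # gradient is w everywhere along the path, so the result is exact.
    grads = np.array([w for _ in range(steps)])
    return (x - baseline) * grads.mean(axis=0)

# Two baselines: all-zeros vs. a (hypothetical) climatological mean of 1.0.
ig_zero = integrated_gradients(x, np.zeros_like(x), w)
ig_mean = integrated_gradients(x, np.full_like(x, 1.0), w)

print(ig_zero)
print(ig_mean)
# The two attribution vectors differ feature by feature: only the baseline changed.
```

The baseline values and the linear model here are illustrative assumptions; the point is that the attribution map is a function of the baseline, not of the model and input alone.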
Related papers
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z) - On Discrepancies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science [2.8877394238963214]
We introduce XAI evaluation in the climate context and discuss different desired explanation properties.
We find that XAI methods Integrated Gradients, layer-wise relevance propagation, and input times gradients exhibit considerable robustness, faithfulness, and complexity.
We find architecture-dependent performance differences regarding robustness, complexity and localization skills of different XAI methods.
arXiv Detail & Related papers (2023-03-01T16:54:48Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) settings.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the difference between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and the bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z) - Investigating the fidelity of explainable artificial intelligence
methods for applications of convolutional neural networks in geoscience [0.02578242050187029]
Methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain CNN decision-making strategy.
Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications.
arXiv Detail & Related papers (2022-02-07T18:47:15Z) - Neural Network Attribution Methods for Problems in Geoscience: A Novel
Synthetic Benchmark Dataset [0.05156484100374058]
We provide a framework to generate attribution benchmark datasets for regression problems in the geosciences.
We train a fully-connected network to learn the underlying function that was used for simulation.
We compare estimated attribution heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly.
arXiv Detail & Related papers (2021-03-18T03:39:17Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for
Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph, and train the model in an end-to-end manner.
arXiv Detail & Related papers (2020-04-25T04:53:54Z) - Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI [12.680653816836541]
We propose a ground truth based evaluation framework for XAI methods based on the CLEVR visual question answering task.
Our framework provides a (1) selective, (2) controlled and (3) realistic testbed for the evaluation of neural network explanations.
arXiv Detail & Related papers (2020-03-16T14:43:33Z)
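Several entries above evaluate attribution methods against a known ground truth. The synthetic-benchmark idea rests on a simple observation: for an additive function, each feature's exact contribution is known, so an estimated heatmap can be scored against it. A minimal sketch under that assumption (the additive function, noise level, and correlation score are all illustrative choices, not the benchmark's actual protocol):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)          # one synthetic input sample
coeffs = rng.normal(size=8)     # per-feature weights of an additive function

truth = coeffs * x                            # exact per-feature contributions
estimate = truth + 0.05 * rng.normal(size=8)  # stand-in for an estimated XAI heatmap

# Score the estimated heatmap against the known ground truth.
corr = np.corrcoef(truth, estimate)[0, 1]
print(round(corr, 3))
```

With ground truth available, such a score makes it possible to say where a given XAI method performs well or poorly, rather than comparing methods only against each other.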
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.