Neural Network Attribution Methods for Problems in Geoscience: A Novel
Synthetic Benchmark Dataset
- URL: http://arxiv.org/abs/2103.10005v1
- Date: Thu, 18 Mar 2021 03:39:17 GMT
- Title: Neural Network Attribution Methods for Problems in Geoscience: A Novel
Synthetic Benchmark Dataset
- Authors: Antonios Mamalakis, Imme Ebert-Uphoff and Elizabeth A. Barnes
- Abstract summary: We provide a framework to generate attribution benchmark datasets for regression problems in the geosciences.
We train a fully-connected network to learn the underlying function that was used for simulation.
We compare estimated attribution heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly.
- Score: 0.05156484100374058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the increasingly successful application of neural networks to many
problems in the geosciences, their complex and nonlinear structure makes the
interpretation of their predictions difficult, which limits model trust and
does not allow scientists to gain physical insights about the problem at hand.
Many different methods have been introduced in the emerging field of
eXplainable Artificial Intelligence (XAI), which aim at attributing the
network's prediction to specific features in the input domain. XAI methods are
usually assessed by using benchmark datasets (like MNIST or ImageNet for image
classification), or through deletion/insertion techniques. In either case,
however, an objective, theoretically-derived ground truth for the attribution
is lacking, making the assessment of XAI in many cases subjective. Also,
benchmark datasets for problems in geosciences are rare. Here, we provide a
framework, based on the use of additively separable functions, to generate
attribution benchmark datasets for regression problems for which the ground
truth of the attribution is known a priori. We generate a large benchmark
dataset and train a fully-connected network to learn the underlying function
that was used for simulation. We then compare estimated attribution heatmaps
from different XAI methods to the ground truth in order to identify examples
where specific XAI methods perform well or poorly. We believe that attribution
benchmarks such as the ones introduced herein are of great importance for further
application of neural networks in the geosciences, and for accurate
implementation of XAI methods, which will increase model trust and assist in
discovering new science.
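The additively separable construction described in the abstract can be sketched in a few lines. In a function of the form F(x) = Σᵢ fᵢ(xᵢ), each feature's contribution is isolated, so the ground-truth attribution of feature i for a given sample is simply fᵢ(xᵢ). The local functions and sizes below are illustrative assumptions, not the authors' actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local functions f_i: the response is F(x) = sum_i f_i(x_i),
# so the ground-truth attribution of feature i for a sample x is f_i(x_i).
def f(i, x_i):
    return np.sin((i + 1) * x_i) * x_i  # illustrative choice, not from the paper

n_samples, n_features = 1000, 10
X = rng.standard_normal((n_samples, n_features))

# Per-feature ground-truth attributions and the resulting regression target.
attributions = np.stack([f(i, X[:, i]) for i in range(n_features)], axis=1)
y = attributions.sum(axis=1)
```

A network trained to regress y on X approximates F, and any XAI method's heatmap for that network can then be compared row by row against `attributions`, which is exactly the objective ground truth the framework provides.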
Related papers
- Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to binary and multi-class classification tasks for intrusion detection from network traffic.
We trained all models to an accuracy of at least 90% on the UNSW-NB15 dataset.
We also discover that Random Forest provides the best performance in terms of accuracy, time efficiency and robustness.
arXiv Detail & Related papers (2024-06-14T03:11:01Z) - On Discrepancies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Finding the right XAI method -- A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science [2.8877394238963214]
We introduce XAI evaluation in the climate context and discuss different desired explanation properties.
We find that XAI methods Integrated Gradients, layer-wise relevance propagation, and input times gradients exhibit considerable robustness, faithfulness, and complexity.
We find architecture-dependent performance differences regarding robustness, complexity and localization skills of different XAI methods.
arXiv Detail & Related papers (2023-03-01T16:54:48Z) - Revisit the Algorithm Selection Problem for TSP with Spatial Information
Enhanced Graph Neural Networks [4.084365114504618]
This paper revisits the algorithm selection problem for the Euclidean Traveling Salesman Problem (TSP).
We propose a novel Graph Neural Network (GNN), called GINES.
GINES takes the coordinates of cities and distances between cities as input.
It is better than the traditional handcrafted feature-based approach on one dataset.
arXiv Detail & Related papers (2023-02-08T13:14:20Z) - Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and could serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z) - Carefully choose the baseline: Lessons learned from applying XAI
attribution methods for regression tasks in geoscience [0.02578242050187029]
Methods of eXplainable Artificial Intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of Neural Networks (NNs).
Here, we discuss the lesson we learned: the task of attributing a prediction to the input does not have a single solution.
We show that attributions differ substantially when considering different baselines, as they correspond to answering different science questions.
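The baseline dependence noted above is easiest to see for a linear model, where Integrated Gradients reduces to a closed form: the gradient is constant, so the path integral collapses to w * (x - baseline). The weights and baselines below are made-up values for illustration only:

```python
import numpy as np

# For a linear model f(x) = w . x, the gradient is constant (= w), so
# Integrated Gradients collapses to w * (x - baseline).
w = np.array([2.0, -1.0, 0.5])
x = np.array([1.0, 1.0, 1.0])

def integrated_gradients_linear(x, baseline, w):
    # Closed form of IG for a linear model; no numerical path integral needed.
    return w * (x - baseline)

attr_zero = integrated_gradients_linear(x, np.zeros_like(x), w)       # zero baseline
attr_mean = integrated_gradients_linear(x, np.full_like(x, 0.5), w)   # nonzero ("climatology"-style) baseline
```

Both attributions satisfy completeness relative to their own baseline (they sum to f(x) - f(baseline)), yet they assign different importance to the same features, which is why each baseline corresponds to a different science question.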
arXiv Detail & Related papers (2022-08-19T17:54:24Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Fast Learning of Graph Neural Networks with Guaranteed Generalizability:
One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z) - Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI [12.680653816836541]
We propose a ground truth based evaluation framework for XAI methods based on the CLEVR visual question answering task.
Our framework provides a (1) selective, (2) controlled and (3) realistic testbed for the evaluation of neural network explanations.
arXiv Detail & Related papers (2020-03-16T14:43:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.