MEGAN: Multi-Explanation Graph Attention Network
- URL: http://arxiv.org/abs/2211.13236v2
- Date: Thu, 25 May 2023 15:48:01 GMT
- Title: MEGAN: Multi-Explanation Graph Attention Network
- Authors: Jonas Teufel, Luca Torresi, Patrick Reiser, Pascal Friederich
- Abstract summary: We propose a multi-explanation graph attention network (MEGAN).
Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels.
Our attention-based network is fully differentiable and explanations can actively be trained in an explanation-supervised manner.
- Score: 1.1470070927586016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a multi-explanation graph attention network (MEGAN). Unlike
existing graph explainability methods, our network can produce node and edge
attributional explanations along multiple channels, the number of which is
independent of task specifications. This proves crucial to improving the
interpretability of graph regression predictions, as explanations can be split
into positive and negative evidence w.r.t. a reference value. Additionally,
our attention-based network is fully differentiable and explanations can
actively be trained in an explanation-supervised manner. We first validate our
model on a synthetic graph regression dataset with known ground-truth
explanations. Our network outperforms existing baseline explainability methods
for the single- as well as the multi-explanation case, achieving near-perfect
explanation accuracy during explanation supervision. Finally, we demonstrate
our model's capabilities on multiple real-world datasets. We find that our
model produces sparse high-fidelity explanations consistent with human
intuition about those tasks.
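The abstract's central idea, splitting a regression prediction into positive and negative evidence channels relative to a reference value, can be illustrated with a minimal sketch. This is a hypothetical toy example, not the authors' implementation: `split_evidence`, its two-channel routing, and the per-node scores are all illustrative assumptions.

```python
# Hypothetical sketch of MEGAN's multi-channel idea (K = 2 channels:
# positive vs. negative evidence w.r.t. a reference value).
# Function name and structure are illustrative, not the paper's code.

def split_evidence(node_scores, reference=0.0):
    """Route each node's contribution into a positive or negative channel
    and reassemble the regression prediction from both channels."""
    pos = [max(s, 0.0) for s in node_scores]   # channel 0: evidence pushing above the reference
    neg = [max(-s, 0.0) for s in node_scores]  # channel 1: evidence pushing below the reference
    prediction = reference + sum(pos) - sum(neg)
    return pos, neg, prediction

# Example: a 4-node graph with assumed per-node contribution scores.
pos, neg, y = split_evidence([2.0, -0.5, 0.3, -0.8], reference=0.0)
```

Each channel is a node-attribution map in its own right, which is what lets a regression explanation say not just *which* nodes matter but *in which direction* they push the prediction.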
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to reweight the training samples adaptively in model learning.
Our framework then promotes the model learning by paying closer attention to those training samples with a high difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
- Generative Explanations for Graph Neural Network: Methods and Evaluations [16.67839967139831]
Graph Neural Networks (GNNs) achieve state-of-the-art performance in various graph-related tasks.
The black-box nature of GNNs limits their interpretability and trustworthiness.
Numerous explainability methods have been proposed to uncover the decision-making logic of GNNs.
arXiv Detail & Related papers (2023-11-09T22:07:15Z)
- Faithful Explanations for Deep Graph Models [44.3056871040946]
This paper studies faithful explanations for Graph Neural Networks (GNNs).
It applies to existing explanation methods, including feature attributions and subgraph explanations.
We also introduce k-hop Explanation with a Convolutional Core (KEC), a new explanation method that provably maximizes faithfulness to the original GNN.
arXiv Detail & Related papers (2022-05-24T07:18:56Z)
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
- Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning [65.15423587105472]
We present a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction.
Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance.
A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths.
arXiv Detail & Related papers (2021-04-15T17:51:36Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Contrastive Graph Neural Network Explanation [13.234975857626749]
Graph Neural Networks achieve remarkable results on problems with structured data but come as black-box predictors.
We argue that explicability must use graphs compliant with the distribution underlying the training data.
We present a novel Contrastive GNN Explanation technique following this paradigm.
arXiv Detail & Related papers (2020-10-26T15:32:42Z)
- Explainable Deep Modeling of Tabular Data using TableGraphNet [1.376408511310322]
We propose a new architecture that produces explainable predictions in the form of additive feature attributions.
We show that our explainable model attains the same level of performance as black box models.
arXiv Detail & Related papers (2020-02-12T20:02:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.