Contrastive Graph Neural Network Explanation
- URL: http://arxiv.org/abs/2010.13663v1
- Date: Mon, 26 Oct 2020 15:32:42 GMT
- Title: Contrastive Graph Neural Network Explanation
- Authors: Lukas Faber, Amin K. Moghaddam, Roger Wattenhofer
- Abstract summary: Graph Neural Networks achieve remarkable results on problems with structured data but come as black-box predictors.
We argue that explicability must use graphs compliant with the distribution underlying the training data.
We present a novel Contrastive GNN Explanation technique following this paradigm.
- Score: 13.234975857626749
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks achieve remarkable results on problems with structured
data but come as black-box predictors. Transferring existing explanation
techniques, such as occlusion, fails as even removing a single node or edge can
lead to drastic changes in the graph. The resulting graphs can differ from all
training examples, causing model confusion and wrong explanations. Thus, we
argue that explicability must use graphs compliant with the distribution
underlying the training data. We coin this property Distribution Compliant
Explanation (DCE) and present a novel Contrastive GNN Explanation (CoGE)
technique following this paradigm. An experimental study supports the efficacy
of CoGE.
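To make the abstract's point about occlusion concrete, the following is a minimal sketch of occlusion-style node importance on a graph; the `predict` callable is a hypothetical stand-in for a trained GNN, not the paper's implementation:

```python
import numpy as np

def occlusion_importance(adj, features, predict):
    """Occlusion-style importance: drop each node in turn and record
    how much the model's score for the whole graph changes.

    adj      : (n, n) adjacency matrix
    features : (n, d) node features
    predict  : callable (adj, features) -> scalar score (hypothetical)
    """
    base = predict(adj, features)
    scores = []
    for v in range(adj.shape[0]):
        keep = [u for u in range(adj.shape[0]) if u != v]
        sub_adj = adj[np.ix_(keep, keep)]       # occluded graph
        sub_feat = features[keep]
        # Caveat from the paper: the occluded graph may lie far outside
        # the training distribution, so this difference can be
        # misleading -- the failure mode that motivates DCE.
        scores.append(base - predict(sub_adj, sub_feat))
    return scores

# Toy demo: the "model" just counts edges on a 3-node path graph.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.zeros((3, 1))
edge_count = lambda a, f: float(a.sum())
scores = occlusion_importance(adj, feats, edge_count)
# The middle node touches every edge, so occluding it changes the
# score most: scores come out [2.0, 4.0, 2.0].
```

Even in this toy case, each occluded graph is a different graph altogether, which is why the abstract argues explanations must stay compliant with the training distribution.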
Related papers
- PAC Learnability under Explanation-Preserving Graph Perturbations [15.83659369727204]
Graph neural networks (GNNs) operate over graphs, enabling the model to leverage the complex relationships and dependencies in graph-structured data.
A graph explanation is a subgraph that is an 'almost sufficient' statistic of the input graph with respect to its classification label.
This work considers two methods for leveraging such perturbation invariances in the design and training of GNNs.
arXiv Detail & Related papers (2024-02-07T17:23:15Z) - Factorized Explainer for Graph Neural Networks [7.382632811417645]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We introduce a novel factorized explanation model with theoretical performance guarantees.
arXiv Detail & Related papers (2023-12-09T15:29:45Z) - MixupExplainer: Generalizing Explanations for Graph Neural Networks with Data Augmentation [6.307753856507624]
Graph Neural Networks (GNNs) have received increasing attention due to their ability to learn from graph-structured data.
Post-hoc instance-level explanation methods have been proposed to understand GNN predictions.
We shed light on the existence of the distribution shifting issue in existing methods, which affects explanation quality.
arXiv Detail & Related papers (2023-07-15T15:46:38Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Structural Explanations for Graph Neural Networks using HSIC [21.929646888419914]
Graph neural networks (GNNs) are a type of neural model that tackle graphical tasks in an end-to-end manner.
The complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute more strongly to the predictions.
In this study, a flexible, model-agnostic explanation method is proposed to detect significant structures in graphs.
arXiv Detail & Related papers (2023-02-04T09:46:47Z) - Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z) - EEGNN: Edge Enhanced Graph Neural Networks [1.0246596695310175]
We propose a new explanation, mis-simplification, for this performance-deterioration phenomenon.
We show that such simplification can reduce the potential of message-passing layers to capture the structural information of graphs.
EEGNN uses the structural information extracted from the proposed Dirichlet mixture Poisson graph model to improve the performance of various deep message-passing GNNs.
arXiv Detail & Related papers (2022-08-12T15:24:55Z) - Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z) - Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while the explanation of unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck principle (IB) to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
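Several of the entries above attribute a prediction to parts of the input graph; the GraphSVX entry does so via Shapley values. As a minimal sketch of exact Shapley attribution over a small node set (the `value` function is a hypothetical stand-in for a GNN evaluated on a masked graph, and GraphSVX itself uses an efficient approximation rather than this exponential enumeration):

```python
from itertools import combinations
from math import factorial

def shapley_node_importance(nodes, value):
    """Exact Shapley value phi[v] for each node v.

    value : callable, maps a frozenset of kept nodes to the model's
            score on the correspondingly masked graph (hypothetical).
    Cost is exponential in len(nodes), so this only works for tiny sets.
    """
    n = len(nodes)
    phi = {}
    for v in nodes:
        others = [u for u in nodes if u != v]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            for S in combinations(others, k):
                S = frozenset(S)
                # Weighted marginal contribution of v to coalition S.
                total += w * (value(S | {v}) - value(S))
        phi[v] = total
    return phi

# Toy demo: an additive "model" whose score is the sum of kept node ids,
# so each node's Shapley value equals its own contribution.
phi = shapley_node_importance([0, 1, 2], lambda S: float(sum(S)))
# phi is approximately {0: 0.0, 1: 1.0, 2: 2.0}
```

By the efficiency axiom, the values sum to `value(all nodes) - value(empty set)`, which makes Shapley attributions easy to sanity-check on toy games like this one.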
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.