Graph Neural Network Causal Explanation via Neural Causal Models
- URL: http://arxiv.org/abs/2407.09378v1
- Date: Fri, 12 Jul 2024 15:56:33 GMT
- Title: Graph Neural Network Causal Explanation via Neural Causal Models
- Authors: Arman Behnam, Binghui Wang
- Abstract summary: Graph neural network (GNN) explainers identify the important subgraph that ensures the prediction for a given graph.
We propose a GNN causal explainer via causal inference.
The proposed explainer significantly outperforms existing GNN explainers in exact ground-truth explanation identification.
- Score: 14.288781140044465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural network (GNN) explainers identify the important subgraph that ensures the prediction for a given graph. Until now, almost all GNN explainers have been based on association, which is prone to spurious correlations. We propose a GNN causal explainer via causal inference. Our explainer builds on the observation that a graph often contains an underlying causal subgraph. The explainer includes three main steps: 1) It builds the causal structure and the corresponding structural causal model (SCM) for a graph, which enables cause-effect calculation among nodes. 2) Because directly calculating cause-effects in real-world graphs is computationally challenging, it draws on the recent neural causal model (NCM), a special type of SCM that is trainable, and designs customized NCMs for GNNs. By training these GNN-NCMs, cause-effects can be easily calculated. 3) It uncovers the subgraph that causally explains the GNN predictions via the optimized GNN-NCMs. Evaluation results on multiple synthetic and real-world graphs validate that our explainer significantly outperforms existing GNN explainers in exact ground-truth explanation identification.
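The three steps above are described only at a high level; the following is a minimal PyTorch sketch of the underlying neural-causal-model idea, not the paper's actual GNN-NCM architecture. The causal structure, the stand-in "observed" node states, the moment-matching loss, and all names (NodeNCM, simulate, ...) are hypothetical choices made purely for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical causal structure over 4 nodes: node -> list of its parents.
PARENTS = {0: [], 1: [0], 2: [0, 1], 3: [2]}
TOPO_ORDER = [0, 1, 2, 3]

class NodeNCM(nn.Module):
    """One trainable structural equation: state_v = f_v(parent states, noise)."""
    def __init__(self, n_parents, hidden=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_parents + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, parent_states, noise):
        return self.net(torch.cat([parent_states, noise], dim=-1)).squeeze(-1)

ncms = nn.ModuleDict({str(v): NodeNCM(len(p)) for v, p in PARENTS.items()})

def simulate(batch, interventions=None):
    """Run all structural equations in topological order.
    `interventions` maps node -> fixed value (the do-operator)."""
    interventions = interventions or {}
    states = {}
    for v in TOPO_ORDER:
        if v in interventions:
            states[v] = torch.full((batch,), interventions[v])
            continue
        if PARENTS[v]:
            pa = torch.stack([states[p] for p in PARENTS[v]], dim=-1)
        else:
            pa = torch.zeros(batch, 0)
        states[v] = ncms[str(v)](pa, torch.randn(batch, 1))
    return states

# Toy "observed" node states standing in for behavior induced by a trained GNN.
obs = {v: torch.randn(256) + v for v in TOPO_ORDER}

opt = torch.optim.Adam(ncms.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    sim = simulate(256)
    # Crude moment-matching loss for illustration; the paper's objective differs.
    loss = sum((sim[v].mean() - obs[v].mean()) ** 2 +
               (sim[v].std() - obs[v].std()) ** 2 for v in TOPO_ORDER)
    loss.backward()
    opt.step()

# Cause-effect of node 0 on node 3: E[state_3 | do(v0 = 1)] - E[state_3 | do(v0 = 0)].
with torch.no_grad():
    effect = (simulate(4096, {0: 1.0})[3].mean()
              - simulate(4096, {0: 0.0})[3].mean()).item()
print(f"estimated cause-effect of node 0 on node 3: {effect:.3f}")
```

The point of trainable structural equations is that an interventional quantity like the last line can be estimated by plain forward simulation, which is what lets a causal explainer score candidate subgraphs by their cause-effect on the prediction.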
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimum perturbations on input graphs that change the GNN predictions.
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z)
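The counterfactual objective referenced in the GlobalGCE entry above can be stated generically (this is the standard single-graph formulation, not necessarily GlobalGCE's global-level one): find the smallest edge perturbation that flips the model's prediction,

$$\min_{\Delta A} \; \|\Delta A\|_0 \quad \text{s.t.} \quad f(A \oplus \Delta A, X) \neq f(A, X),$$

where $A$ is the adjacency matrix, $X$ the node features, $f$ the trained GNN, and $\oplus$ denotes flipping the perturbed entries.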
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
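To make the decomposition idea in the DEGREE entry above concrete, here is a small NumPy sketch (not DEGREE's algorithm): for a purely linear message-passing layer, the output splits additively into the part carried by a chosen node subset and the remainder, which is the kind of contribution tracking the summary refers to. The graph, features, and subset are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.maximum(A, A.T)                             # make the adjacency symmetric
np.fill_diagonal(A, 1.0)                           # add self-loops
A_norm = A / A.sum(axis=1, keepdims=True)          # simple row normalization
H = rng.normal(size=(n, d))                        # input node features
W = rng.normal(size=(d, d))                        # layer weights

subset = [0, 2]                                    # hypothetical candidate subgraph nodes
mask = np.zeros((n, 1))
mask[subset] = 1.0

full = A_norm @ H @ W                              # linear message-passing layer
from_subset = A_norm @ (H * mask) @ W              # contribution carried by the subset
from_rest = A_norm @ (H * (1 - mask)) @ W          # contribution of everything else

assert np.allclose(full, from_subset + from_rest)  # additivity of the linear layer
print("subset contribution to node 3's representation:", from_subset[3])
```

DEGREE itself also defines decomposition rules for nonlinear activations and pooling; the fully linear layer above is just the simplest case where the split is exact.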
- GANExplainer: GAN-based Graph Neural Networks Explainer [5.641321839562139]
In many applications, it is critical to explain why a graph neural network (GNN) makes particular predictions in order for it to be trusted.
We propose GANExplainer, based on Generative Adversarial Network (GAN) architecture.
GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
arXiv Detail & Related papers (2022-12-30T23:11:24Z)
- Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Graph Neural Networks (GNNs) are typically proposed without considering the distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for predictions, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand model transparency.
We propose a new framework that finds the $K$-nearest labeled nodes for each unlabeled node to provide explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
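As a rough illustration of the self-explainable idea in the entry above (not the paper's actual framework), the sketch below classifies an unlabeled node by the labels of its $K$ most similar labeled nodes in embedding space and returns those nodes as the explanation; the embeddings and labels are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 8))              # hypothetical node embeddings from a GNN
labeled = {0: 1, 1: 0, 2: 1, 3: 0, 4: 1}    # node id -> class label

def explainable_predict(node, K=3):
    ids = np.array(list(labeled.keys()))
    sims = emb[ids] @ emb[node] / (
        np.linalg.norm(emb[ids], axis=1) * np.linalg.norm(emb[node]) + 1e-12)
    nearest = ids[np.argsort(-sims)[:K]]    # K most similar labeled nodes
    votes = [labeled[i] for i in nearest]
    pred = max(set(votes), key=votes.count) # majority vote among those nodes
    return pred, nearest                    # prediction + explanatory nodes

pred, evidence = explainable_predict(7)
print(f"node 7 -> class {pred}, explained by labeled nodes {evidence.tolist()}")
```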
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks [12.97622530614215]
We show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs.
arXiv Detail & Related papers (2021-07-22T23:42:23Z)
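The transitivity point in the Ego-GNNs entry above has a simple concrete form: the closed triangles through a node are exactly the edges among its neighbors, i.e., the edges inside its 1-hop ego network, which an ego-level view exposes directly. A quick NetworkX check (illustrative only, not the Ego-GNN model itself):

```python
import networkx as nx

G = nx.karate_club_graph()
node = 0
ego = nx.ego_graph(G, node, radius=1)                     # the node plus its neighbors
edges_among_neighbors = [e for e in ego.edges() if node not in e]
print("edges among node 0's neighbors:", len(edges_among_neighbors))
print("triangles through node 0 (NetworkX):", nx.triangles(G, node))
```

The two printed counts coincide, since each connected pair of neighbors closes exactly one triangle with the central node.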
- Generative Causal Explanations for Graph Neural Networks [39.60333255875979]
Gem is a model-agnostic approach for providing interpretable explanations for any GNNs on various graph learning tasks.
It achieves a relative increase in explanation accuracy of up to $30\%$ and speeds up the explanation process by up to $110\times$ compared to its state-of-the-art alternatives.
arXiv Detail & Related papers (2021-04-14T06:22:21Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot leverage the sparsity of the underlying graph structure.
We propose Distance Encoding (DE), a new class of structure-related features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
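To illustrate the kind of feature the Distance Encoding entry above refers to (a simplified sketch, not the paper's full DE framework), the snippet below augments each node with its shortest-path distances to a hypothetical target node set, which is enough to distinguish nodes that plain message passing would treat identically:

```python
import networkx as nx
import numpy as np

G = nx.cycle_graph(6)                      # a small graph where all nodes look alike to a GNN
target_set = [0, 3]                        # hypothetical node set whose representation we want

dist_features = np.zeros((G.number_of_nodes(), len(target_set)))
for j, t in enumerate(target_set):
    for v, d in nx.single_source_shortest_path_length(G, t).items():
        dist_features[v, j] = d            # distance-encoding feature: hops from node v to t

print(dist_features)                       # row v = [dist(v, 0), dist(v, 3)]
```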
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
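For intuition about the model-level explanations in the XGNN entry above: the method trains a graph generator (with reinforcement learning in the original paper) so that the generated graph maximizes a target class score of the trained GNN. The sketch below replaces that with a crude greedy loop and a toy scoring function, purely to convey the "generate a graph that the model scores highly" idea; it is not XGNN's algorithm, and the scoring function is a hypothetical stand-in for a trained GNN.

```python
import itertools
import networkx as nx

def class_score(G):
    # Toy stand-in for a trained GNN's target-class probability: this hypothetical
    # class prefers hub (star-like) structures, with a small penalty per edge.
    max_degree = max(d for _, d in G.degree())
    return max_degree - 0.05 * G.number_of_edges()

G = nx.empty_graph(5)                      # start from isolated nodes
for u, v in itertools.combinations(G.nodes(), 2):
    before = class_score(G)
    G.add_edge(u, v)
    if class_score(G) <= before:           # keep an edge only if the class score improves
        G.remove_edge(u, v)

print("model-level explanation graph edges:", sorted(G.edges()))
```

With this toy score the loop recovers a star, i.e., the motif the hypothetical class "likes"; XGNN's generated graphs play the same role for the real model.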
This list is automatically generated from the titles and abstracts of the papers on this site.