Explaining GNN over Evolving Graphs using Information Flow
- URL: http://arxiv.org/abs/2111.10037v1
- Date: Fri, 19 Nov 2021 04:29:38 GMT
- Title: Explaining GNN over Evolving Graphs using Information Flow
- Authors: Yazheng Liu and Xi Zhang and Sihong Xie
- Abstract summary: Graph neural networks (GNN) are the current state-of-the-art for these applications, and yet remain obscure to humans.
We propose an axiomatic attribution method to uniquely decompose the change in a prediction to paths on computation graphs.
We formulate a novel convex optimization problem to optimally select the paths that explain the prediction evolution.
- Score: 12.33508497537769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are ubiquitous in many applications, such as social networks,
knowledge graphs, smart grids, etc. Graph neural networks (GNN) are the
current state-of-the-art for these applications, and yet remain obscure to
humans. Explaining the GNN predictions can add transparency. However, as many
graphs are not static but continuously evolving, explaining changes in
predictions between two graph snapshots is different but equally important.
Prior methods only explain static predictions or generate coarse or irrelevant
explanations for dynamic predictions. We define the problem of explaining
evolving GNN predictions and propose an axiomatic attribution method to
uniquely decompose the change in a prediction to paths on computation graphs.
The attribution to many paths involving high-degree nodes is still not
interpretable, while simply selecting the top important paths can be suboptimal
in approximating the change. We formulate a novel convex optimization problem
to optimally select the paths that explain the prediction evolution.
Theoretically, we prove that the existing method based on
Layer-Relevance-Propagation (LRP) is a special case of the proposed algorithm
when the graph is compared against an empty graph. Empirically, on seven graph datasets,
with a novel metric designed for evaluating explanations of prediction change,
we demonstrate the superiority of the proposed approach over existing methods,
including LRP, DeepLIFT, and other path selection methods.
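The path-selection idea in the abstract can be illustrated with a toy sketch (hypothetical numbers and helper names; the paper solves a convex optimization over axiomatic path attributions, whereas this brute-forces a tiny example): when per-path contributions have mixed signs, picking the top paths by magnitude can approximate the total prediction change worse than picking the subset whose sum best matches it.

```python
from itertools import combinations

# Toy per-path attributions: each value is a path's contribution to the
# change in prediction between two graph snapshots. (Hypothetical numbers;
# the paper derives these axiomatically on the GNN computation graph.)
contrib = {"a->b->c": 0.9, "a->d->c": -0.7, "e->c": 0.25, "f->c": 0.2}
total = sum(contrib.values())  # the full prediction change, ~0.65

def top_k(attr, k):
    """Naive baseline: select the k paths with the largest |attribution|."""
    return set(sorted(attr, key=lambda p: abs(attr[p]), reverse=True)[:k])

def best_k(attr, k):
    """Brute-force the k-subset whose summed attribution best matches the
    total change (a stand-in for the paper's convex selection)."""
    return set(min(combinations(attr, k),
                   key=lambda s: abs(sum(attr[p] for p in s) - total)))

k = 2
naive = top_k(contrib, k)    # {'a->b->c', 'a->d->c'}: sums to 0.2, error 0.45
optimal = best_k(contrib, k)  # {'e->c', 'f->c'}: sums to 0.45, error 0.2
```

The two large attributions nearly cancel, so the magnitude-based selection explains the change poorly; this is the suboptimality of "simply selecting the top important paths" that motivates the optimization formulation.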
Related papers
- A Differential Geometric View and Explainability of GNN on Evolving Graphs [15.228139478280747]
Graphs are ubiquitous in social networks and biochemistry, where Graph Neural Networks (GNN) are the state-of-the-art models for prediction.
We propose a smooth parameterization of the GNN predicted distributions using axiomatic attribution.
Experiments on node classification, link prediction, and graph classification tasks with evolving graphs demonstrate the better sparsity, faithfulness, and intuitiveness of the proposed method.
arXiv Detail & Related papers (2024-03-11T04:26:18Z)
- Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
arXiv Detail & Related papers (2023-06-07T23:40:18Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaption on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Transformation of Node to Knowledge Graph Embeddings for Faster Link Prediction in Social Networks [2.458658951393896]
Recent advances in neural networks have addressed common graph problems such as link prediction, node classification, node clustering, and node recommendation.
In this work, we investigate a transformation model which converts node embeddings obtained from random walk based methods to embeddings obtained from knowledge graph methods directly without an increase in the computational cost.
arXiv Detail & Related papers (2021-11-17T04:57:41Z)
- Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
Massive deployment of Graph Neural Networks (GNNs) in high-stake applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z)
- CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks [40.47070962945751]
Graph neural networks (GNNs) have shown increasing promise in real-world applications.
We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs.
arXiv Detail & Related papers (2021-02-05T17:58:14Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.