Explaining Dynamic Graph Neural Networks via Relevance Back-propagation
- URL: http://arxiv.org/abs/2207.11175v1
- Date: Fri, 22 Jul 2022 16:20:34 GMT
- Title: Explaining Dynamic Graph Neural Networks via Relevance Back-propagation
- Authors: Jiaxuan Xie, Yezi Liu, Yanning Shen
- Abstract summary: Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing abundant information in graph-structured data.
The black-box nature of GNNs hinders users from understanding and trusting the models, thus leading to difficulties in their applications.
We propose DGExplainer to provide reliable explanations for dynamic GNNs.
- Score: 8.035521056416242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing
abundant information in graph-structured data. However, the black-box nature of
GNNs hinders users from understanding and trusting the models, thus leading to
difficulties in their applications. While recent years have witnessed a surge
of studies on explaining GNNs, most of them focus on static graphs, leaving the
explanation of dynamic GNNs nearly unexplored. It is challenging to explain
dynamic GNNs, due to their time-varying graph
structures. Directly using existing models designed for static graphs on
dynamic graphs is not feasible because they ignore temporal dependencies among
the snapshots. In this work, we propose DGExplainer to provide reliable
explanations for dynamic GNNs. DGExplainer redistributes the output activation
score of a dynamic GNN to the relevances of the neurons in the previous layer, a
process that iterates until the relevance scores of the input neurons are
obtained. We
conduct quantitative and qualitative experiments on real-world datasets to
demonstrate the effectiveness of the proposed framework for identifying
important nodes in link prediction and node regression with dynamic GNNs.
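The redistribution described in the abstract is in the spirit of layer-wise relevance propagation (LRP). Below is a minimal sketch of that iteration for a stack of linear layers, using the epsilon-stabilized z-rule; the function names, shapes, and stabilizer are illustrative assumptions, not the paper's actual implementation (which must also propagate relevance through the temporal components of a dynamic GNN).

```python
import numpy as np

def lrp_linear(relevance_out, activations_in, weights, eps=1e-6):
    """Redistribute relevance from a layer's outputs to its inputs using
    the epsilon-stabilized z-rule of layer-wise relevance propagation.
    Shapes: relevance_out (m,), activations_in (n,), weights (n, m)."""
    z = activations_in @ weights                              # pre-activations, (m,)
    s = relevance_out / (z + eps * np.where(z >= 0, 1.0, -1.0))
    return activations_in * (weights @ s)                     # relevance per input neuron, (n,)

def backpropagate_relevance(output_score, layer_activations, layer_weights):
    """Iterate the redistribution from the output layer back to the input:
    each layer's relevance becomes the output relevance of the layer below,
    until the relevance scores of the input neurons are obtained."""
    relevance = output_score
    for a, W in zip(reversed(layer_activations), reversed(layer_weights)):
        relevance = lrp_linear(relevance, a, W)
    return relevance  # relevance scores of the input neurons
```

Note that relevance is conserved across layers up to the epsilon stabilizer, which is what makes the resulting input scores interpretable as shares of the output activation.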
Related papers
- A survey of dynamic graph neural networks [26.162035361191805]
Graph neural networks (GNNs) have emerged as a powerful tool for effectively mining and learning from graph-structured data.
This paper provides a comprehensive review of the fundamental concepts, key techniques, and state-of-the-art dynamic GNN models.
arXiv Detail & Related papers (2024-04-28T15:07:48Z) - DyExplainer: Explainable Dynamic Graph Neural Networks [37.16783248212211]
We present DyExplainer, a novel approach to explaining dynamic Graph Neural Networks (GNNs) on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot.
We also augment our approach with contrastive learning techniques to provide prior-guided regularization.
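As a loose illustration of how contrastive learning could act as a temporal regularizer for snapshot representations (the pairing scheme, names, and InfoNCE form below are assumptions for illustration, not DyExplainer's published loss):

```python
import numpy as np

def temporal_contrastive_loss(snapshot_embeddings, temperature=0.5):
    """Hypothetical temporal InfoNCE regularizer: treat each node's
    embedding at snapshot t+1 as the positive for its embedding at
    snapshot t, and all other nodes' embeddings as negatives.
    snapshot_embeddings: list of (num_nodes, d) arrays, one per snapshot."""
    total = 0.0
    for z_t, z_next in zip(snapshot_embeddings[:-1], snapshot_embeddings[1:]):
        z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
        z_next = z_next / np.linalg.norm(z_next, axis=1, keepdims=True)
        logits = (z_t @ z_next.T) / temperature          # (n, n) cosine similarities
        logits -= logits.max(axis=1, keepdims=True)      # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        total -= np.diag(log_probs).mean()               # positives on the diagonal
    return total / (len(snapshot_embeddings) - 1)
```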
arXiv Detail & Related papers (2023-10-25T05:26:33Z) - From Continuous Dynamics to Graph Neural Networks: Neural Diffusion and
Beyond [32.290102818872526]
Graph neural networks (GNNs) have demonstrated significant promise in modelling data and have been widely applied in various fields of interest.
We provide the first systematic and comprehensive review of studies that leverage the continuous perspective of GNNs.
arXiv Detail & Related papers (2023-10-16T06:57:24Z) - Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
arXiv Detail & Related papers (2023-09-12T09:18:12Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph-level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
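As a toy illustration of why decomposition-based attribution is possible at all (this is not DEGREE's algorithm; the helper below is hypothetical): a score built on one round of mean aggregation is linear in the node features, so it splits exactly into one additive contribution per neighbor.

```python
import numpy as np

def contribution_per_neighbor(x, adj, w, target):
    """Toy decomposition of a linear message-passing score:
    score(target) = w . mean_{j in N(target)} x[j] splits exactly into one
    additive term per neighbor, so each node's contribution to the final
    prediction can be tracked. x: (num_nodes, d) node features,
    adj: (num_nodes, num_nodes) 0/1 adjacency, w: (d,) readout weights."""
    neighbors = np.flatnonzero(adj[target])
    contribs = {int(j): float(w @ x[j]) / len(neighbors) for j in neighbors}
    # Sanity check: the per-neighbor terms sum back to the full score.
    assert np.isclose(sum(contribs.values()), w @ x[neighbors].mean(axis=0))
    return contribs
```

Nonlinear activations break this exact additivity, which is the harder case DEGREE's decomposition of the information generation and aggregation mechanisms is designed to handle.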
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural
Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Graph Sequential Neural ODE Process for Link Prediction on Dynamic and
Sparse Graphs [33.294977897987685]
Link prediction on dynamic graphs is an important task in graph mining.
Existing approaches based on dynamic graph neural networks (DGNNs) typically require a significant amount of historical data.
We propose a novel method based on the neural process, called Graph Sequential Neural ODE Process (GSNOP).
arXiv Detail & Related papers (2022-11-15T23:21:02Z) - MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z) - A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph-structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
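To make the stated connection concrete, here is the standard form of that denoising argument (notation is assumed for this sketch rather than taken from the paper):

```latex
% Graph signal denoising: recover a smooth signal F from a noisy input S,
% where L is the normalized graph Laplacian and c > 0 weights smoothness.
\min_{F}\; \mathcal{L}(F) = \|F - S\|_F^2 + c\,\operatorname{tr}\!\left(F^\top L F\right)

% One gradient-descent step from the initialization F = S,
% with step size b = 1/(2c):
\nabla_F \mathcal{L} = 2(F - S) + 2c\,LF
\;\Rightarrow\;
F \leftarrow S - 2bc\,LS = (I - L)\,S = \tilde{A}S

% i.e. exactly the neighborhood aggregation with the normalized
% adjacency matrix \tilde{A} = I - L that GCN-type models use.
```

Letting the smoothness weight c vary across nodes is the kind of generalization behind ADA-UGNN's adaptive smoothness.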
arXiv Detail & Related papers (2020-10-05T04:57:18Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.