DyExplainer: Explainable Dynamic Graph Neural Networks
- URL: http://arxiv.org/abs/2310.16375v1
- Date: Wed, 25 Oct 2023 05:26:33 GMT
- Title: DyExplainer: Explainable Dynamic Graph Neural Networks
- Authors: Tianchun Wang, Dongsheng Luo, Wei Cheng, Haifeng Chen, Xiang Zhang
- Abstract summary: We present DyExplainer, a novel approach to explaining dynamic Graph Neural Networks (GNNs) on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot.
We also augment our approach with contrastive learning techniques to provide prior-guided regularization.
- Score: 37.16783248212211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have resurged as a trending research subject owing
to their impressive ability to learn representations from graph-structured data.
However, the black-box nature of GNNs presents a significant challenge in terms
of comprehending and trusting these models, thereby limiting their practical
applications in mission-critical scenarios. Although there has been substantial
progress in the field of explaining GNNs in recent years, the majority of these
studies are centered on static graphs, leaving the explanation of dynamic GNNs
largely unexplored. Dynamic GNNs, with their ever-evolving graph structures,
pose a unique challenge and require additional efforts to effectively capture
temporal dependencies and structural relationships. To address this challenge,
we present DyExplainer, a novel approach to explaining dynamic GNNs on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the
graph at each snapshot, while simultaneously exploring structural relationships
and temporal dependencies through a sparse attention technique. To preserve the
desired properties of the explanation, such as structural consistency and
temporal continuity, we augment our approach with contrastive learning
techniques to provide prior-guided regularization. To model longer-term
temporal dependencies, we develop a buffer-based live-updating scheme for
training. The results of our extensive experiments on various datasets
demonstrate the superiority of DyExplainer, not only providing faithful
explainability of the model predictions but also significantly improving the
model prediction accuracy, as evidenced in the link prediction task.
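To make the pipeline in the abstract concrete, below is a minimal, self-contained sketch in PyTorch. It is an illustration under loose assumptions, not the paper's implementation: a simplified one-layer GCN stands in for the dynamic GNN backbone, a dense learnable edge mask plays the role of the sparse structural attention, a softmax over the snapshot window plays the role of temporal attention, the prior-guided regularization is approximated by a sparsity term plus a temporal-continuity term, and a fixed-size deque serves as the live-updating buffer. All names (SnapshotEncoder, DyExplainerSketch, prior_regularizers) are illustrative, not from the paper.

```python
# Hedged sketch of a DyExplainer-style pipeline; assumptions noted inline.
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F


class SnapshotEncoder(nn.Module):
    """One-layer GCN-style encoder applied independently to each snapshot."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # Mean-aggregate neighbor features, then transform (simplified GCN step).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj @ x) / deg))


class DyExplainerSketch(nn.Module):
    """Per-snapshot encoding with learnable structural and temporal masks."""

    def __init__(self, in_dim, hid_dim, num_nodes, window):
        super().__init__()
        self.encoder = SnapshotEncoder(in_dim, hid_dim)
        # Structural "attention": one learnable logit per potential edge
        # (dense here for brevity; the paper uses a sparse attention technique).
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        # Temporal "attention": one logit per snapshot in the sliding window.
        self.time_logits = nn.Parameter(torch.zeros(window))

    def forward(self, snapshots):
        # snapshots: list of (features, adjacency) pairs, oldest first.
        edge_mask = torch.sigmoid(self.edge_logits)          # structural mask
        time_attn = torch.softmax(self.time_logits, dim=0)   # temporal weights
        reps = [self.encoder(x, adj * edge_mask) for x, adj in snapshots]
        fused = sum(w * h for w, h in zip(time_attn, reps))  # fused embedding
        return fused, edge_mask, reps


def prior_regularizers(edge_mask, reps):
    """Stand-ins for the prior-guided terms: mask sparsity, plus temporal
    continuity (consecutive snapshot embeddings should stay similar)."""
    sparsity = edge_mask.mean()
    continuity = sum(
        1.0 - F.cosine_similarity(reps[t], reps[t + 1], dim=-1).mean()
        for t in range(len(reps) - 1)
    )
    return sparsity, continuity


# Buffer-based live updating (sketch): keep only the most recent `window`
# snapshots and update the model as each new snapshot streams in.
torch.manual_seed(0)
num_nodes, in_dim, window = 8, 4, 3
model = DyExplainerSketch(in_dim, 16, num_nodes, window)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
buffer = deque(maxlen=window)

for step in range(10):  # one new random snapshot arrives per step
    x = torch.randn(num_nodes, in_dim)
    adj = (torch.rand(num_nodes, num_nodes) < 0.3).float()
    buffer.append((x, adj))
    if len(buffer) < window:
        continue
    fused, edge_mask, reps = model(list(buffer))
    task_loss = fused.pow(2).mean()  # toy placeholder for a link-prediction loss
    sparsity, continuity = prior_regularizers(edge_mask, reps)
    loss = task_loss + 0.1 * sparsity + 0.1 * continuity
    opt.zero_grad()
    loss.backward()
    opt.step()

print("learned structural mask (edge importance):")
print(edge_mask.detach())
```

In the method itself, the task loss would be the link-prediction objective the abstract evaluates on, and the prior-guided terms would come from contrastive learning over views of the graph rather than the simple sparsity and cosine-similarity placeholders used here.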
Related papers
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic graph neural networks (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z) - A survey of dynamic graph neural networks [26.162035361191805]
Graph neural networks (GNNs) have emerged as a powerful tool for effectively mining and learning from graph-structured data.
This paper provides a comprehensive review of the fundamental concepts, key techniques, and state-of-the-art dynamic GNN models.
arXiv Detail & Related papers (2024-04-28T15:07:48Z) - Exploring Time Granularity on Temporal Graphs for Dynamic Link Prediction in Real-world Networks [0.48346848229502226]
Dynamic Graph Neural Networks (DGNNs) have emerged as the predominant approach for processing dynamic graph-structured data.
In this paper, we explore the impact of time granularity when training DGNNs on dynamic graphs through extensive experiments.
arXiv Detail & Related papers (2023-11-21T00:34:53Z) - How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics of graph neural networks (GNNs) in function space.
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z) - DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z) - Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting [60.03169701753824]
We propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting.
The proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results.
arXiv Detail & Related papers (2023-05-16T11:38:19Z) - Dynamic Graph Representation Learning via Edge Temporal States Modeling and Structure-reinforced Transformer [5.093187534912688]
We introduce the Recurrent Structure-reinforced Graph Transformer (RSGT), a novel framework for dynamic graph representation learning.
RSGT captures temporal node representations encoding both graph topology and evolving dynamics through a recurrent learning paradigm.
We show RSGT's superior performance in discrete dynamic graph representation learning, consistently outperforming existing methods in dynamic link prediction tasks.
arXiv Detail & Related papers (2023-04-20T04:12:50Z) - EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper aims to design an easy-to-use pipeline (termed EasyDGL) composed of three key modules with both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from the evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z) - An Explainer for Temporal Graph Neural Networks [27.393641343203363]
Temporal graph neural networks (TGNNs) have been widely used for modeling time-evolving graph-related tasks.
We propose a novel explainer framework for TGNN models.
arXiv Detail & Related papers (2022-09-02T04:12:40Z) - Explaining Dynamic Graph Neural Networks via Relevance Back-propagation [8.035521056416242]
Graph Neural Networks (GNNs) have shown remarkable effectiveness in capturing abundant information in graph-structured data.
The black-box nature of GNNs hinders users from understanding and trusting the models, thus leading to difficulties in their applications.
We propose DGExplainer to provide reliable explanation on dynamic GNNs.
arXiv Detail & Related papers (2022-07-22T16:20:34Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, performance deteriorates when multiple graph convolution layers are stacked; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)