Mind the truncation gap: challenges of learning on dynamic graphs with recurrent architectures
- URL: http://arxiv.org/abs/2412.21046v1
- Date: Mon, 30 Dec 2024 16:07:41 GMT
- Title: Mind the truncation gap: challenges of learning on dynamic graphs with recurrent architectures
- Authors: João Bravo, Jacopo Bono, Pedro Saleiro, Hugo Ferreira, Pedro Bizarro
- Abstract summary: Continuous-time dynamic graphs (CTDGs) pose challenges for machine learning (ML) approaches.
We show that a short truncation of backpropagation-through-time (BPTT) can limit the learning of dependencies beyond a single hop.
We argue that understanding and addressing this gap is essential as the importance of CTDGs grows.
- Score: 10.434476078553786
- Abstract: Systems characterized by evolving interactions, prevalent in social, financial, and biological domains, are effectively modeled as continuous-time dynamic graphs (CTDGs). To manage the scale and complexity of these graph datasets, machine learning (ML) approaches have become essential. However, CTDGs pose challenges for ML because traditional static graph methods do not naturally account for event timings. Newer approaches, such as graph recurrent neural networks (GRNNs), are inherently time-aware and offer advantages over static methods for CTDGs. However, GRNNs face another issue: the short truncation of backpropagation-through-time (BPTT), whose impact has not been properly examined until now. In this work, we demonstrate that this truncation can limit the learning of dependencies beyond a single hop, resulting in reduced performance. Through experiments on a novel synthetic task and real-world datasets, we reveal a performance gap between full backpropagation-through-time (F-BPTT) and the truncated backpropagation-through-time (T-BPTT) commonly used to train GRNN models. We term this gap the "truncation gap" and argue that understanding and addressing it is essential as the importance of CTDGs grows, discussing potential future directions for research in this area.
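To make the "truncation gap" concrete, the sketch below contrasts one-step truncated BPTT (T-BPTT) with full BPTT (F-BPTT) on a toy chain of graph events. It is not the paper's code: PyTorch is assumed, and the cell, event stream, and names (ToyGRNNCell, truncate) are illustrative. The key point is that detaching node memories before each update cuts the computation graph, so a loss on the final node can only influence the last event's update (a single hop), whereas keeping the graph intact lets gradients propagate through the whole chain.

```python
# Minimal, illustrative sketch (not the authors' code) of truncated vs. full BPTT
# in a graph recurrent update. Assumes PyTorch; all names are hypothetical.
import torch
import torch.nn as nn


class ToyGRNNCell(nn.Module):
    """Updates a destination node's memory from the source memory and edge features."""

    def __init__(self, dim: int):
        super().__init__()
        self.update = nn.GRUCell(2 * dim, dim)

    def forward(self, src_mem, edge_feat, dst_mem):
        msg = torch.cat([src_mem, edge_feat], dim=-1)
        return self.update(msg, dst_mem)


dim, num_nodes = 8, 4
cell = ToyGRNNCell(dim)
memory = torch.zeros(num_nodes, dim)

# A chain of events 0 -> 1 -> 2 -> 3: (source, destination, edge features).
events = [(0, 1, torch.randn(1, dim)),
          (1, 2, torch.randn(1, dim)),
          (2, 3, torch.randn(1, dim))]

truncate = True  # True: T-BPTT with a one-event window; False: full BPTT (F-BPTT)

for src, dst, feat in events:
    src_mem = memory[src].unsqueeze(0)
    dst_mem = memory[dst].unsqueeze(0)
    if truncate:
        # Detaching cuts the computation graph: the loss below can no longer
        # influence how these memories were produced by earlier events.
        src_mem, dst_mem = src_mem.detach(), dst_mem.detach()
    new_mem = cell(src_mem, feat, dst_mem)
    memory = memory.clone()          # avoid overwriting a tensor still needed for backward
    memory[dst] = new_mem.squeeze(0)

# Loss on the last node's memory. With truncate=True, gradients reach only the
# final event's update (one hop); with truncate=False they flow back through
# the entire event chain, i.e. across multiple hops.
loss = memory[3].sum()
loss.backward()
print(cell.update.weight_ih.grad.norm())
```

In practical GRNNs the memories are kept per node across batches, and the same detach-versus-retain choice at batch boundaries is what distinguishes the commonly used T-BPTT from the F-BPTT compared in the abstract above.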
Related papers
- ScaDyG:A New Paradigm for Large-scale Dynamic Graph Learning [31.629956388962814]
ScaDyG is a time-aware scalable learning paradigm for dynamic graph networks.
Experiments on 12 datasets demonstrate that ScaDyG performs comparably to or even outperforms other SOTA methods in both node-level and link-level downstream tasks.
arXiv Detail & Related papers (2025-01-27T12:39:16Z) - ST-FiT: Inductive Spatial-Temporal Forecasting with Limited Training Data [59.78770412981611]
In real-world applications, most nodes may not possess any available temporal data during training.
We propose a principled framework named ST-FiT to handle this problem.
arXiv Detail & Related papers (2024-12-14T17:51:29Z) - Temporal-Aware Evaluation and Learning for Temporal Graph Neural Networks [2.3043270848984]
Temporal Graph Neural Networks (TGNNs) are a family of graph neural networks designed to model and learn dynamic information from temporal graphs.
This paper investigates the commonly used evaluation metrics for TGNNs and illustrates the failure mechanisms of these metrics in capturing essential temporal structures.
We introduce a new volatility-aware evaluation metric (termed volatility cluster statistics) designed for a more refined analysis of model temporal performance.
arXiv Detail & Related papers (2024-12-10T07:56:33Z) - Exploring Time Granularity on Temporal Graphs for Dynamic Link Prediction in Real-world Networks [0.48346848229502226]
Dynamic Graph Neural Networks (DGNNs) have emerged as the predominant approach for processing dynamic graph-structured data.
In this paper, we explore the impact of time granularity when training DGNNs on dynamic graphs through extensive experiments.
arXiv Detail & Related papers (2023-11-21T00:34:53Z) - DyExplainer: Explainable Dynamic Graph Neural Networks [37.16783248212211]
We present DyExplainer, a novel approach to explaining dynamic Graph Neural Networks (GNNs) on the fly.
DyExplainer trains a dynamic GNN backbone to extract representations of the graph at each snapshot.
We also augment our approach with contrastive learning techniques to provide prior-guided regularization.
arXiv Detail & Related papers (2023-10-25T05:26:33Z) - Dynamic Graph Representation Learning via Edge Temporal States Modeling and Structure-reinforced Transformer [5.093187534912688]
We introduce the Recurrent Structure-reinforced Graph Transformer (RSGT), a novel framework for dynamic graph representation learning.
RSGT captures temporal node representations encoding both graph topology and evolving dynamics through a recurrent learning paradigm.
We show RSGT's superior performance in discrete dynamic graph representation learning, consistently outperforming existing methods in dynamic link prediction tasks.
arXiv Detail & Related papers (2023-04-20T04:12:50Z) - EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper aims to design an easy-to-use pipeline (termed EasyDGL) composed of three key modules with both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of frequency content that a model learns from the evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z) - Space-Time Graph Neural Networks with Stochastic Graph Perturbations [100.31591011966603]
Space-time graph neural networks (ST-GNNs) learn efficient graph representations of time-varying data.
In this paper we revisit the properties of ST-GNNs and prove that they are stable to stochastic graph perturbations.
Our analysis suggests that ST-GNNs are suitable for transfer learning on time-varying graphs.
arXiv Detail & Related papers (2022-10-28T16:59:51Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - Spatio-Temporal Joint Graph Convolutional Networks for Traffic Forecasting [75.10017445699532]
Recent works have shifted their focus towards formulating traffic forecasting as a spatio-temporal graph modeling problem.
We propose a novel approach for accurate traffic forecasting on road networks over multiple future time steps.
arXiv Detail & Related papers (2021-11-25T08:45:14Z) - Spatio-Temporal Graph Scattering Transform [54.52797775999124]
Graph neural networks may be impractical in some real-world scenarios due to a lack of sufficient high-quality training data.
We put forth a novel mathematically designed framework to analyze spatio-temporal data.
arXiv Detail & Related papers (2020-12-06T19:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.