Global-Lens Transformers: Adaptive Token Mixing for Dynamic Link Prediction
- URL: http://arxiv.org/abs/2511.12442v1
- Date: Sun, 16 Nov 2025 04:05:56 GMT
- Title: Global-Lens Transformers: Adaptive Token Mixing for Dynamic Link Prediction
- Authors: Tao Zou, Chengfeng Wu, Tianxi Liao, Junchen Ye, Bowen Du
- Abstract summary: We propose GLFormer, a novel attention-free Transformer-style framework for dynamic graphs. Experiments on six widely-used dynamic graph benchmarks show that GLFormer achieves SOTA performance, revealing that attention-free architectures can match or surpass Transformer baselines.
- Score: 9.234363752442915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic graph learning plays a pivotal role in modeling evolving relationships over time, especially for temporal link prediction tasks in domains such as traffic systems, social networks, and recommendation platforms. While Transformer-based models have demonstrated strong performance by capturing long-range temporal dependencies, their reliance on self-attention results in quadratic complexity with respect to sequence length, limiting scalability on high-frequency or large-scale graphs. In this work, we revisit the necessity of self-attention in dynamic graph modeling. Inspired by recent findings that attribute the success of Transformers more to their architectural design than attention itself, we propose GLFormer, a novel attention-free Transformer-style framework for dynamic graphs. GLFormer introduces an adaptive token mixer that performs context-aware local aggregation based on interaction order and time intervals. To capture long-term dependencies, we further design a hierarchical aggregation module that expands the temporal receptive field by stacking local token mixers across layers. Experiments on six widely-used dynamic graph benchmarks show that GLFormer achieves SOTA performance, which reveals that attention-free architectures can match or surpass Transformer baselines in dynamic graph settings with significantly improved efficiency.
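The abstract names two mechanisms: a context-aware local token mixer driven by interaction order and time intervals, and hierarchical stacking of mixers to widen the temporal receptive field. Below is a minimal PyTorch sketch of that pattern; the module names, the sigmoid time gate, and the learned order weights are illustrative assumptions, not GLFormer's actual formulation.

```python
import torch
import torch.nn as nn

class LocalTokenMixer(nn.Module):
    """Attention-free local mixing over a window of recent interactions.
    Mixing is conditioned on interaction order (learned per-position weights)
    and time intervals (a sigmoid gate) -- an assumption for illustration."""
    def __init__(self, dim, window=4):
        super().__init__()
        self.window = window
        self.pos = nn.Parameter(torch.randn(window, dim) * 0.02)  # order-aware weights
        self.time_gate = nn.Linear(1, dim)                        # time-interval gate
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, dt):
        # x:  (B, L, D) interaction embeddings, oldest -> newest
        # dt: (B, L) time interval to the previous interaction
        B, L, D = x.shape
        pad = torch.zeros(B, self.window - 1, D, device=x.device)
        xp = torch.cat([pad, x], dim=1)              # left-pad so every step sees `window` tokens
        win = xp.unfold(1, self.window, 1)           # (B, L, D, window) causal windows
        win = win.permute(0, 1, 3, 2)                # (B, L, window, D)
        gate = torch.sigmoid(self.time_gate(dt.unsqueeze(-1)))  # (B, L, D)
        mixed = (win * self.pos).sum(dim=2)          # order-weighted local aggregation
        return self.proj(mixed * gate)               # time-aware gating

class GLFormerBlock(nn.Module):
    """Transformer-style block with self-attention replaced by the local mixer."""
    def __init__(self, dim, window=4):
        super().__init__()
        self.mixer = LocalTokenMixer(dim, window)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, dt):
        x = x + self.mixer(self.norm1(x), dt)
        return x + self.ffn(self.norm2(x))

# Hierarchical aggregation by stacking: k layers of window w cover
# roughly k * (w - 1) + 1 past interactions.
blocks = nn.ModuleList([GLFormerBlock(64) for _ in range(3)])
x, dt = torch.randn(2, 16, 64), torch.rand(2, 16)
for blk in blocks:
    x = blk(x, dt)
```

Because each layer mixes only a fixed-size window, per-layer cost is linear in sequence length, which is the efficiency argument the abstract makes against quadratic self-attention.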
Related papers
- DWAFM: Dynamic Weighted Graph Structure Embedding Integrated with Attention and Frequency-Domain MLPs for Traffic Forecasting [12.788467568098817]
This letter proposes a novel dynamic weighted graph structure (DWGS) embedding method. It relies on a graph structure that truly reflects the changes in the strength of dynamic associations between nodes over time. Experiments on five real-world traffic datasets show that DWAFM achieves better prediction performance than state-of-the-art methods.
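The frequency-domain MLP component can be illustrated independently of DWAFM's specifics: transform a node's signal over time with a real FFT, apply a learned complex-valued linear map per frequency, and invert. A generic sketch under that assumption, not DWAFM's actual layer:

```python
import torch
import torch.nn as nn

class FreqMLP(nn.Module):
    """Generic frequency-domain MLP: mix along the time axis in the Fourier
    domain. A sketch of the idea only; DWAFM's actual layer may differ."""
    def __init__(self, seq_len, dim):
        super().__init__()
        n_freq = seq_len // 2 + 1
        # separate real/imag weights emulate a complex-valued linear map
        self.w_re = nn.Parameter(torch.randn(n_freq, dim, dim) * 0.02)
        self.w_im = nn.Parameter(torch.randn(n_freq, dim, dim) * 0.02)

    def forward(self, x):                     # x: (B, T, D) node signal over time
        X = torch.fft.rfft(x, dim=1)          # (B, F, D), complex spectrum
        re = torch.einsum('bfd,fde->bfe', X.real, self.w_re) - torch.einsum('bfd,fde->bfe', X.imag, self.w_im)
        im = torch.einsum('bfd,fde->bfe', X.real, self.w_im) + torch.einsum('bfd,fde->bfe', X.imag, self.w_re)
        return torch.fft.irfft(torch.complex(re, im), n=x.size(1), dim=1)

layer = FreqMLP(seq_len=12, dim=32)
out = layer(torch.randn(4, 12, 32))           # (4, 12, 32)
```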
arXiv Detail & Related papers (2026-03-01T08:50:41Z) - When Speed meets Accuracy: an Efficient and Effective Graph Model for Temporal Link Prediction [20.093092172339286]
Temporal Graph Neural Networks (T-GNNs) have achieved notable success by leveraging complex architectures to model temporal and structural dependencies. We propose a lightweight framework that integrates short-term temporal recency and long-term global structural patterns.
arXiv Detail & Related papers (2025-07-18T11:29:15Z) - TIDFormer: Exploiting Temporal and Interactive Dynamics Makes A Great Dynamic Graph Transformer [27.798471160707436]
We propose TIDFormer, a dynamic graph TransFormer that fully exploits Temporal and Interactive Dynamics. To model the temporal dynamics, we utilize calendar-based time partitioning information. In addition, we jointly model temporal and interactive features by capturing potential changes in historical interaction patterns.
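Calendar-based time partitioning can be pictured as bucketing raw timestamps into periodic fields and embedding each bucket. The fields and embedding choices below are assumptions, not TIDFormer's exact design:

```python
import torch
import torch.nn as nn

class CalendarEncoder(nn.Module):
    """Bucket Unix timestamps into periodic calendar fields and embed them.
    A generic sketch of calendar-based time partitioning; the choice of
    fields and dimensions is an assumption."""
    def __init__(self, dim):
        super().__init__()
        self.hour = nn.Embedding(24, dim)   # hour of day
        self.dow = nn.Embedding(7, dim)     # day of week

    def forward(self, ts):                  # ts: (N,) int64 Unix seconds
        hour = (ts // 3600) % 24
        dow = (ts // 86400 + 4) % 7         # +4 maps epoch day 0 (a Thursday); any consistent labeling works
        return self.hour(hour) + self.dow(dow)

enc = CalendarEncoder(dim=32)
feats = enc(torch.tensor([1609459200, 1609545600]))  # (2, 32)
```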
arXiv Detail & Related papers (2025-05-31T07:23:05Z) - A Comparative Study on Dynamic Graph Embedding based on Mamba and Transformers [0.29687381456164]
This study presents a comparative analysis of dynamic graph embedding approaches using Transformers and the recently proposed Mamba architecture. We introduce three novel models: TransformerG2G augmented with graph convolutional networks, DG-Mamba, and GDG-Mamba with graph isomorphism network edge convolutions. Our experiments on multiple benchmark datasets demonstrate that Mamba-based models achieve comparable or superior performance to Transformer-based approaches in link prediction tasks.
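The state-space side of these models reduces to a discretized linear recurrence over a node's interaction sequence. The toy sketch below omits Mamba's input-dependent (selective) parameters and its parallel hardware-aware scan:

```python
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Minimal diagonal state-space recurrence h_t = Abar*h_{t-1} + B*x_t,
    y_t = C.h_t. A toy stand-in for Mamba, which additionally makes
    (A, B, C) input-dependent and scans in parallel."""
    def __init__(self, dim, state=16):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(dim, state))   # decay rates
        self.B = nn.Parameter(torch.randn(dim, state) * 0.1)
        self.C = nn.Parameter(torch.randn(dim, state) * 0.1)

    def forward(self, x):                                    # x: (batch, T, dim)
        A = -torch.exp(self.log_a)                           # stable (negative) continuous-time decay
        Abar = torch.exp(A)                                  # discretize with unit step
        h = torch.zeros(x.size(0), x.size(2), self.B.size(1), device=x.device)
        ys = []
        for t in range(x.size(1)):                           # sequential scan for clarity
            h = Abar * h + self.B * x[:, t].unsqueeze(-1)    # (batch, dim, state)
            ys.append((h * self.C).sum(-1))                  # read out y_t: (batch, dim)
        return torch.stack(ys, dim=1)                        # (batch, T, dim)

out = SimpleSSM(dim=32)(torch.randn(2, 10, 32))              # (2, 10, 32)
```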
arXiv Detail & Related papers (2024-12-15T19:56:56Z) - Supra-Laplacian Encoding for Transformer on Dynamic Graphs [14.293220696079919]
We present a new spatio-temporal encoding that leverages the Graph Transformer (GT) architecture while keeping temporal information.
Specifically, we transform discrete-time dynamic graphs into multi-layer graphs and take advantage of the spectral properties of their associated supra-Laplacian matrix.
Our second contribution explicitly models pairwise node interactions with a cross-attention mechanism, providing an accurate edge representation for dynamic link prediction.
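The supra-Laplacian construction itself is standard for multi-layer graphs: place each snapshot's adjacency on the block diagonal and couple copies of the same node across consecutive snapshots; eigenvectors of the result can then serve as spatio-temporal positional encodings. A sketch with a uniform coupling weight (the paper's coupling scheme may differ):

```python
import numpy as np

def supra_laplacian(adjs, omega=1.0):
    """Supra-Laplacian of a discrete-time dynamic graph.
    adjs: list of T dense (N, N) snapshot adjacency matrices.
    omega: inter-layer weight linking each node to its own copy in the
    next snapshot (a common convention; an assumption here)."""
    T, N = len(adjs), adjs[0].shape[0]
    A_supra = np.zeros((T * N, T * N))
    for t, A in enumerate(adjs):
        A_supra[t*N:(t+1)*N, t*N:(t+1)*N] = A           # intra-layer edges
    idx = np.arange(N)
    for t in range(T - 1):                              # couple node copies across time
        A_supra[t*N + idx, (t+1)*N + idx] = omega
        A_supra[(t+1)*N + idx, t*N + idx] = omega
    D = np.diag(A_supra.sum(1))
    return D - A_supra                                  # combinatorial Laplacian

A1 = np.ones((4, 4)) - np.eye(4)                        # dense early snapshot
A2 = np.zeros((4, 4)); A2[0, 1] = A2[1, 0] = 1.0        # sparser later snapshot
evals, evecs = np.linalg.eigh(supra_laplacian([A1, A2]))  # spectral encodings
```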
arXiv Detail & Related papers (2024-09-26T15:56:40Z) - DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs [59.434893231950205]
Dynamic graph learning aims to uncover evolutionary laws in real-world systems.
We propose DyG-Mamba, a new continuous state space model for dynamic graph learning.
We show that DyG-Mamba achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2024-08-13T15:21:46Z) - Todyformer: Towards Holistic Dynamic Graph Transformers with Structure-Aware Tokenization [6.799413002613627]
Todyformer is a novel Transformer-based neural network tailored for dynamic graphs.
It unifies the local encoding capacity of Message-Passing Neural Networks (MPNNs) with the global encoding of Transformers.
We show that Todyformer consistently outperforms the state-of-the-art methods for downstream tasks.
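The local-then-global pattern can be sketched as: summarize each temporal patch of a node's interaction sequence with a small local encoder (standing in for a full MPNN), then let a Transformer mix the patch tokens globally. Patch size and aggregator below are assumptions, not Todyformer's exact tokenization:

```python
import torch
import torch.nn as nn

class LocalGlobalEncoder(nn.Module):
    """Sketch of the MPNN-local / Transformer-global pattern: a mean-pooled
    MLP summarizes each temporal patch (stand-in for message passing), then
    a Transformer mixes patch tokens globally."""
    def __init__(self, dim, patch=8, heads=4):
        super().__init__()
        self.patch = patch
        self.local = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.globl = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                        # x: (B, T, D), T divisible by patch
        B, T, D = x.shape
        tokens = x.view(B, T // self.patch, self.patch, D)
        tokens = self.local(tokens).mean(dim=2)  # one token per patch (local encoding)
        return self.globl(tokens)                # global mixing across patches

out = LocalGlobalEncoder(dim=64)(torch.randn(2, 32, 64))   # (2, 4, 64)
```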
arXiv Detail & Related papers (2024-02-02T23:05:30Z) - TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z) - EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper aims to design an easy-to-use pipeline (termed EasyDGL) composed of three key modules with both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
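The complexity reduction can be pictured as each node aggregating messages from K sampled nodes instead of all N, dropping the cost from O(N^2) to O(NK). The paper predicts sampling locations and filter weights adaptively; the uniform sampling below is a simplification:

```python
import torch
import torch.nn as nn

class SampledMessagePassing(nn.Module):
    """Sketch of sparse message passing on a fully-connected graph: each node
    aggregates from K uniformly sampled nodes. Adaptive, content-dependent
    sampling (as in the paper) is replaced by torch.randint here."""
    def __init__(self, dim, k=8):
        super().__init__()
        self.k = k
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x):                                 # x: (N, D) node features
        N = x.size(0)
        idx = torch.randint(0, N, (N, self.k))            # K sampled sources per node
        m = self.msg(x[idx]).mean(dim=1)                  # (N, D) aggregated messages
        return self.upd(torch.cat([x, m], dim=-1))        # O(N*K) instead of O(N^2)

out = SampledMessagePassing(dim=32)(torch.randn(100, 32))
```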
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Time-aware Dynamic Graph Embedding for Asynchronous Structural Evolution [60.695162101159134]
Existing works merely view a dynamic graph as a sequence of changes.
We formulate dynamic graphs as temporal edge sequences associated with the joining time of vertices and the timespan of edges (ToE).
A time-aware Transformer is proposed to embed vertices' dynamic connections and ToEs into the learned vertex representations.
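Embedding continuous quantities such as joining times and edge timespans is commonly done with a functional (sinusoidal) time encoding in the style of TGAT; the generic encoder below is a stand-in, not this paper's exact module:

```python
import torch
import torch.nn as nn

class TimeEncoder(nn.Module):
    """Generic functional time encoding for continuous values such as a
    vertex's joining time or an edge's timespan (ToE):
    phi(t) = cos(t * w + b) with learnable frequencies. A common choice;
    the paper's exact encoder may differ."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, t):                       # t: (...,) continuous times
        return torch.cos(t.unsqueeze(-1) * self.w + self.b)

enc = TimeEncoder(dim=32)
toe = enc(torch.tensor([0.5, 3.0, 12.0]))       # (3, 32), fused with edge features downstream
```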
arXiv Detail & Related papers (2022-07-01T15:32:56Z) - TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
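A contrastive objective on dynamic graphs typically encodes two views of the same node's temporal neighborhood and pulls them together with an InfoNCE loss; the view construction is elided below, and the loss is a generic sketch, not necessarily TCL's exact objective:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over two views of the same nodes: row i of z1 should match
    row i of z2 and repel all other rows. A generic contrastive loss in the
    spirit of TCL; the positive/negative construction may differ."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                       # (N, N) scaled cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# e.g., z1/z2 = encodings of a node's temporal neighborhood under two samplings
loss = info_nce(torch.randn(64, 32), torch.randn(64, 32))
```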
arXiv Detail & Related papers (2021-05-17T15:33:25Z)