Understanding and Improving Laplacian Positional Encodings For Temporal GNNs
- URL: http://arxiv.org/abs/2506.01596v1
- Date: Mon, 02 Jun 2025 12:30:58 GMT
- Title: Understanding and Improving Laplacian Positional Encodings For Temporal GNNs
- Authors: Yaniv Galron, Fabrizio Frasca, Haggai Maron, Eran Treister, Moshe Eliasof
- Abstract summary: We introduce a theoretical framework that connects supra-Laplacian encodings to per-time-slice encodings. We also introduce novel methods to reduce the computational overhead, achieving up to 56x faster runtimes while scaling to graphs with 50,000 active nodes. Our findings reveal that while positional encodings can significantly boost performance in certain scenarios, their effectiveness varies across different models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal graph learning has applications in recommendation systems, traffic forecasting, and social network analysis. Although multiple architectures have been introduced, progress in positional encoding for temporal graphs remains limited. Extending static Laplacian eigenvector approaches to temporal graphs through the supra-Laplacian has shown promise, but also poses key challenges: high eigendecomposition costs, limited theoretical understanding, and ambiguity about when and how to apply these encodings. In this paper, we address these issues by (1) offering a theoretical framework that connects supra-Laplacian encodings to per-time-slice encodings, highlighting the benefits of leveraging additional temporal connectivity, (2) introducing novel methods to reduce the computational overhead, achieving up to 56x faster runtimes while scaling to graphs with 50,000 active nodes, and (3) conducting an extensive experimental study to identify which models, tasks, and datasets benefit most from these encodings. Our findings reveal that while positional encodings can significantly boost performance in certain scenarios, their effectiveness varies across different models.
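To make the supra-Laplacian construction concrete, here is a minimal sketch: stack the per-slice adjacencies block-diagonally, couple each node's copies across consecutive time slices, and take the eigenvectors of the smallest supra-Laplacian eigenvalues as positional encodings. This is an illustration under our own assumptions (names like `supra_laplacian_pe` and the unit inter-slice coupling are ours), not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of supra-Laplacian
# positional encodings; supra_laplacian_pe and coupling are our own names.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def supra_laplacian_pe(snapshots, k=8, coupling=1.0):
    """k eigenvector encodings, one row per (node, time-slice) copy.

    snapshots: list of T symmetric sparse (N x N) adjacency matrices.
    Returns an array of shape (T * N, k).
    """
    T, N = len(snapshots), snapshots[0].shape[0]
    # Spatial part: block-diagonal stack of the per-slice adjacencies.
    A = sp.block_diag(snapshots, format="csr").astype(float)
    # Temporal part: link each node copy to its copy in the next slice.
    inter = sp.diags(np.full(N * (T - 1), coupling), offsets=N,
                     shape=(N * T, N * T), format="csr")
    A = A + inter + inter.T
    # Combinatorial supra-Laplacian L = D - A.
    L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A
    # Eigenvectors of the k smallest eigenvalues are the encodings.
    _, vecs = eigsh(L, k=k, which="SM")
    return vecs

# Example: three random 100-node snapshots -> 300 supra-node encodings.
snaps = []
for seed in range(3):
    M = sp.random(100, 100, density=0.05, random_state=seed)
    snaps.append(((M + M.T) > 0).astype(float))  # symmetrize, binarize
print(supra_laplacian_pe(snaps, k=8).shape)      # (300, 8)
```

Note that the decomposition runs over an NT x NT matrix, which is precisely the eigendecomposition cost the paper's speedup methods target, and the usual Laplacian-PE caveats (eigenvector sign and basis ambiguity) apply here as well.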
Related papers
- Learnable Spatial-Temporal Positional Encoding for Link Prediction [44.0907827498725]
We propose a simple temporal link prediction model named L-STEP. L-STEP preserves graph properties from the spatial-temporal spectral viewpoint and obtains leading performance on the newest large-scale TGB benchmark.
arXiv Detail & Related papers (2025-06-10T00:35:53Z)
- Between Linear and Sinusoidal: Rethinking the Time Encoder in Dynamic Graph Learning [12.201459405989898]
Seminal attention-based models like TGAT and DyGFormer rely on sinusoidal time encoders to capture temporal dependencies between edge events. We show that a linear time encoder avoids the temporal information loss caused by sinusoidal functions and reduces the need for high-dimensional time encoders (a minimal sketch of both encoder styles appears after this list).
arXiv Detail & Related papers (2025-04-10T21:12:10Z)
- ScaDyG: A New Paradigm for Large-scale Dynamic Graph Learning [31.629956388962814]
ScaDyG is a time-aware scalable learning paradigm for dynamic graph networks. Experiments on 12 datasets demonstrate that ScaDyG matches or even outperforms other SOTA methods on both node- and link-level downstream tasks.
arXiv Detail & Related papers (2025-01-27T12:39:16Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely "hidden transfer", which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- Gegenbauer Graph Neural Networks for Time-varying Signal Reconstruction [4.6210788730570584]
Reconstructing time-varying graph signals is a critical problem in machine learning and signal processing with broad applications.
We propose a novel approach that incorporates a learning module to enhance the accuracy of the downstream task.
We conduct extensive experiments on real datasets to evaluate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-03-28T19:29:17Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- DyG2Vec: Efficient Representation Learning for Dynamic Graphs [26.792732615703372]
Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns.
We present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings.
arXiv Detail & Related papers (2022-10-30T18:13:04Z)
- Scalable Spatiotemporal Graph Neural Networks [14.415967477487692]
Graph neural networks (GNNs) are often the core component of the forecasting architecture.
In most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph.
We propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics.
arXiv Detail & Related papers (2022-09-14T09:47:38Z)
- Confident Adaptive Language Modeling [95.45272377648773]
CALM is a framework for dynamically allocating different amounts of compute per input and generation timestep.
We demonstrate the efficacy of our framework in reducing compute -- potential speedup of up to $\times 3$ -- while provably maintaining high performance.
arXiv Detail & Related papers (2022-07-14T17:00:19Z)
- ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization [71.91942002659795]
We investigate and adapt network quantization techniques to accelerate inference and enable its use on compute limited platforms.
ZippyPoint, our efficient quantized network with binary descriptors, improves the network runtime speed, the descriptor matching speed, and the 3D model size.
These improvements come at the cost of a minor performance degradation, as evaluated on the tasks of homography estimation, visual localization, and map-free visual relocalization.
arXiv Detail & Related papers (2022-03-07T18:59:03Z)
- Rewiring with Positional Encodings for Graph Neural Networks [37.394229290996364]
Several recent works use positional encodings to extend receptive fields of graph neural network layers equipped with attention mechanisms.
We use positional encodings to expand receptive fields to $r$-hop neighborhoods.
We obtain improvements on a variety of models and datasets and reach competitive performance using traditional GNNs or graph Transformers (a small rewiring sketch appears after this list).
arXiv Detail & Related papers (2022-01-29T22:26:02Z)
- Multi-Exit Semantic Segmentation Networks [78.44441236864057]
We propose a framework for converting state-of-the-art segmentation models to MESS networks: specially trained CNNs that employ parametrised early exits along their depth to save computation during inference on easier samples.
We co-optimise the number, placement and architecture of the attached segmentation heads, along with the exit policy, to adapt to the device capabilities and application-specific requirements.
arXiv Detail & Related papers (2021-06-07T11:37:03Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, deeper models built by stacking such operations often perform worse; several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- A Transformer-based Approach for Source Code Summarization [86.08359401867577]
We learn code representation for summarization by modeling the pairwise relationship between code tokens.
We show that, despite its simplicity, the approach outperforms state-of-the-art techniques by a significant margin.
arXiv Detail & Related papers (2020-05-01T23:29:36Z)
- Auto-Encoding Twin-Bottleneck Hashing [141.5378966676885]
This paper proposes an efficient and adaptive code-driven graph, which is updated by decoding in the context of an auto-encoder.
Experiments on benchmarked datasets clearly show the superiority of our framework over the state-of-the-art hashing methods.
arXiv Detail & Related papers (2020-02-27T05:58:12Z)
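Following up on the "Between Linear and Sinusoidal" entry above, here is a minimal sketch of the two time-encoder styles. The class names are ours; the sinusoidal variant only mirrors the common TGAT-style form cos(t w + b) rather than reproducing either paper's exact module.

```python
# Hedged sketch of the two time-encoder styles; names are illustrative.
import torch
import torch.nn as nn

class SinusoidalTimeEncoder(nn.Module):
    """TGAT-style form: phi(t) = cos(t * w + b) with learnable w, b."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(1, dim)   # holds both w and b

    def forward(self, t):               # t: (batch,) elapsed times
        return torch.cos(self.proj(t.unsqueeze(-1)))

class LinearTimeEncoder(nn.Module):
    """Linear alternative: an affine map of elapsed time, no periodicity."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(1, dim)

    def forward(self, t):
        return self.proj(t.unsqueeze(-1))

t = torch.tensor([0.0, 1.5, 10.0])
print(SinusoidalTimeEncoder(4)(t).shape)  # torch.Size([3, 4])
print(LinearTimeEncoder(4)(t).shape)      # torch.Size([3, 4])
```

The linear encoder preserves the ordering and spacing of event times directly, which is the information-loss argument the entry alludes to; the sinusoidal one folds times onto periodic axes.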
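And for the "Rewiring with Positional Encodings" entry, a small sketch of one plausible reading of the idea: connect every node to its full $r$-hop neighborhood and attach a one-hot hop-distance encoding to each added edge. The function name and the one-hot choice are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of r-hop rewiring with hop-distance encodings;
# one plausible reading of the idea, not the paper's code.
import networkx as nx
import numpy as np

def rewire_with_distance_pe(G, r=2):
    """Return edges of the r-hop rewired graph and a one-hot hop encoding per edge."""
    edges, pe = [], []
    for u in G.nodes:
        # Every node within r hops of u, with its shortest-path distance.
        for v, d in nx.single_source_shortest_path_length(G, u, cutoff=r).items():
            if v == u:
                continue
            edges.append((u, v))
            onehot = np.zeros(r)   # encode hop distance 1..r
            onehot[d - 1] = 1.0
            pe.append(onehot)
    return edges, np.stack(pe)

G = nx.path_graph(5)                         # 0-1-2-3-4
edges, pe = rewire_with_distance_pe(G, r=2)
print(len(edges), pe.shape)                  # 14 directed edges, (14, 2)
```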