No Need to Look Back: An Efficient and Scalable Approach for Temporal
Network Representation Learning
- URL: http://arxiv.org/abs/2402.01964v1
- Date: Sat, 3 Feb 2024 00:12:36 GMT
- Title: No Need to Look Back: An Efficient and Scalable Approach for Temporal
Network Representation Learning
- Authors: Yuhong Luo and Pan Li
- Abstract summary: This paper introduces a novel efficient TGRL framework, No-Looking-Back (NLB).
NLB employs a "forward recent sampling" strategy, which bypasses the need for backtracking historical interactions.
Empirical evaluations demonstrate that NLB matches or surpasses state-of-the-art methods in accuracy for link prediction and node classification.
- Score: 9.218415145210715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal graph representation learning (TGRL) is crucial for modeling
complex, dynamic systems in real-world networks. Traditional TGRL methods,
though effective, suffer from high computational demands and inference latency.
This stems mainly from their inefficient sampling of temporal neighbors, which
requires backtracking each node's interaction history at inference time.
This paper introduces a novel efficient TGRL framework, No-Looking-Back (NLB).
NLB employs a "forward recent sampling" strategy, which bypasses the need for
backtracking historical interactions. This strategy is implemented using a
GPU-executable size-constrained hash table for each node, recording
down-sampled recent interactions, which enables rapid response to queries with
minimal inference latency. The maintenance of this hash table is highly
efficient, with $O(1)$ complexity. NLB is fully compatible with GPU processing,
maximizing programmability, parallelism, and power efficiency. Empirical
evaluations demonstrate that NLB matches or surpasses state-of-the-art methods
in accuracy for link prediction and node classification across six real-world
datasets. Significantly, it is 1.32-4.40 $\times$ faster in training, 1.2-7.94
$\times$ more energy efficient, and 1.97-5.02 $\times$ more effective in
reducing inference latency compared to the most competitive baselines. Code is
available at https://github.com/Graph-COM/NLB.
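The abstract's key data structure is easy to picture in code. Below is a minimal CPU sketch of forward recent sampling with a size-constrained table per node, written for illustration only: the class name, the hash-and-overwrite replacement rule, and the probability parameter are our assumptions, not NLB's actual GPU implementation (see the linked repository for that).

```python
import random
from collections import defaultdict

class ForwardRecentSampler:
    """Minimal sketch of a size-constrained per-node table for "forward
    recent sampling": each new interaction is pushed into a fixed-size
    table in O(1), so queries never backtrack the interaction history.
    The slot-replacement rule (hash + probabilistic overwrite) is an
    illustrative assumption, not NLB's exact GPU-based scheme."""

    def __init__(self, capacity: int = 16, replace_prob: float = 0.8, seed: int = 0):
        self.capacity = capacity
        self.replace_prob = replace_prob
        self.rng = random.Random(seed)
        # node id -> fixed-size list of (neighbor, timestamp) slots
        self.tables = defaultdict(lambda: [None] * capacity)

    def record(self, u: int, v: int, t: float) -> None:
        """O(1) maintenance: hash the new interaction into one slot of
        each endpoint's table, overwriting with some probability so the
        table holds a down-sample of recent interactions."""
        for src, dst in ((u, v), (v, u)):
            slot = hash((dst, t)) % self.capacity
            table = self.tables[src]
            if table[slot] is None or self.rng.random() < self.replace_prob:
                table[slot] = (dst, t)

    def sample_recent(self, u: int):
        """Query time: return the down-sampled recent neighbors without
        scanning any interaction log."""
        return [e for e in self.tables[u] if e is not None]

# usage: stream interactions, then query with no backtracking
s = ForwardRecentSampler(capacity=4)
for u, v, t in [(0, 1, 1.0), (0, 2, 2.0), (1, 2, 3.0), (0, 3, 4.0)]:
    s.record(u, v, t)
print(s.sample_recent(0))
```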
Related papers
- Rel-HNN: Split Parallel Hypergraph Neural Network for Learning on Relational Databases [3.6423651166048874]
Flattening the database poses challenges for deep learning models.
We propose a novel hypergraph-based framework that we call rel-HNN.
We show that rel-HNN significantly outperforms existing methods in both classification and regression tasks.
arXiv Detail & Related papers (2025-07-16T18:20:45Z)
- LoRANN: Low-Rank Matrix Factorization for Approximate Nearest Neighbor Search [4.194768796374315]
We propose a new supervised score computation method based on the observation that inner product approximation is a multi-output regression problem.
Our experiments show that the proposed reduced-rank regression (RRR) method is superior to PQ in both query latency and memory usage.
We also introduce LoRANN, a clustering-based ANN library that leverages the proposed score computation method.
arXiv Detail & Related papers (2024-10-24T17:13:39Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2024-10-24T17:13:39Z)
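The LoRANN entry's core observation, that inner-product score computation can be treated as a multi-output regression problem and compressed with a low-rank factor model, can be illustrated in a few lines. The sketch below is our own unsupervised SVD stand-in for the supervised, per-cluster reduced-rank regression the paper actually fits; all names and sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 10_000, 128, 16
# real embedding matrices typically have fast-decaying spectra; we mimic
# that with a low intrinsic dimension so a small rank suffices
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
q = rng.standard_normal(d)

# rank-r factorization A ~= U @ V via truncated SVD
U_, S, Vt = np.linalg.svd(A, full_matrices=False)
U = U_[:, :r] * S[:r]          # n x r
V = Vt[:r]                     # r x d

exact = A @ q                  # O(n*d) exact inner-product scores
approx = U @ (V @ q)           # O((n + d) * r) approximate scores

# shortlist by approximate score, then verify against the exact top-10
shortlist = np.argsort(-approx)[:100]
true_top10 = np.argsort(-exact)[:10]
recall = np.intersect1d(shortlist, true_top10).size / 10
print(f"recall@100 of exact top-10: {recall:.2f}")
```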
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution over the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
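The sketching idea behind communication-efficient Adam-type methods can be illustrated with a plain count sketch: each worker sends O(k) counters instead of a d-dimensional gradient, sketches add up linearly at the server, and heavy gradient coordinates are recovered approximately. This is a generic, hedged sketch of the compression primitive, not SketchedAMSGrad's actual algorithm or its variance-reduction details.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, rows, workers, heavy = 10_000, 512, 5, 4, 20

# shared hash functions: bucket index and random sign per (row, coordinate)
bucket = rng.integers(0, k, size=(rows, d))
sign = rng.choice([-1.0, 1.0], size=(rows, d))

def sketch(g):
    """Compress a d-dim gradient into a rows x k count-sketch table."""
    S = np.zeros((rows, k))
    for r in range(rows):
        np.add.at(S[r], bucket[r], sign[r] * g)
    return S

def recover_heavy(S, m):
    """Median-of-rows estimates; keep the m largest (heavy hitters)."""
    est = np.median([sign[r] * S[r, bucket[r]] for r in range(rows)], axis=0)
    out = np.zeros(d)
    top = np.argsort(-np.abs(est))[:m]
    out[top] = est[top]
    return out

# gradients whose mass concentrates on a few coordinates compress well
def make_grad():
    g = 0.01 * rng.standard_normal(d)
    g[rng.choice(d, heavy, replace=False)] += 5 * rng.standard_normal(heavy)
    return g

grads = [make_grad() for _ in range(workers)]
merged = sum(sketch(g) for g in grads) / workers   # sketches are linear
g_hat = recover_heavy(merged, m=workers * heavy)
g_true = np.mean(grads, axis=0)
err = np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true)
print(f"relative L2 error after sketched communication: {err:.2f}")
```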
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z)
- Parallel Actors and Learners: A Framework for Generating Scalable RL Implementations [14.432131909590824]
Reinforcement Learning (RL) has achieved significant success in application domains such as robotics, games, health care and others.
Current implementations exhibit poor performance due to challenges such as irregular memory accesses and synchronization overheads.
We propose a framework for generating scalable reinforcement learning implementations on multicore systems.
arXiv Detail & Related papers (2021-10-03T21:00:53Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
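The C&S procedure named above is simple enough to sketch end to end: train any cheap base predictor, "correct" it by propagating its training residuals over the graph, then "smooth" the corrected predictions with another round of label propagation. The sketch below is a simplified illustration (fixed iteration counts, no residual autoscaling), not the authors' exact code.

```python
import numpy as np

def propagate(E, S, alpha, steps=50):
    """Label propagation: iterate E <- (1 - alpha) * E0 + alpha * S @ E."""
    E0 = E
    for _ in range(steps):
        E = (1 - alpha) * E0 + alpha * S @ E
    return E

def correct_and_smooth(S, base_probs, Y_train, train_idx,
                       alpha1=0.9, alpha2=0.8):
    # "Correct": propagate the base model's residual errors on train nodes
    residual = np.zeros_like(base_probs)
    residual[train_idx] = Y_train - base_probs[train_idx]
    Z = base_probs + propagate(residual, S, alpha1)
    # "Smooth": reset train predictions to true labels, then propagate
    Z[train_idx] = Y_train
    return propagate(Z, S, alpha2)

# toy usage on a random graph; S is the symmetric-normalized adjacency
rng = np.random.default_rng(0)
n, c = 200, 3
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
deg = np.maximum(A.sum(1), 1)
S = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]

base_probs = rng.dirichlet(np.ones(c), size=n)   # stand-in for a cheap MLP
train_idx = np.arange(40)
Y_train = np.eye(c)[rng.integers(0, c, size=40)]
final = correct_and_smooth(S, base_probs, Y_train, train_idx)
print(final.shape)  # (200, 3) post-processed class scores
```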
- Accurate, Efficient and Scalable Training of Graph Neural Networks [9.569918335816963]
Graph Neural Networks (GNNs) are powerful deep learning models to generate node embeddings on graphs.
It is still challenging to perform training in an efficient and scalable way.
We propose a novel parallel training framework that reduces training workload by orders of magnitude compared with state-of-the-art minibatch methods.
arXiv Detail & Related papers (2020-10-05T22:06:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.