Leveraging Temporal Graph Networks Using Module Decoupling
- URL: http://arxiv.org/abs/2310.02721v2
- Date: Thu, 6 Jun 2024 14:11:29 GMT
- Title: Leveraging Temporal Graph Networks Using Module Decoupling
- Authors: Or Feldman, Chaim Baskin
- Abstract summary: We suggest a decoupling strategy that enables the models to update frequently while using batches.
We have developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs.
Our method outperforms previous approaches by more than 20% on benchmarks that require rapid model update rates.
- Score: 3.115375810642661
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern approaches for learning on dynamic graphs have adopted the use of batches instead of applying updates one by one. The use of batches allows these techniques to become helpful in streaming scenarios where updates to graphs are received at extreme speeds. Using batches, however, forces the models to update infrequently, which results in the degradation of their performance. In this work, we suggest a decoupling strategy that enables the models to update frequently while using batches. By decoupling the core modules of temporal graph networks and implementing them using a minimal number of learnable parameters, we have developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs. LDTGN was validated on various dynamic graph benchmarks, providing comparable or state-of-the-art results with significantly higher throughput than prior art. Notably, our method outperforms previous approaches by more than 20% on benchmarks that require rapid model update rates, such as USLegis or UNTrade. The code to reproduce our experiments is available at https://orfeld415.github.io/module-decoupling.
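The abstract describes the decoupling idea only at a high level. As a rough illustration, and not the authors' implementation, the toy Python sketch below contrasts the conventional batched regime (node memory refreshed once per batch) with a decoupled-style regime (memory refreshed after every event while events are still consumed in batches). All class and function names here are hypothetical.

```python
# Toy sketch (assumed, not the LDTGN code): per-batch vs. per-event memory
# updates while still consuming interaction events in batches.
import numpy as np

class TinyMemory:
    """Keeps one state vector per node, updated from interaction events."""
    def __init__(self, num_nodes, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.state = rng.normal(scale=0.1, size=(num_nodes, dim))
        self.w = rng.normal(scale=0.1, size=(2 * dim, dim))  # tiny "learnable" map

    def update(self, src, dst):
        # Toy update rule: mix the two endpoint states and write the result back.
        msg = np.tanh(np.concatenate([self.state[src], self.state[dst]]) @ self.w)
        self.state[src] = 0.5 * self.state[src] + 0.5 * msg
        self.state[dst] = 0.5 * self.state[dst] + 0.5 * msg

def process_batch_coupled(memory, batch):
    # Conventional batching: every event is scored against the memory as it
    # was at the start of the batch; updates are applied only afterwards.
    scores = [float(memory.state[s] @ memory.state[d]) for s, d in batch]
    for s, d in batch:
        memory.update(s, d)
    return scores

def process_batch_decoupled(memory, batch):
    # Decoupled-style processing: the batch is still consumed as a whole,
    # but the memory is refreshed after each event, so later events in the
    # same batch already see up-to-date node states.
    scores = []
    for s, d in batch:
        scores.append(float(memory.state[s] @ memory.state[d]))
        memory.update(s, d)
    return scores

if __name__ == "__main__":
    events = [(0, 1), (1, 2), (0, 2), (2, 3)]
    print(process_batch_coupled(TinyMemory(4, 8), events))
    print(process_batch_decoupled(TinyMemory(4, 8), events))
```

The divergence between the two printed score lists illustrates only the read/write ordering that the decoupling strategy targets; the actual LDTGN modules and update rules are described in the paper.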
Related papers
- GLBench: A Comprehensive Benchmark for Graph with Large Language Models [41.89444363336435]
We introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios.
GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks.
arXiv Detail & Related papers (2024-07-10T08:20:47Z) - Temporal Graph Benchmark for Machine Learning on Temporal Graphs [54.52243310226456]
Temporal Graph Benchmark (TGB) is a collection of challenging and diverse benchmark datasets.
We benchmark each dataset and find that the performance of common models can vary drastically across datasets.
TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research.
arXiv Detail & Related papers (2023-07-03T13:58:20Z) - Instant Representation Learning for Recommendation over Large Dynamic Graphs [29.41179019520622]
We propose SUPA, a novel graph neural network for dynamic multiplex heterogeneous graphs.
For each new edge, SUPA samples an influenced subgraph, updates the representations of the two interactive nodes, and propagates the interaction information to the sampled subgraph.
To train SUPA incrementally online, we propose InsLearn, an efficient workflow for single-pass training of large dynamic graphs.
arXiv Detail & Related papers (2023-05-22T15:36:10Z) - Learnable Graph Matching: A Practical Paradigm for Data Association [74.28753343714858]
We propose a general learnable graph matching method to address these issues.
Our method achieves state-of-the-art performance on several MOT datasets.
For image matching, our method outperforms state-of-the-art methods on a popular indoor dataset, ScanNet.
arXiv Detail & Related papers (2023-03-27T17:39:00Z) - Towards Better Dynamic Graph Learning: New Architecture and Unified Library [29.625205125350313]
DyGFormer is a Transformer-based architecture for dynamic graph learning.
DyGLib is a unified library with standard training pipelines and coding interfaces.
arXiv Detail & Related papers (2023-03-23T05:27:32Z) - Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z) - Dynamic Graph Learning-Neural Network for Multivariate Time Series Modeling [2.3022070933226217]
We propose a novel framework, namely the static- and dynamic-graph learning-neural network (GL).
The model acquires static and dynamic graph matrices from data to model long-term and short-term patterns respectively.
It achieves state-of-the-art performance on almost all datasets.
arXiv Detail & Related papers (2021-12-06T08:19:15Z) - Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z) - Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
arXiv Detail & Related papers (2020-10-27T02:10:52Z) - Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach [55.44107800525776]
Graph Convolutional Networks (GCNs) are state-of-the-art graph based representation learning models.
In this paper, we revisit GCN-based Collaborative Filtering (CF) recommender systems (RS).
We show that removing non-linearities would enhance recommendation performance, consistent with the theories in simple graph convolutional networks.
We propose a residual network structure that is specifically designed for CF with user-item interaction modeling.
arXiv Detail & Related papers (2020-01-28T04:41:25Z)
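The preceding entry argues that removing non-linearities from GCN-based collaborative filtering improves recommendation and proposes a residual structure for user-item interaction modeling. The sketch below is one plausible reading of that idea under assumed details (linear propagation over the user-item bipartite graph, residual accumulation of layer outputs, dot-product scoring), not the paper's exact model; all names are illustrative.

```python
# Toy sketch (assumed reading, not the paper's exact model): linear,
# activation-free embedding propagation over the user-item interaction
# graph, with a residual combination of all layer outputs for scoring.
import numpy as np

def linear_residual_cf_scores(interactions, num_users, num_items, dim=16, layers=3, seed=0):
    rng = np.random.default_rng(seed)
    n = num_users + num_items
    # Row-normalized adjacency of the user-item bipartite graph.
    adj = np.zeros((n, n))
    for u, i in interactions:
        adj[u, num_users + i] = 1.0
        adj[num_users + i, u] = 1.0
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    norm_adj = adj / deg

    emb = rng.normal(scale=0.1, size=(n, dim))  # initial (would-be learnable) embeddings
    residual_sum = emb.copy()
    for _ in range(layers):
        emb = norm_adj @ emb               # linear propagation, no non-linearity
        residual_sum = residual_sum + emb  # residual accumulation of layer outputs

    users, items = residual_sum[:num_users], residual_sum[num_users:]
    return users @ items.T                 # dot-product preference scores

if __name__ == "__main__":
    # Two users, three items, a few observed interactions (user_idx, item_idx).
    scores = linear_residual_cf_scores([(0, 0), (0, 1), (1, 1), (1, 2)], 2, 3)
    print(np.round(scores, 3))
```

The residual accumulation mirrors the summary's claim that purely linear propagation plus a residual structure can suffice for CF; embedding training (e.g. with a pairwise ranking loss) is omitted here.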