DyG2Vec: Efficient Representation Learning for Dynamic Graphs
- URL: http://arxiv.org/abs/2210.16906v3
- Date: Mon, 8 Jan 2024 20:42:05 GMT
- Title: DyG2Vec: Efficient Representation Learning for Dynamic Graphs
- Authors: Mohammad Ali Alomrani, Mahdi Biparva, Yingxue Zhang, Mark Coates
- Abstract summary: Temporal graph neural networks have shown promising results in learning inductive representations by automatically extracting temporal patterns.
We present an efficient yet effective attention-based encoder that leverages temporal edge encodings and window-based subgraph sampling to generate task-agnostic embeddings.
- Score: 26.792732615703372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Temporal graph neural networks have shown promising results in learning
inductive representations by automatically extracting temporal patterns.
However, previous works often rely on complex memory modules or inefficient
random walk methods to construct temporal representations. To address these
limitations, we present an efficient yet effective attention-based encoder that
leverages temporal edge encodings and window-based subgraph sampling to
generate task-agnostic embeddings. Moreover, we propose a joint-embedding
architecture using non-contrastive SSL to learn rich temporal embeddings
without labels. Experimental results on 7 benchmark datasets indicate that on
average, our model outperforms SoTA baselines on the future link prediction
task by 4.23% for the transductive setting and 3.30% for the inductive setting
while only requiring 5-10x less training/inference time. Lastly, different
aspects of the proposed framework are investigated through experimental
analysis and ablation studies. The code is publicly available at
https://github.com/huawei-noah/noah-research/tree/master/graph_atlas.
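To make the two ideas in the abstract concrete, here is a minimal sketch pairing a window-based temporal subgraph sampler with a VICReg-style non-contrastive joint-embedding loss. It is reconstructed from the abstract alone, so the function names, the window size, and the loss weights are assumptions rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code): window-based subgraph sampling
# plus a non-contrastive joint-embedding loss in the spirit of VICReg.
import torch
import torch.nn.functional as F

def sample_window(src, dst, ts, t_end, window):
    """Keep only edges whose timestamp falls in [t_end - window, t_end)."""
    mask = (ts >= t_end - window) & (ts < t_end)
    return src[mask], dst[mask], ts[mask]

def nc_loss(z1, z2, var_w=25.0, cov_w=1.0):
    """Non-contrastive loss: invariance + variance + covariance terms.
    For brevity the variance/covariance statistics use one view only."""
    inv = F.mse_loss(z1, z2)                      # match the two views
    std = torch.sqrt(z1.var(dim=0) + 1e-4)
    var = torch.relu(1.0 - std).mean()            # keep features spread out
    zc = z1 - z1.mean(dim=0)
    cov = (zc.T @ zc / (len(z1) - 1)).fill_diagonal_(0.0)
    cov_pen = (cov ** 2).sum() / z1.shape[1]      # decorrelate dimensions
    return inv + var_w * var + cov_w * cov_pen

# Toy event stream: (source, destination, timestamp) triples.
src = torch.randint(0, 100, (1000,))
dst = torch.randint(0, 100, (1000,))
ts = torch.rand(1000) * 1000
s, d, t = sample_window(src, dst, ts, t_end=800.0, window=200.0)
z1, z2 = torch.randn(64, 32), torch.randn(64, 32)  # stand-in encoder outputs
print(len(s), nc_loss(z1, z2).item())
```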
Related papers
- From random-walks to graph-sprints: a low-latency node embedding framework on continuous-time dynamic graphs [4.372841335228306]
We propose a framework for continuous-time dynamic graphs (CTDGs) that has low latency and is competitive with state-of-the-art, higher-latency models.
In our framework, time-aware node embeddings summarizing multi-hop information are computed using only single-hop operations on the incoming edges.
We demonstrate that our graph-sprints features, combined with a machine-learning classifier, achieve competitive performance.
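As a rough illustration of how single-hop operations can accumulate multi-hop information, the sketch below keeps one running state per node and, for each incoming edge, blends the sender's state into the receiver's with an exponential time decay. The dimensions, decay rate, and blend weight are made up for the example; the paper's actual graph-sprints features will differ.

```python
# Hedged sketch of streaming, single-hop state updates (not the paper's exact
# features): each incoming edge folds the sender's state into the receiver's,
# so information propagates multiple hops over time at single-hop cost.
import numpy as np

rng = np.random.default_rng(0)
DIM, DECAY, BLEND = 16, 0.01, 0.5     # assumed hyperparameters
state = {}                            # node -> (embedding, last update time)

def on_edge(u, v, t):
    zu, _ = state.get(u, (rng.normal(size=DIM), t))
    zv, tv = state.get(v, (rng.normal(size=DIM), t))
    zv = zv * np.exp(-DECAY * (t - tv))   # decay the receiver's stale state
    state[v] = (BLEND * zv + (1 - BLEND) * zu, t)

# Node 3 ends up carrying (decayed) information that originated at node 0.
for u, v, t in [(0, 1, 0.0), (1, 2, 1.0), (2, 3, 2.0)]:
    on_edge(u, v, t)
print(state[3][0][:4])
```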
arXiv Detail & Related papers (2023-07-17T12:25:52Z)
- Sparsity exploitation via discovering graphical models in multi-variate time-series forecasting [1.2762298148425795]
We propose a decoupled training method, which includes a graph generating module and a GNNs forecasting module.
First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern from data to build graph structures.
Second, we fit these graph structures and the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting model.
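A minimal sketch of the first stage, assuming the standard scikit-learn GraphicalLasso estimator: the support of the estimated sparse precision matrix is thresholded into an adjacency matrix that the GCRN forecaster (omitted here) would consume. The threshold and toy data are assumptions for illustration.

```python
# Sketch of the graph-generating stage only: estimate a sparse precision
# matrix with GraphicalLasso and use its support as the graph structure
# that a downstream GCRN forecasting model would consume.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))           # 500 time steps, 8 series
X[:, 1] += 0.8 * X[:, 0]                # inject one dependency

gl = GraphicalLasso(alpha=0.1).fit(X)
precision = gl.precision_
adj = (np.abs(precision) > 1e-3).astype(float)
np.fill_diagonal(adj, 0.0)
print(adj)                              # nonzeros mark conditional dependencies
```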
arXiv Detail & Related papers (2023-06-29T16:48:00Z)
- Contrastive Learning for Time Series on Dynamic Graphs [17.46524362769774]
We propose a framework called GraphTNC for unsupervised learning of joint representations of the graph and the time-series.
We show that it can be beneficial for classification tasks on real-world datasets.
arXiv Detail & Related papers (2022-09-21T21:14:28Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Spatio-Temporal Graph Contrastive Learning [49.132528449909316]
We propose a Spatio-Temporal Graph Contrastive Learning framework (STGCL) to tackle these issues.
We elaborate on four types of data augmentations that perturb the data in terms of graph structure, time domain, and frequency domain.
Our framework is evaluated across three real-world datasets and four state-of-the-art models.
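The summary names augmentations in three domains; below are generic examples of one operator per domain (edge dropout, temporal cropping, frequency masking). These are assumed, illustrative forms, not STGCL's exact four augmentations.

```python
# Generic examples of the three named augmentation families (assumed forms,
# not STGCL's exact operators): edge dropout for structure, cropping for the
# time domain, and FFT-based masking for the frequency domain.
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(adj, p=0.1):
    mask = rng.random(adj.shape) >= p
    return adj * mask                       # structure perturbation

def crop_time(x, keep=0.9):
    t, k = x.shape[0], int(x.shape[0] * keep)
    start = rng.integers(0, t - k + 1)
    return x[start:start + k]               # time-domain perturbation

def mask_frequency(x, p=0.1):
    f = np.fft.rfft(x, axis=0)
    f[rng.random(f.shape[0]) < p] = 0       # zero out random frequency bins
    return np.fft.irfft(f, n=x.shape[0], axis=0)

x = rng.normal(size=(64, 10))               # 64 time steps, 10 nodes
adj = (rng.random((10, 10)) < 0.3).astype(float)
print(drop_edges(adj).sum(), crop_time(x).shape, mask_frequency(x).shape)
```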
arXiv Detail & Related papers (2021-08-26T16:05:32Z)
- Temporal Graph Network Embedding with Causal Anonymous Walks Representations [54.05212871508062]
We propose a novel approach for dynamic network representation learning based on Temporal Graph Network.
For evaluation, we provide a benchmark pipeline for the evaluation of temporal network embeddings.
We show the applicability and superior performance of our model on a real-world downstream graph machine learning task provided by one of the top European banks.
arXiv Detail & Related papers (2021-08-19T15:39:52Z)
- A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning [60.720251418816815]
We present a large-scale study on unsupervised representation learning from videos.
Our objective encourages temporally-persistent features in the same video.
We find that encouraging long-spanned persistence can be effective even when the timespan is 60 seconds.
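A hedged sketch of what a temporal-persistence objective can look like: embeddings of two clips sampled from the same video (possibly 60 seconds apart) are pulled together with a negative cosine similarity, as in BYOL-style objectives. This is a toy stand-in, not the study's exact training setup.

```python
# Toy temporal-persistence objective (assumed form): pull together the
# embeddings of two clips drawn from the same video.
import torch
import torch.nn.functional as F

def persistence_loss(z_a, z_b):
    return -F.cosine_similarity(z_a, z_b, dim=-1).mean()

z_a = torch.randn(8, 128)   # embeddings of clips at time t
z_b = torch.randn(8, 128)   # embeddings of clips from the same videos, t + 60 s
print(persistence_loss(z_a, z_b).item())
```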
arXiv Detail & Related papers (2021-04-29T17:59:53Z)
- Time-varying Graph Representation Learning via Higher-Order Skip-Gram with Negative Sampling [0.456877715768796]
We build upon the fact that the skip-gram embedding approach implicitly performs a matrix factorization.
We show that higher-order skip-gram with negative sampling is able to disentangle the role of nodes and time.
We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned time-varying graph representations outperform state-of-the-art methods.
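The implicit factorization referenced above is the classical result that skip-gram with negative sampling (SGNS) factorizes a shifted pointwise mutual information matrix; the higher-order variant replaces the matrix with a tensor indexed by node, context, and time. The tensor decomposition written below is a hedged reading of the summary, not a formula quoted from the paper.

```latex
% SGNS factorizes a shifted PMI matrix (Levy & Goldberg, 2014):
\[
  w_i^{\top} c_j \;\approx\; \mathrm{PMI}(i, j) - \log k ,
\]
% where k is the number of negative samples. A higher-order analogue
% (hedged reading of HOSGNS) factorizes a (node, context, time) tensor
% with separate time factors u_s, disentangling nodes from time:
\[
  T_{ijs} \;\approx\; \sum_{r} w_{ir}\, c_{jr}\, u_{sr} .
\]
```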
arXiv Detail & Related papers (2020-06-25T12:04:48Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all setups tested, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state of the art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high-quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
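One common way to learn discrete codes jointly with continuous embeddings is to binarize with a sign function and train through it with a straight-through estimator; the sketch below shows that generic technique, without claiming it is this paper's exact scheme.

```python
# Generic sketch of jointly learning continuous embeddings and discrete
# hash codes: binarize with sign(), pass gradients straight through.
import torch

class StraightThroughSign(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)        # binary code in {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out             # pass gradients through unchanged

emb = torch.randn(4, 16, requires_grad=True)   # continuous embeddings
code = StraightThroughSign.apply(emb)          # discrete hash codes
loss = (code - torch.ones_like(code)).pow(2).mean()
loss.backward()                                # gradients still reach emb
print(code[0], emb.grad.abs().sum().item() > 0)
```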
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.