Dynamic Graph Representation Learning via Graph Transformer Networks
- URL: http://arxiv.org/abs/2111.10447v1
- Date: Fri, 19 Nov 2021 21:44:23 GMT
- Title: Dynamic Graph Representation Learning via Graph Transformer Networks
- Authors: Weilin Cong, Yanhong Wu, Yuandong Tian, Mengting Gu, Yinglong Xia,
Mehrdad Mahdavi, Chun-cheng Jason Chen
- Abstract summary: We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT).
DGT uses spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
- Score: 41.570839291138114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic graph representation learning is an important task with widespread
applications. Previous methods on dynamic graph learning are usually sensitive
to noisy graph information such as missing or spurious connections, which can
yield degenerated performance and generalization. To overcome this challenge,
we propose a Transformer-based dynamic graph learning method named Dynamic
Graph Transformer (DGT) with spatial-temporal encoding to effectively learn
graph topology and capture implicit links. To improve the generalization
ability, we introduce two complementary self-supervised pre-training tasks and
show that jointly optimizing the two pre-training tasks results in a smaller
Bayesian error rate via an information-theoretic analysis. We also propose a
temporal-union graph structure and a target-context node sampling strategy for
efficient and scalable training. Extensive experiments on real-world datasets
illustrate that DGT presents superior performance compared with several
state-of-the-art baselines.
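Since this listing carries no code, below is a minimal PyTorch sketch of the kind of spatial-temporal encoding the abstract describes: each sampled node becomes a token whose embedding sums a feature projection, a learned hop-distance (spatial) embedding, and a learned time-bucket (temporal) embedding, fed to a standard Transformer encoder. All names, bucket sizes, and layer counts are assumptions for illustration, not DGT's actual design.

```python
import torch
import torch.nn as nn

class SpatialTemporalEncoding(nn.Module):
    """Illustrative spatial-temporal encoding for a dynamic-graph Transformer.

    The encoding scheme (hop-distance buckets, timestamp buckets) and all
    hyperparameters are assumptions for this sketch, not DGT's formulation.
    """

    def __init__(self, feat_dim, d_model, max_hops=8, time_buckets=64):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, d_model)
        self.spatial_emb = nn.Embedding(max_hops + 1, d_model)   # hop distance to target
        self.temporal_emb = nn.Embedding(time_buckets, d_model)  # bucketized timestamp
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, hop_dist, time_bucket):
        # x: (batch, tokens, feat_dim); hop_dist/time_bucket: (batch, tokens) int64
        h = self.feat_proj(x) + self.spatial_emb(hop_dist) + self.temporal_emb(time_bucket)
        return self.encoder(h)  # (batch, tokens, d_model)

# toy usage: 2 sampled subgraphs, 16 nodes each, 32-d features
enc = SpatialTemporalEncoding(feat_dim=32, d_model=64)
out = enc(torch.randn(2, 16, 32),
          torch.randint(0, 9, (2, 16)),
          torch.randint(0, 64, (2, 16)))
print(out.shape)  # torch.Size([2, 16, 64])
```

Adding the two encodings into the token embedding, rather than concatenating them, keeps the model dimension fixed, mirroring how positional encodings are usually injected into Transformers.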
Related papers
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies; a minimal sketch of this freeze-and-prompt pattern follows the entry.
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
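The sketch referenced above: a frozen, pretend-pre-trained Transformer backbone with a handful of learnable prompt tokens prepended to the input, so only the prompts and a small task head are trained. The backbone, sizes, and head are placeholders; the paper's method injects prompts into a graph Transformer, possibly at multiple depths.

```python
import torch
import torch.nn as nn

# Freeze-and-prompt sketch: a pretend-pre-trained Transformer backbone stays
# frozen; only a few prepended prompt tokens and a small task head train.
d_model, num_prompts = 64, 8

layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)  # stand-in backbone
for p in backbone.parameters():
    p.requires_grad = False  # frozen: no per-task copies of the big model

prompts = nn.Parameter(torch.randn(1, num_prompts, d_model) * 0.02)
head = nn.Linear(d_model, 3)  # illustrative 3-way task head

def forward(node_tokens):
    # node_tokens: (batch, num_nodes, d_model) from a frozen feature pipeline
    x = torch.cat([prompts.expand(node_tokens.size(0), -1, -1), node_tokens], dim=1)
    h = backbone(x)
    return head(h[:, num_prompts:].mean(dim=1))  # pool node positions only

logits = forward(torch.randn(2, 16, d_model))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 2]))
loss.backward()  # gradients reach only `prompts` and `head`
print(logits.shape)  # torch.Size([2, 3])
```

Because the frozen weights still propagate gradients to their inputs, the prompt tokens can be optimized without updating or duplicating the backbone, which is what removes the need for per-task model copies.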
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM, as sketched after this entry.
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
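The embedding step referenced above, sketched with the Hugging Face transformers API. The checkpoint name is a placeholder (SimTeG uses its own PEFT-fine-tuned LM), and mean pooling over the last hidden states is one common choice assumed here; the resulting vectors would then feed a downstream GNN.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint, not the paper's fine-tuned LM.
name = "distilbert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModel.from_pretrained(name).eval()

node_texts = ["Paper about graph transformers.", "Paper about GNN pre-training."]
batch = tok(node_texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = lm(**batch).last_hidden_state           # (nodes, tokens, dim)
    mask = batch["attention_mask"].unsqueeze(-1)     # ignore padding tokens
    node_emb = (hidden * mask).sum(1) / mask.sum(1)  # mean-pool -> (nodes, dim)

print(node_emb.shape)  # e.g. torch.Size([2, 768])
```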
- ENGAGE: Explanation Guided Data Augmentation for Graph Representation Learning [34.23920789327245]
We propose ENGAGE, where explanations guide the contrastive augmentation process to preserve the key parts of graphs.
We also design two data augmentation schemes on graphs for perturbing structural and feature information, respectively; the structural scheme is sketched after this entry.
arXiv Detail & Related papers (2023-07-03T14:33:14Z)
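The structural scheme referenced above, as a hedged sketch: given a per-edge importance score from any explanation method (the scores are simply an input here), drop the least important edges first when building a contrastive view, so the explanation-identified key structure survives. ENGAGE's actual augmentation differs in detail.

```python
import torch

def explanation_guided_edge_drop(edge_index, importance, drop_ratio=0.2):
    """Drop edges for a contrastive view, biased toward LOW-importance edges.

    `importance` (one score per edge, higher = more important) is assumed to
    come from an explanation method; this is a sketch, not ENGAGE's scheme.
    """
    num_edges = edge_index.size(1)
    num_drop = int(drop_ratio * num_edges)
    # edges with the smallest importance scores are dropped first
    keep = torch.argsort(importance, descending=True)[: num_edges - num_drop]
    return edge_index[:, keep]

# toy graph: 6 directed edges in COO form, random importance scores
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5],
                           [1, 2, 3, 4, 5, 0]])
view = explanation_guided_edge_drop(edge_index, torch.rand(6))
print(view.shape)  # torch.Size([2, 5]) -- drop_ratio=0.2 removes 1 of 6 edges
```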
- Hierarchical Transformer for Scalable Graph Learning [22.462712609402324]
Graph Transformers have demonstrated state-of-the-art performance on graph representation learning benchmarks.
However, the quadratic complexity of the global self-attention mechanism makes full-batch training a challenge on larger graphs.
We introduce the Hierarchical Scalable Graph Transformer (HSGT) as a solution to these challenges.
HSGT successfully scales the Transformer architecture to node representation learning tasks on large-scale graphs while maintaining high performance; a compact sketch of the pooling idea follows the entry.
arXiv Detail & Related papers (2023-05-04T14:23:22Z)
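The sketch referenced above: pool nodes into super-nodes using a precomputed partition, run full self-attention only at the small coarse level, and broadcast the coarse context back to the nodes. The mean-pool/broadcast hierarchy and all sizes are simple stand-ins for HSGT's actual multi-level design.

```python
import torch
import torch.nn as nn

d_model = 64
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
coarse_encoder = nn.TransformerEncoder(layer, num_layers=2)

def hierarchical_forward(x, cluster):
    # x: (num_nodes, d_model); cluster: (num_nodes,) int64 partition ids
    num_clusters = int(cluster.max()) + 1
    # mean-pool node states into super-node states
    sums = torch.zeros(num_clusters, d_model).index_add_(0, cluster, x)
    counts = torch.zeros(num_clusters).index_add_(
        0, cluster, torch.ones_like(cluster, dtype=torch.float)).clamp(min=1)
    super_x = sums / counts.unsqueeze(-1)
    # full attention is affordable over ~20 super-nodes instead of ~1000 nodes
    super_h = coarse_encoder(super_x.unsqueeze(0)).squeeze(0)
    # broadcast coarse context back to every node in its cluster
    return x + super_h[cluster]

x = torch.randn(1000, d_model)           # 1000 nodes
cluster = torch.randint(0, 20, (1000,))  # partition into 20 super-nodes
print(hierarchical_forward(x, cluster).shape)  # torch.Size([1000, 64])
```

Self-attention cost drops from O(N^2) over nodes to O(K^2) over super-nodes plus linear pooling, which is what makes larger graphs tractable.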
- EasyDGL: Encode, Train and Interpret for Continuous-time Dynamic Graph Learning [92.71579608528907]
This paper designs an easy-to-use pipeline (termed EasyDGL) composed of three key modules, offering both strong fitting ability and interpretability.
EasyDGL can effectively quantify the predictive power of the frequency content that a model learns from evolving graph data.
arXiv Detail & Related papers (2023-03-22T06:35:08Z)
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
Despite its simplicity, Local-GCL achieves competitive performance on self-supervised node representation learning tasks across graphs of various scales and properties; a loss-level sketch follows the entry.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
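The loss-level sketch referenced above: first-order neighbors act as positives and all other nodes as negatives. This version materializes the full N-by-N similarity matrix, so it is a small-graph illustration only; Local-GCL's actual objective and negative handling are more refined.

```python
import torch
import torch.nn.functional as F

def local_contrastive_loss(z, edge_index, tau=0.5):
    # z: (N, d) node embeddings; edge_index: (2, E) positive (linked) pairs
    z = F.normalize(z, dim=-1)
    sim = torch.exp(z @ z.t() / tau)              # (N, N) similarity matrix
    src, dst = edge_index
    keep = src != dst                             # drop accidental self-loops
    src, dst = src[keep], dst[keep]
    pos = sim[src, dst]                           # neighbor similarities
    denom = sim[src].sum(dim=-1) - sim[src, src]  # all others, minus self
    return -torch.log(pos / denom).mean()

z = torch.randn(50, 32, requires_grad=True)       # toy: 50 nodes, 32-d
edge_index = torch.randint(0, 50, (2, 200))       # toy positive edge list
loss = local_contrastive_loss(z, edge_index)
loss.backward()
print(float(loss))
```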
- Anomaly Detection in Dynamic Graphs via Transformer [30.926884264054042]
We present a novel Transformer-based Anomaly Detection framework for DYnamic graphs (TADDY).
Our framework constructs a comprehensive node encoding strategy to better represent each node's structural and temporal roles in an evolving graph stream; a sketch of this encode-and-score pattern follows the entry.
Our proposed TADDY framework outperforms the state-of-the-art methods by a large margin on four real-world datasets.
arXiv Detail & Related papers (2021-06-18T02:27:19Z)
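The pattern referenced above, in sketch form: context nodes around a candidate edge become tokens carrying learned structural-role and snapshot (temporal) encodings, a Transformer mixes them, and a linear head emits an anomaly probability. The concrete encodings (hop distance, snapshot index) and every size are illustrative assumptions, not TADDY's exact design.

```python
import torch
import torch.nn as nn

class EdgeAnomalyScorer(nn.Module):
    """Sketch of an encode-and-score anomaly detector for dynamic graphs.

    Each candidate edge is represented by tokens from its two endpoints'
    neighborhoods; encodings and sizes are assumptions, not TADDY's.
    """

    def __init__(self, d_model=64, max_hops=4, max_snapshots=32):
        super().__init__()
        self.role_emb = nn.Embedding(max_hops + 1, d_model)   # structural role
        self.time_emb = nn.Embedding(max_snapshots, d_model)  # snapshot index
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, hop_dist, snapshot):
        # hop_dist, snapshot: (batch, num_context_nodes) int64
        h = self.encoder(self.role_emb(hop_dist) + self.time_emb(snapshot))
        return torch.sigmoid(self.head(h.mean(dim=1))).squeeze(-1)

scorer = EdgeAnomalyScorer()
scores = scorer(torch.randint(0, 5, (8, 10)), torch.randint(0, 32, (8, 10)))
print(scores.shape)  # torch.Size([8]) -- one anomaly score per candidate edge
```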
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework; its contrastive objective is sketched after this entry.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
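Reduced to its contrastive core, this style of pre-training is an InfoNCE objective over paired subgraph views with in-batch negatives, as sketched below; the GNN encoder and GCC's random-walk subgraph sampling are left out of scope, so `q` and `k` are assumed to be the two views' embeddings.

```python
import torch
import torch.nn.functional as F

def subgraph_infonce(q, k, tau=0.07):
    """InfoNCE over paired subgraph views (a GCC-style loss-level sketch).

    q[i] and k[i] are assumed embeddings of two augmented views of the same
    node's subgraph; other batch items serve as negatives.
    """
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / tau          # (B, B) similarity matrix
    labels = torch.arange(q.size(0))  # matching views sit on the diagonal
    return F.cross_entropy(logits, labels)

q = torch.randn(16, 64, requires_grad=True)  # view-1 subgraph embeddings
k = torch.randn(16, 64)                      # view-2 subgraph embeddings
loss = subgraph_infonce(q, k)
loss.backward()
print(float(loss))
```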