Towards Better Dynamic Graph Learning: New Architecture and Unified Library
- URL: http://arxiv.org/abs/2303.13047v3
- Date: Thu, 19 Oct 2023 03:07:09 GMT
- Title: Towards Better Dynamic Graph Learning: New Architecture and Unified Library
- Authors: Le Yu, Leilei Sun, Bowen Du, Weifeng Lv
- Abstract summary: DyGFormer is a Transformer-based architecture for dynamic graph learning.
DyGLib is a unified library with standard training pipelines and coding interfaces.
- Score: 29.625205125350313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose DyGFormer, a new Transformer-based architecture for dynamic graph
learning. DyGFormer is conceptually simple and only needs to learn from nodes'
historical first-hop interactions by: (1) a neighbor co-occurrence encoding
scheme that explores the correlations of the source node and destination node
based on their historical sequences; (2) a patching technique that divides each
sequence into multiple patches and feeds them to a Transformer, allowing the
model to benefit effectively and efficiently from longer histories. We also
introduce DyGLib, a unified library with standard training pipelines,
extensible coding interfaces, and comprehensive evaluation protocols to promote
reproducible, scalable, and credible dynamic graph learning research. By
performing exhaustive experiments on thirteen datasets for dynamic link
prediction and dynamic node classification tasks, we find that DyGFormer
achieves state-of-the-art performance on most of the datasets, demonstrating
its effectiveness in capturing nodes' correlations and long-term temporal
dependencies. Moreover, some baseline results are inconsistent with previous
reports, which may stem from their diverse but less rigorous implementations,
underscoring the importance of DyGLib. All resources used are publicly
available at https://github.com/yule-BUAA/DyGLib.
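
The two ingredients in the abstract are straightforward to sketch. Below is a minimal, illustrative PyTorch version of (1) the neighbor co-occurrence features and (2) patching a history sequence before a Transformer. It is not the official DyGLib code (see the repository above for that); all module names, shapes, and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

def neighbor_cooccurrence_features(src_history, dst_history):
    """For each neighbor in a node's history, count its appearances in the
    source and destination histories; shared neighbors signal correlation."""
    feats = []
    for seq, other in ((src_history, dst_history), (dst_history, src_history)):
        f = [[seq.count(n), other.count(n)] for n in seq]
        feats.append(torch.tensor(f, dtype=torch.float))
    return feats  # one (seq_len, 2) tensor per endpoint

class PatchedSequenceEncoder(nn.Module):
    """Divide a per-node feature sequence into patches and encode the patch
    sequence with a Transformer, shortening the effective sequence length."""
    def __init__(self, feat_dim, patch_size, d_model, n_heads=4, n_layers=2):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(feat_dim * patch_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):  # x: (batch, seq_len, feat_dim), seq_len % patch_size == 0
        b, t, d = x.shape
        patches = x.reshape(b, t // self.patch_size, d * self.patch_size)
        return self.encoder(self.proj(patches))  # (batch, n_patches, d_model)
```

Because attention runs over seq_len / patch_size tokens rather than seq_len, longer histories can be consumed at the same cost, which is the efficiency argument the abstract makes.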
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
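
A rough sketch of the sampling step (uniform walks over an adjacency dict; GSPT's exact walk scheme and the function name here are assumptions):

```python
import random

def random_walk_contexts(adj, start, walk_len=8, n_walks=4):
    """Sample node contexts by uniform random walks; `adj` maps a node id
    to the list of its neighbors."""
    contexts = []
    for _ in range(n_walks):
        walk, node = [start], start
        for _ in range(walk_len - 1):
            if not adj[node]:
                break  # dead end: stop this walk early
            node = random.choice(adj[node])
            walk.append(node)
        contexts.append(walk)
    return contexts
```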
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Learning Long Range Dependencies on Graphs via Random Walks [6.7864586321550595]
Message-passing graph neural networks (GNNs) excel at capturing local relationships but struggle with long-range dependencies in graphs.
Graph transformers (GTs), by contrast, enable global information exchange but often oversimplify the graph structure by representing graphs as sets of fixed-length vectors.
This work introduces a novel architecture that overcomes the shortcomings of both approaches by combining the long-range information of random walks with local message passing.
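
One way to picture the hybrid (a sketch under assumptions; the paper's actual fusion of walk information and message passing is more elaborate):

```python
import torch
import torch.nn as nn

class WalkAugmentedMP(nn.Module):
    """Concatenate long-range features summarizing sampled random walks
    with one step of local mean-aggregation message passing."""
    def __init__(self, d_node, d_walk, d_out):
        super().__init__()
        self.fuse = nn.Linear(d_node + d_walk, d_out)

    def forward(self, x, adj_norm, walk_feats):
        # x: (n, d_node) node features; adj_norm: (n, n) row-normalized adjacency
        # walk_feats: (n, d_walk) per-node summaries of sampled random walks
        local = adj_norm @ x  # one local message-passing step
        return torch.relu(self.fuse(torch.cat([local, walk_feats], dim=-1)))
```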
arXiv Detail & Related papers (2024-06-05T15:36:57Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
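
The core random-projection trick can be sketched in a few lines (a generic Johnson-Lindenstrauss-style Gaussian projection; RpHGNN's hybrid pre-computation is considerably more involved, so treat this as illustrative):

```python
import torch

def random_projection_compress(neighbor_feats, d_out, seed=0):
    """Compress pre-computed neighbor aggregates (n, d_in) down to (n, d_out)
    with a fixed random Gaussian matrix, keeping pre-computation cheap."""
    torch.manual_seed(seed)  # fixed seed so the projection is reproducible
    d_in = neighbor_feats.shape[1]
    proj = torch.randn(d_in, d_out) / d_out ** 0.5  # JL-style scaling
    return neighbor_feats @ proj
```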
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM.
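
The embedding step is easy to reproduce with Hugging Face Transformers. The sketch below uses an off-the-shelf model and mean pooling, whereas SimTeG uses a PEFT fine-tuned LM; the model name and pooling choice here are assumptions:

```python
import torch
from transformers import AutoModel, AutoTokenizer

def node_embeddings(texts, model_name="distilbert-base-uncased"):
    """Encode node texts with an LM and mean-pool the last hidden states
    over non-padding tokens to obtain one embedding per node."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name).eval()
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (n, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)  # masked mean pooling
```

The resulting embeddings can then be fed to any downstream GNN.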
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates the results of six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- An Empirical Evaluation of Temporal Graph Benchmark [1.4211059618531252]
We conduct an empirical evaluation of Temporal Graph Benchmark (TGB) by extending our Dynamic Graph Library (DyGLib) to TGB.
We find that (1) different models depict varying performance across various datasets, which is in line with previous observations; (2) the performance of some baselines can be significantly improved over the reported results in TGB when using DyGLib.
arXiv Detail & Related papers (2023-07-24T03:52:11Z)
- Instant Representation Learning for Recommendation over Large Dynamic Graphs [29.41179019520622]
We propose SUPA, a novel graph neural network for dynamic multiplex heterogeneous graphs.
For each new edge, SUPA samples an influenced subgraph, updates the representations of the two interactive nodes, and propagates the interaction information to the sampled subgraph.
To train SUPA incrementally online, we propose InsLearn, an efficient workflow for single-pass training of large dynamic graphs.
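
The per-edge loop the summary describes might look as follows (every helper here, from the graph API to the sampler and the update rule, is hypothetical rather than SUPA's actual interface):

```python
def process_new_edge(graph, emb, u, v, update_fn, k=20):
    """On arrival of edge (u, v): update the two endpoints, then propagate
    the interaction to a sampled influenced subgraph."""
    graph.add_edge(u, v)                                    # hypothetical API
    influenced = graph.sample_influenced_subgraph(u, v, k)  # hypothetical API
    emb[u], emb[v] = update_fn(emb[u], emb[v])              # refresh endpoints
    for w in influenced:                                    # propagate outward
        emb[w] = 0.9 * emb[w] + 0.1 * (emb[u] + emb[v]) / 2
```

Processing edges one at a time in arrival order is what makes a single-pass, incremental training workflow such as InsLearn possible.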
arXiv Detail & Related papers (2023-05-22T15:36:10Z)
- DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
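
The linearization idea can be illustrated directly (row-major order here; the paper's exact symbolic format is an assumption):

```python
def relation_matrix_to_sequence(matrix, entities, relations):
    """Turn an entity-pair relation matrix into a deterministic, ordered
    triple sequence; a fixed order gives the decoder a stable target."""
    seq = []
    for i, head in enumerate(entities):
        for j, tail in enumerate(entities):
            rel_id = matrix[i][j]  # None where no relation holds
            if rel_id is not None:
                seq.append((head, relations[rel_id], tail))
    return seq
```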
arXiv Detail & Related papers (2022-10-28T11:18:10Z)
- From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer [41.69537736842654]
We present GenKGC, an approach that converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model.
We introduce relation-guided demonstration and entity-aware hierarchical decoding for better representation learning and fast inference.
We also release AliopenKG500, a new large-scale Chinese knowledge graph dataset, for research purposes.
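
In this spirit, completion can be framed as plain text-to-text generation (the prompt format and model below are illustrative stand-ins; GenKGC additionally uses relation-guided demonstrations and entity-aware hierarchical decoding):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

def complete_triple(head, relation, model_name="t5-small"):
    """Predict the tail entity of (head, relation, ?) by free-form generation."""
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    prompt = f"predict tail: {head} | {relation}"  # illustrative prompt format
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=16, num_beams=4)
    return tok.decode(out[0], skip_special_tokens=True)
```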
arXiv Detail & Related papers (2022-02-04T12:52:32Z)
- Dynamic Graph Representation Learning via Graph Transformer Networks [41.570839291138114]
We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT).
DGT uses spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
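
A common building block for such temporal encodings is a sinusoidal mapping of timestamps (a generic stand-in; DGT's spatial-temporal encoding is richer than this):

```python
import torch

def temporal_encoding(timestamps, dim):
    """Map a (n,) tensor of timestamps to (n, dim) sinusoidal features,
    analogous to positional encodings but over continuous time."""
    freqs = 1.0 / (10000 ** (torch.arange(0, dim, 2).float() / dim))
    angles = timestamps.unsqueeze(-1) * freqs  # (n, dim // 2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
```

Concatenated with node features, such encodings let a standard Transformer attend over interaction sequences in time.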
arXiv Detail & Related papers (2021-11-19T21:44:23Z)
- Heuristic Semi-Supervised Learning for Graph Generation Inspired by Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all tested setups, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z)