An Empirical Evaluation of Temporal Graph Benchmark
- URL: http://arxiv.org/abs/2307.12510v5
- Date: Sun, 15 Oct 2023 09:18:23 GMT
- Title: An Empirical Evaluation of Temporal Graph Benchmark
- Authors: Le Yu
- Abstract summary: We conduct an empirical evaluation of Temporal Graph Benchmark (TGB) by extending our Dynamic Graph Library (DyGLib) to TGB.
We find that (1) different models exhibit varying performance across datasets, which is in line with previous observations; (2) the performance of some baselines can be significantly improved over the results reported in TGB when using DyGLib.
- Score: 1.4211059618531252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we conduct an empirical evaluation of Temporal Graph Benchmark
(TGB) by extending our Dynamic Graph Library (DyGLib) to TGB. Compared with
TGB, we include eleven popular dynamic graph learning methods for more
exhaustive comparisons. Through the experiments, we find that (1) different
models exhibit varying performance across datasets, which is in line with
previous observations; (2) the performance of some baselines can be
significantly improved over the results reported in TGB when using DyGLib. This
work aims to ease researchers' efforts in evaluating various dynamic graph
learning methods on TGB and to offer results that can be directly referenced in
follow-up research. All the resources used in this project are publicly
available at https://github.com/yule-BUAA/DyGLib_TGB. This work is in progress,
and feedback from the community is welcome.
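TGB's dynamic link property prediction task scores each true destination against a set of sampled negative destinations and reports the Mean Reciprocal Rank (MRR). As a reference for how such numbers are computed, below is a minimal NumPy sketch of sampled-negative MRR; the function name, array shapes, and pessimistic tie handling are illustrative assumptions, not DyGLib_TGB's actual interface.

```python
import numpy as np

def sampled_mrr(pos_scores: np.ndarray, neg_scores: np.ndarray) -> float:
    """Mean Reciprocal Rank over sampled negatives.

    pos_scores: shape (E,)   -- model score for the true destination of each event
    neg_scores: shape (E, K) -- scores for K sampled negative destinations per event
    """
    # Rank of the positive among {positive} U {negatives}; ties with a negative
    # are counted pessimistically (the tied negative outranks the positive).
    ranks = 1 + (neg_scores >= pos_scores[:, None]).sum(axis=1)
    return float((1.0 / ranks).mean())

# Toy usage: 3 events, 5 sampled negatives each.
rng = np.random.default_rng(0)
print(sampled_mrr(rng.random(3), rng.random((3, 5))))
```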
Related papers
- GLBench: A Comprehensive Benchmark for Graph with Large Language Models [41.89444363336435]
We introduce GLBench, the first comprehensive benchmark for evaluating GraphLLM methods in both supervised and zero-shot scenarios.
GLBench provides a fair and thorough evaluation of different categories of GraphLLM methods, along with traditional baselines such as graph neural networks.
arXiv Detail & Related papers (2024-07-10T08:20:47Z)
- GSINA: Improving Subgraph Extraction for Graph Invariant Learning via Graph Sinkhorn Attention [52.67633391931959]
Graph invariant learning (GIL) has been an effective approach to discovering the invariant relationships between graph data and its labels.
We propose a novel graph attention mechanism called Graph Sinkhorn Attention (GSINA).
GSINA is able to obtain meaningful, differentiable invariant subgraphs with controllable sparsity and softness.
arXiv Detail & Related papers (2024-02-11T12:57:16Z)
- Leveraging Temporal Graph Networks Using Module Decoupling [3.115375810642661]
We suggest a decoupling strategy that enables the models to update frequently while using batches.
We have developed the Lightweight Decoupled Temporal Graph Network (LDTGN), an exceptionally efficient model for learning on dynamic graphs.
Our method outperforms previous approaches by more than 20% on benchmarks that require rapid model update rates.
arXiv Detail & Related papers (2023-10-04T10:52:51Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) on a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM (see the sketch after this list).
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Temporal Graph Benchmark for Machine Learning on Temporal Graphs [54.52243310226456]
Temporal Graph Benchmark (TGB) is a collection of challenging and diverse benchmark datasets.
We benchmark each dataset and find that the performance of common models can vary drastically across datasets.
TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research.
arXiv Detail & Related papers (2023-07-03T13:58:20Z)
- Towards Better Dynamic Graph Learning: New Architecture and Unified Library [29.625205125350313]
DyGFormer is a Transformer-based architecture for dynamic graph learning.
DyGLib is a unified library with standard training pipelines and coding interfaces.
arXiv Detail & Related papers (2023-03-23T05:27:32Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Open Graph Benchmark: Datasets for Machine Learning on Graphs [86.96887552203479]
We present the Open Graph Benchmark (OGB) to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains.
For each dataset, we provide a unified evaluation protocol using meaningful application-specific data splits and evaluation metrics (see the usage sketch after this list).
arXiv Detail & Related papers (2020-05-02T03:09:50Z)
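The SimTeG entry above outlines a two-step recipe: parameter-efficient fine-tuning of a language model, followed by extracting node embeddings from its last hidden states. A rough sketch of that recipe is below, assuming a HuggingFace backbone and the `peft` LoRA API; the model name, LoRA hyperparameters, and mean pooling are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
backbone = AutoModel.from_pretrained("distilbert-base-uncased")

# Step 1: attach LoRA adapters for parameter-efficient fine-tuning; in practice
# the wrapped model would get a classification head and be trained on the
# downstream node labels before the embeddings are extracted.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_lin", "v_lin"])
lm = get_peft_model(backbone, lora)

# Step 2: encode each node's text and pool the last hidden states into a
# fixed-size node embedding for a downstream GNN.
@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)   # mean pooling -> (B, H)

node_embeddings = embed(["paper title and abstract ...", "another node's text ..."])
print(node_embeddings.shape)
```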
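The OGB entry above highlights a unified evaluation protocol with standardized splits and dataset-specific metrics. The following sketch shows that protocol for node classification using the `ogb` package's `PygNodePropPredDataset` and `Evaluator`; the random predictions stand in for a trained model.

```python
import torch
from ogb.nodeproppred import PygNodePropPredDataset, Evaluator

dataset = PygNodePropPredDataset(name="ogbn-arxiv")
split_idx = dataset.get_idx_split()        # standardized train/valid/test node indices
data = dataset[0]

evaluator = Evaluator(name="ogbn-arxiv")   # dataset-specific metric (accuracy here)

y_true = data.y[split_idx["test"]]                            # shape (N_test, 1)
y_pred = torch.randint(0, dataset.num_classes, y_true.shape)  # placeholder labels

print(evaluator.eval({"y_true": y_true, "y_pred": y_pred}))   # e.g. {'acc': ...}
```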
This list is automatically generated from the titles and abstracts of the papers on this site.