OpenGT: A Comprehensive Benchmark For Graph Transformers
- URL: http://arxiv.org/abs/2506.04765v1
- Date: Thu, 05 Jun 2025 08:48:46 GMT
- Title: OpenGT: A Comprehensive Benchmark For Graph Transformers
- Authors: Jiachen Tang, Zhonghao Wang, Sirui Chen, Sheng Zhou, Jiawei Chen, Jiajun Bu
- Abstract summary: Graph Transformers (GTs) have recently demonstrated remarkable performance across diverse domains. This paper introduces OpenGT, a comprehensive benchmark for Graph Transformers.
- Score: 13.214504021335749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Transformers (GTs) have recently demonstrated remarkable performance across diverse domains. By leveraging attention mechanisms, GTs are capable of modeling long-range dependencies and complex structural relationships beyond local neighborhoods. However, their applicable scenarios remain underexplored, which highlights the need to identify when and why they excel. Furthermore, unlike GNNs, which predominantly rely on message-passing mechanisms, GTs exhibit a diverse design space in areas such as positional encoding, attention mechanisms, and graph-specific adaptations. Yet, it remains unclear which of these design choices are truly effective and under what conditions. As a result, the community currently lacks a comprehensive benchmark and library to promote a deeper understanding and further development of GTs. To address this gap, this paper introduces OpenGT, a comprehensive benchmark for Graph Transformers. OpenGT enables fair comparisons and multidimensional analysis by establishing standardized experimental settings and incorporating a broad selection of state-of-the-art GNNs and GTs. Our benchmark evaluates GTs from multiple perspectives, encompassing diverse tasks and datasets with varying properties. Through extensive experiments, our benchmark has uncovered several critical insights, including the difficulty of transferring models across task levels, the limitations of local attention, the efficiency trade-offs in several models, the application scenarios of specific positional encodings, and the preprocessing overhead of some positional encodings. We aspire for this work to establish a foundation for future graph transformer research emphasizing fairness, reproducibility, and generalizability. We have developed an easy-to-use library, OpenGT, for training and evaluating existing GTs. The benchmark code is available at https://github.com/eaglelab-zju/OpenGT.
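One axis of the GT design space mentioned above is positional encoding. As a concrete illustration (not taken from the OpenGT library itself), a widely used choice is the Laplacian eigenvector positional encoding, which can be sketched in a few lines of numpy; the function name and `k` parameter here are illustrative:

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """Return the k smallest non-trivial eigenvectors of the symmetric
    normalized Laplacian, used as per-node positional encodings."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~0); keep the next k columns.
    return eigvecs[:, 1:k + 1]

# A 4-node cycle graph: each node receives a 2-dimensional coordinate.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, k=2)
print(pe.shape)  # (4, 2)
```

The sign of each eigenvector is arbitrary, which is one reason benchmarks such as this one matter: different PE variants handle that ambiguity differently.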
Related papers
- GDI-Bench: A Benchmark for General Document Intelligence with Vision and Reasoning Decoupling [36.8157293625143]
General Document Intelligence Benchmark features 2.3k images across 9 key scenarios and 19 document-specific tasks. We evaluate various open-source and closed-source models on GDI-Bench, conducting decoupled analyses in the visual and reasoning domains. Our model achieves state-of-the-art performance on previous benchmarks and the GDI-Bench.
arXiv Detail & Related papers (2025-04-30T15:46:46Z)
- G-OSR: A Comprehensive Benchmark for Graph Open-Set Recognition [54.45837774534411]
We introduce G-OSR, a benchmark for evaluating Graph Open-Set Recognition (GOSR) methods at both the node and graph levels. Results offer critical insights into the generalizability and limitations of current GOSR methods.
arXiv Detail & Related papers (2025-03-01T13:02:47Z)
- Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence [7.14327815822376]
We explore the untapped potential of Graph Neural Networks (GNNs) through an enhanced framework, GNN+. We conduct a systematic re-evaluation of three classic GNNs enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results reveal that, contrary to prevailing beliefs, these classic GNNs consistently match or surpass the performance of GTs.
arXiv Detail & Related papers (2025-02-13T12:24:23Z)
- Graph Neural Networks Are More Than Filters: Revisiting and Benchmarking from A Spectral Perspective [49.613774305350084]
Graph Neural Networks (GNNs) have achieved remarkable success in various graph-based learning tasks. Recent studies suggest that other components such as non-linear layers may also significantly affect how GNNs process the input graph data in the spectral domain. This paper introduces a comprehensive benchmark to measure and evaluate GNNs' capability in capturing and leveraging the information encoded in different frequency components of the input graph data.
arXiv Detail & Related papers (2024-12-10T04:53:53Z)
- GEOBench-VLM: Benchmarking Vision-Language Models for Geospatial Tasks [84.86699025256705]
We present GEOBench-VLM, a benchmark specifically designed to evaluate Vision-Language Models (VLMs) on geospatial tasks. Our benchmark features over 10,000 manually verified instructions spanning diverse visual conditions, object types, and scales. We evaluate several state-of-the-art VLMs to assess performance on geospatial-specific challenges.
arXiv Detail & Related papers (2024-11-28T18:59:56Z)
- Benchmarking Positional Encodings for GNNs and Graph Transformers [20.706469085872516]
We present a benchmark of Positional Encodings (PEs) in a unified framework that includes both message-passing GNNs and GTs.
We also establish theoretical connections between MPNNs and GTs and introduce a sparsified GRIT attention mechanism to examine the influence of global connectivity.
arXiv Detail & Related papers (2024-11-19T18:57:01Z)
- General and Task-Oriented Video Segmentation [60.58054218592606]
We present GvSeg, a general video segmentation framework for addressing four different video segmentation tasks.
GvSeg provides a holistic disentanglement and modeling for segment targets, thoroughly examining them from the perspective of appearance, position, and shape.
Extensive experiments on seven gold-standard benchmark datasets demonstrate that GvSeg surpasses all existing specialized/general solutions.
arXiv Detail & Related papers (2024-07-09T04:21:38Z)
- AnchorGT: Efficient and Flexible Attention Architecture for Scalable Graph Transformers [35.04198789195943]
We propose AnchorGT, a novel attention architecture for Graph Transformers (GTs) with global receptive field and almost linear complexity.
Inspired by anchor-based GNNs, we employ structurally important $k$-dominating node set as anchors and design an attention mechanism that focuses on the relationship between individual nodes and anchors.
With its intuitive design, AnchorGT can easily replace the attention module in various GT models with different network architectures.
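To make the cost argument behind anchor-based attention concrete: if every node attends only to a small anchor set of size k, attention costs O(n·k) instead of O(n²). The sketch below is a plain illustration of that idea in numpy, not AnchorGT itself (the actual method selects anchors via a k-dominating set and incorporates structural information); the anchor indices here are chosen arbitrarily:

```python
import numpy as np

def anchor_attention(x: np.ndarray, anchor_idx: list) -> np.ndarray:
    """Each node attends only to a small set of anchor nodes, giving a
    global receptive field at O(n * k) cost instead of O(n^2)."""
    anchors = x[anchor_idx]                       # (k, d) anchor features
    scores = x @ anchors.T / np.sqrt(x.shape[1])  # (n, k) node-anchor scores
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ anchors                      # (n, d) updated features

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 16))                # 100 nodes, 16-dim features
out = anchor_attention(x, anchor_idx=[0, 10, 50])
print(out.shape)  # (100, 16)
```

With n = 100 nodes and k = 3 anchors, the score matrix has 300 entries rather than the 10,000 a dense attention map would need.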
arXiv Detail & Related papers (2024-05-06T13:53:09Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- Exploring Sparsity in Graph Transformers [67.48149404841925]
Graph Transformers (GTs) have achieved impressive results on various graph-related tasks.
However, the huge computational cost of GTs hinders their deployment and application, especially in resource-constrained environments.
We propose a comprehensive Graph Transformer SParsification (GTSP) framework that helps to reduce the computational complexity of GTs.
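One simple form of the attention sparsification such frameworks study is top-k masking: keep only each query's k largest attention scores and renormalize. The snippet below is an illustrative sketch of that generic technique, not the GTSP framework's actual method:

```python
import numpy as np

def topk_sparse_attention(scores: np.ndarray, k: int) -> np.ndarray:
    """Keep each row's k largest attention scores, mask the rest to -inf,
    then softmax; ties at the threshold may keep slightly more than k."""
    n = scores.shape[1]
    if k < n:
        # k-th largest score per row becomes the keep threshold.
        kth = np.partition(scores, n - k, axis=1)[:, n - k][:, None]
        masked = np.where(scores >= kth, scores, -np.inf)
    else:
        masked = scores
    masked = masked - masked.max(axis=1, keepdims=True)  # stable softmax
    w = np.exp(masked)                                   # exp(-inf) -> 0
    return w / w.sum(axis=1, keepdims=True)

s = np.array([[1.0, 3.0, 2.0, 0.5]])  # one query attending to 4 keys
w = topk_sparse_attention(s, k=2)
print(np.count_nonzero(w))  # 2 nonzero weights survive
```

After masking, each row remains a valid probability distribution, so the sparse weights can drop into a standard attention layer unchanged.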
arXiv Detail & Related papers (2023-12-09T06:21:44Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.