OpenGSL: A Comprehensive Benchmark for Graph Structure Learning
- URL: http://arxiv.org/abs/2306.10280v4
- Date: Sat, 23 Dec 2023 10:03:03 GMT
- Title: OpenGSL: A Comprehensive Benchmark for Graph Structure Learning
- Authors: Zhiyao Zhou, Sheng Zhou, Bochao Mao, Xuanyi Zhou, Jiawei Chen, Qiaoyu
Tan, Daochen Zha, Yan Feng, Chun Chen, Can Wang
- Abstract summary: We introduce OpenGSL, the first comprehensive benchmark for Graph Structure Learning (GSL).
OpenGSL enables a fair comparison among state-of-the-art GSL methods by evaluating them across various popular datasets.
We find that there is no significant correlation between the homophily of the learned structure and task performance, challenging the common belief.
- Score: 40.50100033304329
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as the de facto standard for
representation learning on graphs, owing to their ability to effectively
integrate graph topology and node attributes. However, the inherent suboptimal
nature of node connections, resulting from the complex and contingent formation
process of graphs, presents significant challenges in modeling them
effectively. To tackle this issue, Graph Structure Learning (GSL), a family of
data-centric learning approaches, has garnered substantial attention in recent
years. The core concept behind GSL is to jointly optimize the graph structure
and the corresponding GNN models. Despite the proposal of numerous GSL methods,
the progress in this field remains unclear due to inconsistent experimental
protocols, including variations in datasets, data processing techniques, and
splitting strategies. In this paper, we introduce OpenGSL, the first
comprehensive benchmark for GSL, aimed at addressing this gap. OpenGSL enables
a fair comparison among state-of-the-art GSL methods by evaluating them across
various popular datasets using uniform data processing and splitting
strategies. Through extensive experiments, we observe that existing GSL methods
do not consistently outperform vanilla GNN counterparts. We also find that
there is no significant correlation between the homophily of the learned
structure and task performance, challenging the common belief. Moreover, we
observe that the learned graph structure generalizes well across different GNN
models, albeit at a high computational and memory cost. We hope that our
open-sourced library will facilitate rapid and equitable evaluation and inspire
further innovative research in this field. The code of the benchmark can be
found at https://github.com/OpenGSL/OpenGSL.
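The homophily finding above can be made concrete with the standard edge-homophily metric: the fraction of edges whose endpoints share a label. A minimal pure-Python sketch follows; `edge_homophily` is a hypothetical helper name, and OpenGSL itself may use node-level or class-adjusted variants of the metric.

```python
# Edge homophily: fraction of edges connecting two nodes of the same class.
# Illustrative sketch only; not OpenGSL's actual implementation.

def edge_homophily(edges, labels):
    """edges: iterable of (u, v) pairs; labels: sequence mapping node -> class."""
    edges = list(edges)
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)

# Toy graph: 4 nodes in two classes, connected in a cycle.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
labels = [0, 0, 1, 1]
print(edge_homophily(edges, labels))  # 0.5: edges (0,1) and (2,3) are intra-class
```

A learned structure with high edge homophily mostly links same-class nodes; the paper's observation is that raising this number does not reliably raise task accuracy.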
Related papers
- Rethinking Structure Learning For Graph Neural Networks [16.753943955505406]
Graph Structure Learning (GSL) has been extensively applied to reconstruct or refine original graph structures.
GSL is generally thought to improve GNN performance, but there is a lack of theoretical analysis to quantify its effectiveness.
This paper proposes a new GSL framework, which includes three steps: GSL base, new structure construction, and view fusion.
arXiv Detail & Related papers (2024-11-12T09:39:22Z)
- GraphEdit: Large Language Models for Graph Structure Learning [62.618818029177355]
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
arXiv Detail & Related papers (2024-02-23T08:29:42Z)
- GSLB: The Graph Structure Learning Benchmark [34.859275408785614]
Graph Structure Learning (GSL) has recently garnered considerable attention due to its ability to optimize both the parameters of Graph Neural Networks (GNNs) and the computation graph structure simultaneously.
There is no standard experimental setting or fair comparison for performance evaluation, which creates a great obstacle to understanding the progress in this field.
We develop a comprehensive Graph Structure Learning Benchmark (GSLB) curated from 20 diverse graph datasets and 16 distinct GSL algorithms.
arXiv Detail & Related papers (2023-10-08T14:13:03Z)
- Robust Graph Structure Learning with the Alignment of Features and Adjacency Matrix [8.711977569042865]
Many approaches have been proposed for graph structure learning (GSL) to jointly learn a clean graph structure and corresponding representations.
This paper proposes a novel regularized GSL approach, particularly with an alignment of feature information and graph information.
We conduct experiments on real-world graphs to evaluate the effectiveness of our approach.
arXiv Detail & Related papers (2023-07-05T09:05:14Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework that boosts the performance of Graph Neural Network (GNN)-based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
arXiv Detail & Related papers (2022-01-14T09:04:46Z)
- Deep Graph Structure Learning for Robust Representations: A Survey [20.564611153151834]
Graph Neural Networks (GNNs) are widely used for analyzing graph-structured data.
To improve the robustness of GNN models, many studies have been proposed around the central concept of Graph Structure Learning.
arXiv Detail & Related papers (2021-03-04T13:49:25Z)
- Combining Label Propagation and Simple Models Out-performs Graph Neural Networks [52.121819834353865]
We show that for many standard transductive node classification benchmarks, we can exceed or match the performance of state-of-the-art GNNs.
We call this overall procedure Correct and Smooth (C&S).
Our approach exceeds or nearly matches the performance of state-of-the-art GNNs on a wide variety of benchmarks.
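The label-propagation smoothing at the heart of C&S can be sketched as a simple fixed-point iteration over a normalized adjacency. The function name, the fixed iteration count, and the symmetric normalization below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def smooth(adj, y0, alpha=0.8, iters=50):
    """Propagate soft labels y0 over the graph: y <- alpha * S @ y + (1 - alpha) * y0.

    adj: dense (n, n) adjacency matrix; y0: (n, c) initial class scores.
    S is the symmetrically normalized adjacency D^{-1/2} A D^{-1/2}.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    S = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    y = y0.copy()
    for _ in range(iters):
        y = alpha * (S @ y) + (1 - alpha) * y0
    return y

# Toy example: a 3-node path graph; node 0 is labeled class 0, node 2 class 1,
# and the unlabeled middle node receives mass from both sides.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
y0 = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
y = smooth(adj, y0)
pred = y.argmax(axis=1)
```

Because alpha < 1 and S has spectral radius at most 1, the iteration converges; labeled endpoints keep their classes while the unlabeled node ends up exactly balanced between the two.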
arXiv Detail & Related papers (2020-10-27T02:10:52Z)
- Self-supervised Learning on Graphs: Deep Insights and New Direction [66.78374374440467]
Self-supervised learning (SSL) aims to create domain-specific pretext tasks on unlabeled data.
There is increasing interest in generalizing deep learning to the graph domain in the form of graph neural networks (GNNs).
arXiv Detail & Related papers (2020-06-17T20:30:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.