Are we really making much progress? Revisiting, benchmarking, and
refining heterogeneous graph neural networks
- URL: http://arxiv.org/abs/2112.14936v1
- Date: Thu, 30 Dec 2021 06:29:21 GMT
- Title: Are we really making much progress? Revisiting, benchmarking, and
refining heterogeneous graph neural networks
- Authors: Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming
He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, Jie Tang
- Abstract summary: We present a systematic reproduction of 12 recent heterogeneous graph neural networks (HGNNs).
We find that the simple homogeneous GNNs, e.g., GCN and GAT, are largely underestimated due to improper settings.
To facilitate robust and reproducible HGNN research, we construct the Heterogeneous Graph Benchmark (HGB).
- Score: 38.15094159495419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous graph neural networks (HGNNs) have been blossoming in recent
years, but the unique data processing and evaluation setups used by each work
obstruct a full understanding of their advancements. In this work, we present a
systematic reproduction of 12 recent HGNNs by using their official codes,
datasets, settings, and hyperparameters, revealing surprising findings about
the progress of HGNNs. We find that the simple homogeneous GNNs, e.g., GCN and
GAT, are largely underestimated due to improper settings. GAT with proper
inputs can generally match or outperform all existing HGNNs across various
scenarios. To facilitate robust and reproducible HGNN research, we construct
the Heterogeneous Graph Benchmark (HGB), consisting of 11 diverse datasets with
three tasks. HGB standardizes the process of heterogeneous graph data splits,
feature processing, and performance evaluation. Finally, we introduce a simple
but very strong baseline Simple-HGN--which significantly outperforms all
previous models on HGB--to accelerate the advancement of HGNNs in the future.
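Per the paper, Simple-HGN extends GAT with learnable edge-type embeddings in the attention computation, residual connections, and L2 normalization of the output embeddings; the official implementation is released with HGB. The snippet below is only a minimal single-head PyTorch sketch of those three ideas; the class name, shapes, and defaults are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a Simple-HGN-style layer (not the official HGB code):
# GAT attention that also sees a learnable edge-type embedding, plus a
# residual connection and L2-normalized output embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleHGNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_edge_types, edge_dim=32, slope=0.2):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)          # node projection
        self.edge_emb = nn.Embedding(num_edge_types, edge_dim)   # learnable edge-type embeddings
        self.W_r = nn.Linear(edge_dim, edge_dim, bias=False)     # edge-type projection
        self.attn = nn.Linear(2 * out_dim + edge_dim, 1, bias=False)  # attention scorer
        self.leaky_relu = nn.LeakyReLU(slope)
        self.residual = nn.Linear(in_dim, out_dim, bias=False)   # residual connection

    def forward(self, h, edge_index, edge_type):
        # h: [N, in_dim]; edge_index: [2, E] with (source, target) rows; edge_type: [E]
        src, dst = edge_index
        z = self.W(h)                                             # [N, out_dim]
        r = self.W_r(self.edge_emb(edge_type))                    # [E, edge_dim]
        scores = self.leaky_relu(
            self.attn(torch.cat([z[src], z[dst], r], dim=-1))).squeeze(-1)  # [E]
        # softmax over the incoming edges of each target node
        scores = scores - scores.max()
        num = scores.exp()
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, dst, num)
        alpha = num / (denom[dst] + 1e-16)                        # [E] attention weights
        out = torch.zeros_like(z).index_add_(0, dst, alpha.unsqueeze(-1) * z[src])
        out = out + self.residual(h)                              # residual connection
        return F.normalize(out, p=2, dim=-1)                      # L2-normalized embeddings
```

A single attention head is shown for brevity; the paper stacks several such layers with multi-head attention and trains them on HGB's standardized splits and features.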
Related papers
- Diffusing to the Top: Boost Graph Neural Networks with Minimal Hyperparameter Tuning [33.948899558876604]
This work introduces a graph-conditioned latent diffusion framework (GNN-Diff) to generate high-performing GNNs.
We validate our method through 166 experiments across four graph tasks: node classification on small, large, and long-range graphs, as well as link prediction.
arXiv Detail & Related papers (2024-10-08T05:27:34Z)
- BG-HGNN: Toward Scalable and Efficient Heterogeneous Graph Neural Network [6.598758004828656]
Heterogeneous graph neural networks (HGNNs) stand out as a promising neural model class designed for heterogeneous graphs.
Existing HGNNs employ different parameter spaces to model the varied relationships.
This paper introduces Blend&Grind-HGNN, which integrates different relations into a unified feature space manageable by a single set of parameters.
arXiv Detail & Related papers (2024-03-13T03:03:40Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN); a generic sketch of the pre-computation idea appears after this list.
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- GCNH: A Simple Method For Representation Learning On Heterophilous Graphs [4.051099980410583]
Graph Neural Networks (GNNs) are well-suited for learning on homophilous graphs.
Recent works have proposed extensions to standard GNN architectures to improve performance on heterophilous graphs.
We propose GCN for Heterophily (GCNH), a simple yet effective GNN architecture applicable to both heterophilous and homophilous scenarios.
arXiv Detail & Related papers (2023-04-21T11:26:24Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily [58.76759997223951]
We propose a new metric based on von Neumann entropy to re-examine the heterophily problem of GNNs.
We also propose a Conv-Agnostic GNN framework (CAGNNs) to enhance the performance of most GNNs on heterophily datasets.
arXiv Detail & Related papers (2022-03-19T14:26:43Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such heterophily-oriented architectures on several commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- Boost then Convolve: Gradient Boosting Meets Graph Neural Networks [6.888700669980625]
We show that gradient boosted decision trees (GBDT) often outperform other machine learning methods when faced with heterogeneous data.
We propose a novel architecture that trains GBDT and GNN jointly to get the best of both worlds.
Our model benefits from end-to-end optimization by allowing new trees to fit the gradient updates of the GNN; a simplified sketch of this joint training loop appears after this list.
arXiv Detail & Related papers (2021-01-21T10:46:41Z)
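The RpHGNN entry above describes pre-computation-based HGNNs that replace repeated message passing with a single offline pass, with random projection keeping the propagated features compact. The function below is only a generic sketch of that combination under assumed inputs (dense, row-normalized per-relation adjacency matrices); it is not the authors' RpHGNN implementation.

```python
# Generic sketch of pre-computation with random projection (not the official
# RpHGNN code): propagate features once per relation offline, then compress
# each propagated block with a fixed Gaussian random projection so every node
# ends up with a regular-shaped tensor regardless of how many relations it has.
import numpy as np

def precompute_with_random_projection(features, relation_adjs, out_dim, seed=0):
    """features: [N, d] node feature matrix; relation_adjs: list of [N, N]
    row-normalized adjacency matrices, one per relation.
    Returns a regular-shaped [N, len(relation_adjs) * out_dim] array."""
    rng = np.random.default_rng(seed)
    blocks = []
    for adj in relation_adjs:
        propagated = adj @ features                        # one-time message passing
        proj = rng.normal(0.0, 1.0 / np.sqrt(out_dim),     # fixed Gaussian random projection
                          size=(features.shape[1], out_dim))
        blocks.append(propagated @ proj)                   # compress to out_dim columns
    return np.concatenate(blocks, axis=1)
```

The resulting per-node tensors can then be fed to a plain MLP or homogeneous GNN trainer without touching the heterogeneous graph again, which is the efficiency argument behind the pre-computation family.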
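The "Boost then Convolve" entry describes new boosting trees being fit to the gradient updates of the GNN. Below is a deliberately simplified, single-output sketch of that joint training loop: a plain MLP stands in for the GNN, the graph structure is omitted, and all names and hyperparameters are assumptions rather than the authors' BGNN code.

```python
# Simplified sketch of joint GBDT + GNN training in the spirit of "Boost then
# Convolve": after each short training phase of the network, a new tree is fit
# to the negative gradient of the loss w.r.t. the boosted input feature column.
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeRegressor

def bgnn_style_training(X_raw, y, rounds=10, tree_lr=0.1, gnn_steps=20):
    # X_raw: [N, d] numpy array of node features; y: [N] numpy regression targets.
    X_raw_t = torch.tensor(X_raw, dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)
    boosted = torch.zeros(len(y), 1)                     # running boosted feature column
    model = nn.Sequential(nn.Linear(X_raw.shape[1] + 1, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    trees = []
    for _ in range(rounds):
        feats = torch.cat([X_raw_t, boosted], dim=1)
        for _ in range(gnn_steps):                       # train the network for a few steps
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(feats), y_t)
            loss.backward()
            opt.step()
        # fit the next tree to the negative gradient of the loss w.r.t. the boosted input
        feats = torch.cat([X_raw_t, boosted], dim=1).requires_grad_(True)
        loss = nn.functional.mse_loss(model(feats), y_t)
        (grad,) = torch.autograd.grad(loss, feats)
        target = -grad[:, -1].detach().numpy()
        tree = DecisionTreeRegressor(max_depth=3).fit(X_raw, target)
        trees.append(tree)
        # the new tree updates the boosted column, as in ordinary gradient boosting
        boosted = boosted + tree_lr * torch.tensor(tree.predict(X_raw),
                                                   dtype=torch.float32).unsqueeze(-1)
    return model, trees
```

In the actual BGNN method the trees update multi-dimensional node features and the network is a genuine GNN over the graph; this single-column version only illustrates how tree targets are derived from the network's input gradients.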