Benchmarking Graph Neural Networks
- URL: http://arxiv.org/abs/2003.00982v4
- Date: Wed, 11 May 2022 17:07:03 GMT
- Title: Benchmarking Graph Neural Networks
- Authors: Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas
Laurent, Yoshua Bengio, Xavier Bresson
- Abstract summary: Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The GitHub repository has reached 1,800 stars and 339 forks, demonstrating the utility of the proposed open-source framework.
- Score: 75.42159546060509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last few years, graph neural networks (GNNs) have become the standard
toolkit for analyzing and learning from data on graphs. This emerging field has
witnessed an extensive growth of promising techniques that have been applied
with success to computer science, mathematics, biology, physics and chemistry.
But for any successful field to become mainstream and reliable, benchmarks must
be developed to quantify progress. This led us in March 2020 to release a
benchmark framework that i) comprises a diverse collection of mathematical
and real-world graphs, ii) enables fair model comparison with the same
parameter budget to identify key architectures, iii) has an open-source,
easy-to-use and reproducible code infrastructure, and iv) is flexible for
researchers to experiment with new theoretical ideas. As of May 2022, the
GitHub repository has reached 1,800 stars and 339 forks, which demonstrates the
utility of the proposed open-source framework through the wide usage by the GNN
community. In this paper, we present an updated version of our benchmark with a
concise presentation of the aforementioned framework characteristics, an
additional medium-sized molecular dataset AQSOL, similar to the popular ZINC,
but with a real-world measured chemical target, and discuss how this framework
can be leveraged to explore new GNN designs and insights. As a proof of value
of our benchmark, we study the case of graph positional encoding (PE) in GNNs,
which was introduced with this benchmark and has since spurred interest in
exploring more powerful PE for Transformers and GNNs in a robust experimental
setting.
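As an illustration of the graph positional encoding (PE) idea discussed above, a common variant uses the eigenvectors of the graph Laplacian as node features. The following is a minimal NumPy sketch; the function name, the toy graph, and the choice of k dimensions are illustrative, not taken from the benchmark's codebase.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Return k non-trivial Laplacian eigenvectors as node positional encodings.

    Assumes an undirected graph (symmetric adjacency) with no isolated nodes.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = deg ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # np.linalg.eigh returns eigenvalues in ascending order;
    # skip the trivial first eigenvector (constant, eigenvalue 0)
    _, eigvecs = np.linalg.eigh(lap)
    return eigvecs[:, 1:k + 1]

# Toy example: a 4-node cycle graph with 2-dimensional encodings
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, k=2)
print(pe.shape)  # (4, 2)
```

Each row can then be concatenated with (or added to) the corresponding node's input features; sign ambiguity of eigenvectors is typically handled by random sign flipping during training.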
Related papers
- Rethinking Link Prediction for Directed Graphs [73.36395969796804]
Link prediction for directed graphs is a crucial task with diverse real-world applications.
Recent advances in embedding methods and Graph Neural Networks (GNNs) have shown promising improvements.
We propose a unified framework to assess the expressiveness of existing methods, highlighting the impact of dual embeddings and decoder design on performance.
arXiv Detail & Related papers (2025-02-08T23:51:05Z)
- Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space [33.677431350509224]
We introduce MbaGCN, a novel graph convolutional architecture that draws inspiration from the Mamba paradigm.
MbaGCN presents a new backbone for GNNs, consisting of three key components: the Message Aggregation Layer, the Selective State Space Transition Layer, and the Node State Prediction Layer.
arXiv Detail & Related papers (2025-01-26T09:09:44Z)
- Graph Neural Networks Are More Than Filters: Revisiting and Benchmarking from A Spectral Perspective [49.613774305350084]
Graph Neural Networks (GNNs) have achieved remarkable success in various graph-based learning tasks.
Recent studies suggest that other components such as non-linear layers may also significantly affect how GNNs process the input graph data in the spectral domain.
This paper introduces a comprehensive benchmark to measure and evaluate GNNs' capability in capturing and leveraging the information encoded in different frequency components of the input graph data.
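The frequency components mentioned above come from the graph Fourier transform: projecting a node signal onto the Laplacian eigenbasis. A small self-contained sketch (the toy path graph and function name are illustrative assumptions, not from the cited benchmark):

```python
import numpy as np

def graph_fourier(adj, signal):
    """Project a node signal onto the Laplacian eigenbasis.

    Returns the graph frequencies (eigenvalues) and the spectral
    coefficients of the signal, one per frequency.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)
    coeffs = eigvecs.T @ signal          # spectral coefficients
    return eigvals, coeffs

# Path graph on 3 nodes; a constant signal is a pure low-frequency signal,
# so all its energy concentrates in the zero-frequency component.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
freqs, coeffs = graph_fourier(A, np.ones(3))
```

Low eigenvalues correspond to smooth signals over the graph and high eigenvalues to rapidly varying ones; a spectral benchmark measures how well a GNN retains or filters these components.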
arXiv Detail & Related papers (2024-12-10T04:53:53Z)
- Application of Graph Neural Networks and graph descriptors for graph classification [0.0]
We focus on Graph Neural Networks (GNNs), which emerged as a de facto standard deep learning technique for graph representation learning.
We design fair evaluation experimental protocol and choose proper datasets collection.
We arrive at many conclusions, which shed new light on the performance and quality of novel algorithms.
arXiv Detail & Related papers (2022-11-07T16:25:22Z)
- Characterizing the Efficiency of Graph Neural Network Frameworks with a Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNNs have been developed with different graph sampling techniques for mini-batch training on large graphs.
It remains unknown how 'eco-friendly' these frameworks are from a green computing perspective.
arXiv Detail & Related papers (2022-11-06T04:22:19Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- PaSca: a Graph Neural Architecture Search System under the Scalable Paradigm [24.294196319217907]
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph-based tasks.
However, GNNs do not scale well to data size and message passing steps.
This paper proposes PaSca, a new paradigm and system that offers a principled approach to systematically construct and explore the design space for scalable GNNs.
arXiv Detail & Related papers (2022-03-01T17:26:50Z)
- Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z)
- Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received a lot of interest in recent times.
In this paper, we utilize some theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.