Benchmarking Graph Neural Networks
- URL: http://arxiv.org/abs/2003.00982v4
- Date: Wed, 11 May 2022 17:07:03 GMT
- Title: Benchmarking Graph Neural Networks
- Authors: Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas
Laurent, Yoshua Bengio, Xavier Bresson
- Abstract summary: Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs.
For any successful field to become mainstream and reliable, benchmarks must be developed to quantify progress.
The GitHub repository has reached 1,800 stars and 339 forks, demonstrating the utility of the proposed open-source framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last few years, graph neural networks (GNNs) have become the standard
toolkit for analyzing and learning from data on graphs. This emerging field has
witnessed an extensive growth of promising techniques that have been applied
with success to computer science, mathematics, biology, physics and chemistry.
But for any successful field to become mainstream and reliable, benchmarks must
be developed to quantify progress. This led us in March 2020 to release a
benchmark framework that i) comprises a diverse collection of mathematical
and real-world graphs, ii) enables fair model comparison with the same
parameter budget to identify key architectures, iii) has an open-source,
easy-to-use and reproducible code infrastructure, and iv) is flexible for
researchers to experiment with new theoretical ideas. As of May 2022, the
GitHub repository has reached 1,800 stars and 339 forks, demonstrating the
framework's utility through its wide adoption by the GNN
community. In this paper, we present an updated version of our benchmark with a
concise presentation of the aforementioned framework characteristics, an
additional medium-sized molecular dataset AQSOL, similar to the popular ZINC,
but with a real-world measured chemical target, and discuss how this framework
can be leveraged to explore new GNN designs and insights. As a proof of the
value of our benchmark, we study the case of graph positional encoding (PE) in GNNs,
which was introduced with this benchmark and has since spurred interest in
exploring more powerful PE for Transformers and GNNs in a robust experimental
setting.
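For readers new to graph positional encoding: the PE introduced with this benchmark is based on Laplacian eigenvectors, attaching the k smallest non-trivial eigenvectors of the normalized graph Laplacian to each node as extra input features. Below is a minimal NumPy sketch of that idea; it is illustrative rather than the benchmark's exact implementation, and the function and parameter names are our own.

```python
import numpy as np

def laplacian_pe(edge_index, num_nodes, k=8):
    """Illustrative Laplacian positional encoding: the k smallest
    non-trivial eigenvectors of the symmetric normalized Laplacian,
    attached to each node as extra input features."""
    row, col = edge_index
    A = np.zeros((num_nodes, num_nodes))
    A[row, col] = 1.0
    A = np.maximum(A, A.T)                       # treat edges as undirected
    deg = A.sum(axis=1)
    d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(num_nodes) - d[:, None] * A * d[None, :]
    eigvals, eigvecs = np.linalg.eigh(L)         # ascending eigenvalues
    # Skip the trivial eigenvector at eigenvalue 0. Eigenvectors are
    # defined only up to sign, so randomly flipping signs during
    # training is a sensible augmentation.
    return eigvecs[:, 1:k + 1]

# Usage: positional encodings for a 6-node cycle graph.
edges = np.array([[0, 1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 5, 0]])
print(laplacian_pe(edges, num_nodes=6, k=2).shape)  # (6, 2)
```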
Related papers
- Application of Graph Neural Networks and graph descriptors for graph classification
We focus on Graph Neural Networks (GNNs), which have emerged as a de facto standard deep learning technique for graph representation learning.
We design a fair experimental evaluation protocol and select a suitable collection of datasets.
We arrive at many conclusions that shed new light on the performance and quality of novel algorithms.
arXiv Detail & Related papers (2022-11-07T16:25:22Z) - Characterizing the Efficiency of Graph Neural Network Frameworks with a
Magnifying Glass [10.839902229218577]
Graph neural networks (GNNs) have received great attention due to their success in various graph-related learning tasks.
Recent GNNs have been developed with different graph sampling techniques for mini-batch training on large graphs.
It remains unknown how 'eco-friendly' these frameworks are from a green computing perspective.
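To make the sampling point concrete, here is a minimal NumPy sketch of uniform neighbor sampling for mini-batch training, in the spirit of GraphSAGE-style samplers; the function and names are illustrative and do not correspond to any particular framework's API.

```python
import numpy as np

def sample_neighbors(adj_list, seed_nodes, fanouts, rng):
    """Illustrative uniform neighbor sampling for mini-batch GNN
    training: starting from seed nodes, sample up to fanouts[i]
    neighbors per node at hop i."""
    layers = [np.asarray(seed_nodes)]
    frontier = np.asarray(seed_nodes)
    for fanout in fanouts:
        sampled = []
        for v in frontier:
            nbrs = adj_list[v]
            if len(nbrs) == 0:
                continue
            take = min(fanout, len(nbrs))
            sampled.append(rng.choice(nbrs, size=take, replace=False))
        frontier = (np.unique(np.concatenate(sampled))
                    if sampled else np.array([], dtype=int))
        layers.append(frontier)
    return layers  # nodes needed at each hop for this mini-batch

# Usage: a toy graph given as an adjacency list.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
rng = np.random.default_rng(0)
print(sample_neighbors(adj, seed_nodes=[0], fanouts=[2, 2], rng=rng))
```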
arXiv Detail & Related papers (2022-11-06T04:22:19Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-based training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search [55.75621026447599]
We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation.
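The look-up idea can be pictured as a table from architecture descriptions to precomputed metrics, so that "evaluating" a candidate is a dictionary query rather than a training run. The toy sketch below is purely illustrative; the keys and metrics are invented and do not reflect NAS-Bench-Graph's actual schema or API.

```python
# Toy illustration of a tabular NAS benchmark: architecture specs
# map to precomputed metrics, so evaluation is a lookup, not training.
# All keys and values here are invented for illustration only.
results = {
    ("gcn", "gcn"):  {"val_acc": 0.712, "latency_ms": 3.1},
    ("gcn", "gat"):  {"val_acc": 0.724, "latency_ms": 4.8},
    ("gat", "sage"): {"val_acc": 0.731, "latency_ms": 5.2},
}

def query(arch):
    """Return precomputed metrics for an architecture, if tabulated."""
    return results.get(tuple(arch))

best = max(results, key=lambda a: results[a]["val_acc"])
print(best, query(best))  # ('gat', 'sage') {'val_acc': 0.731, ...}
```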
arXiv Detail & Related papers (2022-06-18T10:17:15Z) - PaSca: a Graph Neural Architecture Search System under the Scalable
Paradigm [24.294196319217907]
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph-based tasks.
However, GNNs do not scale well to data size and message passing steps.
This paper proposes PaSca, a new paradigm and system that offers a principled approach to systematically construct and explore the design space for scalable GNNs.
arXiv Detail & Related papers (2022-03-01T17:26:50Z) - GraphWorld: Fake Graphs Bring Real Insights for GNNs [4.856486822139849]
GraphWorld allows a user to efficiently generate a world with millions of statistically diverse datasets.
We present insights from GraphWorld experiments regarding the performance characteristics of tens of thousands of GNN models over millions of benchmark datasets.
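As a rough picture of how such a "world" of synthetic datasets can be produced, the sketch below sweeps stochastic block model parameters with networkx; GraphWorld's actual generators and parameter grids are richer, so treat this only as an illustration of the idea.

```python
import networkx as nx
import numpy as np

def sbm_world(n_graphs, rng):
    """Sketch of the GraphWorld idea: sweep generator parameters to
    produce statistically diverse synthetic graphs. Uses a simple
    two-block stochastic block model for illustration."""
    for _ in range(n_graphs):
        n = int(rng.integers(50, 200))      # graph size
        p_in = rng.uniform(0.05, 0.4)       # intra-community density
        p_out = rng.uniform(0.005, p_in)    # inter-community density
        sizes = [n // 2, n - n // 2]
        probs = [[p_in, p_out], [p_out, p_in]]
        yield nx.stochastic_block_model(
            sizes, probs, seed=int(rng.integers(1 << 31)))

# Usage: generate three diverse graphs and inspect their statistics.
rng = np.random.default_rng(42)
for g in sbm_world(3, rng):
    print(g.number_of_nodes(), g.number_of_edges())
```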
arXiv Detail & Related papers (2022-02-28T22:00:02Z) - Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z) - Bag of Tricks for Training Deeper Graph Neural Networks: A Comprehensive
Benchmark Study [100.27567794045045]
Training deep graph neural networks (GNNs) is notoriously hard.
We present the first fair and reproducible benchmark dedicated to assessing the "tricks" of training deep GNNs.
arXiv Detail & Related papers (2021-08-24T05:00:37Z) - Node Masking: Making Graph Neural Networks Generalize and Scale Better [71.51292866945471]
Graph Neural Networks (GNNs) have received considerable interest in recent times.
In this paper, we utilize theoretical tools to better visualize the operations performed by state-of-the-art spatial GNNs.
We introduce a simple concept, Node Masking, that allows them to generalize and scale better.
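One plausible reading of node masking, sketched below in NumPy, is to randomly mask a subset of nodes at each training step so that they do not send messages; the paper's exact formulation may differ, and all names here are our own.

```python
import numpy as np

def masked_propagate(A, X, W, mask_rate, rng):
    """Hedged sketch of node masking during message passing: a random
    subset of nodes is masked so their outgoing messages are dropped
    for this training step. One plausible reading of the idea, not
    the paper's exact formulation."""
    n = A.shape[0]
    keep = rng.random(n) >= mask_rate     # True = node may send messages
    A_masked = A * keep[None, :]          # zero columns of masked senders
    deg = np.maximum(A_masked.sum(axis=1, keepdims=True), 1.0)
    H = (A_masked @ X) / deg              # mean aggregation over kept senders
    return np.maximum(H @ W, 0.0)         # ReLU of a linear graph conv

# Usage on a toy 3-node graph.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
X = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 8))
print(masked_propagate(A, X, W, mask_rate=0.5, rng=rng).shape)  # (3, 8)
```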
arXiv Detail & Related papers (2020-01-17T06:26:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.