NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search
- URL: http://arxiv.org/abs/2206.09166v2
- Date: Sat, 9 Mar 2024 18:32:48 GMT
- Title: NAS-Bench-Graph: Benchmarking Graph Neural Architecture Search
- Authors: Yijian Qin, Ziwei Zhang, Xin Wang, Zeyang Zhang, Wenwu Zhu
- Abstract summary: We propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS.
Specifically, we construct a unified, expressive yet compact search space, covering 26,206 unique graph neural network (GNN) architectures.
Based on our proposed benchmark, the performance of GNN architectures can be directly obtained by a look-up table without any further computation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph neural architecture search (GraphNAS) has recently attracted considerable attention in both academia and industry. However, two key challenges seriously hinder further research on GraphNAS. First, since there is no consensus on the experimental setting, the empirical results in different research papers are often not comparable and even not reproducible, leading to unfair comparisons. Second, GraphNAS often requires extensive computation, which makes it highly inefficient and inaccessible to researchers without large-scale computational resources. To address these challenges, we propose NAS-Bench-Graph, a tailored benchmark that supports unified, reproducible, and efficient evaluations for GraphNAS. Specifically, we construct a unified, expressive yet compact search space covering 26,206 unique graph neural network (GNN) architectures, and we propose a principled evaluation protocol. To avoid unnecessary repetitive training, we have trained and evaluated all of these architectures on nine representative graph datasets, recording detailed metrics including train, validation, and test performance at each epoch, latency, the number of parameters, and more. Based on our proposed benchmark, the performance of GNN architectures can be obtained directly from a look-up table without any further computation, which enables fair, fully reproducible, and efficient comparisons. To demonstrate its usage, we conduct in-depth analyses of NAS-Bench-Graph, revealing several interesting findings for GraphNAS. We also showcase how the benchmark integrates easily with GraphNAS open-source libraries such as AutoGL and NNI. To the best of our knowledge, our work is the first benchmark for graph neural architecture search.
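To make the look-up workflow concrete, here is a minimal sketch of how such a tabular benchmark is typically queried. The architecture encoding and metric keys below (an operations-plus-wiring tuple, `valid_perf`, `latency`, `params`) are illustrative assumptions for this sketch, not the package's confirmed API, and the toy table stands in for the released records of all 26,206 architectures.

```python
# Minimal sketch of the look-up workflow a tabular benchmark enables.
# The architecture encoding and metric keys are illustrative assumptions;
# consult the official NAS-Bench-Graph release for its actual API.

# Toy stand-in for the released table: one record per architecture,
# keyed by a hypothetical (operations, wiring) encoding.
toy_table = {
    (("gcn", "gat"), (0, 1)): {
        "valid_perf": [0.71, 0.78, 0.80],  # validation accuracy per epoch
        "test_perf": 0.79,                 # final test accuracy
        "latency": 3.2e-3,                 # seconds per forward pass
        "params": 92_420,                  # parameter count
    },
}

def query(table, ops, wiring):
    """O(1) benchmark look-up: return pre-computed metrics, no training."""
    return table[(ops, wiring)]

record = query(toy_table, ("gcn", "gat"), (0, 1))
print(max(record["valid_perf"]), record["test_perf"], record["latency"])
```

Because every candidate architecture reduces to a dictionary look-up, a GraphNAS method can be evaluated over thousands of architectures in seconds, which is what makes the fair, reproducible, and efficient comparisons described above possible.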
Related papers
- Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification (arXiv: 2024-06-24)
This paper proposes a joint graph data and architecture mechanism that identifies important sub-architectures via the valuable graph data. To search for optimal lightweight graph neural networks (GNNs), it introduces GASSIP, a Lightweight Graph Neural Architecture Search method with Graph SparsIfication and Network Pruning. The method achieves on-par or even higher node classification performance with half or fewer model parameters in the searched GNNs and a sparser graph.

- Efficient and Explainable Graph Neural Architecture Search via Monte-Carlo Tree Search (arXiv: 2023-08-30)
Graph neural networks (GNNs) are powerful tools for performing data science tasks in various domains. To save human effort and computational cost, graph neural architecture search (Graph NAS) has been used to search for GNN architectures automatically. The paper proposes ExGNAS, which consists of (i) a simple search space that adapts to various graphs and (ii) a search algorithm whose decision process is explainable.

- GraphPNAS: Learning Distribution of Good Neural Architectures via Deep Graph Generative Models (arXiv: 2022-11-28)
This work studies neural architecture search (NAS) through the lens of learning random graph models. It proposes GraphPNAS, a deep graph generative model that learns a distribution over well-performing architectures. The proposed graph generator consistently outperforms an RNN-based one and achieves better or comparable performance than state-of-the-art NAS methods.

- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking (arXiv: 2022-10-14)
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs). The paper presents a new ensemble training scheme, EnGCN, to address the existing issues, achieving new state-of-the-art (SOTA) performance on large-scale datasets.

- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search (arXiv: 2022-04-12)
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures. It demonstrates transferability and high sample efficiency across numerous tasks, finding the top 0.16% and 0.29% of architectures on average on two search spaces under a budget of only 50 models.

- Scalable Graph Neural Networks for Heterogeneous Graphs (arXiv: 2020-11-19)
Graph neural networks (GNNs) are a popular class of parametric models for learning over graph-structured data. Recent work has argued that GNNs primarily use the graph for feature smoothing and has shown competitive results on benchmark tasks. This work asks whether those results extend to heterogeneous graphs, which encode multiple types of relationships between different entities.

- NASGEM: Neural Architecture Search via Graph Embedding Method (arXiv: 2020-07-08)
NASGEM (Neural Architecture Search via Graph Embedding Method) is driven by a novel graph embedding method equipped with similarity measures to capture graph topology information. Networks it finds consistently outperform those crafted by existing search methods on classification tasks.

- Scaling Graph Neural Networks with Approximate PageRank (arXiv: 2020-07-03)
The paper presents PPRGo, a model that uses an efficient approximation of information diffusion in GNNs (a sketch of this diffusion scheme follows the list). In addition to being faster, PPRGo is inherently scalable and can be trivially parallelized for large datasets such as those found in industry settings. Training PPRGo and predicting labels for all nodes in such a graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
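For context on the last entry, below is a minimal sketch of the personalized-PageRank-style diffusion that PPRGo approximates, written as the well-known power iteration used by APPNP-family models. It illustrates the general technique only and is not the paper's push-based approximation algorithm.

```python
import numpy as np

def normalize_adj(adj):
    """Symmetric normalization with self-loops: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def ppr_diffusion(adj, h, alpha=0.15, num_iters=10):
    """Power-iteration approximation of personalized PageRank diffusion:
    z <- (1 - alpha) * A_hat @ z + alpha * h, with teleport probability alpha."""
    a_hat = normalize_adj(adj)
    z = h.copy()
    for _ in range(num_iters):
        z = (1 - alpha) * a_hat @ z + alpha * h
    return z

# Toy usage: diffuse per-node logits over a 3-node path graph.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
h = np.array([[1., 0.], [0., 1.], [0.5, 0.5]])
print(ppr_diffusion(adj, h))
```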