NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy
- URL: http://arxiv.org/abs/2201.13396v1
- Date: Mon, 31 Jan 2022 18:02:09 GMT
- Title: NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy
- Authors: Yash Mehta, Colin White, Arber Zela, Arjun Krishnakumar, Guri
Zabergja, Shakiba Moradian, Mahmoud Safari, Kaicheng Yu, Frank Hutter
- Abstract summary: We present an in-depth analysis of popular NAS algorithms and performance prediction methods across 25 different combinations of search spaces and datasets.
We introduce NAS-Bench-Suite, a comprehensive and extensible collection of NAS benchmarks, accessible through a unified interface.
- Score: 37.72015163462501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The release of tabular benchmarks, such as NAS-Bench-101 and NAS-Bench-201,
has significantly lowered the computational overhead for conducting scientific
research in neural architecture search (NAS). Although they have been widely
adopted and used to tune real-world NAS algorithms, these benchmarks are
limited to small search spaces and focus solely on image classification.
Recently, several new NAS benchmarks have been introduced that cover
significantly larger search spaces over a wide range of tasks, including object
detection, speech recognition, and natural language processing. However,
substantial differences among these NAS benchmarks have so far prevented their
widespread adoption, limiting researchers to using just a few benchmarks. In
this work, we present an in-depth analysis of popular NAS algorithms and
performance prediction methods across 25 different combinations of search
spaces and datasets, finding that many conclusions drawn from a few NAS
benchmarks do not generalize to other benchmarks. To help remedy this problem,
we introduce NAS-Bench-Suite, a comprehensive and extensible collection of NAS
benchmarks, accessible through a unified interface, created with the aim to
facilitate reproducible, generalizable, and rapid NAS research. Our code is
available at https://github.com/automl/naslib.
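
The abstract emphasizes access to many heterogeneous benchmarks through a unified interface. The sketch below is a rough, self-contained illustration of that idea only; the class and method names are invented for this toy example and are not NASLib's actual API. A tabular benchmark exposes the same sample-and-query methods regardless of search space, so a NAS algorithm written once against the interface runs unchanged across benchmarks.

# Conceptual sketch of a unified tabular-benchmark interface.
# Names are illustrative only and are NOT taken from NASLib.
import random
from typing import Dict, Tuple

class TabularBenchmark:
    """Wraps a precomputed table mapping architecture encodings to metrics."""

    def __init__(self, table: Dict[Tuple[int, ...], float], num_ops: int, num_edges: int):
        self.table = table          # encoding -> validation accuracy
        self.num_ops = num_ops      # candidate operations per edge
        self.num_edges = num_edges  # edges in the cell

    def sample_random_architecture(self) -> Tuple[int, ...]:
        return tuple(random.randrange(self.num_ops) for _ in range(self.num_edges))

    def query(self, arch: Tuple[int, ...]) -> float:
        """Constant-time table lookup instead of training the architecture."""
        return self.table[arch]

# Toy benchmark: 3 candidate operations on 2 edges, with synthetic accuracies.
toy_table = {(i, j): 0.80 + 0.01 * (i + j) for i in range(3) for j in range(3)}
bench = TabularBenchmark(toy_table, num_ops=3, num_edges=2)

# Random search written once against the interface works on any such benchmark.
best = max((bench.sample_random_architecture() for _ in range(20)), key=bench.query)
print(best, bench.query(best))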
Related papers
- How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor [22.87207410692821]
We borrow from the rich field of meta-learning for few-shot adaptation and study the applicability of those methods to NAS.
Our meta-learning approach not only shows superior (or matching) performance in the cross-validation experiments but also extrapolates successfully to a new search space and tasks.
arXiv Detail & Related papers (2023-11-30T10:51:46Z)
- Generalization Properties of NAS under Activation and Skip Connection Search [66.8386847112332]
We study the generalization properties of Neural Architecture Search (NAS) under a unifying framework.
We derive the lower (and upper) bounds of the minimum eigenvalue of the Neural Tangent Kernel (NTK) under the (in)finite-width regime.
We show how the derived results can guide NAS to select the top-performing architectures, even in the case without training.
arXiv Detail & Related papers (2022-09-15T12:11:41Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- Surrogate NAS Benchmarks: Going Beyond the Limited Search Spaces of Tabular NAS Benchmarks [41.73906939640346]
We propose a methodology to create cheap NAS surrogate benchmarks for arbitrary search spaces.
We show that surrogate NAS benchmarks can lead to faithful estimates of how well different NAS methods work on the original non-surrogate benchmark.
We believe that surrogate NAS benchmarks are an indispensable tool to extend scientifically sound work on NAS to large and exciting search spaces. (A toy sketch of the surrogate idea appears after this list.)
arXiv Detail & Related papers (2020-08-22T08:15:52Z)
- DSNAS: Direct Neural Architecture Search without Parameter Retraining [112.02966105995641]
We propose a new problem definition for NAS: task-specific, end-to-end search.
We propose DSNAS, an efficient differentiable NAS framework that simultaneously optimizes architecture and parameters with a low-biased Monte Carlo estimate.
DSNAS successfully discovers networks with comparable accuracy (74.4%) on ImageNet in 420 GPU hours, reducing the total time by more than 34%.
arXiv Detail & Related papers (2020-02-21T04:41:47Z)
- NAS-Bench-1Shot1: Benchmarking and Dissecting One-shot Neural Architecture Search [42.82951139084501]
One-shot neural architecture search (NAS) has played a crucial role in making NAS methods computationally feasible in practice.
We introduce a general framework for one-shot NAS that can be instantiated to many recently introduced variants, together with a general benchmarking framework.
arXiv Detail & Related papers (2020-01-28T15:50:22Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.
We provide additional diagnostic information, such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
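
As a rough illustration of the surrogate-benchmark idea referenced above, the toy sketch below replaces table lookups with predictions from a simple model fitted on a small set of evaluated architectures. The encoding, sizes, and nearest-neighbour regressor are invented for this example and do not reproduce the paper's actual methodology.

# Toy sketch of a surrogate NAS benchmark (illustrative assumptions only).
import random

NUM_EDGES, NUM_OPS = 6, 5  # assumed toy cell: 6 edges, 5 candidate operations

def encode(arch):
    """One-hot encode an architecture given as a tuple of operation indices."""
    vec = [0.0] * (NUM_EDGES * NUM_OPS)
    for edge, op in enumerate(arch):
        vec[edge * NUM_OPS + op] = 1.0
    return vec

def true_accuracy(arch):
    """Stand-in for expensive training; here just a synthetic function."""
    return 0.85 + 0.01 * sum(op == 0 for op in arch) + random.gauss(0, 0.002)

# A small training set of "evaluated" architectures.
train_archs = [tuple(random.randrange(NUM_OPS) for _ in range(NUM_EDGES)) for _ in range(200)]
train_x = [encode(a) for a in train_archs]
train_y = [true_accuracy(a) for a in train_archs]

def surrogate_query(arch, k=5):
    """Predict accuracy of an unseen architecture by averaging its k nearest
    training neighbours in encoding space (a simple stand-in for the learned
    surrogate models used in practice)."""
    x = encode(arch)
    nearest = sorted(range(len(train_x)), key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], x)))
    return sum(train_y[i] for i in nearest[:k]) / k

new_arch = tuple(random.randrange(NUM_OPS) for _ in range(NUM_EDGES))
print(new_arch, surrogate_query(new_arch))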