NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search
- URL: http://arxiv.org/abs/2110.05668v1
- Date: Tue, 12 Oct 2021 01:13:18 GMT
- Title: NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search
- Authors: Renbo Tu, Mikhail Khodak, Nicholas Roberts, Ameet Talwalkar
- Abstract summary: Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
- Score: 18.9676056830197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing neural architecture search (NAS) benchmarks and algorithms
prioritize performance on well-studied tasks, e.g., image classification on
CIFAR and ImageNet. As a result, the applicability of NAS approaches in more diverse areas remains poorly understood. In this paper, we present NAS-Bench-360,
a benchmark suite for evaluating state-of-the-art NAS methods for convolutional
neural networks (CNNs). To construct it, we curate a collection of ten tasks
spanning a diverse array of application domains, dataset sizes, problem
dimensionalities, and learning objectives. By carefully selecting tasks that both interoperate with modern CNN-based search methods and lie far afield from their original development domain, we can use NAS-Bench-360 to
investigate the following central question: do existing state-of-the-art NAS
methods perform well on diverse tasks? Our experiments show that a modern NAS
procedure designed for image classification can indeed find good architectures
for tasks with other dimensionalities and learning objectives; however, the
same method struggles against more task-specific methods and performs
catastrophically poorly on classification in non-vision domains. The case for NAS robustness is even weaker in a resource-constrained setting, where
a recent NAS method provides little-to-no benefit over much simpler baselines.
These results demonstrate the need for a benchmark such as NAS-Bench-360 to
help develop NAS approaches that work well on a variety of tasks, a crucial
component of a truly robust and automated pipeline. We conclude with a
demonstration of the kind of future research our suite of tasks will enable.
All data and code are made publicly available.
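To make the kind of heterogeneity NAS-Bench-360 targets concrete, the sketch below evaluates one fixed CNN design on synthetic stand-ins for tasks that differ in input dimensionality and learning objective. It is a minimal illustration only; the task names, shapes, and `make_cnn` helper are hypothetical and do not correspond to the released NAS-Bench-360 data loaders.

```python
# Minimal sketch (not the official NAS-Bench-360 API): score one fixed CNN
# design on synthetic stand-ins for tasks with different dimensionalities
# and learning objectives, mirroring the benchmark's premise.
import torch
import torch.nn as nn

def make_cnn(in_channels: int, n_out: int, dims: int) -> nn.Module:
    """Instantiate the same simple architecture in 1D or 2D, depending on the task."""
    Conv = nn.Conv1d if dims == 1 else nn.Conv2d
    Pool = nn.AdaptiveAvgPool1d if dims == 1 else nn.AdaptiveAvgPool2d
    return nn.Sequential(
        Conv(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        Conv(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        Pool(1), nn.Flatten(), nn.Linear(32, n_out),
    )

# Hypothetical task descriptors: (input shape, number of targets, loss, dims).
tasks = {
    "2d_image_classification": ((3, 32, 32), 10, nn.CrossEntropyLoss(), 2),
    "1d_sequence_classification": ((1, 1000), 4, nn.CrossEntropyLoss(), 1),
    "2d_regression": ((6, 32, 32), 1, nn.MSELoss(), 2),
}

for name, (shape, n_out, loss_fn, dims) in tasks.items():
    model = make_cnn(shape[0], n_out, dims)
    x = torch.randn(8, *shape)                 # synthetic batch for this task
    if isinstance(loss_fn, nn.CrossEntropyLoss):
        y = torch.randint(0, n_out, (8,))      # class labels
    else:
        y = torch.randn(8, n_out)              # regression targets
    loss = loss_fn(model(x), y)
    print(f"{name}: untrained loss = {loss.item():.3f}")
```

In the actual benchmark, the architecture would come from a NAS method and each task would supply its own training pipeline and evaluation metric; the point here is only that a single CNN-style search space must be re-instantiated across very different input/output signatures.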
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor [22.87207410692821]
We borrow from the rich field of meta-learning for few-shot adaptation and study the applicability of those methods to NAS.
Our meta-learning approach not only shows superior (or matching) performance in the cross-validation experiments but also extrapolates successfully to a new search space and tasks.
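As a rough schematic of that few-shot idea (not the paper's actual predictor, architecture encoding, or data), the sketch below pre-trains a tiny accuracy predictor on many synthetic (architecture encoding, accuracy) pairs from source tasks and then adapts it using only five labelled architectures from a new task:

```python
# Schematic only: few-shot adaptation of a NAS accuracy predictor.
# Architecture encodings (16-dim vectors) and accuracies are synthetic placeholders.
import torch
import torch.nn as nn

predictor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def fit(model: nn.Module, x: torch.Tensor, y: torch.Tensor, steps: int, lr: float) -> float:
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return loss.item()

# "Pre-training": many (encoding, accuracy) pairs gathered on source tasks.
x_src, y_src = torch.randn(512, 16), torch.rand(512)
fit(predictor, x_src, y_src, steps=200, lr=1e-3)

# Few-shot adaptation: only a handful of evaluated architectures on the new task.
x_new, y_new = torch.randn(5, 16), torch.rand(5)
print(f"adapted loss: {fit(predictor, x_new, y_new, steps=50, lr=1e-4):.4f}")
```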
arXiv Detail & Related papers (2023-11-30T10:51:46Z)
- Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets [55.2118691522524]
Distillation-aware Neural Architecture Search (DaNAS) aims to search for an optimal student architecture.
We propose a distillation-aware meta accuracy prediction model, DaSS (Distillation-aware Student Search), which can predict a given architecture's final performance on a dataset.
arXiv Detail & Related papers (2023-05-26T14:00:35Z)
- Generalization Properties of NAS under Activation and Skip Connection Search [66.8386847112332]
We study the generalization properties of Neural Architecture Search (NAS) under a unifying framework.
We derive the lower (and upper) bounds of the minimum eigenvalue of the Neural Tangent Kernel (NTK) under the (in)finite-width regime.
We show how the derived results can guide NAS to select the top-performing architectures, even in the case without training.
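As a concrete (and heavily simplified) illustration of how the minimum NTK eigenvalue can serve as a training-free signal, the sketch below computes it for a tiny untrained network on a small probe batch; the network, data, and scoring are placeholders rather than the authors' construction.

```python
# Sketch: empirical NTK minimum eigenvalue of an untrained network,
# usable as a training-free score when comparing candidate architectures.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # stand-in candidate
x = torch.randn(16, 8)                                              # small probe batch
params = [p for p in net.parameters() if p.requires_grad]

# Row i of the Jacobian: gradient of the i-th scalar output w.r.t. all parameters.
rows = []
for i in range(x.shape[0]):
    out = net(x[i:i + 1]).squeeze()
    grads = torch.autograd.grad(out, params)
    rows.append(torch.cat([g.reshape(-1) for g in grads]))
jac = torch.stack(rows)                        # shape: (n_samples, n_params)

ntk = jac @ jac.T                              # empirical NTK on the probe batch
lambda_min = torch.linalg.eigvalsh(ntk).min()  # smallest eigenvalue
print(f"min NTK eigenvalue: {lambda_min.item():.4e}")
```

Ranking candidate architectures by such a quantity is what allows selection without any training, which is the use case the entry above points to.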
arXiv Detail & Related papers (2022-09-15T12:11:41Z)
- UnrealNAS: Can We Search Neural Architectures with Unreal Data? [84.78460976605425]
Neural architecture search (NAS) has shown great success in the automatic design of deep neural networks (DNNs).
Previous work has analyzed the necessity of having ground-truth labels in NAS and inspired broad interest.
We take a further step to question whether real data is necessary for NAS to be effective.
arXiv Detail & Related papers (2022-05-04T16:30:26Z)
- Meta-Learning of NAS for Few-shot Learning in Medical Image Applications [10.666687733540668]
Neural Architecture Search (NAS) has motivated various applications in medical imaging.
However, NAS requires the availability of large annotated data, considerable resources, and pre-defined tasks.
We introduce various NAS approaches in medical imaging with different applications such as classification, segmentation, detection, and reconstruction.
arXiv Detail & Related papers (2022-03-16T21:21:51Z)
- NAS-Bench-Suite: NAS Evaluation is (Now) Surprisingly Easy [37.72015163462501]
We present an in-depth analysis of popular NAS algorithms and performance prediction methods across 25 different combinations of search spaces and datasets.
We introduce NAS-Bench-Suite, a comprehensive collection of NAS benchmarks, accessible through a unified interface.
arXiv Detail & Related papers (2022-01-31T18:02:09Z)
- TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search [98.22779489340869]
We propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks.
We explore two fundamentally different types of search space: cell-level search space and macro-level search space.
With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided.
arXiv Detail & Related papers (2021-05-25T12:15:21Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
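For comparison with suites like NAS-Bench-360, tabular benchmarks such as NAS-Bench-201 above are queried rather than trained. A minimal sketch, assuming the publicly released nas_201_api package and a downloaded benchmark file (the exact file name and method details depend on the release you use):

```python
# Sketch, assuming the nas_201_api package and a local copy of the benchmark
# table; file name and API details vary by NAS-Bench-201 release.
from nas_201_api import NASBench201API

api = NASBench201API("NAS-Bench-201-v1_1-096897.pth")   # path to the benchmark table
arch = ("|nor_conv_3x3~0|+|skip_connect~0|nor_conv_3x3~1|+"
        "|none~0|nor_conv_3x3~1|avg_pool_3x3~2|")        # one cell in the fixed search space
index = api.query_index_by_arch(arch)                    # table lookup, no training needed
info = api.get_more_info(index, "cifar10", hp="200")     # precomputed 200-epoch results
print(info["test-accuracy"])
```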
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.