NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and
Size
- URL: http://arxiv.org/abs/2009.00437v6
- Date: Tue, 26 Jan 2021 02:33:39 GMT
- Title: NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and
Size
- Authors: Xuanyi Dong, Lu Liu, Katarzyna Musial, Bogdan Gabrys
- Abstract summary: We propose NATS-Bench, a unified benchmark on searching for both architecture topology and size.
NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets.
- Score: 31.903475598150152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has attracted a lot of attention and has
been illustrated to bring tangible benefits in a large number of applications
in the past few years. Architecture topology and architecture size have been
regarded as two of the most important aspects for the performance of deep
learning models and the community has spawned lots of searching algorithms for
both aspects of the neural architectures. However, the performance gain from
these searching algorithms is achieved under different search spaces and
training setups. This makes the overall performance of the algorithms to some
extent incomparable and the improvement from a sub-module of the searching
model unclear. In this paper, we propose NATS-Bench, a unified benchmark on
searching for both topology and size, for (almost) any up-to-date NAS
algorithm. NATS-Bench includes the search space of 15,625 neural cell
candidates for architecture topology and 32,768 for architecture size on three
datasets. We analyze the validity of our benchmark in terms of various criteria
and performance comparison of all candidates in the search space. We also show
the versatility of NATS-Bench by benchmarking 13 recent state-of-the-art NAS
algorithms on it. All logs and diagnostic information trained using the same
setup for each candidate are provided. This facilitates a much larger community
of researchers to focus on developing better NAS algorithms in a more
comparable and computationally cost friendly environment. All codes are
publicly available at: https://xuanyidong.com/assets/projects/NATS-Bench.
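The two search-space sizes quoted above follow directly from how the spaces are constructed: the topology space is a cell with 6 edges, each assigned one of 5 candidate operations (5^6 = 15,625 cells), and the size space chooses one of 8 channel counts for each of 5 layers (8^5 = 32,768 configurations). The sketch below checks this arithmetic and illustrates how a lookup against the released nats_bench Python package might look; the exact function names and arguments (create, get_more_info, the 'tss' key, the returned dictionary fields) are assumptions based on the project's public code, not a verified reference.

```python
# Minimal sketch (assumed, not verified against the released package):
# reproduce the search-space sizes and query NATS-Bench for one candidate.

# Topology search space: a cell with 6 edges and 5 candidate operations each.
num_topology_cells = 5 ** 6   # 15,625
# Size search space: 5 layers, each choosing one of 8 channel counts.
num_size_configs = 8 ** 5     # 32,768
print(num_topology_cells, num_size_configs)

# Assumed usage of the public `nats_bench` package (pip install nats_bench),
# which requires the downloaded benchmark file; 'tss' denotes the topology
# search space and 'sss' the size search space.
from nats_bench import create

api = create(None, 'tss', fast_mode=True, verbose=False)
info = api.get_more_info(1234, 'cifar10-valid', hp='200', is_random=False)
print(info.get('valid-accuracy'), info.get('test-accuracy'))
```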
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z) - When NAS Meets Trees: An Efficient Algorithm for Neural Architecture
Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10, with 94.37% test accuracy, in four GPU hours on NAS-Bench-201.
arXiv Detail & Related papers (2022-04-11T07:34:21Z) - Neural Architecture Ranker [19.21631623578852]
Architecture ranking has recently been advocated to design an efficient and effective performance predictor for Neural Architecture Search (NAS).
Inspired by stratification, we propose a predictor, namely the Neural Architecture Ranker (NAR).
arXiv Detail & Related papers (2022-01-30T04:54:59Z) - Going Beyond Neural Architecture Search with Sampling-based Neural
Ensemble Search [31.059040393415003]
We present two novel sampling algorithms under our Neural Ensemble Search via Sampling (NESS) framework.
Our NESS algorithms are shown to achieve improved performance on both classification and adversarial defense tasks.
arXiv Detail & Related papers (2021-09-06T15:18:37Z) - OPANAS: One-Shot Path Aggregation Network Architecture Search for Object
Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs)
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z) - Hierarchical Neural Architecture Search for Deep Stereo Matching [131.94481111956853]
We propose the first end-to-end hierarchical NAS framework for deep stereo matching.
Our framework incorporates task-specific human knowledge into the neural architecture search framework.
It ranks first in accuracy on the KITTI stereo 2012, 2015 and Middlebury benchmarks, as well as first on the SceneFlow dataset.
arXiv Detail & Related papers (2020-10-26T11:57:37Z) - Fine-Grained Stochastic Architecture Search [6.277767522867666]
Fine-Grained Architecture Search (FiGS) is a differentiable search method that searches over a much larger set of candidate architectures.
FiGS simultaneously selects and modifies operators in the search space by applying a structured sparse regularization penalty.
We show results across 3 existing search spaces, matching or outperforming the original search algorithms.
arXiv Detail & Related papers (2020-06-17T01:04:14Z) - Local Search is a Remarkably Strong Baseline for Neural Architecture
Search [0.0]
We consider, for the first time, a simple Local Search (LS) algorithm for Neural Architecture Search (NAS); a generic sketch of such a local-search loop is given after this list.
We release two benchmark datasets, named MacroNAS-C10 and MacroNAS-C100, containing 200K saved network evaluations for two established image classification tasks.
arXiv Detail & Related papers (2020-04-20T00:08:34Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS)
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods, while its accuracy is currently state-of-the-art at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z) - NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture
Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspirations to new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
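As a concrete illustration of the local-search baseline referenced above (the "Local Search is a Remarkably Strong Baseline" entry), here is a minimal, generic hill-climbing loop over a tabular NAS benchmark. This is a sketch under assumptions, not the authors' exact algorithm: toy_neighbors and toy_score are hypothetical stand-ins for "architectures differing in a single choice" and "a validation-accuracy lookup in a benchmark such as NATS-Bench".

```python
import random


def local_search(initial, neighbors, score):
    """Greedy hill climbing: repeatedly move to the best-scoring neighbor
    until no neighbor improves on the current architecture."""
    current, current_score = initial, score(initial)
    while True:
        best_cand, best_score = current, current_score
        for cand in neighbors(current):
            s = score(cand)
            if s > best_score:
                best_cand, best_score = cand, s
        if best_cand == current:   # local optimum reached
            return current
        current, current_score = best_cand, best_score


# Toy stand-ins: architectures encoded as tuples of 6 operation indices (0..4),
# mirroring a 6-edge / 5-operation cell; the score is synthetic and merely
# plays the role of a benchmark accuracy lookup.
def toy_neighbors(arch):
    for i in range(len(arch)):
        for op in range(5):
            if op != arch[i]:
                yield arch[:i] + (op,) + arch[i + 1:]


def toy_score(arch):
    return -sum((a - 2) ** 2 for a in arch)  # maximized when every entry is 2


start = tuple(random.randrange(5) for _ in range(6))
print(start, '->', local_search(start, toy_neighbors, toy_score))
```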
This list is automatically generated from the titles and abstracts of the papers on this site.