BLOX: Macro Neural Architecture Search Benchmark and Algorithms
- URL: http://arxiv.org/abs/2210.07271v1
- Date: Thu, 13 Oct 2022 18:06:39 GMT
- Title: BLOX: Macro Neural Architecture Search Benchmark and Algorithms
- Authors: Thomas Chun Pong Chau, Łukasz Dudziak, Hongkai Wen, Nicholas Donald Lane, Mohamed S Abdelfattah
- Abstract summary: Neural architecture search (NAS) has been successfully used to design numerous high-performance neural networks.
NAS is typically compute-intensive, so most existing approaches restrict the search to decide the operations and topological structure of a single block only.
Recent studies show that a macro search space, which allows blocks in a model to be different, can lead to better performance.
- Score: 16.296454205012733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has been successfully used to design
numerous high-performance neural networks. However, NAS is typically
compute-intensive, so most existing approaches restrict the search to decide
the operations and topological structure of a single block only, then the same
block is stacked repeatedly to form an end-to-end model. Although such an
approach reduces the size of search space, recent studies show that a macro
search space, which allows blocks in a model to be different, can lead to
better performance. To provide a systematic study of the performance of NAS
algorithms on a macro search space, we release Blox - a benchmark that consists
of 91k unique models trained on the CIFAR-100 dataset. The dataset also
includes runtime measurements of all the models on a diverse set of hardware
platforms. We perform extensive experiments to compare existing algorithms that
are well studied on cell-based search spaces, with the emerging blockwise
approaches that aim to make NAS scalable to much larger macro search spaces.
The benchmark and code are available at https://github.com/SamsungLabs/blox.
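As a rough illustration of how a tabular benchmark of this kind is typically consumed, the sketch below runs a random-search baseline over a dictionary that maps architecture encodings to pre-recorded accuracy and latency. The record format and the toy entries are assumptions made for illustration; the actual data format and API are defined in the SamsungLabs/blox repository linked above.

```python
import random

def random_search(records, n_samples=100, latency_budget_ms=None, seed=0):
    """Return the best (arch_id, accuracy) among randomly sampled benchmark
    entries, optionally filtering by a latency budget for one target device."""
    rng = random.Random(seed)
    entries = list(records.items())
    best_id, best_acc = None, float("-inf")
    for _ in range(n_samples):
        arch_id, metrics = rng.choice(entries)
        if latency_budget_ms is not None and metrics["latency_ms"] > latency_budget_ms:
            continue  # respect the hardware constraint
        if metrics["accuracy"] > best_acc:
            best_id, best_acc = arch_id, metrics["accuracy"]
    return best_id, best_acc

# Toy stand-in for pre-computed benchmark records (arch encoding -> metrics).
toy_records = {
    "blockA-blockB-blockC": {"accuracy": 71.2, "latency_ms": 14.5},
    "blockB-blockB-blockA": {"accuracy": 73.0, "latency_ms": 21.3},
    "blockC-blockA-blockA": {"accuracy": 69.8, "latency_ms": 9.7},
}
print(random_search(toy_records, latency_budget_ms=20.0))
```

Because every model in such a benchmark is pre-trained, a search loop like this reduces to table lookups, which is what makes large-scale comparisons of NAS algorithms affordable.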
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- MathNAS: If Blocks Have a Role in Mathematical Architecture Design [0.0]
We introduce MathNAS, a general NAS framework based on mathematical programming.
In MathNAS, the performances of the $m*n$ possible building blocks in the search space are calculated first, and then the performance of a network is directly predicted from the performances of its building blocks.
Our approach effectively reduces the complexity of network performance evaluation.
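A minimal sketch of this block-wise prediction idea, assuming a simple additive combination of precomputed block scores (MathNAS's actual formulation is given in the paper); the block types, positions, and scores are toy values. With $m$ = 3 positions and $n$ = 3 block types there are $m*n$ = 9 precomputed scores.

```python
# (position in network, block type) -> precomputed score (toy values)
block_scores = {
    (0, "conv3x3"): 24.1, (0, "conv5x5"): 23.4, (0, "mbconv"): 25.0,
    (1, "conv3x3"): 23.8, (1, "conv5x5"): 24.6, (1, "mbconv"): 24.2,
    (2, "conv3x3"): 22.9, (2, "conv5x5"): 23.1, (2, "mbconv"): 23.7,
}

def predict(network):
    """Estimate a network's performance from its per-position block choices,
    without training or evaluating the assembled network itself."""
    return sum(block_scores[(pos, block)] for pos, block in enumerate(network))

print(round(predict(["mbconv", "conv5x5", "mbconv"]), 1))  # 25.0 + 24.6 + 23.7
```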
arXiv Detail & Related papers (2023-11-08T04:34:18Z)
- Towards Less Constrained Macro-Neural Architecture Search [2.685668802278155]
Neural Architecture Search (NAS) networks achieve state-of-the-art performance in a variety of tasks.
Most NAS methods rely heavily on human-defined assumptions that constrain the search.
We present experiments showing that LCMNAS generates state-of-the-art architectures from scratch with minimal GPU computation.
arXiv Detail & Related papers (2022-03-10T17:53:03Z)
- Prioritized Architecture Sampling with Monto-Carlo Tree Search [54.72096546595955]
One-shot neural architecture search (NAS) methods significantly reduce the search cost by considering the whole search space as one network.
In this paper, we introduce a sampling strategy based on Monte Carlo tree search (MCTS) with the search space modeled as a Monte Carlo tree (MCT).
For a fair comparison, we construct an open-source NAS benchmark of a macro search space evaluated on CIFAR-10, namely NAS-Bench-Macro.
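Below is a generic UCT-style sketch of such tree-based sampling, not the authors' exact algorithm: each tree level fixes the block at one position, children are selected with a UCB score, and the reward comes from a stand-in evaluate() function (in practice a benchmark lookup or supernet evaluation). The options, depth, and scores are toy assumptions.

```python
import math

OPTIONS = ["conv3x3", "conv5x5", "mbconv"]   # candidate block per position (toy)
DEPTH = 4                                    # number of block positions (toy)
SCORES = {"conv3x3": 0.5, "conv5x5": 0.7, "mbconv": 0.9}

def evaluate(arch):
    """Stand-in for evaluating a sampled model; returns a reward in [0, 1]."""
    return sum(SCORES[b] for b in arch) / len(arch)

class Node:
    def __init__(self):
        self.children = {}   # block option -> child Node
        self.visits = 0
        self.value = 0.0     # running mean reward

    def select(self, c=1.4):
        """UCB1 choice among children; unvisited children are tried first."""
        def ucb(opt):
            child = self.children[opt]
            if child.visits == 0:
                return float("inf")
            return child.value + c * math.sqrt(math.log(self.visits) / child.visits)
        return max(self.children, key=ucb)

def mcts_sample(root, iterations=200):
    for _ in range(iterations):
        node, path = root, []
        for _ in range(DEPTH):                 # one tree level per block position
            if not node.children:              # lazy expansion
                node.children = {opt: Node() for opt in OPTIONS}
            opt = node.select()
            node = node.children[opt]
            path.append((node, opt))
        reward = evaluate([opt for _, opt in path])
        root.visits += 1                       # backpropagate statistics
        for n, _ in path:
            n.visits += 1
            n.value += (reward - n.value) / n.visits
    arch, node = [], root                      # read out the most-visited path
    for _ in range(DEPTH):
        best = max(node.children, key=lambda o: node.children[o].visits)
        arch.append(best)
        node = node.children[best]
    return arch

print(mcts_sample(Node()))   # most-visited block choice at each position
```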
arXiv Detail & Related papers (2021-03-22T15:09:29Z)
- NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size [31.903475598150152]
We propose NATS-Bench, a unified benchmark on searching for both architecture topology and size.
NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets.
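(For reference, these counts factor as $5^6 = 15,625$ cells, i.e. one of 5 operations on each of a cell's 6 edges, and $8^5 = 32,768$ sizes, i.e. one of 8 channel candidates in each of 5 layers.)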
arXiv Detail & Related papers (2020-08-28T21:34:56Z)
- Local Search is a Remarkably Strong Baseline for Neural Architecture Search [0.0]
We consider, for the first time, a simple Local Search (LS) algorithm for Neural Architecture Search (NAS).
We release two benchmark datasets, named MacroNAS-C10 and MacroNAS-C100, containing 200K saved network evaluations for two established image classification tasks.
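The sketch below shows the kind of simple local search meant here, assuming a macro space encoded as one block choice per position and a toy evaluate() standing in for a benchmark lookup (e.g. against MacroNAS-C10/C100); it hill-climbs over single-block changes until no neighbour improves.

```python
import random

OPTIONS = ["blockA", "blockB", "blockC"]   # candidate block per position (toy)
N_POSITIONS = 5

def evaluate(arch):
    """Stand-in for looking up an architecture's validation accuracy."""
    score = {"blockA": 1.0, "blockB": 2.0, "blockC": 1.5}
    return sum(score[b] for b in arch)

def local_search(seed=0):
    rng = random.Random(seed)
    current = [rng.choice(OPTIONS) for _ in range(N_POSITIONS)]
    current_fit = evaluate(current)
    improved = True
    while improved:                        # stop at a local optimum
        improved = False
        for pos in range(N_POSITIONS):     # try every single-block change
            for opt in OPTIONS:
                if opt == current[pos]:
                    continue
                neighbour = current[:pos] + [opt] + current[pos + 1:]
                fit = evaluate(neighbour)
                if fit > current_fit:
                    current, current_fit = neighbour, fit
                    improved = True
    return current, current_fit

print(local_search())   # ends at all "blockB" for these toy scores
```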
arXiv Detail & Related papers (2020-04-20T00:08:34Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspirations to new designs of NAS algorithms.
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
- Scalable NAS with Factorizable Architectural Parameters [102.51428615447703]
Neural Architecture Search (NAS) is an emerging topic in machine learning and computer vision.
This paper presents a scalable algorithm by factorizing a large set of candidate operators into smaller subspaces.
With a small increase in search costs and no extra costs in re-training, we find interesting architectures that were not explored before.
arXiv Detail & Related papers (2019-12-31T10:26:56Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
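A heavily simplified sketch of the dynamic-distribution-pruning idea: keep one categorical distribution per decision, nudge it toward options that appear in well-scoring samples, and every few epochs drop the least likely option. The update rule, reward function, and schedule below are toy stand-ins, not the DDPNAS procedure itself.

```python
import random

OPTIONS = ["op1", "op2", "op3", "op4"]   # candidate operations (toy)
D = 3                                    # number of decisions, e.g. block positions
rng = random.Random(0)

# Start from uniform per-decision categorical distributions.
probs = [{o: 1.0 / len(OPTIONS) for o in OPTIONS} for _ in range(D)]

def sample(probs):
    return [rng.choices(list(p), weights=list(p.values()))[0] for p in probs]

def reward(arch):
    """Stand-in for the validation performance of a sampled architecture."""
    base = {"op1": 0.2, "op2": 0.9, "op3": 0.5, "op4": 0.4}
    return sum(base[o] for o in arch) / len(arch)

for epoch in range(12):
    samples = [sample(probs) for _ in range(16)]
    rewards = [reward(s) for s in samples]
    for pos in range(D):
        # Nudge each option's probability by the mean reward of samples using it.
        for opt in probs[pos]:
            matching = [r for s, r in zip(samples, rewards) if s[pos] == opt]
            if matching:
                probs[pos][opt] += 0.1 * (sum(matching) / len(matching))
        norm = sum(probs[pos].values())
        probs[pos] = {o: v / norm for o, v in probs[pos].items()}
    # Every few epochs, prune the least likely option at each decision.
    if epoch % 4 == 3 and all(len(p) > 1 for p in probs):
        for pos in range(D):
            probs[pos].pop(min(probs[pos], key=probs[pos].get))
            norm = sum(probs[pos].values())
            probs[pos] = {o: v / norm for o, v in probs[pos].items()}

print([max(p, key=p.get) for p in probs])   # surviving option per decision
```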
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.