Reducing Neural Architecture Search Spaces with Training-Free Statistics
and Computational Graph Clustering
- URL: http://arxiv.org/abs/2204.14103v1
- Date: Fri, 29 Apr 2022 13:52:35 GMT
- Title: Reducing Neural Architecture Search Spaces with Training-Free Statistics
and Computational Graph Clustering
- Authors: Thorir Mar Ingolfsson, Mark Vero, Xiaying Wang, Lorenzo Lamberti, Luca
Benini, Matteo Spallanzani
- Abstract summary: Clustering-Based REDuction (C-BRED) is a new technique to reduce the size of NAS search spaces.
C-BRED selects a subset with 70% average accuracy instead of the whole space's 64% average accuracy.
- Score: 12.588898262943218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The computational demands of neural architecture search (NAS) algorithms are
usually directly proportional to the size of their target search spaces. Thus,
limiting the search to high-quality subsets can greatly reduce the
computational load of NAS algorithms. In this paper, we present
Clustering-Based REDuction (C-BRED), a new technique to reduce the size of NAS
search spaces. C-BRED reduces a NAS space by clustering the computational
graphs associated with its architectures and selecting the most promising
cluster using proxy statistics correlated with network accuracy. When
considering the NAS-Bench-201 (NB201) data set and the CIFAR-100 task, C-BRED
selects a subset with 70% average accuracy instead of the whole space's 64%
average accuracy.
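The abstract describes C-BRED as two steps: cluster the computational graphs of the candidate architectures, then keep the cluster that looks most promising under training-free proxy statistics. Below is a minimal sketch of that pipeline, assuming synthetic graph feature vectors, k-means clustering, and a placeholder proxy score; the paper's actual graph encoding, clustering algorithm, and proxy statistics are not given here and may differ.
```python
# Minimal sketch of the two C-BRED steps named in the abstract:
# (1) cluster computational-graph representations of the architectures,
# (2) keep the cluster with the best training-free proxy statistic.
# The feature encoding, k-means, and the random proxy are illustrative
# assumptions, not the paper's exact choices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for a NAS space: 1000 architectures, each encoded as a fixed-length
# feature vector derived from its computational graph (hypothetical encoding).
n_archs, n_features = 1000, 16
graph_features = rng.normal(size=(n_archs, n_features))

# Placeholder training-free proxy score per architecture (the real method uses
# statistics correlated with network accuracy).
proxy_scores = rng.normal(size=n_archs)

# Step 1: cluster the graph representations.
k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(graph_features)

# Step 2: rank clusters by their mean proxy statistic and keep the best one.
cluster_means = np.array([proxy_scores[labels == c].mean() for c in range(k)])
best_cluster = int(cluster_means.argmax())
reduced_space = np.flatnonzero(labels == best_cluster)

print(f"kept cluster {best_cluster}: {reduced_space.size} of {n_archs} architectures")
```
A downstream NAS algorithm would then search only over the indices in reduced_space instead of the whole space.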
Related papers
- Graph is all you need? Lightweight data-agnostic neural architecture search without training [45.79667238486864]
Neural architecture search (NAS) enables the automatic design of neural network models.
Our method, dubbed nasgraph, remarkably reduces the computational costs by converting neural architectures to graphs.
It can find the best architecture among 200 randomly sampled architectures from NAS-Bench-201 in 217 CPU seconds.
arXiv Detail & Related papers (2024-05-02T14:12:58Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Less is More: Proxy Datasets in NAS approaches [4.266320191208303]
Neural Architecture Search (NAS) defines the design of Neural Networks as a search problem.
NAS is computationally intensive because the number of design possibilities grows with the number of elements in the design.
We extensively analyze the role of dataset size, using several sampling approaches to reduce it.
arXiv Detail & Related papers (2022-03-14T07:58:12Z)
- BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search [100.28980854978768]
We present Block-wisely Self-supervised Neural Architecture Search (BossNAS).
We factorize the search space into blocks and utilize a novel self-supervised training scheme, named ensemble bootstrapping, to train each block separately.
We also present the HyTra search space, a fabric-like hybrid CNN-transformer search space with searchable down-sampling positions.
arXiv Detail & Related papers (2021-03-23T10:05:58Z)
- Efficient Sampling for Predictor-Based Neural Architecture Search [3.287802528135173]
We study predictor-based algorithms for neural architecture search (NAS).
We show that the sample efficiency of predictor-based algorithms decreases dramatically if the proxy is only computed for a subset of the search space.
This is an important step toward making predictor-based NAS algorithms useful in practice.
arXiv Detail & Related papers (2020-11-24T11:36:36Z)
- Neural Architecture Search as Sparse Supernet [78.09905626281046]
This paper aims at enlarging the problem of Neural Architecture Search (NAS) from Single-Path and Multi-Path Search to automated Mixed-Path Search.
We model the NAS problem as a sparse supernet using a new continuous architecture representation with a mixture of sparsity constraints.
The sparse supernet enables us to automatically achieve sparsely-mixed paths upon a compact set of nodes.
arXiv Detail & Related papers (2020-07-31T14:51:52Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Local Search is a Remarkably Strong Baseline for Neural Architecture Search [0.0]
We consider, for the first time, a simple Local Search (LS) algorithm for Neural Architecture Search (NAS).
We release two benchmark datasets, named MacroNAS-C10 and MacroNAS-C100, containing 200K saved network evaluations for two established image classification tasks.
arXiv Detail & Related papers (2020-04-20T00:08:34Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS, which can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search [55.12928953187342]
We propose an extension to NAS-Bench-101: NAS-Bench-201 with a different search space, results on multiple datasets, and more diagnostic information.
NAS-Bench-201 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithm.
We provide additional diagnostic information such as fine-grained loss and accuracy, which can inspire new designs of NAS algorithms (a minimal query sketch follows this list).
arXiv Detail & Related papers (2020-01-02T05:28:26Z)
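Both C-BRED and the last related entry report results on NAS-Bench-201, so a brief query sketch may be useful. It assumes the nas_201_api package, the benchmark file name, and the method names published in the NAS-Bench-201 repository; all of these are assumptions that should be checked against the installed release.
```python
# Hedged sketch of looking up CIFAR-100 results in NAS-Bench-201.
# Package, file, and method names follow the NAS-Bench-201 repository
# (https://github.com/D-X-Y/NAS-Bench-201) and may differ in newer releases.
from nas_201_api import NASBench201API

api = NASBench201API('NAS-Bench-201-v1_1-096897.pth')  # downloaded benchmark file

# Query one architecture by its string encoding and read its CIFAR-100 test accuracy.
arch = '|nor_conv_3x3~0|+|nor_conv_3x3~0|avg_pool_3x3~1|+|skip_connect~0|nor_conv_3x3~1|skip_connect~2|'
index = api.query_index_by_arch(arch)
info = api.get_more_info(index, 'cifar100', hp='200', is_random=False)
print(info['test-accuracy'])
```
Averaging such accuracies over a selected subset versus the whole space is how a comparison like 70% versus 64% average accuracy can be reproduced.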