AutoSpace: Neural Architecture Search with Less Human Interference
- URL: http://arxiv.org/abs/2103.11833v1
- Date: Mon, 22 Mar 2021 13:28:56 GMT
- Title: AutoSpace: Neural Architecture Search with Less Human Interference
- Authors: Daquan Zhou, Xiaojie Jin, Xiaochen Lian, Linjie Yang, Yujing Xue,
Qibin Hou, Jiashi Feng
- Abstract summary: Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction.
We propose a novel differentiable evolutionary framework named AutoSpace, which evolves the search space to an optimal one.
With the learned search space, the performance of recent NAS algorithms can be improved significantly compared with using previously manually designed spaces.
- Score: 84.42680793945007
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current neural architecture search (NAS) algorithms still require expert
knowledge and effort to design a search space for network construction. In this
paper, we consider automating the search space design to minimize human
interference, which, however, faces two challenges: the explosive complexity of
the exploration space and the expensive computation cost of evaluating the
quality of different search spaces. To address these challenges, we propose a
novel differentiable evolutionary framework named AutoSpace, which evolves the
search space toward an optimal one with the following techniques: a
differentiable fitness scoring function to efficiently evaluate the performance
of cells, and a reference architecture to speed up the evolution procedure and
avoid falling into sub-optimal solutions. The framework is generic and compatible with
additional computational constraints, making it feasible to learn specialized
search spaces that fit different computational budgets. With the learned search
space, the performance of recent NAS algorithms can be improved significantly
compared with using previously manually designed spaces. Remarkably, the models
generated from the new search space achieve 77.8% top-1 accuracy on ImageNet
under the mobile setting (MAdds < 500M), outperforming the previous SOTA
EfficientNet-B0 by 0.7%. All code will be made public.
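The abstract only describes the framework at a high level. The toy sketch below illustrates the general pattern of a differentiable evolutionary search-space loop: each candidate cell carries a learnable score trained by gradient descent through a softmax relaxation (a differentiable fitness proxy), and a reference cell anchors the population across generations. Everything here (CellOp, evolve_search_space, the proxy loss, the mutation scheme) is a hypothetical illustration under those assumptions, not the authors' released code.

```python
# Toy sketch of a differentiable evolutionary search-space loop
# (hypothetical names and proxy task; not the AutoSpace implementation).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

KERNELS, EXPANSIONS = [3, 5, 7], [1, 2, 4, 6]
REFERENCE_CELL = {"kernel": 3, "expansion": 4}  # assumed MobileNetV2-style anchor


def random_cell():
    return {"kernel": random.choice(KERNELS),
            "expansion": random.choice(EXPANSIONS)}


def mutate(cell):
    # Flip one attribute of a cell to produce a child candidate.
    child = dict(cell)
    key = random.choice(list(child))
    child[key] = random.choice(KERNELS if key == "kernel" else EXPANSIONS)
    return child


class CellOp(nn.Module):
    """Toy stand-in for the network block a cell configuration would define."""

    def __init__(self, cell, dim=16):
        super().__init__()
        hidden = dim * cell["expansion"]
        pad = cell["kernel"] // 2
        self.body = nn.Sequential(
            nn.Conv1d(dim, hidden, cell["kernel"], padding=pad),
            nn.ReLU(),
            nn.Conv1d(hidden, dim, 1),
        )

    def forward(self, x):
        return self.body(x)


def evolve_search_space(generations=3, pop_size=8, keep=4, steps=50):
    population = [REFERENCE_CELL] + [random_cell() for _ in range(pop_size - 1)]
    for gen in range(generations):
        ops = nn.ModuleList(CellOp(c) for c in population)
        logits = nn.Parameter(torch.zeros(len(population)))  # differentiable fitness scores
        opt = torch.optim.Adam(list(ops.parameters()) + [logits], lr=1e-2)
        for _ in range(steps):
            x = torch.randn(4, 16, 32)          # proxy input batch
            target = x.flip(-1)                 # proxy regression target
            weights = F.softmax(logits, dim=0)  # relaxed (differentiable) cell selection
            out = sum(w * op(x) for w, op in zip(weights, ops))
            loss = F.mse_loss(out, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Evolution step: keep the highest-scoring cells, always retain the
        # reference cell, and refill the population by mutating survivors.
        ranked = [c for _, c in sorted(zip(logits.tolist(), population),
                                       key=lambda t: -t[0])]
        survivors = [REFERENCE_CELL] + \
            [c for c in ranked if c != REFERENCE_CELL][:keep - 1]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
        print(f"generation {gen}: top cells {survivors}")
    return survivors


if __name__ == "__main__":
    evolved_space = evolve_search_space()
```

In a realistic setting the proxy loss would come from a weight-sharing supernet trained on the target task, and the surviving cells would define the search space handed to a downstream NAS algorithm.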
Related papers
- LISSNAS: Locality-based Iterative Search Space Shrinkage for Neural
Architecture Search [30.079267927860347]
We propose an automated algorithm that shrinks a large space into a diverse, small search space with SOTA search performance.
Our method achieves a SOTA Top-1 accuracy of 77.6% on ImageNet under mobile constraints, along with best-in-class Kendall-Tau, architectural diversity, and search space size.
arXiv Detail & Related papers (2023-07-06T16:28:51Z) - Searching a High-Performance Feature Extractor for Text Recognition
Network [92.12492627169108]
We design a domain-specific search space by exploring principles for having good feature extractors.
As the space is huge and complex in structure, no existing NAS algorithm can be applied directly.
We propose a two-stage algorithm to effectively search in the space.
arXiv Detail & Related papers (2022-09-27T03:49:04Z) - DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm jointly searching for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on the ImageNet dataset, demonstrating the strong performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z) - BossNAS: Exploring Hybrid CNN-transformers with Block-wisely
Self-supervised Neural Architecture Search [100.28980854978768]
We present Block-wisely Self-supervised Neural Architecture Search (BossNAS)
We factorize the search space into blocks and utilize a novel self-supervised training scheme, named ensemble bootstrapping, to train each block separately.
We also present HyTra search space, a fabric-like hybrid CNN-transformer search space with searchable down-sampling positions.
arXiv Detail & Related papers (2021-03-23T10:05:58Z) - Evolving Search Space for Neural Architecture Search [70.71153433676024]
We present a Neural Search-space Evolution (NSE) scheme that amplifies the results from the previous effort by maintaining an optimized search space subset.
We achieve 77.3% top-1 retrain accuracy on ImageNet with 333M FLOPs, yielding state-of-the-art performance.
When a latency constraint is adopted, our result also outperforms the previous best-performing mobile models, with 77.9% Top-1 retrain accuracy.
arXiv Detail & Related papers (2020-11-22T01:11:19Z) - ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse
Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem (a generic form of this objective is sketched after this list).
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z) - Neural Architecture Generator Optimization [9.082931889304723]
We are the first to investigate casting NAS as the problem of finding an optimal network generator.
We propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types.
arXiv Detail & Related papers (2020-04-03T06:38:07Z)
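For background on the ISTA-NAS entry above: the standard sparse-coding (lasso) objective and the ISTA (iterative shrinkage-thresholding) update that solves it are shown below. Presumably the architecture parameters play the role of the sparse code z in that paper, but the exact formulation is in the cited work; this is only the textbook form.

```latex
% Standard sparse-coding objective and ISTA update, shown for background only;
% not the exact formulation used in the ISTA-NAS paper.
\min_{z}\; \tfrac{1}{2}\,\lVert x - A z \rVert_2^2 + \lambda \lVert z \rVert_1,
\qquad
z^{(k+1)} = \mathcal{S}_{\lambda/L}\!\left( z^{(k)} - \tfrac{1}{L}\, A^{\top}\!\left( A z^{(k)} - x \right) \right),
```

where \mathcal{S}_{\tau} denotes elementwise soft-thresholding and L is a Lipschitz constant of the gradient of the quadratic term.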
This list is automatically generated from the titles and abstracts of the papers in this site.