Breaking the Curse of Space Explosion: Towards Efficient NAS with
Curriculum Search
- URL: http://arxiv.org/abs/2007.07197v2
- Date: Wed, 5 Aug 2020 08:56:56 GMT
- Title: Breaking the Curse of Space Explosion: Towards Efficient NAS with
Curriculum Search
- Authors: Yong Guo, Yaofo Chen, Yin Zheng, Peilin Zhao, Jian Chen, Junzhou
Huang, Mingkui Tan
- Abstract summary: We propose a curriculum search method that starts from a small search space and gradually incorporates the learned knowledge to guide the search in a large space.
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
- Score: 94.46818035655943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has become an important approach to
automatically find effective architectures. To cover all possible good
architectures, we need to search in an extremely large search space with
billions of candidate architectures. More critically, given a large search
space, we may face the very challenging issue of space explosion: due to
limited computational resources, we can only sample a very small proportion of
the architectures, which provides insufficient information for training. As a
result, existing methods often produce suboptimal
architectures. To alleviate this issue, we propose a curriculum search method
that starts from a small search space and gradually incorporates the learned
knowledge to guide the search in a large space. With the proposed search
strategy, our Curriculum Neural Architecture Search (CNAS) method significantly
improves the search efficiency and finds better architectures than existing NAS
methods. Extensive experiments on CIFAR-10 and ImageNet demonstrate the
effectiveness of the proposed method.
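As a rough illustration of the curriculum idea described in the abstract (search a small space first, then reuse what was learned as the space grows), the sketch below is a hypothetical toy version in Python: the operation list CANDIDATE_OPS, the proxy_score function, and the stage schedule are assumptions made for illustration, and CNAS itself uses a trained search strategy rather than the random sampling shown here.
```python
# Minimal, hypothetical sketch of curriculum search over a growing search space.
# CANDIDATE_OPS, proxy_score, and the stage schedule are illustrative
# assumptions, not the authors' implementation.
import random

# Full operation set defining the final (largest) search space.
CANDIDATE_OPS = ["identity", "conv3x3", "conv5x5", "sep_conv3x3",
                 "max_pool3x3", "avg_pool3x3", "dil_conv3x3"]

def proxy_score(arch, prior):
    """Toy stand-in for architecture evaluation.

    `prior` holds per-operation scores learned in earlier (smaller) stages,
    mimicking how knowledge from a small space can guide a larger one.
    """
    return sum(prior.get(op, 0.0) for op in arch) + random.gauss(0.0, 0.1)

def curriculum_search(num_stages=3, cells=4, samples_per_stage=200, seed=0):
    random.seed(seed)
    prior = {}                       # knowledge carried across stages
    best_arch, best_score = None, float("-inf")
    for stage in range(1, num_stages + 1):
        # Stage t searches only a prefix of the operation set,
        # so early stages face a much smaller space.
        num_ops = max(2, stage * len(CANDIDATE_OPS) // num_stages)
        ops = CANDIDATE_OPS[:num_ops]
        for _ in range(samples_per_stage):
            arch = tuple(random.choice(ops) for _ in range(cells))
            score = proxy_score(arch, prior)
            if score > best_score:
                best_arch, best_score = arch, score
            # Update per-op statistics so later, larger stages reuse them.
            for op in arch:
                prior[op] = 0.9 * prior.get(op, 0.0) + 0.1 * score
    return best_arch, best_score

if __name__ == "__main__":
    print(curriculum_search())
```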
Related papers
- LISSNAS: Locality-based Iterative Search Space Shrinkage for Neural Architecture Search [30.079267927860347]
We propose an automated algorithm that shrinks a large space into a diverse, small search space with SOTA search performance.
Our method achieves a SOTA Top-1 accuracy of 77.6% on ImageNet under mobile constraints, with best-in-class Kendall-Tau, architectural diversity, and search space size.
arXiv Detail & Related papers (2023-07-06T16:28:51Z)
- Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars [66.05096551112932]
We introduce a unifying search space design framework based on context-free grammars.
By enhancing and using their properties, we effectively enable search over the complete architecture.
We show that our search strategy can be superior to existing Neural Architecture Search approaches.
arXiv Detail & Related papers (2022-11-03T14:23:00Z)
- Automated Dominative Subspace Mining for Efficient Neural Architecture Search [36.06889021273405]
We propose a novel Neural Architecture Search method via Dominative Subspace Mining (DSM-NAS)
DSM-NAS finds promising architectures in automatically mined subspaces.
Experimental results demonstrate that DSM-NAS not only reduces the search cost but also discovers better architectures than state-of-the-art methods in various benchmark search spaces.
arXiv Detail & Related papers (2022-10-31T09:54:28Z)
- Towards Less Constrained Macro-Neural Architecture Search [2.685668802278155]
Neural Architecture Search (NAS) networks achieve state-of-the-art performance in a variety of tasks.
Most NAS methods rely heavily on human-defined assumptions that constrain the search.
We present experiments showing that LCMNAS, a less constrained macro-NAS method, generates state-of-the-art architectures from scratch with minimal GPU computation.
arXiv Detail & Related papers (2022-03-10T17:53:03Z)
- Poisoning the Search Space in Neural Architecture Search [0.0]
We evaluate the robustness of the Efficient NAS (ENAS) algorithm against data poisoning attacks on the original search space.
Our results provide insights into the challenges to surmount in using NAS for more adversarially robust architecture search.
arXiv Detail & Related papers (2021-06-28T05:45:57Z)
- Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration [68.6505473346005]
We propose HiNAS, a memory-efficient hierarchical NAS method for image denoising and image super-resolution tasks.
With a single GTX 1080Ti GPU, it takes only about 1 hour to search for the denoising network on BSD500 and 3.5 hours to search for the super-resolution structure on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- RC-DARTS: Resource Constrained Differentiable Architecture Search [162.7199952019152]
We propose the resource-constrained differentiable architecture search (RC-DARTS) method to learn architectures that are significantly smaller and faster.
We show that RC-DARTS learns lightweight neural architectures with smaller model size and lower computational complexity (see the illustrative sketch after this list).
arXiv Detail & Related papers (2019-12-30T05:02:38Z)
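The RC-DARTS entry above describes constraining a differentiable architecture search by resource cost. The sketch below is a hypothetical illustration of that general idea rather than the paper's method: it penalizes the expected per-operation cost under a softmax relaxation of the architecture parameters, and the operation list OPS, the cost table OP_COST, the penalty weight LAMBDA, and the toy task_loss are all assumptions made for illustration.
```python
# Hypothetical sketch of a resource-penalized differentiable search step.
# Not the RC-DARTS implementation; ops, costs, and losses are toy assumptions.
import torch
import torch.nn.functional as F

OPS = ["identity", "conv3x3", "conv5x5", "sep_conv3x3"]
# Rough per-operation cost (e.g. MFLOPs) used for the differentiable penalty.
OP_COST = torch.tensor([0.0, 10.0, 25.0, 6.0])

# One architecture parameter vector per edge of a tiny 3-edge cell.
alphas = torch.zeros(3, len(OPS), requires_grad=True)
optimizer = torch.optim.Adam([alphas], lr=0.1)

def expected_cost(alphas):
    """Expected resource cost under the softmax-relaxed architecture."""
    weights = F.softmax(alphas, dim=-1)          # shape: (edges, ops)
    return (weights * OP_COST).sum()

def task_loss(alphas):
    """Toy stand-in for the supernet's validation loss."""
    weights = F.softmax(alphas, dim=-1)
    # Pretend conv5x5 is the most accurate op: reward its weight on every edge.
    return -(weights[:, 2]).sum()

LAMBDA = 0.02  # trade-off between the accuracy proxy and the resource penalty
for step in range(100):
    optimizer.zero_grad()
    loss = task_loss(alphas) + LAMBDA * expected_cost(alphas)
    loss.backward()
    optimizer.step()

# Discretize: pick the highest-weight operation on each edge.
print([OPS[i] for i in alphas.argmax(dim=-1).tolist()])
```
Raising LAMBDA in this toy setup shifts the discretized architecture toward cheaper operations at the expense of the accuracy proxy.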
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.