UENAS: A Unified Evolution-based NAS Framework
- URL: http://arxiv.org/abs/2203.04300v1
- Date: Tue, 8 Mar 2022 09:14:37 GMT
- Title: UENAS: A Unified Evolution-based NAS Framework
- Authors: Zimian Wei, Hengyue Pan, Xin Niu, Peijie Dong, Dongsheng Li
- Abstract summary: UENAS is an evolution-based NAS framework with a broader search space.
We propose three strategies to alleviate the huge search cost caused by the expanded search space.
UENAS achieves error rates of 2.81% on CIFAR-10, 20.24% on CIFAR-100, and 33% on Tiny-ImageNet.
- Score: 11.711427415684955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) has gained significant attention for
automatic network design in recent years. Previous NAS methods suffer from
limited search spaces, which may lead to sub-optimal results. In this paper, we
propose UENAS, an evolution-based NAS framework with a broader search space
that supports optimizing network architectures, pruning strategies, and
hyperparameters simultaneously. To alleviate the huge search cost caused by the
expanded search space, three strategies are adopted. First, an adaptive pruning
strategy iteratively trims the average model size of the population without
compromising performance. Second, child networks share the weights of
overlapping layers with pre-trained parent networks, which reduces the number
of training epochs. Third, an online predictor scores the joint representations
of architecture, pruning strategy, and hyperparameters to filter out inferior
combinations. Together, these three strategies significantly improve search
efficiency and yield better-performing compact networks with tailored
hyperparameters. In experiments, UENAS achieves error rates of 2.81% on
CIFAR-10, 20.24% on CIFAR-100, and 33% on Tiny-ImageNet, demonstrating the
effectiveness of our method.
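To make the unified search concrete, below is a minimal, hypothetical sketch of the evolutionary loop with predictor-based filtering (the third strategy). The genome encoding and `predictor_score` are illustrative assumptions, and weight inheritance and adaptive pruning are only indicated in comments; this is not the authors' implementation.

```python
import random

# Toy sketch of a UENAS-style unified evolutionary search (assumed encoding).
# Each candidate jointly encodes an architecture, a pruning ratio, and
# training hyperparameters in one genome.
def random_candidate():
    return {
        "depth": random.randint(8, 20),           # architecture gene
        "width": random.choice([16, 32, 64]),     # architecture gene
        "prune_ratio": random.uniform(0.0, 0.5),  # pruning-strategy gene
        "lr": 10 ** random.uniform(-3, -1),       # hyperparameter gene
    }

def mutate(parent):
    child = dict(parent)
    gene = random.choice(list(child))
    child[gene] = random_candidate()[gene]
    return child

def predictor_score(cand):
    # Stand-in for the online predictor that scores the joint representation;
    # a real predictor would be trained on already-evaluated combinations.
    return -abs(cand["depth"] - 14) - cand["prune_ratio"] + cand["lr"]

def evolve(generations=10, pop_size=8, keep=4):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        # Strategy 3: filter inferior combinations with the cheap predictor
        # before spending any real training on them.
        population.sort(key=predictor_score, reverse=True)
        parents = population[:keep]
        # Strategies 1 and 2 would act here: children inherit weights of
        # overlapping layers from pre-trained parents, and pruning ratios
        # drift so the population's average model size shrinks.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=predictor_score)

print(evolve())
```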
Related papers
- HEP-NAS: Towards Efficient Few-shot Neural Architecture Search via Hierarchical Edge Partitioning [8.484729345263153]
One-shot methods have advanced the field of neural architecture search (NAS) by adopting a weight-sharing strategy to reduce search costs.
Few-shot methods divide the entire supernet into individual sub-supernets, splitting edge by edge, to alleviate the interference introduced by weight sharing.
We introduce HEP-NAS, a hierarchy-wise partition algorithm designed to further enhance accuracy.
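As a rough illustration of edge-by-edge supernet splitting (not HEP-NAS's hierarchy-wise algorithm itself), the toy sketch below fixes one candidate operation per split edge, producing sub-supernets with less weight sharing:

```python
from itertools import product

# Toy supernet: each edge holds several candidate operations whose weights
# would normally be shared during one-shot training.
supernet = {
    "edge1": ["conv3x3", "conv5x5", "skip"],
    "edge2": ["conv3x3", "maxpool"],
}

def split(supernet, edges_to_split):
    """Return sub-supernets obtained by fixing each split edge to one op."""
    choices = [supernet[e] for e in edges_to_split]
    subs = []
    for combo in product(*choices):
        sub = dict(supernet)
        for edge, op in zip(edges_to_split, combo):
            sub[edge] = [op]  # this edge is no longer shared
        subs.append(sub)
    return subs

# Splitting edge1 yields 3 sub-supernets; each is cheaper to train well
# than the full supernet because fewer operations compete per edge.
for s in split(supernet, ["edge1"]):
    print(s)
```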
arXiv Detail & Related papers (2024-12-14T07:42:56Z)
- TopoNAS: Boosting Search Efficiency of Gradient-based NAS via Topological Simplification [11.08910129925713]
TopoNAS is a model-agnostic approach for gradient-based one-shot NAS.
It significantly reduces search time and memory usage through topological simplification of searchable paths.
arXiv Detail & Related papers (2024-08-02T15:01:29Z)
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method that searches out the desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior arts by around 1.0% top-1 accuracy on the ImageNet-1000 classification task.
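A hedged sketch of the search-by-pruning idea, assuming a learnable per-channel gate trained with an L1 penalty (a common proxy for this family of methods, not necessarily the exact PaS formulation):

```python
import torch
import torch.nn as nn

# Each output channel gets a learnable gate; channels whose gates shrink
# toward zero under the L1 penalty are pruned, and the surviving channels
# define the searched sub-network.
class GatedConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.gate = nn.Parameter(torch.ones(c_out))  # one gate per channel

    def forward(self, x):
        return self.conv(x) * self.gate.view(1, -1, 1, 1)

layer = GatedConv(3, 16)
x = torch.randn(2, 3, 8, 8)
loss = layer(x).abs().mean() + 1e-2 * layer.gate.abs().sum()  # task + L1
loss.backward()
keep = layer.gate.abs() > 0.1  # surviving channels form the sub-network
print(int(keep.sum()), "of 16 channels kept")
```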
arXiv Detail & Related papers (2022-06-02T17:58:54Z)
- $\alpha$NAS: Neural Architecture Search using Property Guided Synthesis [1.2746672439030722]
We develop techniques that enable efficient neural architecture search (NAS) in a significantly larger design space.
Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance.
We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties.
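The sketch below is a toy rendering of property-guided mutation, where the abstract "program properties" are reduced to coarse architectural signatures purely for illustration; none of this is the $\alpha$NAS implementation:

```python
import random

# Abstract a concrete architecture (here, a list of layer widths ending in
# the output width) to a coarse property signature.
def properties(arch):
    return {"depth": len(arch), "out_width": arch[-1]}

def mutate(arch):
    arch = list(arch)
    i = random.randrange(len(arch) - 1)  # keep the output layer fixed
    arch[i] = random.choice([16, 32, 64])
    return arch

parent = [32, 32, 64, 10]
child = mutate(parent)
# Search in the abstract space relies on the insight that architectures
# with similar properties tend to have similar performance.
assert properties(child)["out_width"] == properties(parent)["out_width"]
print(parent, "->", child)
```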
arXiv Detail & Related papers (2022-05-08T21:48:03Z)
- PRE-NAS: Predictor-assisted Evolutionary Neural Architecture Search [34.06028035262884]
We propose a novel evolution-based NAS strategy, Predictor-assisted E-NAS (PRE-NAS).
PRE-NAS leverages new evolutionary search strategies and integrates high-fidelity weight inheritance over generations.
Experiments on NAS-Bench-201 and DARTS search spaces show that PRE-NAS can outperform state-of-the-art NAS methods.
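Weight inheritance over generations can be sketched as copying every compatible parameter tensor from a trained parent to its child; the helper below is an assumed scheme for illustration, not PRE-NAS's exact mechanism:

```python
import torch.nn as nn

def inherit(child: nn.Module, parent: nn.Module) -> None:
    """Copy every parent tensor whose name and shape match the child's."""
    child_state = child.state_dict()
    compatible = {k: v for k, v in parent.state_dict().items()
                  if k in child_state and v.shape == child_state[k].shape}
    # Only the layers the child does not share with its parent start fresh.
    child.load_state_dict(compatible, strict=False)

parent = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
child = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 10))
inherit(child, parent)  # the first Linear is reused, the new head is not
```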
arXiv Detail & Related papers (2022-04-27T06:40:39Z)
- Generalizing Few-Shot NAS with Gradient Matching [165.5690495295074]
One-Shot methods train one supernet to approximate the performance of every architecture in the search space via weight-sharing.
Few-Shot NAS reduces the level of weight-sharing by splitting the One-Shot supernet into multiple separated sub-supernets.
The proposed gradient-matching method significantly outperforms its Few-Shot counterparts while surpassing previous comparable methods in the accuracy of derived architectures.
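A simplified sketch of the gradient-matching idea: for each edge, compare the supernet gradients induced by its candidate operations, and split first where they disagree most. The tensor sizes and the aggregation rule here are illustrative, not the paper's exact criterion:

```python
import torch
import torch.nn.functional as F

def edge_conflict(grads):
    """grads: flattened gradient tensors, one per candidate operation.
    Low pairwise cosine similarity means the candidates pull the shared
    weights in conflicting directions, so the edge is worth splitting."""
    sims = [F.cosine_similarity(g1, g2, dim=0)
            for i, g1 in enumerate(grads) for g2 in grads[i + 1:]]
    return -torch.stack(sims).mean()  # higher value -> more conflict

g_conv = torch.randn(100)
g_skip = torch.randn(100)
g_pool = 0.9 * g_conv + 0.1 * torch.randn(100)  # mostly agrees with conv
print(edge_conflict([g_conv, g_skip, g_pool]))
```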
arXiv Detail & Related papers (2022-03-29T03:06:16Z)
- Trilevel Neural Architecture Search for Efficient Single Image Super-Resolution [127.92235484598811]
This paper proposes a trilevel neural architecture search (NAS) method for efficient single image super-resolution (SR).
To model the discrete search space, we apply a new continuous relaxation that builds a hierarchical mixture of network paths, cell operations, and kernel widths.
An efficient search algorithm is proposed to perform optimization in a hierarchical supernet manner.
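The continuous relaxation can be illustrated with a standard DARTS-style mixed operation over a single discrete choice; the trilevel method nests this idea across network paths, cell operations, and kernel widths:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A discrete choice among operations is relaxed into a softmax-weighted
# mixture, so the architecture parameters become differentiable.
class MixedOp(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),  # kernel-width choice
            nn.Identity(),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)  # relaxed, differentiable choice
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

out = MixedOp(8)(torch.randn(1, 8, 16, 16))
print(out.shape)
```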
arXiv Detail & Related papers (2021-01-17T12:19:49Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
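A minimal PyTorch sketch of the core idea, using `torch.distributions.Dirichlet.rsample`, whose pathwise derivative lets gradients flow back into the concentration parameters (the loss below is a stand-in, not the NAS objective):

```python
import torch
from torch.distributions import Dirichlet

# Architecture mixing weights over 4 candidate operations are sampled from
# a learnable Dirichlet distribution instead of being fixed softmax logits.
log_conc = torch.zeros(4, requires_grad=True)
dist = Dirichlet(log_conc.exp())
weights = dist.rsample()                 # simplex-valued, differentiable
loss = (weights * torch.randn(4)).sum()  # stand-in for the search loss
loss.backward()                          # gradients reach the Dirichlet
print(weights, log_conc.grad)
```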
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
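A joint architecture-recipe predictor can be sketched as one MLP over concatenated encodings; the feature sizes and recipe fields below are illustrative assumptions, not FBNetV3's actual encoding:

```python
import torch
import torch.nn as nn

# One predictor scores the architecture and the training recipe jointly,
# so the search can rank (architecture, recipe) pairs without training them.
predictor = nn.Sequential(
    nn.Linear(12 + 4, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

arch = torch.rand(1, 12)                         # e.g. per-block width/depth
recipe = torch.tensor([[0.1, 0.9, 5e-4, 300.]])  # lr, momentum, wd, epochs
score = predictor(torch.cat([arch, recipe], dim=1))
print(score.item())  # higher predicted accuracy -> keep this pair
```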
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- BNAS: An Efficient Neural Architecture Search Approach Using Broad Scalable Architecture [62.587982139871976]
We propose Broad Neural Architecture Search (BNAS), where we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN).
BNAS delivers a search cost of 0.19 days, which is 2.37x less expensive than ENAS, the best-ranked reinforcement-learning-based NAS approach.
arXiv Detail & Related papers (2020-01-18T15:07:55Z)