Rethinking Performance Estimation in Neural Architecture Search
- URL: http://arxiv.org/abs/2005.09917v1
- Date: Wed, 20 May 2020 09:01:44 GMT
- Title: Rethinking Performance Estimation in Neural Architecture Search
- Authors: Xiawu Zheng, Rongrong Ji, Qiang Wang, Qixiang Ye, Zhenguo Li, Yonghong Tian, Qi Tian
- Abstract summary: We provide a novel yet systematic rethinking of performance estimation (PE) in a resource constrained regime.
By combining BPE with various search algorithms including reinforcement learning, evolutionary algorithms, random search, and differentiable architecture search, we achieve a 1,000x NAS speedup with a negligible performance drop.
- Score: 191.08960589460173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) remains a challenging problem, largely
because of its indispensable and time-consuming performance estimation (PE)
component. In this paper, we provide a novel yet systematic rethinking of PE in
a resource-constrained regime, termed budgeted PE (BPE), which precisely and
effectively estimates the performance of an architecture sampled from an
architecture space. Since searching for an optimal BPE is extremely
time-consuming, as it requires training a large number of networks for
evaluation, we propose a Minimum Importance Pruning (MIP) approach. Given a
dataset and a BPE search space, MIP estimates the importance of
hyper-parameters using a random forest and prunes the least important one from
the next iteration. In this way, MIP allocates more computational resources to
the more important hyper-parameters, achieving effective exploration. By
combining BPE with various search algorithms, including reinforcement learning,
evolutionary algorithms, random search, and differentiable architecture search,
we achieve a 1,000x NAS speedup with a negligible performance drop compared to
the state of the art.
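To make the MIP loop concrete, here is a minimal sketch under stated assumptions: it is not the authors' code, the hyper-parameter grid below is hypothetical, scikit-learn's RandomForestRegressor feature importances stand in for the paper's importance estimate, and evaluate_bpe is a placeholder for the expensive step of training sampled networks and scoring how well the budgeted estimate ranks them.

```python
# Minimal sketch of Minimum Importance Pruning (MIP) over a BPE search space.
import random
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical BPE hyper-parameter grids (illustrative values only).
bpe_space = {
    "epochs":     [5, 10, 20, 30],
    "image_size": [8, 16, 24, 32],
    "channels":   [4, 8, 16],
    "layers":     [5, 8, 11, 14],
}

def evaluate_bpe(cfg):
    """Placeholder: train sampled architectures under budget `cfg` and return
    how well the cheap estimate ranks them (e.g. rank correlation with the
    fully trained accuracy)."""
    raise NotImplementedError

def mip_search(space, samples_per_iter=32):
    space = {k: list(v) for k, v in space.items()}
    while any(len(v) > 1 for v in space.values()):
        active = [k for k, v in space.items() if len(v) > 1]
        # 1. Sample budget configurations from the current (pruned) space.
        configs = [{k: random.choice(v) for k, v in space.items()}
                   for _ in range(samples_per_iter)]
        scores = [evaluate_bpe(c) for c in configs]
        # 2. Fit a random forest and read off per-hyper-parameter importance.
        X = np.array([[c[k] for k in active] for c in configs], dtype=float)
        forest = RandomForestRegressor(n_estimators=100).fit(X, scores)
        # 3. Prune the least important hyper-parameter by fixing it to its
        #    value in the best configuration seen this iteration.
        least = active[int(np.argmin(forest.feature_importances_))]
        best = configs[int(np.argmax(scores))]
        space[least] = [best[least]]
    return {k: v[0] for k, v in space.items()}
```

Each iteration spends its sampling budget only on still-active dimensions, so compute progressively concentrates on the hyper-parameters that matter most.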
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck: numerous architectures must be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- Efficient Architecture Search via Bi-level Data Pruning [70.29970746807882]
This work pioneers an exploration into the critical role of dataset characteristics for DARTS bi-level optimization.
We introduce a new progressive data pruning strategy that utilizes supernet prediction dynamics as the metric.
Comprehensive evaluations on the NAS-Bench-201 search space, DARTS search space, and MobileNet-like search space validate that BDP reduces search costs by over 50%.
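The abstract does not spell out the metric, so the following sketch encodes one plausible reading, as an assumption rather than BDP's actual rule: score each training example by how much the supernet's predicted probability of its true label fluctuates across recent epochs, and progressively keep only the most dynamic examples.

```python
# Hypothetical sketch of progressive data pruning keyed to supernet
# prediction dynamics; the concrete metric here is an assumption.
import numpy as np

def prune_dataset(prob_history, keep_fraction):
    """prob_history: array of shape (epochs, num_examples) holding the
    supernet's predicted probability of each example's true label.
    Returns indices of the examples to keep for the next search phase."""
    dynamics = prob_history.std(axis=0)        # per-example fluctuation
    n_keep = max(1, int(keep_fraction * prob_history.shape[1]))
    return np.argsort(dynamics)[-n_keep:]      # keep the most dynamic examples

# Usage: shrink the set progressively, e.g. 100% -> 70% -> 50% of the data,
# given a user-provided history of supernet label probabilities per epoch.
```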
arXiv Detail & Related papers (2023-12-21T02:48:44Z)
- Shapley-NAS: Discovering Operation Contribution for Neural Architecture Search [96.20505710087392]
We propose a Shapley value based method to evaluate operation contribution (Shapley-NAS) for neural architecture search.
We show that our method outperforms the state-of-the-art methods by a considerable margin with light search cost.
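Shapley values themselves are a standard attribution tool, so the idea can be illustrated with the usual Monte Carlo permutation estimator; the sketch below is that generic estimator rather than Shapley-NAS's exact procedure, and eval_subset is a placeholder for supernet validation accuracy restricted to a subset of candidate operations.

```python
# Standard Monte Carlo permutation estimator of per-operation Shapley values.
import random
from collections import defaultdict

def shapley_values(operations, eval_subset, num_permutations=100):
    """eval_subset(ops) should return e.g. supernet validation accuracy when
    only the operations in `ops` are enabled."""
    contrib = defaultdict(float)
    ops = list(operations)
    for _ in range(num_permutations):
        random.shuffle(ops)
        active, prev = [], eval_subset([])
        for op in ops:
            active.append(op)
            cur = eval_subset(active)
            contrib[op] += cur - prev   # marginal gain of adding `op`
            prev = cur
    return {op: v / num_permutations for op, v in contrib.items()}
```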
arXiv Detail & Related papers (2022-06-20T14:41:49Z)
- Efficient Model Performance Estimation via Feature Histories [27.008927077173553]
An important step in neural network design is evaluating a model's performance.
In this work, we use the evolution history of features of a network during the early stages of training to build a proxy classifier.
We show that our method can be combined with multiple search algorithms to find better solutions to a wide range of tasks.
arXiv Detail & Related papers (2021-03-07T20:41:57Z)
- Multi-objective Neural Architecture Search with Almost No Training [9.93048700248444]
We propose an effective alternative, dubbed Random-Weight Evaluation (RWE), to rapidly estimate the performance of network architectures.
RWE reduces the computational cost of evaluating an architecture from hours to seconds.
When integrated within an evolutionary multi-objective algorithm, RWE obtains a set of efficient architectures with state-of-the-art performance on CIFAR-10 in less than two hours of search on a single GPU.
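A minimal sketch of a random-weight proxy along these lines, assuming PyTorch: the candidate backbone keeps its random initialization and stays frozen, only a linear head is trained for a few steps, and the head's accuracy serves as the cheap score. The function names and training budget here are illustrative, not RWE's published recipe.

```python
# Sketch of a random-weight proxy score; `backbone` is a candidate
# architecture minus its classifier head, emitting (batch, feat_dim) features.
import torch
import torch.nn as nn
from itertools import cycle

def rwe_score(backbone, feat_dim, num_classes, loader, steps=100):
    """Freeze the randomly initialized backbone, train only a linear head,
    and return the head's accuracy as a cheap proxy for the architecture."""
    for p in backbone.parameters():
        p.requires_grad_(False)
    backbone.eval()
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    batches = cycle(loader)
    for _ in range(steps):
        x, y = next(batches)
        logits = head(backbone(x))
        opt.zero_grad()
        loss_fn(logits, y).backward()
        opt.step()
    x, y = next(batches)                 # score on one more batch
    with torch.no_grad():
        return (head(backbone(x)).argmax(dim=1) == y).float().mean().item()
```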
arXiv Detail & Related papers (2020-11-27T07:39:17Z)
- Neural Architecture Search with an Efficient Multiobjective Evolutionary Framework [0.0]
We propose EMONAS, an Efficient MultiObjective Neural Architecture Search framework.
EMONAS is composed of a search space that considers both the macro- and micro-structure of the architecture.
It is evaluated on the task of 3D cardiac segmentation from the MICCAI ACDC challenge.
arXiv Detail & Related papers (2020-11-09T14:41:10Z)
- MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach to reduce the search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
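At its simplest, a joint architecture-recipe predictor is a regressor over concatenated encodings of the two. The sketch below shows that general shape in PyTorch as an assumption; FBNetV3's actual predictor, encodings, and pretraining are more elaborate.

```python
# Minimal joint architecture+recipe accuracy predictor (illustrative shape
# only; not FBNetV3's actual model).
import torch
import torch.nn as nn

class JointPredictor(nn.Module):
    def __init__(self, arch_dim, recipe_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(arch_dim + recipe_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),        # predicted accuracy
        )

    def forward(self, arch_enc, recipe_enc):
        # arch_enc: encoding of the architecture; recipe_enc: encoding of the
        # training recipe (learning rate, epochs, augmentation flags, ...).
        return self.mlp(torch.cat([arch_enc, recipe_enc], dim=-1)).squeeze(-1)

# During search, candidate (architecture, recipe) pairs are ranked by the
# predictor, and only the top-scoring pairs are actually trained.
```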
arXiv Detail & Related papers (2020-06-03T05:20:21Z)