EcoNAS: Finding Proxies for Economical Neural Architecture Search
- URL: http://arxiv.org/abs/2001.01233v2
- Date: Thu, 27 Feb 2020 02:42:45 GMT
- Title: EcoNAS: Finding Proxies for Economical Neural Architecture Search
- Authors: Dongzhan Zhou, Xinchi Zhou, Wenwei Zhang, Chen Change Loy, Shuai Yi,
Xuesen Zhang, Wanli Ouyang
- Abstract summary: In this paper, we observe that most existing proxies exhibit different behaviors in maintaining the rank consistency among network candidates.
Inspired by these observations, we present a reliable proxy and further formulate a hierarchical proxy strategy.
The strategy spends more computation on candidate networks that are potentially more accurate, while discarding unpromising ones at an early stage with a fast proxy.
- Score: 130.59673917196994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) has achieved significant progress in many
computer vision tasks. While many methods have been proposed to improve the
efficiency of NAS, the search process remains laborious because training and
evaluating plausible architectures over a large search space is time-consuming.
Assessing network candidates under a proxy (i.e., computationally reduced
setting) thus becomes inevitable. In this paper, we observe that most existing
proxies exhibit different behaviors in maintaining the rank consistency among
network candidates. In particular, some proxies are more reliable -- the
ranking of candidates does not differ much between their reduced-setting
performance and their final performance. We systematically investigate
several widely adopted reduction factors and report our observations. Inspired by
these observations, we present a reliable proxy and further formulate a
hierarchical proxy strategy. The strategy spends more computation on candidate
networks that are potentially more accurate, while discarding unpromising ones
at an early stage with a fast proxy. This leads to an economical evolutionary-based
NAS (EcoNAS), which achieves an impressive 400x search time reduction in
comparison to the evolutionary-based state of the art (8 vs. 3150 GPU days).
Some new proxies suggested by our observations can also be applied to accelerate
other NAS methods while still discovering good candidate networks whose
performance matches that of networks found by previous proxy strategies.
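The rank consistency discussed above is typically quantified with a rank correlation coefficient such as Spearman's rho between proxy scores and fully trained accuracies. Below is a minimal sketch of such a check; the accuracy numbers are hypothetical and serve only to illustrate the computation.

```python
# Sketch: quantifying a proxy's rank consistency with Spearman's rho.
# The accuracy numbers are hypothetical, for illustration only.
from scipy.stats import spearmanr

# Accuracies of the same five candidate networks under a reduced
# (proxy) training setting and under full training.
proxy_acc = [62.1, 58.4, 65.3, 60.0, 63.7]
final_acc = [93.2, 91.8, 94.5, 92.0, 93.9]

rho, _ = spearmanr(proxy_acc, final_acc)
print(f"Spearman rank correlation: {rho:.3f}")  # 1.0 means the ranking is preserved
```

A reliable proxy is one for which this correlation stays high even as the reduction factors become aggressive.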
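The hierarchical proxy strategy itself can be pictured as successive filtering: a cheap proxy prunes the candidate pool early, and only the survivors receive progressively more expensive evaluations. The sketch below illustrates this general idea under assumed interfaces (`candidates`, `proxies`, and `keep_fraction` are hypothetical names); it is not the authors' exact algorithm.

```python
# Sketch of a hierarchical proxy strategy (not the authors' exact
# algorithm): cheap proxies prune the pool early, and only the
# survivors receive progressively more expensive evaluations.
def hierarchical_search(candidates, proxies, keep_fraction=0.25):
    """candidates: architectures to rank; proxies: evaluation functions
    ordered from cheapest to most expensive, each mapping arch -> score."""
    pool = list(candidates)
    for proxy in proxies:
        scored = [(proxy(arch), arch) for arch in pool]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        keep = max(1, int(len(scored) * keep_fraction))
        pool = [arch for _, arch in scored[:keep]]  # drop unpromising candidates
    return pool  # the most promising candidates under the full budget
```

Most of the compute is thus spent on the small surviving pool, which is what yields the large overall search-time reduction.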
Related papers
- Zero-Shot NAS via the Suppression of Local Entropy Decrease [21.100745856699277]
Architecture performance evaluation is the most time-consuming part of neural architecture search (NAS).
Zero-Shot NAS accelerates the evaluation by utilizing zero-cost proxies instead of training.
Architectural topologies are used to evaluate the performance of networks in this study.
arXiv Detail & Related papers (2024-11-09T17:36:53Z)
- TG-NAS: Leveraging Zero-Cost Proxies with Transformer and Graph Convolution Networks for Efficient Neural Architecture Search [1.30891455653235]
TG-NAS aims to create training-free proxies for architecture performance prediction.
We introduce TG-NAS, a novel model-based universal proxy that leverages a transformer-based operator embedding generator and a graph convolution network (GCN) to predict architecture performance.
TG-NAS achieves up to 300X improvements in search efficiency compared to previous state-of-the-art zero-cost proxy methods.
arXiv Detail & Related papers (2024-03-30T07:25:30Z)
- AZ-NAS: Assembling Zero-Cost Proxies for Network Architecture Search [30.64117903216323]
Training-free network architecture search (NAS) aims to discover high-performing networks with zero-cost proxies.
We propose AZ-NAS, a novel approach that leverages an ensemble of various zero-cost proxies to enhance the correlation between a predicted ranking of networks and the ground truth (a generic rank-aggregation sketch appears after this list).
Results conclusively demonstrate the efficacy and efficiency of AZ-NAS, outperforming state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2024-03-28T08:44:36Z)
- Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities [58.67514819895494]
The key idea behind zero-shot NAS approaches is to design proxies that can predict the accuracy of a given network without training its parameters (a minimal zero-cost proxy sketch appears after this list).
This paper aims to comprehensively review and compare the state-of-the-art (SOTA) zero-shot NAS approaches.
arXiv Detail & Related papers (2023-07-05T03:07:00Z)
- $\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search [96.99525100285084]
A regularization method, Beta-Decay, is proposed to regularize the DARTS-based NAS search process (i.e., $\beta$-DARTS).
In-depth theoretical analyses of how and why it works are provided.
arXiv Detail & Related papers (2023-01-16T12:30:32Z)
- Extensible Proxy for Efficient NAS [38.124755703499886]
Neural Architecture Search (NAS) is an approach for automatically designing deep neural networks (DNNs).
NAS proxies are proposed to address the demanding computational cost of NAS, where each candidate architecture requires only one iteration of backpropagation.
Our experiments confirm the effectiveness of both Eproxy and Eproxy+DPS.
arXiv Detail & Related papers (2022-10-17T22:18:22Z)
- RankNAS: Efficient Neural Architecture Search by Pairwise Ranking [30.890612901949307]
We propose a performance ranking method (RankNAS) via pairwise ranking (a toy pairwise-ranking sketch appears after this list).
It enables efficient architecture search using much fewer training examples.
It can design high-performance architectures while being orders of magnitude faster than state-of-the-art NAS systems.
arXiv Detail & Related papers (2021-09-15T15:43:08Z)
- Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been explosively studied to automate the discovery of top-performing neural networks, but it suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS by disentangling the "TEG" characteristics (Trainability, Expressivity, Generalization) of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It is also capable of handling cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z) - DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving accuracy that is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
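Several entries above revolve around zero-cost proxies, which score an untrained network from a single minibatch. As referenced from the zero-shot NAS survey entry, here is a minimal sketch of one simple member of this family, a gradient-norm score at initialization (in the spirit of the grad_norm baseline from the zero-shot NAS literature); the model and data below are placeholders, not any paper's exact setup.

```python
# Sketch of a simple zero-cost proxy: the gradient norm of an untrained
# network on one minibatch (in the spirit of the "grad_norm" baseline
# from the zero-shot NAS literature). Model and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_proxy(model: nn.Module, inputs, targets):
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    return sum(p.grad.norm().item()
               for p in model.parameters() if p.grad is not None)

# Usage: score a candidate network without any training.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
print(grad_norm_proxy(model, x, y))
```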
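AZ-NAS assembles multiple zero-cost proxies to improve rank correlation. A generic way to combine proxies, sketched below, is to average their per-candidate rankings; this illustrates the general idea rather than the paper's exact aggregation rule, and all scores are hypothetical.

```python
# Sketch of assembling zero-cost proxies by averaging their rankings
# (a generic aggregation, not necessarily AZ-NAS's exact rule).
from scipy.stats import rankdata

def ensemble_rank(per_proxy_scores):
    """per_proxy_scores: one score list per proxy, aligned by candidate.
    Returns an aggregated score per candidate (higher is better)."""
    ranks = [rankdata(scores) for scores in per_proxy_scores]  # rank 1 = worst
    n_candidates = len(per_proxy_scores[0])
    return [sum(r[i] for r in ranks) / len(ranks) for i in range(n_candidates)]

# Hypothetical scores from three proxies for four candidates.
scores = [[0.2, 0.9, 0.5, 0.7],
          [10.0, 42.0, 31.0, 28.0],
          [3.1, 5.6, 4.2, 4.9]]
print(ensemble_rank(scores))  # the second candidate ranks highest under every proxy
```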
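Finally, the pairwise ranking idea behind RankNAS can be sketched generically: assume a comparator that predicts which of two architectures is better, then rank candidates by how many comparisons they win. The comparator and feature values below are toy stand-ins, not the paper's model.

```python
# Sketch of ranking architectures by pairwise comparison (the general
# learning-to-rank idea, not RankNAS's exact model).
from itertools import combinations

def rank_by_pairwise_wins(candidates, better_than):
    """better_than(a, b) -> True if a is predicted to outperform b."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        winner = a if better_than(a, b) else b
        wins[winner] += 1
    return sorted(candidates, key=wins.get, reverse=True)

# Toy usage with a hypothetical comparator based on a scalar feature.
archs = ["arch-A", "arch-B", "arch-C"]
feature = {"arch-A": 0.3, "arch-B": 0.9, "arch-C": 0.6}
print(rank_by_pairwise_wins(archs, lambda a, b: feature[a] > feature[b]))
```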