ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients
- URL: http://arxiv.org/abs/2301.11300v3
- Date: Wed, 12 Apr 2023 22:45:09 GMT
- Title: ZiCo: Zero-shot NAS via Inverse Coefficient of Variation on Gradients
- Authors: Guihong Li, Yuedong Yang, Kartikeya Bhardwaj, Radu Marculescu
- Abstract summary: We propose a new zero-shot proxy, ZiCo, that works consistently better than #Params.
ZiCo-based NAS can find optimal architectures with 78.1%, 79.4%, and 80.4% test accuracy under inference budgets of 450M, 600M, and 1000M FLOPs, respectively.
- Score: 17.139381064317778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Architecture Search (NAS) is widely used to automatically obtain the
neural network with the best performance among a large number of candidate
architectures. To reduce the search time, zero-shot NAS aims at designing
training-free proxies that can predict the test performance of a given
architecture. However, as shown recently, none of the zero-shot proxies
proposed to date can actually work consistently better than a naive proxy,
namely, the number of network parameters (#Params). To improve this state of
affairs, as the main theoretical contribution, we first reveal how some
specific gradient properties across different samples impact the convergence
rate and generalization capacity of neural networks. Based on this theoretical
analysis, we propose a new zero-shot proxy, ZiCo, the first proxy that works
consistently better than #Params. We demonstrate that ZiCo works better than
State-Of-The-Art (SOTA) proxies on several popular NAS-Benchmarks (NASBench101,
NATSBench-SSS/TSS, TransNASBench-101) for multiple applications (e.g., image
classification/reconstruction and pixel-level prediction). Finally, we
demonstrate that the optimal architectures found via ZiCo are as competitive as
the ones found by one-shot and multi-shot NAS methods, but with much less
search time. For example, ZiCo-based NAS can find optimal architectures with
78.1%, 79.4%, and 80.4% test accuracy under inference budgets of 450M, 600M,
and 1000M FLOPs, respectively, on ImageNet within 0.4 GPU days. Our code is
available at https://github.com/SLDGroup/ZiCo.
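As a rough illustration of what a proxy built on the "inverse coefficient of variation on gradients" could look like, the sketch below scores an untrained PyTorch classification model by the mean-to-standard-deviation ratio of its per-parameter gradients measured over a few labeled minibatches. The function name, the per-tensor aggregation, and the log-sum reduction are illustrative assumptions, not the paper's exact formula; the repository linked above holds the authors' reference implementation.

```python
# Hedged sketch of a gradient-CV ("ZiCo-style") zero-shot proxy: rank an untrained
# network by the inverse coefficient of variation (mean |grad| / std of grad) of its
# gradients across minibatches. Aggregation details are assumptions for illustration.
import torch
import torch.nn as nn

def gradient_cv_score(model: nn.Module, batches, loss_fn=None) -> float:
    """batches: an iterable of (inputs, labels); two or more minibatches are needed."""
    loss_fn = loss_fn or nn.CrossEntropyLoss()
    snapshots = []  # one dict of flattened gradients per minibatch, keyed by parameter name
    for x, y in batches:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        snapshots.append({n: p.grad.detach().flatten().clone()
                          for n, p in model.named_parameters() if p.grad is not None})
    score = 0.0
    for name in snapshots[0]:
        g = torch.stack([s[name] for s in snapshots])  # [num_batches, num_params_in_tensor]
        mean_abs = g.abs().mean(dim=0)                 # mean |gradient| across batches
        std = g.std(dim=0)                             # gradient std across batches
        # inverse coefficient of variation, summed within the tensor, log-summed across tensors
        score += torch.log((mean_abs / (std + 1e-9)).sum() + 1e-9).item()
    return score
```

In a zero-shot search, every candidate architecture under the FLOPs budget would be ranked by such a score at initialization, and only the top-ranked one trained.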
Related papers
- TG-NAS: Leveraging Zero-Cost Proxies with Transformer and Graph Convolution Networks for Efficient Neural Architecture Search [1.30891455653235]
TG-NAS aims to create training-free proxies for architecture performance prediction.
We introduce TG-NAS, a novel model-based universal proxy that leverages a transformer-based operator embedding generator and a graph convolution network (GCN) to predict architecture performance.
TG-NAS achieves up to 300X improvements in search efficiency compared to previous SOTA zero-cost (ZC) proxy methods.
arXiv Detail & Related papers (2024-03-30T07:25:30Z)
- Zero-Shot Neural Architecture Search: Challenges, Solutions, and Opportunities [58.67514819895494]
The key idea behind zero-shot NAS approaches is to design proxies that can predict the accuracy of a given network without training its parameters.
This paper aims to comprehensively review and compare the state-of-the-art (SOTA) zero-shot NAS approaches.
arXiv Detail & Related papers (2023-07-05T03:07:00Z)
- Are Neural Architecture Search Benchmarks Well Designed? A Deeper Look Into Operation Importance [5.065947993017157]
We conduct an empirical analysis of the widely used NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks.
We found that only a subset of the operation pool is required to generate architectures close to the upper bound of the performance range.
We consistently found convolution layers to have the highest impact on the architecture's performance.
arXiv Detail & Related papers (2023-03-29T18:03:28Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- FNAS: Uncertainty-Aware Fast Neural Architecture Search [54.49650267859032]
Reinforcement learning (RL)-based neural architecture search (NAS) generally guarantees better convergence but requires huge computational resources.
We propose a general pipeline to accelerate the convergence of the rollout process as well as the RL process in NAS.
Experiments on the Mobile Neural Architecture Search (MNAS) search space show that the proposed Fast Neural Architecture Search (FNAS) accelerates the standard RL-based NAS process by 10x.
arXiv Detail & Related papers (2021-05-25T06:32:52Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Zero-Cost Proxies for Lightweight NAS [19.906217380811373]
We evaluate conventional reduced-training proxies and quantify how well they preserve ranking between multiple models during search.
We propose a series of zero-cost proxies that use just a single minibatch of training data to compute a model's score.
Our zero-cost proxies use 3 orders of magnitude less computation but can match and even outperform conventional proxies.
arXiv Detail & Related papers (2021-01-20T13:59:52Z)
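To make the single-minibatch idea from the zero-cost proxies entry above concrete, here is a hedged sketch of the simplest such score, a grad_norm-style proxy: the total gradient norm of an untrained model on one batch. The function name and loss choice are illustrative assumptions; the paper's other proxies (e.g., snip, synflow) follow the same one-batch pattern but differ in the quantity they aggregate.

```python
# Hedged sketch of a single-minibatch zero-cost proxy in the spirit of "grad_norm":
# score an untrained model by the total norm of its gradients on one batch of data.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_norm_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()   # one forward/backward pass, no weight update
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)
```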
- Few-shot Neural Architecture Search [35.28010196935195]
We propose few-shot NAS, which uses multiple supernetworks, called sub-supernets, each covering a different region of the search space, to alleviate the undesired co-adaptation.
With only up to 7 sub-supernets, few-shot NAS establishes new SOTA results: on ImageNet, it finds models that reach 80.5% top-1 accuracy at 600 MFLOPS and 77.5% top-1 accuracy at 238 MFLOPS.
arXiv Detail & Related papers (2020-06-11T22:36:01Z)
- Powering One-shot Topological NAS with Stabilized Share-parameter Proxy [65.09967910722932]
One-shot NAS methods have attracted much interest from the research community due to their remarkable training efficiency and capacity to discover high-performance models.
In this work, we try to enhance one-shot NAS by exploring high-performing network architectures in our large-scale Topology Augmented Search Space.
The proposed method achieves state-of-the-art performance under Multiply-Adds (MAdds) constraint on ImageNet.
arXiv Detail & Related papers (2020-05-21T08:18:55Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under the given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
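The DDPNAS entry above describes sampling architectures from a joint categorical distribution, updating that distribution, and pruning the search space every few epochs. The toy sketch below illustrates that loop under stated assumptions: the reward-proportional update, the pruning schedule, and the evaluate callback (standing in for whatever cheap performance estimate is used) are placeholders rather than the paper's actual algorithm.

```python
# Toy sketch of dynamic-distribution-pruning search: sample architectures from a joint
# categorical distribution over candidate operations, nudge the distribution toward
# well-performing samples, and periodically drop the least likely operation per position.
import numpy as np

def ddpnas_style_search(num_positions, num_ops, evaluate, epochs=50, prune_every=5):
    probs = np.full((num_positions, num_ops), 1.0 / num_ops)   # joint categorical distribution
    alive = np.ones((num_positions, num_ops), dtype=bool)      # which ops are still in the space
    for epoch in range(1, epochs + 1):
        # sample one architecture: an operation index per position
        arch = [np.random.choice(num_ops, p=probs[i]) for i in range(num_positions)]
        reward = evaluate(arch)                                 # assumed non-negative, e.g. val. accuracy
        # move probability mass toward the sampled ops, proportional to the reward
        for i, op in enumerate(arch):
            probs[i, op] += 0.1 * reward
            probs[i] = np.where(alive[i], probs[i], 0.0)
            probs[i] /= probs[i].sum()
        # every few epochs, prune the least promising op at each position
        if epoch % prune_every == 0:
            for i in range(num_positions):
                live_idx = np.where(alive[i])[0]
                if len(live_idx) > 1:
                    worst = live_idx[np.argmin(probs[i, live_idx])]
                    alive[i, worst] = False
                    probs[i, worst] = 0.0
                    probs[i] /= probs[i].sum()
    return [int(np.argmax(probs[i])) for i in range(num_positions)]
```

Over the epochs, the loop concentrates probability mass on the best-scoring operation at each position, so the returned argmax architecture is the search result.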