MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation
- URL: http://arxiv.org/abs/2003.12238v1
- Date: Fri, 27 Mar 2020 05:06:54 GMT
- Title: MiLeNAS: Efficient Neural Architecture Search via Mixed-Level Reformulation
- Authors: Chaoyang He, Haishan Ye, Li Shen, Tong Zhang
- Abstract summary: MiLeNAS is a mixed-level reformulation for NAS that can be optimized efficiently and reliably.
It is shown that even when using a simple first-order method on the mixed-level formulation, MiLeNAS can achieve a lower validation error for NAS problems.
- Score: 25.56562895285528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recently proposed methods for Neural Architecture Search (NAS) can be
formulated as bilevel optimization. For efficient implementation, its solution
requires approximations of second-order methods. In this paper, we demonstrate
that gradient errors caused by such approximations lead to suboptimality, in
the sense that the optimization procedure fails to converge to a (locally)
optimal solution. To remedy this, this paper proposes MiLeNAS, a mixed-level
reformulation for NAS that can be optimized efficiently and reliably. It is
shown that even when using a simple first-order method on the mixed-level
formulation, MiLeNAS can achieve a lower validation error for NAS problems.
Consequently, architectures obtained by our method achieve consistently higher
accuracies than those obtained from bilevel optimization. Moreover, MiLeNAS
provides a framework that goes beyond DARTS: augmented with model size-based
search and early stopping strategies, it completes the search process in around 5 hours.
Extensive experiments within the convolutional architecture search space
validate the effectiveness of our approach.
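To make the contrast concrete, the two formulations can be sketched as follows, writing $\mathcal{L}_{\mathrm{tr}}$ and $\mathcal{L}_{\mathrm{val}}$ for the training and validation losses, $w$ for network weights, $\alpha$ for architecture parameters, and $\lambda > 0$ for a trade-off coefficient; this notation is illustrative and may differ in detail from the paper.

    \min_{\alpha} \; \mathcal{L}_{\mathrm{val}}\big(w^{*}(\alpha), \alpha\big)
    \quad \text{s.t.} \quad
    w^{*}(\alpha) = \arg\min_{w} \mathcal{L}_{\mathrm{tr}}(w, \alpha)
    \qquad \text{(bilevel)}

    \min_{\alpha} \; \mathcal{L}_{\mathrm{tr}}\big(w^{*}(\alpha), \alpha\big)
    + \lambda \, \mathcal{L}_{\mathrm{val}}\big(w^{*}(\alpha), \alpha\big)
    \qquad \text{(mixed-level)}

A simple first-order scheme for the mixed-level objective alternates
$w \leftarrow w - \eta_{w} \nabla_{w} \mathcal{L}_{\mathrm{tr}}(w, \alpha)$ with
$\alpha \leftarrow \alpha - \eta_{\alpha} \big( \nabla_{\alpha} \mathcal{L}_{\mathrm{tr}}(w, \alpha) + \lambda \nabla_{\alpha} \mathcal{L}_{\mathrm{val}}(w, \alpha) \big)$,
so no second-order approximation of the hypergradient is needed.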
Related papers
- Provably Faster Algorithms for Bilevel Optimization via Without-Replacement Sampling [96.47086913559289]
Gradient-based algorithms are widely used in bilevel optimization.
We introduce an algorithm based on without-replacement sampling that achieves a faster convergence rate.
We validate our algorithms over both synthetic and real-world applications.
arXiv Detail & Related papers (2024-11-07T17:05:31Z)
- Towards Differentiable Multilevel Optimization: A Gradient-Based Approach [1.6114012813668932]
This paper introduces a novel gradient-based approach for multilevel optimization.
Our method significantly reduces computational complexity while improving both solution accuracy and convergence speed.
To the best of our knowledge, this is one of the first algorithms to provide a general version of implicit differentiation.
arXiv Detail & Related papers (2024-10-15T06:17:59Z)
- ZARTS: On Zero-order Optimization for Neural Architecture Search [94.41017048659664]
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency.
This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, that searches without relying on the gradient approximation used in DARTS.
In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue.
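As a rough illustration of what zero-order optimization over architecture parameters looks like, the sketch below implements a generic two-point random-direction gradient estimator; it only illustrates the idea and is not the specific sampling strategy used by ZARTS, and loss_fn and alpha are placeholder names.

    import numpy as np

    def zo_grad_estimate(loss_fn, alpha, mu=1e-2, n_samples=8, rng=None):
        # Generic two-point zeroth-order estimate of d loss / d alpha,
        # built from loss evaluations only (no analytic architecture gradients).
        rng = np.random.default_rng() if rng is None else rng
        grad = np.zeros_like(alpha)
        for _ in range(n_samples):
            u = rng.standard_normal(alpha.shape)  # random search direction
            delta = loss_fn(alpha + mu * u) - loss_fn(alpha - mu * u)
            grad += (delta / (2.0 * mu)) * u      # finite-difference directional estimate
        return grad / n_samples

    # Toy usage: estimate the gradient of a quadratic stand-in for a validation loss.
    alpha = np.zeros(4)
    g = zo_grad_estimate(lambda a: float(np.sum((a - 1.0) ** 2)), alpha)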
arXiv Detail & Related papers (2021-10-10T09:35:15Z)
- iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream approach to neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
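For reference, the implicit-function-theorem hypergradient that such methods build on is commonly written in the generic form below, evaluated at $w = w^{*}(\alpha)$; the stochastic approximation iDARTS uses for the inverse-Hessian term is described in the paper itself.

    \nabla_{\alpha} \mathcal{L}_{\mathrm{val}}
    = \partial_{\alpha} \mathcal{L}_{\mathrm{val}}
    - \partial_{w} \mathcal{L}_{\mathrm{val}}
      \left( \nabla^{2}_{w} \mathcal{L}_{\mathrm{tr}} \right)^{-1}
      \nabla^{2}_{w, \alpha} \mathcal{L}_{\mathrm{tr}}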
arXiv Detail & Related papers (2021-06-21T00:44:11Z)
- Generalization Guarantees for Neural Architecture Search with Train-Validation Split [48.265305046655996]
This paper explores the statistical aspects of such problems with train-validation splits.
We show that refined properties of the validation loss such as risk and hyper-gradients are indicative of those of the true test loss.
We also highlight rigorous connections between NAS, multiple kernel learning, and low-rank matrix learning.
arXiv Detail & Related papers (2021-04-29T06:11:00Z)
- Effective, Efficient and Robust Neural Architecture Search [4.273005643715522]
Recent advances in adversarial attacks show the vulnerability of deep neural networks searched by Neural Architecture Search (NAS).
We propose an Effective, Efficient, and Robust Neural Architecture Search (E2RNAS) method to search a neural network architecture by taking the performance, robustness, and resource constraint into consideration.
Experiments on benchmark datasets show that the proposed E2RNAS method can find adversarially robust architectures with optimized model size and comparable classification accuracy.
arXiv Detail & Related papers (2020-11-19T13:46:23Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
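ISTA here refers to the iterative shrinkage-thresholding algorithm from sparse coding; the sketch below shows one generic ISTA step for $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \lambda\|x\|_1$, purely as background on that machinery (how ISTA-NAS couples such updates to architecture parameters is described in the paper).

    import numpy as np

    def ista_step(x, A, b, lam, step):
        # One iterative shrinkage-thresholding step for
        # min_x 0.5 * ||A x - b||^2 + lam * ||x||_1
        grad = A.T @ (A @ x - b)              # gradient of the smooth term
        z = x - step * grad                   # plain gradient step
        return np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    # Toy usage: sparse code for a random dictionary.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(50)
    for _ in range(200):
        x = ista_step(x, A, b, lam=0.1, step=step)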
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
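A minimal sketch of the pathwise-derivative idea using PyTorch's Dirichlet distribution follows; the number of candidate operations and the loss are placeholders, not DrNAS's actual training loop.

    import torch

    # One Dirichlet over 7 candidate operations on a single edge (illustrative).
    concentration = torch.ones(7, requires_grad=True)
    weights = torch.distributions.Dirichlet(concentration).rsample()  # pathwise sample

    # Stand-in for a supernet loss computed from the sampled mixing weights.
    loss = (weights * torch.arange(7, dtype=torch.float32)).sum()
    loss.backward()  # gradients reach `concentration` via the pathwise derivative
    print(concentration.grad)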
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- Geometry-Aware Gradient Algorithms for Neural Architecture Search [41.943045315986744]
We argue for the study of single-level empirical risk minimization to understand NAS with weight-sharing.
We present a geometry-aware framework that exploits the underlying structure of this optimization to return sparse architectural parameters.
We achieve state-of-the-art accuracy on the latest NAS benchmarks in computer vision.
arXiv Detail & Related papers (2020-04-16T17:46:39Z)