GOLD-NAS: Gradual, One-Level, Differentiable
- URL: http://arxiv.org/abs/2007.03331v1
- Date: Tue, 7 Jul 2020 10:37:49 GMT
- Title: GOLD-NAS: Gradual, One-Level, Differentiable
- Authors: Kaifeng Bi, Lingxi Xie, Xin Chen, Longhui Wei, Qi Tian
- Abstract summary: We propose a novel algorithm named Gradual One-Level Differentiable Neural Architecture Search (GOLD-NAS).
It introduces a variable resource constraint to one-level optimization so that weak operators are gradually pruned out of the super-network.
- Score: 100.12492801459105
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a large literature on neural architecture search, but most existing work relies on heuristic rules that largely constrain search flexibility. In this paper, we first relax these manually designed constraints and enlarge the search space to contain more than $10^{160}$ candidates. In the new space, most existing differentiable search methods can fail dramatically. We then propose a novel algorithm named Gradual One-Level Differentiable Neural Architecture Search (GOLD-NAS), which introduces a variable resource constraint to one-level optimization so that weak operators are gradually pruned out of the super-network. On standard image classification benchmarks, GOLD-NAS can find a series of Pareto-optimal architectures within a single search procedure. Most of the discovered architectures were never studied before, yet they achieve a favorable tradeoff between recognition accuracy and model complexity. We believe the new space and search algorithm can advance research on differentiable NAS.
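The mechanism described in the abstract is compact enough to sketch: network weights and architecture parameters are optimized jointly (one level) on the same loss, a resource-aware penalty with a growing coefficient is added, and operators whose architecture weight drops below a threshold are pruned. The following is only a minimal illustrative sketch, assuming PyTorch, a toy operator set with made-up FLOPs costs, and hypothetical hyper-parameters (the 0.05 pruning threshold and 1.02 growth factor are not from the paper):

```python
# Minimal sketch of GOLD-NAS-style one-level search with a gradually
# tightened resource constraint. Illustrative only: the op set, costs,
# data, and schedule are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    """One super-network edge: a weighted sum of candidate operators."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        # Rough relative costs of each candidate (assumed numbers).
        self.register_buffer("cost", torch.tensor([9.0, 25.0, 1.0]))
        # One architecture parameter per candidate; a sigmoid keeps each
        # weight in (0, 1) without softmax-style mutual exclusivity.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))
        self.alive = [True] * len(self.ops)

    def forward(self, x):
        w = torch.sigmoid(self.alpha)
        return sum(w[i] * op(x) for i, op in enumerate(self.ops) if self.alive[i])

    def resource_penalty(self):
        """Cost-weighted sum of the surviving operators' weights."""
        w = torch.sigmoid(self.alpha)
        mask = torch.tensor(self.alive, device=w.device)
        return (w * self.cost * mask).sum()

    def prune(self, threshold=0.05):
        """Gradually remove weak operators whose weight fell below threshold."""
        w = torch.sigmoid(self.alpha).detach()
        for i in range(len(self.ops)):
            if self.alive[i] and sum(self.alive) > 1 and w[i] < threshold:
                self.alive[i] = False

net = nn.Sequential(MixedOp(8), MixedOp(8))
# One-level optimization: weights and architecture parameters are
# updated jointly on the same training loss (toy regression data here).
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
lam = 1e-4                        # resource coefficient, grown over time
for step in range(200):
    x = torch.randn(4, 8, 16, 16)
    target = torch.randn(4, 8, 16, 16)
    loss = nn.functional.mse_loss(net(x), target)
    loss = loss + lam * sum(m.resource_penalty() for m in net)
    opt.zero_grad(); loss.backward(); opt.step()
    lam *= 1.02                   # gradually tighten the constraint
    if step % 20 == 19:
        for m in net:
            m.prune()
```

Because each operator weight is an independent sigmoid rather than one entry of a softmax, several operators can survive on the same edge; sweeping the penalty schedule is what lets a single search trace out the accuracy/complexity Pareto front mentioned in the abstract.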
Related papers
- Efficient NAS with FaDE on Hierarchical Spaces [0.6372911857214884]
We present FaDE which uses differentiable architecture search to obtain relative performance predictions on finite regions of a hierarchical NAS space.
FaDE is especially suited to deep hierarchical, multi-cell search spaces.
arXiv Detail & Related papers (2024-04-24T21:33:17Z)
- Construction of Hierarchical Neural Architecture Search Spaces based on Context-free Grammars [66.05096551112932]
We introduce a unifying search space design framework based on context-free grammars.
By enhancing and using the properties of these grammars, we effectively enable search over the complete architecture.
We show that our search strategy can be superior to existing Neural Architecture Search approaches.
arXiv Detail & Related papers (2022-11-03T14:23:00Z)
- Searching a High-Performance Feature Extractor for Text Recognition Network [92.12492627169108]
We design a domain-specific search space by exploring principles for having good feature extractors.
As the space is huge and complex in structure, no existing NAS algorithm can be applied.
We propose a two-stage algorithm to effectively search in the space.
arXiv Detail & Related papers (2022-09-27T03:49:04Z)
- Zero-Cost Proxies Meet Differentiable Architecture Search [20.957570100784988]
Differentiable neural architecture search (NAS) has attracted significant attention in recent years.
Despite its success, DARTS, the canonical differentiable NAS method, lacks robustness in certain cases.
We propose a novel operation selection paradigm in the context of differentiable NAS.
arXiv Detail & Related papers (2021-06-12T15:33:36Z)
- One-Shot Neural Ensemble Architecture Search by Diversity-Guided Search Space Shrinking [97.60915598958968]
We propose a one-shot neural ensemble architecture search (NEAS) solution that addresses two key challenges.
For the first challenge, we introduce a novel diversity-based metric to guide search space shrinking.
For the second challenge, we enable a new search dimension to learn layer sharing among different models for efficiency purposes.
arXiv Detail & Related papers (2021-04-01T16:29:49Z)
- Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search [84.4140192638394]
Most differentiable neural architecture search methods construct a super-net for search and derive a target-net as its sub-graph for evaluation.
In this paper, we introduce EnTranNAS, which is composed of Engine-cells and Transit-cells.
Our method also saves considerable memory and computation, which speeds up the search process.
arXiv Detail & Related papers (2021-01-27T12:16:47Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method achieves state-of-the-art performance on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
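Since ISTA-NAS formulates architecture search as sparse coding, it may help to recall the primitive its name references: the iterative shrinkage-thresholding algorithm (ISTA) for $\min_x \frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$. The sketch below is generic ISTA on toy NumPy data, not the paper's actual NAS formulation; $A$, $b$, and $\lambda$ here are hypothetical values chosen only for illustration.

```python
# Generic ISTA sketch for sparse coding: min_x 0.5*||A x - b||^2 + lam*||x||_1.
# This shows only the optimization primitive named in ISTA-NAS's title,
# not the paper's NAS formulation; A, b, and lam are toy values.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 60]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 1e-3)[0])  # recovers a sparse support
```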
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.