FEAR: A Simple Lightweight Method to Rank Architectures
- URL: http://arxiv.org/abs/2106.04010v1
- Date: Mon, 7 Jun 2021 23:38:21 GMT
- Title: FEAR: A Simple Lightweight Method to Rank Architectures
- Authors: Debadeepta Dey, Shital Shah, Sebastien Bubeck
- Abstract summary: We propose a simple but powerful method which we call FEAR, for ranking architectures in any search space.
FEAR can cut down the search time by approximately 2.4X without losing accuracy.
We additionally study recently proposed zero-cost measures for ranking and find that their ranking performance breaks down as training proceeds.
- Score: 14.017656480004955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fundamental problem in Neural Architecture Search (NAS) is to efficiently
find high-performing architectures from a given search space. We propose a
simple but powerful method, which we call FEAR, for ranking architectures in any
search space. FEAR leverages the viewpoint that neural networks are powerful
non-linear feature extractors. First, we train different architectures in the
search space to the same training or validation error. Then, we compare the
usefulness of the features extracted by each architecture. We do so with a
quick training phase that keeps most of the architecture frozen. This gives fast
estimates of relative performance. We validate FEAR on the NATS-Bench topology
search space on three different datasets against competing baselines and show
strong ranking correlation especially compared to recently proposed zero-cost
methods. FEAR particularly excels at ranking high-performance architectures in
the search space. When used in the inner loop of discrete search algorithms
like random search, FEAR can cut down the search time by approximately 2.4X
without losing accuracy. We additionally present an empirical study of recently
proposed zero-cost ranking measures and find that their ranking performance
breaks down as training proceeds, and that data-agnostic ranking scores which
ignore the dataset do not generalize across dissimilar datasets.
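The two-stage recipe in the abstract is concrete enough to sketch: first train each candidate to a common training-accuracy threshold, then freeze the feature extractor, quickly train only a small head, and rank candidates by the resulting accuracy. Below is a minimal sketch in PyTorch; the threshold value, the freezing boundary (everything except a head whose parameter names start with "fc"), and the helper names are illustrative assumptions, not the authors' reference implementation.

```python
# Hedged sketch of the FEAR ranking recipe described in the abstract.
# Thresholds, epoch budgets, and the frozen/trainable split are assumptions.
import torch
import torch.nn as nn

def train_until(model, loader, threshold, max_epochs=10, lr=0.1):
    """Stage 1: train the whole network until training accuracy reaches a
    common threshold, so all candidates start from comparable error."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        correct = total = 0
        for x, y in loader:
            opt.zero_grad()
            logits = model(x)
            loss_fn(logits, y).backward()
            opt.step()
            correct += (logits.argmax(1) == y).sum().item()
            total += y.numel()
        if correct / total >= threshold:
            break

def quick_train_frozen(model, loader, head_prefix="fc", epochs=1, lr=0.01):
    """Stage 2: freeze the feature extractor and quickly train only the head;
    the resulting accuracy estimates how useful the frozen features are."""
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith(head_prefix)
    head = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(head, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

@torch.no_grad()
def accuracy(model, loader):
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def fear_score(model, train_loader, val_loader, threshold=0.6):
    train_until(model, train_loader, threshold)
    quick_train_frozen(model, train_loader)
    return accuracy(model, val_loader)

# Candidates are ranked by fear_score (higher is better), e.g.:
# ranking = sorted(models, key=lambda m: fear_score(m, tr_dl, va_dl), reverse=True)
```

Ranking quality of a proxy like this is typically assessed by its rank correlation (e.g. Kendall's tau) with the accuracy of fully trained networks.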
Related papers
- Searching a High-Performance Feature Extractor for Text Recognition Network [92.12492627169108]
We design a domain-specific search space by exploring principles for having good feature extractors.
As the space is huge and complex in structure, no existing NAS algorithm can be applied.
We propose a two-stage algorithm to effectively search in the space.
arXiv Detail & Related papers (2022-09-27T03:49:04Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple-but-efficient regularization method, termed Beta-Decay, to regularize the DARTS-based NAS search process (see the sketch after this entry).
Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the search process and make the searched networks more transferable across different datasets.
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
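For the flavor of the Beta-Decay idea above: one common write-up regularizes the softmax-normalized architecture weights (the "beta" values) by penalizing a logsumexp of each edge's architecture parameters. The sketch below is an assumption-laden illustration of that style of penalty, not the paper's reference code.

```python
# Hedged sketch of a Beta-Decay-style penalty for DARTS architecture
# parameters. The logsumexp form and its weighting are assumptions based on
# common descriptions of beta-DARTS, not the authors' implementation.
import torch

def beta_decay_penalty(arch_params: list[torch.Tensor]) -> torch.Tensor:
    # arch_params: one tensor of unnormalized operation-mixing weights (alpha)
    # per edge. Penalizing logsumexp(alpha) discourages extreme values of
    # beta = softmax(alpha), keeping the relaxed architecture from collapsing
    # onto a single operation too early in the search.
    return sum(torch.logsumexp(a, dim=-1).sum() for a in arch_params)

# In a DARTS-style bilevel loop, the architecture step would minimize
# something like: val_loss + lam * beta_decay_penalty(model.arch_parameters()),
# where lam is a small (possibly scheduled) coefficient -- also an assumption.
```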
- Neural Architecture Ranker [19.21631623578852]
Architecture ranking has recently been advocated to design an efficient and effective performance predictor for Neural Architecture Search (NAS).
Inspired by the stratification of the search space, we propose a predictor, namely the Neural Architecture Ranker (NAR).
arXiv Detail & Related papers (2022-01-30T04:54:59Z)
- RankNAS: Efficient Neural Architecture Search by Pairwise Ranking [30.890612901949307]
We propose RankNAS, a performance ranking method based on pairwise ranking (see the sketch after this entry).
It enables efficient architecture search using much fewer training examples.
It can design high-performance architectures while being orders of magnitude faster than state-of-the-art NAS systems.
arXiv Detail & Related papers (2021-09-15T15:43:08Z)
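The pairwise idea in the RankNAS entry above is easy to illustrate: instead of regressing absolute accuracy, learn a scorer that merely orders pairs of architectures correctly, which needs far fewer labeled examples. A minimal sketch follows; the 16-dimensional architecture encoding, the MLP scorer, and the margin value are hypothetical choices, not the paper's setup.

```python
# Hedged sketch of pairwise architecture ranking in the spirit of RankNAS.
# Feature encoding, scorer shape, and margin are illustrative assumptions.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(scorer.parameters(), lr=1e-3)
rank_loss = nn.MarginRankingLoss(margin=0.1)

def train_pair(feat_a, feat_b, target):
    # feat_a, feat_b: (batch, 16) encodings of two architectures;
    # target: (batch, 1) with +1 where architecture a outperforms b, else -1.
    optimizer.zero_grad()
    loss = rank_loss(scorer(feat_a), scorer(feat_b), target)
    loss.backward()
    optimizer.step()
    return loss.item()

# At search time, candidates are sorted by scorer(features) -- cheap compared
# to training every candidate to convergence.
```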
- Making Differentiable Architecture Search less local [9.869449181400466]
Differentiable neural architecture search (DARTS) is a promising NAS approach that dramatically increases search efficiency.
It has been shown to suffer from performance collapse, where the search often leads to detrimental architectures.
We develop a more global optimisation scheme that is able to better explore the space without changing the DARTS problem formulation.
arXiv Detail & Related papers (2021-04-21T10:36:43Z)
- OPANAS: One-Shot Path Aggregation Network Architecture Search for Object Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z)
- Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search [84.4140192638394]
Most differentiable neural architecture search methods construct a super-net for search and derive a target-net as its sub-graph for evaluation.
In this paper, we introduce EnTranNAS that is composed of Engine-cells and Transit-cells.
Our method also saves considerable memory and computation, which speeds up the search process.
arXiv Detail & Related papers (2021-01-27T12:16:47Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem (see the ISTA sketch after this entry).
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
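As background for the ISTA-NAS entry above: ISTA is the classical iterative shrinkage-thresholding algorithm for sparse coding, alternating a gradient step on the least-squares term with a soft-threshold. The sketch below is the textbook solver, not the paper's NAS-specific formulation.

```python
# Hedged sketch of ISTA (iterative shrinkage-thresholding), the classical
# sparse-coding solver that ISTA-NAS builds on. Textbook form, not the
# paper's NAS-specific reformulation.
import numpy as np

def ista(A, b, lam=0.1, n_iters=200):
    # Solve min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Example: x = ista(np.random.randn(50, 200), np.random.randn(50), lam=0.1)
```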
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while achieving state-of-the-art accuracy of 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)