Neural Architecture Search using Progressive Evolution
- URL: http://arxiv.org/abs/2203.01559v1
- Date: Thu, 3 Mar 2022 08:15:14 GMT
- Title: Neural Architecture Search using Progressive Evolution
- Authors: Nilotpal Sinha, Kuan-Wen Chen
- Abstract summary: We propose a method called pEvoNAS for neural architecture search using evolutionary algorithms.
The whole neural architecture search space is progressively reduced to smaller search space regions with good architectures.
pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly fewer computational resources than previous EA-based methods.
- Score: 6.8129169853808795
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Vanilla neural architecture search using evolutionary algorithms (EA)
involves evaluating each architecture by training it from scratch, which is
extremely time-consuming. This can be reduced by using a supernet to estimate
the fitness of every architecture in the search space due to its weight sharing
nature. However, the estimated fitness is very noisy due to the co-adaptation
of the operations in the supernet. In this work, we propose a method called
pEvoNAS wherein the whole neural architecture search space is progressively
reduced to smaller search space regions with good architectures. This is
achieved by using a trained supernet to evaluate architectures while a genetic
algorithm searches for regions of the search space that contain good
architectures. Upon reaching the final reduced search space, the supernet is
then used to search for the best architecture in that space using evolution.
The search is further enhanced by weight inheritance, wherein the supernet for
the smaller search space inherits its weights from the previously trained
supernet for the larger search space. Experimentally, pEvoNAS gives better
results on CIFAR-10 and CIFAR-100 while using significantly fewer
computational resources than previous EA-based methods. The code for our paper
can be found at https://github.com/nightstorm0909/pEvoNAS
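To make the progressive-reduction idea concrete, the following is a minimal, dependency-free Python sketch of the loop described above: a genetic algorithm searches a toy cell-based space, the per-edge operation set is then narrowed to the operations favoured by the top architectures, and the search repeats on the smaller space. The supernet-based fitness estimation is abstracted behind a stub (estimate_fitness), weight inheritance is only noted in a comment, and all operation names and hyper-parameters are illustrative assumptions rather than the paper's actual implementation.
```python
# Sketch of progressive evolutionary search-space reduction (not the official code).
import random

# Toy cell-based search space: for each of N_EDGES edges, choose one operation.
N_EDGES = 6
FULL_OPS = ["skip", "sep_conv_3x3", "sep_conv_5x5",
            "dil_conv_3x3", "max_pool_3x3", "avg_pool_3x3"]


def estimate_fitness(arch, rng):
    # Stand-in for supernet-based evaluation (hypothetical scoring rule):
    # reward convolutions slightly and add noise to mimic weight-sharing noise.
    score = sum(1.0 if "conv" in op else 0.3 for op in arch)
    return score + rng.gauss(0.0, 0.2)


def evolve(space, pop_size=20, generations=10, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(ops) for ops in space] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda a: estimate_fitness(a, rng), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [rng.choice(pair) for pair in zip(p1, p2)]  # uniform crossover
            i = rng.randrange(N_EDGES)
            child[i] = rng.choice(space[i])                      # point mutation
            children.append(child)
        pop = parents + children
    return sorted(pop, key=lambda a: estimate_fitness(a, rng), reverse=True)


def reduce_space(space, top_archs, keep=2):
    # Keep, per edge, the operations that occur most often among the top architectures.
    new_space = []
    for e, ops in enumerate(space):
        counts = {op: sum(a[e] == op for a in top_archs) for op in ops}
        new_space.append(sorted(ops, key=counts.get, reverse=True)[:keep])
    return new_space


space = [list(FULL_OPS) for _ in range(N_EDGES)]
for stage in range(3):  # progressively shrink the search space
    top = evolve(space)[:5]
    space = reduce_space(space, top, keep=max(2, len(space[0]) // 2))
    # In pEvoNAS, the supernet for the reduced space would inherit its weights
    # from the previous supernet here; omitted in this dependency-free sketch.
best = evolve(space)[0]
print("best architecture:", best)
```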
Related papers
- Boosting Order-Preserving and Transferability for Neural Architecture Search: a Joint Architecture Refined Search and Fine-tuning Approach [57.175488207316654]
We propose a novel concept of Supernet Shifting, a refined search strategy combining architecture searching with supernet fine-tuning.
We show that Supernet Shifting can transfer a supernet to a new dataset.
Comprehensive experiments show that our method has better order-preserving ability and can find a dominating architecture.
arXiv Detail & Related papers (2024-03-18T00:13:41Z)
- Novelty Driven Evolutionary Neural Architecture Search [6.8129169853808795]
Evolutionary algorithms (EA) based neural architecture search (NAS) involves evaluating each architecture by training it from scratch, which is extremely time-consuming.
We propose a method called NEvoNAS wherein the NAS problem is posed as a multi-objective problem with 2 objectives: (i) maximize architecture novelty, (ii) maximize architecture fitness/accuracy.
NSGA-II is used to find the Pareto-optimal front for the NAS problem, and the best architecture on that front is returned as the searched architecture (a minimal Pareto-dominance sketch is given after this list).
arXiv Detail & Related papers (2022-04-01T03:32:55Z)
- D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most trending Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at cell-level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z)
- FEAR: A Simple Lightweight Method to Rank Architectures [14.017656480004955]
We propose a simple but powerful method which we call FEAR, for ranking architectures in any search space.
FEAR can cut down the search time by approximately 2.4X without losing accuracy.
We additionally conduct an empirical study of recently proposed zero-cost measures for ranking and find that their ranking performance breaks down as training proceeds.
arXiv Detail & Related papers (2021-06-07T23:38:21Z)
- Towards Improving the Consistency, Efficiency, and Flexibility of Differentiable Neural Architecture Search [84.4140192638394]
Most differentiable neural architecture search methods construct a super-net for search and derive a target-net as its sub-graph for evaluation.
In this paper, we introduce EnTranNAS that is composed of Engine-cells and Transit-cells.
Our method also spares much memory and computation cost, which speeds up the search process.
arXiv Detail & Related papers (2021-01-27T12:16:47Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- GOLD-NAS: Gradual, One-Level, Differentiable [100.12492801459105]
We propose a novel algorithm named Gradual One-Level Differentiable Neural Architecture Search (GOLD-NAS).
It introduces a variable resource constraint to one-level optimization so that the weak operators are gradually pruned out from the super-network.
arXiv Detail & Related papers (2020-07-07T10:37:49Z)
- Breaking the Curse of Space Explosion: Towards Efficient NAS with Curriculum Search [94.46818035655943]
We propose a curriculum search method that starts from a small search space and gradually incorporates the learned knowledge to guide the search in a large space.
With the proposed search strategy, our Curriculum Neural Architecture Search (CNAS) method significantly improves the search efficiency and finds better architectures than existing NAS methods.
arXiv Detail & Related papers (2020-07-07T02:29:06Z)
- ADWPNAS: Architecture-Driven Weight Prediction for Neural Architecture Search [6.458169480971417]
We propose an Architecture-Driven Weight Prediction (ADWP) approach for neural architecture search (NAS).
In our approach, we first design an architecture-intensive search space and then train a HyperNetwork by inputting encoded architecture parameters.
Results show that one search procedure can be completed in 4.0 GPU hours on CIFAR-10.
arXiv Detail & Related papers (2020-03-03T05:06:20Z)
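As referenced in the Novelty Driven Evolutionary Neural Architecture Search entry above, its two-objective formulation (maximize novelty, maximize fitness/accuracy) reduces to selecting architectures on the Pareto front. Below is a minimal, dependency-free Python sketch of Pareto-dominance filtering for two maximized objectives; the candidate names and scores are made-up illustrative values, and the full NSGA-II machinery (non-dominated sorting of the whole population, crowding distance) is intentionally omitted.
```python
# Sketch: extract the Pareto-optimal front for two maximized objectives.

def dominates(a, b):
    # a dominates b if it is no worse on every objective and better on at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]


candidates = {                 # architecture id -> (novelty, fitness/accuracy)
    "arch_a": (0.90, 0.72),
    "arch_b": (0.40, 0.94),
    "arch_c": (0.85, 0.80),
    "arch_d": (0.30, 0.60),    # dominated, so excluded from the front
}
front = pareto_front(list(candidates.values()))
best = max(front, key=lambda p: p[1])   # e.g. pick the front member with the best accuracy
print("Pareto front:", front, "selected:", best)
```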
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.