Novelty Driven Evolutionary Neural Architecture Search
- URL: http://arxiv.org/abs/2204.00188v1
- Date: Fri, 1 Apr 2022 03:32:55 GMT
- Title: Novelty Driven Evolutionary Neural Architecture Search
- Authors: Nilotpal Sinha, Kuan-Wen Chen
- Abstract summary: Evolutionary algorithms (EA) based neural architecture search (NAS) involves evaluating each architecture by training it from scratch, which is extremely time-consuming.
We propose a method called NEvoNAS wherein the NAS problem is posed as a multi-objective problem with 2 objectives: (i) maximize architecture novelty, (ii) maximize architecture fitness/accuracy.
NSGA-II is used for finding the Pareto optimal front for the NAS problem, and the best architecture in the Pareto front is returned as the searched architecture.
- Score: 6.8129169853808795
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Evolutionary algorithms (EA) based neural architecture search (NAS) involves
evaluating each architecture by training it from scratch, which is extremely
time-consuming. This can be reduced by using a supernet for estimating the
fitness of an architecture due to weight sharing among all architectures in the
search space. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet, which results in NAS methods getting trapped in a local optimum. In this paper, we propose a method called
NEvoNAS wherein the NAS problem is posed as a multi-objective problem with 2
objectives: (i) maximize architecture novelty, (ii) maximize architecture
fitness/accuracy. Novelty search is used for maintaining a diverse set of solutions at each generation, which helps avoid local optimum traps, while the architecture fitness is calculated using the supernet. NSGA-II is used for finding the Pareto optimal front for the NAS problem, and the best architecture in the Pareto front is returned as the searched architecture. Experimentally,
NEvoNAS gives better results on 2 different search spaces while using
significantly less computational resources as compared to previous EA-based
methods. The code for our paper can be found at
https://github.com/nightstorm0909/NEvoNAS.
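As a rough illustration of the two-objective idea described in the abstract, the sketch below (not the authors' code: the integer architecture encoding, the Hamming-distance novelty metric, and the estimate_fitness callable are illustrative assumptions) scores candidate architectures by novelty against an archive and by a proxy fitness, then keeps the non-dominated (Pareto) set each generation. NEvoNAS itself applies NSGA-II and estimates fitness with a weight-sharing supernet.

```python
# Illustrative sketch only, not the NEvoNAS implementation: architectures are
# encoded as integer vectors of operation choices, novelty is the mean Hamming
# distance to the k nearest neighbours in an archive of previously seen
# architectures, and estimate_fitness stands in for the supernet-based
# accuracy estimate used in the paper.
import random
from typing import Callable, Dict, List, Tuple

Arch = Tuple[int, ...]

def random_arch(num_edges: int = 14, num_ops: int = 8) -> Arch:
    return tuple(random.randrange(num_ops) for _ in range(num_edges))

def novelty(arch: Arch, archive: List[Arch], k: int = 10) -> float:
    """Mean Hamming distance to the k nearest archive members (higher = more novel)."""
    if not archive:
        return float(len(arch))
    dists = sorted(sum(a != b for a, b in zip(arch, other)) for other in archive)
    return sum(dists[:k]) / min(k, len(dists))

def pareto_front(pop: List[Arch], objs: List[Tuple[float, float]]) -> List[Arch]:
    """Keep architectures whose (novelty, fitness) pair is not dominated."""
    front = []
    for i, (n_i, f_i) in enumerate(objs):
        dominated = any(
            n_j >= n_i and f_j >= f_i and (n_j > n_i or f_j > f_i)
            for j, (n_j, f_j) in enumerate(objs) if j != i
        )
        if not dominated:
            front.append(pop[i])
    return front

def search(estimate_fitness: Callable[[Arch], float],
           generations: int = 20, pop_size: int = 40) -> Arch:
    archive: List[Arch] = []
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        objs = [(novelty(a, archive), estimate_fitness(a)) for a in pop]
        archive.extend(pop)
        # Keep the non-dominated set and refill with random candidates; the
        # paper uses NSGA-II selection, crossover and mutation at this step.
        survivors = pareto_front(pop, objs)
        pop = survivors + [random_arch() for _ in range(pop_size - len(survivors))]
    final_objs = [(novelty(a, archive), estimate_fitness(a)) for a in pop]
    front = pareto_front(pop, final_objs)
    fitness: Dict[Arch, float] = {a: f for a, (_, f) in zip(pop, final_objs)}
    return max(front, key=lambda a: fitness[a])  # best-fitness member of the Pareto front
```

With an actual supernet, estimate_fitness would evaluate each candidate with its inherited supernet weights on a validation batch; here it can be any callable mapping an architecture to a score.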
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- Neural Architecture Search using Progressive Evolution [6.8129169853808795]
We propose a method called pEvoNAS for neural architecture search using evolutionary algorithms.
The whole neural architecture search space is progressively reduced to smaller search space regions with good architectures.
pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly less computational resources as compared to previous EA-based methods.
arXiv Detail & Related papers (2022-03-03T08:15:14Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- L$^{2}$NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning [23.25155249879658]
Differentiable neural architecture search (NAS) has achieved remarkable results in deep neural network design.
We show that L$^{2}$NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as in the DARTS and Once-for-All search spaces.
arXiv Detail & Related papers (2021-09-25T19:26:30Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding [86.40042104698792]
We formulate neural architecture search as a sparse coding problem.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search.
Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
arXiv Detail & Related papers (2020-10-13T04:34:24Z)
- Smooth Variational Graph Embeddings for Efficient Neural Architecture Search [41.62970837629573]
We propose a two-sided variational graph autoencoder, which can smoothly encode and accurately reconstruct neural architectures from various search spaces.
We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search spaces.
arXiv Detail & Related papers (2020-10-09T17:05:41Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under the given constraints (a rough sketch of this sampling-and-pruning loop follows this entry).
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
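The following is a rough, hypothetical sketch of the sampling-and-pruning loop summarized in the DDPNAS entry above; the probability update, the evaluate callable, and all hyperparameters are illustrative placeholders rather than the estimator described in that paper.

```python
# Hypothetical sketch of dynamic distribution pruning, not the DDPNAS code:
# keep a categorical distribution over candidate operations for every edge,
# sample architectures from it, reward ops that appear in good samples, and
# periodically drop the lowest-probability op from each edge.
import random
from typing import Callable, List

def ddp_style_search(evaluate: Callable[[List[int]], float],
                     num_edges: int = 8, num_ops: int = 5,
                     epochs: int = 30, samples_per_epoch: int = 16,
                     prune_every: int = 5, lr: float = 0.2) -> List[int]:
    # alive[e] lists the remaining candidate ops on edge e; probs[e][o] is the
    # sampling probability of op o on edge e.
    alive = [list(range(num_ops)) for _ in range(num_edges)]
    probs = [[1.0 / num_ops] * num_ops for _ in range(num_edges)]

    for epoch in range(1, epochs + 1):
        scores = [[0.0] * num_ops for _ in range(num_edges)]
        counts = [[0] * num_ops for _ in range(num_edges)]
        for _ in range(samples_per_epoch):
            arch = [random.choices(alive[e],
                                   weights=[probs[e][o] for o in alive[e]])[0]
                    for e in range(num_edges)]
            acc = evaluate(arch)  # placeholder for a cheap accuracy estimate
            for e, o in enumerate(arch):
                scores[e][o] += acc
                counts[e][o] += 1
        # Move each edge's distribution toward ops that produced better samples.
        for e in range(num_edges):
            for o in alive[e]:
                if counts[e][o]:
                    avg = scores[e][o] / counts[e][o]
                    probs[e][o] = (1 - lr) * probs[e][o] + lr * avg
            total = sum(probs[e][o] for o in alive[e])
            for o in alive[e]:
                probs[e][o] /= total
        # Every few epochs, prune the weakest op from every edge.
        if epoch % prune_every == 0:
            for e in range(num_edges):
                if len(alive[e]) > 1:
                    alive[e].remove(min(alive[e], key=lambda o: probs[e][o]))
    # Final architecture: the most probable surviving op on each edge.
    return [max(alive[e], key=lambda o: probs[e][o]) for e in range(num_edges)]
```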
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.