Guided Evolutionary Neural Architecture Search With Efficient Performance Estimation
- URL: http://arxiv.org/abs/2208.06475v1
- Date: Fri, 22 Jul 2022 10:58:32 GMT
- Title: Guided Evolutionary Neural Architecture Search With Efficient Performance Estimation
- Authors: Vasco Lopes, Miguel Santos, Bruno Degardin, Luís A. Alexandre
- Abstract summary: This paper proposes GEA, a novel approach for guided Neural Architecture Search (NAS).
GEA guides the evolution by generating and evaluating several architectures at the initialisation stage of each generation.
Results show that GEA achieves state-of-the-art results on all data sets of NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks.
- Score: 4.637328271312329
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Neural Architecture Search (NAS) methods have been successfully applied to
image tasks with excellent results. However, NAS methods are often complex and
tend to converge to local minima as soon as generated architectures seem to
yield good results. This paper proposes GEA, a novel approach for guided NAS.
GEA guides the evolution by exploring the search space: at the initialisation
stage of each generation, several architectures are generated and evaluated
using a zero-proxy estimator, and only the highest-scoring architecture is
trained and kept for the next generation. Subsequently, GEA continuously
extracts knowledge about the search space, without added complexity, by
generating several offspring from an existing architecture at each
generation. Moreover, GEA forces exploitation of the most performant architectures
by descendant generation while simultaneously driving exploration through
parent mutation and favouring younger architectures to the detriment of older
ones. Experimental results demonstrate the effectiveness of the proposed
method, and extensive ablation studies evaluate the importance of different
parameters. Results show that GEA achieves state-of-the-art results on all data
sets of NAS-Bench-101, NAS-Bench-201 and TransNAS-Bench-101 benchmarks.
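The procedure described in the abstract can be sketched directly. Below is a minimal Python sketch of the guided-evolution loop; the helpers `random_arch`, `zero_proxy_score`, `train_and_evaluate`, and `mutate`, as well as the constants, are hypothetical placeholders rather than the paper's actual implementation, and the ageing rule is simplified.

```python
POPULATION_SIZE = 5       # hypothetical; the paper ablates population sizes
CANDIDATES_PER_GEN = 10   # hypothetical number of cheaply scored candidates


def zero_proxy_score(arch):
    """Score an untrained architecture.

    Placeholder for the zero-proxy estimator: any cheap metric computed
    at initialisation, without training, fits here.
    """
    raise NotImplementedError


def train_and_evaluate(arch):
    """Train the single selected architecture and return its fitness."""
    raise NotImplementedError


def mutate(arch):
    """Return a randomly mutated copy of an architecture."""
    raise NotImplementedError


def gea_search(random_arch, generations):
    # Initialisation: score several random architectures with the zero
    # proxy; only the highest-scoring one is trained and kept.
    candidates = [random_arch() for _ in range(CANDIDATES_PER_GEN)]
    best = max(candidates, key=zero_proxy_score)
    population = [(best, train_and_evaluate(best))]

    for _ in range(generations):
        # Exploitation: generate several offspring from the current best
        # architecture, score them cheaply, train only the top scorer.
        parent, _ = max(population, key=lambda p: p[1])
        offspring = [mutate(parent) for _ in range(CANDIDATES_PER_GEN)]
        child = max(offspring, key=zero_proxy_score)
        population.append((child, train_and_evaluate(child)))

        # Ageing: favour younger architectures by discarding the oldest
        # member once the population exceeds its size limit.
        if len(population) > POPULATION_SIZE:
            population.pop(0)

    # Return the best trained architecture found.
    return max(population, key=lambda p: p[1])[0]
```

The cost saving the abstract claims comes from the fact that only one architecture per generation is ever trained; all other candidates are filtered out by the cheap zero-proxy score.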
Related papers
- Knowledge-aware Evolutionary Graph Neural Architecture Search [49.13787973318586]
Graph neural architecture search (GNAS) can customize high-performance graph neural network architectures for specific graph tasks or datasets.
Existing GNAS methods begin searching for architectures from a zero-knowledge state, ignoring the prior knowledge that may improve the search efficiency.
This study proposes exploiting such prior knowledge to accelerate the multi-objective evolutionary search on a new graph dataset.
arXiv Detail & Related papers (2024-11-26T11:32:45Z) - GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z) - BaLeNAS: Differentiable Architecture Search via the Bayesian Learning
Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
arXiv Detail & Related papers (2021-11-25T18:13:42Z) - Guided Evolution for Neural Architecture Search [1.0499611180329804]
We propose G-EA, a novel approach for guided evolutionary Neural Architecture Search (NAS).
The rationale behind G-EA is to explore the search space by generating and evaluating several architectures in each generation.
G-EA forces exploitation of the most performant networks by descendant generation while at the same time forcing exploration by parent mutation.
arXiv Detail & Related papers (2021-10-28T15:43:20Z) - Generative Adversarial Neural Architecture Search [21.05611902967155]
We propose Generative Adversarial NAS (GA-NAS) with theoretically provable convergence guarantees.
We show that GA-NAS can be used to improve already optimized baselines found by other NAS methods.
arXiv Detail & Related papers (2021-05-19T18:54:44Z) - Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space; see the first sketch after this list.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z) - Off-Policy Reinforcement Learning for Efficient and Effective GAN
Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z) - DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers; see the second sketch after this list.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
arXiv Detail & Related papers (2020-06-18T08:23:02Z) - Learning Architectures from an Extended Search Space for Language
Modeling [37.79977691127229]
We present a general approach to learning both intra-cell and inter-cell architectures in neural architecture search (NAS).
For recurrent neural language modeling, it outperforms a strong baseline significantly on the PTB and WikiText data, with a new state-of-the-art on PTB.
The learned architectures show good transferability to other systems.
arXiv Detail & Related papers (2020-05-06T05:02:33Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)