GP-NAS-ensemble: a model for NAS Performance Prediction
- URL: http://arxiv.org/abs/2301.09231v1
- Date: Mon, 23 Jan 2023 00:17:52 GMT
- Title: GP-NAS-ensemble: a model for NAS Performance Prediction
- Authors: Kunlong Chen, Liu Yang, Yitian Chen, Kunjin Chen, Yidan Xu, Lujun Li
- Abstract summary: GP-NAS-ensemble is proposed to predict the performance of a neural network architecture from a small training dataset.
Our method ranks second on the performance prediction track of the CVPR 2022 second lightweight NAS challenge.
- Score: 6.785608131249699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the application of Neural Architecture Search (NAS), it is of great significance to estimate the performance of a given model architecture without training it, since fully evaluating an architecture can take a long time. In this paper, a novel NAS framework called GP-NAS-ensemble is proposed to predict the performance of a neural network architecture from a small training dataset. We make several improvements to the GP-NAS model so that it shares the advantages of ensemble learning methods. Our method ranks second on the performance prediction track of the CVPR 2022 second lightweight NAS challenge.
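The abstract gives no implementation details, but the core idea of ensembling Gaussian-process performance predictors can be illustrated loosely. The sketch below (a minimal illustration under assumed placeholders, not the authors' GP-NAS-ensemble) bags several GP regressors over bootstrap resamples of a small set of (architecture encoding, accuracy) pairs and averages their predictions. The data, encodings, and hyperparameters are all synthetic assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Placeholder data: 50 architectures encoded as 16-dim feature vectors,
# each paired with a (synthetic) measured validation accuracy.
X = rng.random((50, 16))
y = 0.7 + 0.2 * X.mean(axis=1) + rng.normal(0.0, 0.01, 50)

# Bagging-style ensemble: one GP regressor per bootstrap resample.
models = []
for seed in range(8):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap indices
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=1.0),
        alpha=1e-4, normalize_y=True, random_state=seed)
    gp.fit(X[idx], y[idx])
    models.append(gp)

def predict(x_new):
    """Average the ensemble members' predicted accuracies."""
    return np.stack([m.predict(x_new) for m in models]).mean(axis=0)

print(predict(rng.random((5, 16))))  # predictions for 5 unseen encodings
```

Averaging over resampled GPs reduces the predictor's variance, which matters most in exactly the regime the paper targets: very few trained architectures available as labels.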
Related papers
- FlatNAS: optimizing Flatness in Neural Architecture Search for Out-of-Distribution Robustness [3.724847012963521]
This study introduces a novel NAS solution called Flat Neural Architecture Search (FlatNAS).
It explores the interplay between a novel figure of merit based on robustness to weight perturbations and single-NN optimization with Sharpness-Aware Minimization (SAM).
The OOD robustness of the NAS-designed models is evaluated by focusing on robustness to input-data corruptions, using popular benchmark datasets from the literature.
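SAM itself is a published, well-documented procedure; the toy sketch below shows a single SAM step on a quadratic loss, not FlatNAS's actual search or training loop.

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)      # toy quadratic objective

def grad(w):
    return w                          # its exact gradient

def sam_step(w, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: perturb the weights toward
    the local worst case, then descend along the gradient taken there."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    return w - lr * grad(w + eps)                 # descend from w + eps

w = np.array([1.0, -2.0, 0.5])
for _ in range(100):
    w = sam_step(w)
print(loss(w))  # approaches 0 on this toy problem
```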
arXiv Detail & Related papers (2024-02-29T12:33:14Z)
- DiffusionNAG: Predictor-guided Neural Architecture Generation with Diffusion Models [56.584561770857306]
We propose a novel conditional Neural Architecture Generation (NAG) framework based on diffusion models, dubbed DiffusionNAG.
Specifically, we consider the neural architectures as directed graphs and propose a graph diffusion model for generating them.
We validate the effectiveness of DiffusionNAG through extensive experiments in two predictor-based NAS scenarios: Transferable NAS and Bayesian Optimization (BO)-based NAS.
When integrated into a BO-based algorithm, DiffusionNAG outperforms existing BO-based NAS approaches, particularly in the large MobileNetV3 search space on the ImageNet 1K dataset.
arXiv Detail & Related papers (2023-05-26T13:58:18Z)
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- PredNAS: A Universal and Sample Efficient Neural Architecture Search Framework [20.59478264338981]
We present a general and effective framework for Neural Architecture Search (NAS) named PredNAS.
We adopt a neural predictor as the performance predictor. Surprisingly, PredNAS can achieve state-of-the-art performance on NAS benchmarks with only a few training samples.
arXiv Detail & Related papers (2022-10-26T04:15:58Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
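As a hedged illustration of the relaxation described above (not the paper's actual update rule, which uses the Bayesian learning rule), architecture logits can be modeled as Gaussians and sampled with the reparameterization trick:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one edge in a search cell chooses among 4 candidate ops.
mu = np.zeros(4)          # means of the per-op architecture logits
log_sigma = np.zeros(4)   # log standard deviations of those logits

def sample_arch_weights():
    """Draw logits from N(mu, sigma^2) via the reparameterization
    trick, then normalize them into mixing weights with a softmax."""
    theta = mu + np.exp(log_sigma) * rng.standard_normal(4)
    e = np.exp(theta - theta.max())
    return e / e.sum()

# In a DARTS-style supernet these weights would mix the candidate ops'
# outputs; sampling them, rather than fixing them, encourages exploration.
print(sample_arch_weights())
```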
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a single complicated predictor that covers the whole architecture space to fitting a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method costs fewer samples to find the top-performance architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves the state-of-the-art ImageNet performance on the NASNet search space.
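A minimal sketch of that progressive scheme, under heavy assumptions: a synthetic 1000-architecture pool, a hypothetical evaluate() standing in for actual training, and a plain ridge regressor as the deliberately weak predictor.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

pool = rng.random((1000, 12))   # synthetic search space: 12-dim encodings

def evaluate(x):
    """Hypothetical stand-in for training an architecture and
    measuring its validation accuracy."""
    return float(x @ np.linspace(0.2, 1.0, 12) + rng.normal(0.0, 0.02))

# Seed the process with a few random evaluations.
idx = list(rng.choice(len(pool), 20, replace=False))
scores = [evaluate(pool[i]) for i in idx]

for _ in range(5):                                 # progressive rounds
    predictor = Ridge().fit(pool[idx], scores)     # fit a weak predictor
    preds = predictor.predict(pool)
    preds[idx] = -np.inf                           # mask evaluated archs
    for i in np.argsort(preds)[-10:]:              # step toward the
        idx.append(int(i))                         # predicted top region
        scores.append(evaluate(pool[i]))

print("best found:", idx[int(np.argmax(scores))], max(scores))
```

Each round's predictor only needs to rank well near the current frontier, which is why a sequence of weak predictors can beat one global one at the same sample budget.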
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Neural Architecture Performance Prediction Using Graph Neural Networks [17.224223176258334]
We propose a surrogate model for neural architecture performance prediction built upon Graph Neural Networks (GNNs).
We demonstrate the effectiveness of this surrogate model on neural architecture performance prediction for structurally unknown architectures.
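As a toy sketch only (random, untrained weights; not the paper's model), the following runs one GCN-style propagation step over a small architecture DAG and mean-pools the result into a scalar performance estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy architecture DAG: 5 op nodes with directed edges, plus one-hot
# op-type features drawn from 4 hypothetical operation types.
A = np.array([[0, 1, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=float)
H = np.eye(4)[rng.integers(0, 4, 5)]   # 5 nodes x 4 op types

# Standard GCN normalization: symmetrize, add self-loops, scale by degree.
A_hat = A + A.T + np.eye(5)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt

W1 = rng.normal(0.0, 0.1, (4, 8))      # untrained placeholder weights
w2 = rng.normal(0.0, 0.1, 8)

H1 = np.maximum(A_norm @ H @ W1, 0.0)  # one GCN layer with ReLU
pred = H1.mean(axis=0) @ w2            # mean-pool, then linear read-out
print(pred)  # in practice, trained to regress measured accuracy
```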
arXiv Detail & Related papers (2020-10-19T09:33:57Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- BRP-NAS: Prediction-based NAS using GCNs [21.765796576990137]
BRP-NAS is an efficient hardware-aware NAS enabled by an accurate performance predictor based on a graph convolutional network (GCN).
We show that our proposed method outperforms all prior methods on NAS-Bench-101 and NAS-Bench-201.
We also release LatBench -- a latency dataset of NAS-Bench-201 models running on a broad range of devices.
arXiv Detail & Related papers (2020-07-16T21:58:43Z)
- BNAS: An Efficient Neural Architecture Search Approach Using Broad Scalable Architecture [62.587982139871976]
We propose Broad Neural Architecture Search (BNAS), in which we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN).
BNAS delivers a search cost of 0.19 days, which is 2.37x less expensive than ENAS, the best-ranked reinforcement learning-based NAS approach.
arXiv Detail & Related papers (2020-01-18T15:07:55Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under the given constraints.
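A compact sketch of that sample-prune-update loop, under stated assumptions: independent per-layer categorical distributions, a hypothetical reward() in place of real training, and an illustrative update rule rather than the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

LAYERS, OPS = 4, 5
probs = np.full((LAYERS, OPS), 1.0 / OPS)     # per-layer op distribution
alive = np.ones((LAYERS, OPS), dtype=bool)    # ops not yet pruned

def reward(arch):
    """Hypothetical stand-in for training and scoring a sampled arch;
    here op 0 is secretly the best choice in every layer."""
    return float(np.sum(arch == 0) + rng.normal(0.0, 0.1))

for epoch in range(30):
    archs = [np.array([rng.choice(OPS, p=probs[l]) for l in range(LAYERS)])
             for _ in range(8)]                # sample from the distribution
    rewards = np.array([reward(a) for a in archs])

    # Illustrative update: boost ops used by above-average architectures.
    for a, r in zip(archs, rewards):
        if r >= rewards.mean():
            for l in range(LAYERS):
                probs[l, a[l]] += 0.05

    # Every few epochs, prune the least likely surviving op per layer.
    if epoch % 10 == 9:
        for l in range(LAYERS):
            if alive[l].sum() > 1:
                live = np.flatnonzero(alive[l])
                alive[l, live[np.argmin(probs[l, live])]] = False

    probs = np.where(alive, probs, 0.0)        # zero out pruned ops
    probs /= probs.sum(axis=1, keepdims=True)  # renormalize rows

print("selected op per layer:", probs.argmax(axis=1))
```

Pruning shrinks the search space as evidence accumulates, which is where the method's efficiency comes from: later samples concentrate on the surviving ops.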
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.