Learning Where To Look -- Generative NAS is Surprisingly Efficient
- URL: http://arxiv.org/abs/2203.08734v1
- Date: Wed, 16 Mar 2022 16:27:11 GMT
- Title: Learning Where To Look -- Generative NAS is Surprisingly Efficient
- Authors: Jovita Lukasik, Steffen Jung, Margret Keuper
- Abstract summary: We propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces.
This approach leads to very effective and efficient architecture search, while keeping the query amount low.
- Score: 11.83842808044211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The efficient, automated search for well-performing neural architectures (NAS) has drawn increasing attention in the recent past. The predominant research objective is to reduce the need for costly evaluations of neural architectures while efficiently exploring large search spaces. To this end, surrogate models embed architectures in a latent space and predict their performance, while generative models for neural architectures enable optimization-based search within the latent space the generator draws from. Both surrogate and generative models aim to facilitate query-efficient search in a well-structured latent space. In this paper, we further improve the trade-off between query efficiency and promising architecture generation by leveraging the advantages of both efficient surrogate models and generative design. To this end, we propose a generative model, paired with a surrogate predictor, that iteratively learns to generate samples from increasingly promising latent subspaces. This approach leads to very effective and efficient architecture search while keeping the number of queries low. In addition, our approach makes it straightforward to jointly optimize for multiple objectives such as accuracy and hardware latency. We show the benefit of this approach not only for optimizing architectures for highest classification accuracy but also under hardware constraints, and we outperform state-of-the-art methods on several NAS benchmarks for single and multiple objectives. We also achieve state-of-the-art performance on ImageNet.
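The method as summarized above follows an iterate-and-refit loop: generate candidate architectures from the current latent region, rank them with a cheap surrogate predictor, spend expensive ground-truth queries only on the most promising candidates, and refit both models so that generation concentrates on increasingly promising latent subspaces. The following is a minimal sketch of that loop, not the authors' implementation; the toy latent encoding, the linear surrogate, the Gaussian "generator", and all names in it are illustrative assumptions.

```python
# Minimal, illustrative sketch (NOT the authors' implementation) of the paper's
# high-level loop: a generator paired with a surrogate predictor, refit each
# round on the best samples found so far, so that new samples come from
# increasingly promising latent subspaces.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8            # toy latent/architecture encoding dimension (assumption)
QUERY_BUDGET = 64  # total number of expensive ground-truth evaluations


def evaluate_architecture(z):
    """Stand-in for a costly evaluation (e.g., training and validating a network)."""
    return -np.sum((z - 0.5) ** 2)  # toy objective with optimum at z = 0.5


def fit_surrogate(Z, y, lam=1e-2):
    """Ridge-regression surrogate mapping latent codes to predicted performance."""
    A = Z.T @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A, Z.T @ y)


def generate(center, scale, n):
    """Toy 'generator': sample latent codes around the current promising region."""
    return center + scale * rng.standard_normal((n, DIM))


# Bootstrap with a few random queries.
Z = rng.uniform(-1.0, 1.0, size=(8, DIM))
y = np.array([evaluate_architecture(z) for z in Z])

while len(y) < QUERY_BUDGET:
    w = fit_surrogate(Z, y)
    # Re-center generation on the best architectures found so far
    # (the "increasingly promising latent subspace").
    top = Z[np.argsort(y)[-4:]]
    candidates = generate(top.mean(axis=0), scale=0.3, n=256)
    # Rank candidates with the cheap surrogate; query ground truth only for the best one.
    best = candidates[np.argmax(candidates @ w)]
    Z = np.vstack([Z, best])
    y = np.append(y, evaluate_architecture(best))

print(f"best objective after {len(y)} queries: {y.max():.4f}")
```

In the paper's setting the generator is a learned neural model over architecture encodings and the evaluation step corresponds to training and validating a network on a NAS benchmark; folding a hardware-latency estimate into the scored objective would give the multi-objective variant mentioned in the abstract.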
Related papers
- POMONAG: Pareto-Optimal Many-Objective Neural Architecture Generator [4.09225917049674]
Transferable NAS has emerged, generalizing the search process from dataset-dependent to task-dependent.
This paper introduces POMONAG, extending DiffusionNAG via a many-objective diffusion process.
Results were validated on two search spaces -- NAS-Bench-201 and MobileNetV3 -- and evaluated across 15 image classification datasets.
arXiv Detail & Related papers (2024-09-30T16:05:29Z) - A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
However, NAS suffers from a key bottleneck: numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
arXiv Detail & Related papers (2024-07-22T12:46:22Z) - Learning Interpretable Models Through Multi-Objective Neural Architecture Search [0.9990687944474739]
We propose a framework to optimize for both task performance and "introspectability," a surrogate metric for aspects of interpretability.
We demonstrate that jointly optimizing for task error and introspectability leads to more disentangled and debuggable architectures that perform within error.
arXiv Detail & Related papers (2021-12-16T05:50:55Z) - iDARTS: Differentiable Architecture Search with Stochastic Implicit Gradients [75.41173109807735]
Differentiable ARchiTecture Search (DARTS) has recently become the mainstream of neural architecture search (NAS).
We tackle the hypergradient computation in DARTS based on the implicit function theorem; a standard form of this hypergradient is sketched after the related-papers list below.
We show that the architecture optimisation with the proposed method, named iDARTS, is expected to converge to a stationary point.
arXiv Detail & Related papers (2021-06-21T00:44:11Z) - Evolving Search Space for Neural Architecture Search [70.71153433676024]
We present a Neural Search-space Evolution (NSE) scheme that amplifies the results from the previous effort by maintaining an optimized search space subset.
We achieve a 77.3% top-1 retraining accuracy on ImageNet with 333M FLOPs, which is a state-of-the-art performance.
When a latency constraint is adopted, our method also outperforms the previous best-performing mobile models, with a 77.9% top-1 retraining accuracy.
arXiv Detail & Related papers (2020-11-22T01:11:19Z) - Smooth Variational Graph Embeddings for Efficient Neural Architecture Search [41.62970837629573]
We propose a two-sided variational graph autoencoder, which can smoothly encode and accurately reconstruct neural architectures from various search spaces.
We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search spaces.
arXiv Detail & Related papers (2020-10-09T17:05:41Z) - Off-Policy Reinforcement Learning for Efficient and Effective GAN Architecture Search [50.40004966087121]
We introduce a new reinforcement learning based neural architecture search (NAS) methodology for generative adversarial network (GAN) architecture search.
The key idea is to formulate the GAN architecture search problem as a Markov decision process (MDP) for smoother architecture sampling.
We exploit an off-policy GAN architecture search algorithm that makes efficient use of the samples generated by previous policies.
arXiv Detail & Related papers (2020-07-17T18:29:17Z) - Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels [17.945881805452288]
Current neural architecture search (NAS) strategies focus on finding a single, good architecture.
We propose a Bayesian optimisation approach for NAS that combines the Weisfeiler-Lehman graph kernel with a Gaussian process surrogate.
Our method affords interpretability by discovering useful network features and their corresponding impact on the network performance.
arXiv Detail & Related papers (2020-06-13T04:10:34Z) - A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z) - Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z) - Neural Architecture Generator Optimization [9.082931889304723]
We are the first to investigate casting NAS as the problem of finding the optimal network generator.
We propose a new, hierarchical and graph-based search space capable of representing an extremely large variety of network types.
arXiv Detail & Related papers (2020-04-03T06:38:07Z)
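For the iDARTS entry above, the hypergradient in question comes from the bilevel formulation of differentiable NAS. Below is a sketch of the standard implicit-function-theorem form of that gradient; the notation is illustrative rather than copied from the paper, and in practice the inverse Hessian is approximated (e.g., stochastically) rather than computed exactly.

```latex
% Standard bilevel NAS formulation and its implicit hypergradient (requires amsmath).
% Architecture parameters \alpha, network weights w; notation is illustrative.
\begin{align}
  &\min_{\alpha}\; \mathcal{L}_{\mathrm{val}}\big(w^{*}(\alpha), \alpha\big)
  \quad \text{s.t.} \quad
  w^{*}(\alpha) = \operatorname*{arg\,min}_{w}\; \mathcal{L}_{\mathrm{train}}(w, \alpha),\\[2pt]
  &\nabla_{\alpha} \mathcal{L}_{\mathrm{val}}
  = \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial \alpha}
  - \frac{\partial \mathcal{L}_{\mathrm{val}}}{\partial w}
    \left(\frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\, \partial w^{\top}}\right)^{-1}
    \frac{\partial^{2} \mathcal{L}_{\mathrm{train}}}{\partial w\, \partial \alpha}.
\end{align}
```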
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.