Operation Embeddings for Neural Architecture Search
- URL: http://arxiv.org/abs/2105.04885v1
- Date: Tue, 11 May 2021 09:17:10 GMT
- Title: Operation Embeddings for Neural Architecture Search
- Authors: Michail Chatzianastasis, George Dasoulas, Georgios Siolas, Michalis Vazirgiannis
- Abstract summary: We propose the replacement of fixed operator encoding with learnable representations in the optimization process.
Our method produces top-performing architectures that share similar operation and graph patterns.
- Score: 15.033712726016255
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Architecture Search (NAS) has recently gained increased attention, as
a class of approaches that automatically searches in an input space of network
architectures. A crucial part of the NAS pipeline is the encoding of the
architecture that consists of the applied computational blocks, namely the
operations and the links between them. Most of the existing approaches either
fail to capture the structural properties of the architectures or use a
hand-engineered vector to encode the operator information. In this paper, we
propose the replacement of fixed operator encoding with learnable
representations in the optimization process. This approach, which effectively
captures the relations of different operations, leads to smoother and more
accurate representations of the architectures and consequently to improved
performance of the end task. Our extensive evaluation on the ENAS benchmark
demonstrates the effectiveness of the proposed operation embeddings in
generating highly accurate models, achieving state-of-the-art performance.
Finally, our method produces top-performing architectures that share similar
operation and graph patterns, highlighting a strong correlation between an
architecture's structural properties and its performance.
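To make the core idea concrete, here is a minimal sketch (our own illustration, not the authors' implementation; the module names, dimensions, and the simple message-passing encoder are all assumptions): each operation id indexes a learnable embedding table that is trained jointly with the rest of the architecture encoder, in place of a fixed one-hot operator vector.

```python
import torch
import torch.nn as nn

class ArchitectureEncoder(nn.Module):
    """Encodes a NAS cell (a DAG of operations) into a fixed-size vector.

    Hypothetical sketch: each node's operation id indexes a *learnable*
    embedding table (the paper's operation embeddings), replacing the
    usual fixed one-hot operator encoding.
    """

    def __init__(self, num_op_types=8, op_dim=16, hidden_dim=64, rounds=3):
        super().__init__()
        self.op_embed = nn.Embedding(num_op_types, op_dim)  # learnable, not one-hot
        self.proj = nn.Linear(op_dim, hidden_dim)
        self.msg = nn.Linear(hidden_dim, hidden_dim)
        self.rounds = rounds

    def forward(self, op_ids, adj):
        # op_ids: (num_nodes,) integer operation type per node
        # adj:    (num_nodes, num_nodes) adjacency matrix of the cell DAG
        h = torch.relu(self.proj(self.op_embed(op_ids)))
        for _ in range(self.rounds):          # simple message passing
            h = torch.relu(h + adj @ self.msg(h))
        return h.mean(dim=0)                  # graph-level architecture embedding

# Toy usage: a 4-node cell with operations [conv3x3, conv5x5, maxpool, identity]
enc = ArchitectureEncoder()
ops = torch.tensor([0, 1, 2, 3])
adj = torch.tensor([[0, 1, 1, 0],
                    [0, 0, 0, 1],
                    [0, 0, 0, 1],
                    [0, 0, 0, 0]], dtype=torch.float)
print(enc(ops, adj).shape)  # torch.Size([64])
```

Because the embedding table receives gradients from the end task, operations that behave similarly can end up with nearby embeddings, which is what makes the learned representation smoother than a fixed encoding.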
Related papers
- AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation [48.82264764771652]
We introduce AsCAN, a hybrid architecture combining convolutional and transformer blocks.
AsCAN supports a variety of tasks: recognition, segmentation, and class-conditional image generation.
We then scale the same architecture to solve a large-scale text-to-image task and show state-of-the-art performance.
arXiv Detail & Related papers (2024-11-07T18:43:17Z)
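A toy sketch of the hybrid convolution-attention idea in the entry above (our own illustrative block, not the AsCAN implementation; channel counts and layout are assumptions):

```python
import torch
import torch.nn as nn

class HybridStage(nn.Module):
    """Illustrative conv-then-attention stage (not the AsCAN code)."""

    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(                      # local features
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                               # x: (B, C, H, W)
        x = self.conv(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)              # (B, H*W, C) tokens
        attended, _ = self.attn(seq, seq, seq)          # global context
        seq = self.norm(seq + attended)                 # residual + norm
        return seq.transpose(1, 2).reshape(b, c, h, w)

x = torch.randn(2, 64, 8, 8)
print(HybridStage()(x).shape)  # torch.Size([2, 64, 8, 8])
```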
- Simple and Efficient Architectures for Semantic Segmentation [50.1563637917129]
We show that a simple encoder-decoder architecture with a ResNet-like backbone and a small multi-scale head performs on par with or better than complex semantic segmentation architectures such as HRNet, FANet and DDRNet.
We present a family of such simple architectures for desktop as well as mobile targets, which match or exceed the performance of complex models on the Cityscapes dataset.
arXiv Detail & Related papers (2022-06-16T15:08:34Z)
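A toy version of the kind of architecture the entry above describes (our own sketch, not the paper's models; the backbone choice, head width, and class count are assumptions), assuming torchvision is available:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class SimpleSegNet(nn.Module):
    """Toy encoder-decoder: ResNet-18 backbone + small multi-scale head."""

    def __init__(self, num_classes=19):                 # 19 = Cityscapes classes
        super().__init__()
        r = resnet18(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.stages = nn.ModuleList([r.layer1, r.layer2, r.layer3, r.layer4])
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, 128, 1) for c in (64, 128, 256, 512))
        self.classify = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats, y = [], self.stem(x)
        for stage in self.stages:
            y = stage(y)
            feats.append(y)
        # fuse multi-scale features at 1/4 input resolution
        size = feats[0].shape[2:]
        fused = sum(F.interpolate(l(f), size=size, mode="bilinear",
                                  align_corners=False)
                    for l, f in zip(self.lateral, feats))
        return F.interpolate(self.classify(fused), size=(h, w),
                             mode="bilinear", align_corners=False)

print(SimpleSegNet()(torch.randn(1, 3, 128, 256)).shape)  # (1, 19, 128, 256)
```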
- Learning Interpretable Models Through Multi-Objective Neural Architecture Search [0.9990687944474739]
We propose a framework to optimize for both task performance and "introspectability," a surrogate metric for aspects of interpretability.
We demonstrate that jointly optimizing for task error and introspectability leads to more disentangled and debuggable architectures that perform within error.
arXiv Detail & Related papers (2021-12-16T05:50:55Z)
- Rethinking Architecture Selection in Differentiable NAS [74.61723678821049]
Differentiable neural architecture search (DARTS) is one of the most popular NAS methods, owing to its search efficiency and simplicity.
We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet.
We find that several failure modes of DARTS can be greatly alleviated with the proposed selection method.
arXiv Detail & Related papers (2021-08-10T00:53:39Z)
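The selection rule in the entry above can be sketched as follows (our paraphrase of the approach; the `val_acc_without` evaluation hook and the toy accuracy table are assumptions standing in for a trained supernet): each candidate operation on an edge is masked out in turn, and the operation whose removal hurts validation accuracy most is kept.

```python
from typing import Callable, Dict, List, Tuple

Edge = Tuple[int, int]

def perturbation_select(
    edges: List[Edge],
    candidate_ops: List[str],
    val_acc_without: Callable[[Edge, str], float],
    base_acc: float,
) -> Dict[Edge, str]:
    """For each edge, keep the op whose removal hurts accuracy most.

    `val_acc_without(edge, op)` is assumed to re-evaluate the trained
    supernet on validation data with `op` masked out on `edge`.
    """
    chosen = {}
    for edge in edges:
        influence = {op: base_acc - val_acc_without(edge, op)
                     for op in candidate_ops}
        chosen[edge] = max(influence, key=influence.get)
    return chosen

# Toy usage with a made-up accuracy table standing in for supernet evaluation
acc_table = {((0, 1), "conv3x3"): 0.81, ((0, 1), "skip"): 0.90,
             ((0, 1), "maxpool"): 0.88}
result = perturbation_select(
    edges=[(0, 1)],
    candidate_ops=["conv3x3", "skip", "maxpool"],
    val_acc_without=lambda e, o: acc_table[(e, o)],
    base_acc=0.91,
)
print(result)  # {(0, 1): 'conv3x3'} -- removing conv3x3 hurts most
```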
- Towards Accurate and Compact Architectures via Neural Architecture Transformer [95.4514639013144]
It is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost.
We have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem as a Markov Decision Process (MDP).
We propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization.
arXiv Detail & Related papers (2021-02-20T09:38:10Z)
- Neural Architecture Optimization with Graph VAE [21.126140965779534]
We propose an efficient NAS approach to optimize network architectures in a continuous space.
The framework jointly learns four components: the encoder, the performance predictor, the complexity predictor and the decoder.
arXiv Detail & Related papers (2020-06-18T07:05:48Z)
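The joint training of the four components named above can be sketched as one combined objective (our illustrative version, with flat vectors standing in for the paper's graph encoder; module names and loss weighting are assumptions):

```python
import torch
import torch.nn as nn

class GraphVAENAS(nn.Module):
    """Illustrative four-part model: encoder, decoder, two predictors."""

    def __init__(self, in_dim=16, latent=32):
        super().__init__()
        self.enc_mu = nn.Linear(in_dim, latent)
        self.enc_logvar = nn.Linear(in_dim, latent)
        self.decoder = nn.Linear(latent, in_dim)       # reconstructs encoding
        self.perf_head = nn.Linear(latent, 1)          # predicts accuracy
        self.complexity_head = nn.Linear(latent, 1)    # predicts e.g. FLOPs

    def forward(self, x):
        mu, logvar = self.enc_mu(x), self.enc_logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return (self.decoder(z), self.perf_head(z),
                self.complexity_head(z), mu, logvar)

def joint_loss(model, x, acc, flops, beta=1e-3):
    recon, perf, comp, mu, logvar = model(x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return (nn.functional.mse_loss(recon, x)                  # decoder
            + nn.functional.mse_loss(perf.squeeze(-1), acc)   # performance
            + nn.functional.mse_loss(comp.squeeze(-1), flops) # complexity
            + beta * kl)                                      # VAE regulariser

m = GraphVAENAS()
x = torch.randn(4, 16)                                 # 4 encoded architectures
print(joint_loss(m, x, torch.rand(4), torch.rand(4)).item())
```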
- Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels [17.945881805452288]
Current neural architecture search (NAS) strategies focus on finding a single good architecture.
We propose a Bayesian optimisation approach for NAS that combines the Weisfeiler-Lehman graph kernel with a Gaussian process surrogate.
Our method affords interpretability by discovering useful network features and their corresponding impact on the network performance.
arXiv Detail & Related papers (2020-06-13T04:10:34Z)
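A self-contained sketch of the two ingredients in the entry above (our simplified re-implementation, not the paper's code): Weisfeiler-Lehman subtree features yield a graph kernel between architectures, which then drives a Gaussian-process surrogate.

```python
import numpy as np
from collections import Counter

def wl_features(adj, labels, iters=2):
    """WL subtree features: histogram of iteratively refined node labels.

    Simplified illustration; labels hash differently across runs but
    consistently within one run, which is all the kernel needs.
    """
    labels = list(labels)
    hist = Counter(labels)
    for _ in range(iters):
        labels = [hash((labels[i], tuple(sorted(labels[j]
                       for j in np.flatnonzero(adj[i])))))
                  for i in range(len(labels))]
        hist.update(labels)
    return hist

def wl_kernel(g1, g2):
    h1, h2 = wl_features(*g1), wl_features(*g2)
    return sum(h1[k] * h2[k] for k in h1.keys() & h2.keys())

def gp_posterior(train, y, query, noise=1e-2):
    """GP surrogate mean/variance at `query`, using the WL kernel."""
    K = np.array([[wl_kernel(a, b) for b in train] for a in train], float)
    k = np.array([wl_kernel(query, b) for b in train], float)
    Kinv = np.linalg.inv(K + noise * np.eye(len(train)))
    mean = k @ Kinv @ y
    var = wl_kernel(query, query) - k @ Kinv @ k
    return mean, var

# Toy usage: two tiny "architectures" (DAG adjacency + op labels) and a query
g_a = (np.array([[0, 1], [0, 0]]), ["conv", "pool"])
g_b = (np.array([[0, 1], [0, 0]]), ["conv", "conv"])
y = np.array([0.90, 0.85])            # observed validation accuracies
print(gp_posterior([g_a, g_b], y, g_a))
```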
- Does Unsupervised Architecture Representation Learning Help Neural Architecture Search? [22.63641173256389]
Existing Neural Architecture Search (NAS) methods either encode neural architectures using discrete encodings that do not scale well, or adopt supervised learning-based methods that jointly learn architecture representations and optimize the search over such representations, which incurs search bias.
We observe that the structural properties of neural architectures are hard to preserve in the latent space if architecture representation learning and search are coupled, resulting in less effective search performance.
arXiv Detail & Related papers (2020-06-12T04:15:34Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- Stage-Wise Neural Architecture Search [65.03109178056937]
Modern convolutional networks such as ResNet and NASNet have achieved state-of-the-art results in many computer vision applications.
These networks consist of stages, which are sets of layers that operate on representations in the same resolution.
It has been demonstrated that increasing the number of layers in each stage improves the prediction ability of the network.
However, the resulting architecture becomes computationally expensive in terms of floating point operations, memory requirements and inference time.
arXiv Detail & Related papers (2020-04-23T14:16:39Z)