Homogeneous Architecture Augmentation for Neural Predictor
- URL: http://arxiv.org/abs/2107.13153v1
- Date: Wed, 28 Jul 2021 03:46:33 GMT
- Title: Homogeneous Architecture Augmentation for Neural Predictor
- Authors: Yuqiao Liu, Yehui Tang, Yanan Sun
- Abstract summary: Neural Architecture Search (NAS) can automatically design well-performing architectures of Deep Neural Networks (DNNs) for the task at hand.
One bottleneck of NAS is the computational cost, largely due to the expensive performance evaluation.
Neural predictors can estimate this performance without training the candidate DNNs, but despite their popularity they suffer a severe limitation: the shortage of annotated DNN architectures for effectively training them.
- Score: 13.35821898997164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) can automatically design well-performing
architectures of Deep Neural Networks (DNNs) for the task at hand. However,
one bottleneck of NAS is the prohibitive computational cost, largely due to
the expensive performance evaluation. Neural predictors can directly estimate
the performance of candidate DNNs without training them, and have therefore
drawn increasing attention from researchers. Despite their popularity, they
suffer a severe limitation: the shortage of annotated DNN architectures for
effectively training the neural predictors. In this paper, we propose
Homogeneous Architecture Augmentation for Neural Predictor (HAAP) to address
this issue. Specifically, a homogeneous architecture augmentation algorithm is
proposed in HAAP to generate sufficient training data by making use of a
homogeneous representation. Furthermore, a one-hot encoding strategy is
introduced into HAAP to make the representation of DNN architectures more
effective. Experiments have been conducted on both the NAS-Bench-101 and
NAS-Bench-201 datasets. The results demonstrate that the proposed HAAP
algorithm outperforms the compared state-of-the-art methods while using much
less training data. In addition, ablation studies on both benchmark datasets
show the generality of the homogeneous architecture augmentation.
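To make the encoding side concrete, below is a minimal sketch of a flat architecture encoding in the style of NAS-Bench-101 cells: adjacency bits concatenated with one-hot operation vectors. The operation vocabulary and function name are illustrative assumptions; this is not the exact HAAP implementation, and it does not reproduce the homogeneous augmentation step itself.

```python
import numpy as np

# Assumed operation vocabulary (NAS-Bench-101 cells use these three ops).
OPS = ["conv3x3-bn-relu", "conv1x1-bn-relu", "maxpool3x3"]

def one_hot_encode(ops, adjacency):
    """Encode a cell as flattened adjacency bits plus one-hot operation vectors."""
    op_vecs = np.zeros((len(ops), len(OPS)))
    for i, op in enumerate(ops):
        op_vecs[i, OPS.index(op)] = 1.0
    return np.concatenate([np.asarray(adjacency, dtype=float).ravel(), op_vecs.ravel()])

# Toy 3-node cell: node 0 -> node 1 -> node 2.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
ops = ["conv3x3-bn-relu", "maxpool3x3", "conv1x1-bn-relu"]
print(one_hot_encode(ops, adj).shape)  # (18,): 9 adjacency bits + 3 one-hot op vectors
```

A regressor can then be trained on such vectors paired with known accuracies; HAAP's contribution is to enlarge that annotated training set through the homogeneous augmentation described above.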
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck, i.e., numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism (a minimal pairwise-ranking sketch follows this list).
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- CAP: A Context-Aware Neural Predictor for NAS [4.8761456288582945]
We propose a context-aware neural predictor (CAP) which only needs a few annotated architectures for training.
Experimental results in different search spaces demonstrate the superior performance of CAP compared with state-of-the-art neural predictors.
arXiv Detail & Related papers (2024-06-04T07:37:47Z)
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel Graph Neural Network (GNN) predictor for Neural Architecture Search (NAS).
This predictor encodes neural architectures into vector representations by combining both the forward and reverse graph views.
The experimental results showcase a significant improvement in prediction accuracy, with a 3% to 16% increase in Kendall-tau correlation (see the Kendall-tau sketch after this list).
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experimental results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- Neural Architecture Search for Speech Emotion Recognition [72.1966266171951]
We propose to apply neural architecture search (NAS) techniques to automatically configure speech emotion recognition (SER) models.
We show that NAS can improve SER performance (54.89% to 56.28%) while maintaining model parameter sizes.
arXiv Detail & Related papers (2022-03-31T10:16:10Z)
- Self-Learning for Received Signal Strength Map Reconstruction with Neural Architecture Search [63.39818029362661]
We present a model based on Neural Architecture Search (NAS) and self-learning for received signal strength (RSS) map reconstruction.
The approach first finds an optimal NN architecture and simultaneously trains the deduced model on some ground-truth measurements of a given RSS map.
Experimental results show that the signal predictions of this second model outperform non-learning-based state-of-the-art techniques and NN models without architecture search.
arXiv Detail & Related papers (2021-05-17T12:19:22Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a single complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space (a sketch of this progressive loop follows this list).
Our method requires fewer samples to find the top-performing architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance in the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Neural Architecture Performance Prediction Using Graph Neural Networks [17.224223176258334]
We propose a surrogate model for neural architecture performance prediction built upon Graph Neural Networks (GNNs).
We demonstrate the effectiveness of this surrogate model on neural architecture performance prediction for structurally unknown architectures.
arXiv Detail & Related papers (2020-10-19T09:33:57Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically designed and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
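As referenced in the SMEM-NAS entry above, a pairwise comparison predictor can be trained to rank architectures without regressing absolute accuracy. This is a minimal sketch assuming random feature vectors and a scikit-learn classifier; it illustrates the generic pairwise-ranking idea, not the SMEM-NAS algorithm itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical annotated pool: 100 architectures with 18-d encodings and accuracies.
X = rng.random((100, 18))
acc = rng.random(100)

# Build pairwise training data: label is 1 if the first architecture is better.
i = rng.integers(0, 100, 500)
j = rng.integers(0, 100, 500)
pairs = np.hstack([X[i], X[j]])          # shape (500, 36)
labels = (acc[i] > acc[j]).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(pairs, labels)

# Rank two unseen candidates by the predicted win probability.
a, b = rng.random(18), rng.random(18)
p_win = clf.predict_proba(np.hstack([a, b]).reshape(1, -1))[0, 1]
print(f"P(a beats b) = {p_win:.2f}")
```

Ranking by pairwise wins is attractive for search because the evolutionary selection step only needs relative order, not calibrated accuracy estimates.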
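The Kendall-tau correlation cited in the FR-NAS entry (and widely used to evaluate neural predictors) measures how well a predictor's ranking of architectures agrees with the ground-truth ranking. A minimal illustration with scipy, using made-up scores:

```python
from scipy.stats import kendalltau

# Ground-truth test accuracies of five architectures (made-up numbers).
true_acc = [0.90, 0.92, 0.88, 0.95, 0.91]
# Scores assigned to the same architectures by a hypothetical predictor.
pred_score = [0.89, 0.91, 0.85, 0.96, 0.93]

tau, p_value = kendalltau(true_acc, pred_score)
print(f"Kendall-tau = {tau:.3f}")  # 1.0 means identical rankings, -1.0 fully reversed
```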
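The progressive weak-predictor scheme mentioned in the Weak NAS Predictors entry can be sketched as a simple loop: fit a cheap predictor on the architectures evaluated so far, then spend the next evaluation budget on the candidates it ranks highest. The search space, budget, and regressor below are illustrative stand-ins, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy search space: 1000 architectures with 18-d encodings and hidden accuracies.
X = rng.random((1000, 18))
true_acc = X @ rng.random(18) + 0.1 * rng.standard_normal(1000)  # synthetic signal

evaluated = list(rng.integers(0, 1000, 20))  # initial random sample
for _ in range(4):  # a few progressive rounds
    model = GradientBoostingRegressor().fit(X[evaluated], true_acc[evaluated])
    scores = model.predict(X)
    # Move the sampling budget toward the predicted high-performance sub-space.
    candidates = [i for i in np.argsort(-scores) if i not in evaluated][:10]
    evaluated.extend(candidates)

best = max(evaluated, key=lambda i: true_acc[i])
print(f"best found accuracy: {true_acc[best]:.3f} (space max: {true_acc.max():.3f})")
```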