Self-supervised Representation Learning for Evolutionary Neural
Architecture Search
- URL: http://arxiv.org/abs/2011.00186v1
- Date: Sat, 31 Oct 2020 04:57:16 GMT
- Title: Self-supervised Representation Learning for Evolutionary Neural
Architecture Search
- Authors: Chen Wei, Yiping Tang, Chuang Niu, Haihong Hu, Yue Wang and Jimin
Liang
- Abstract summary: Recently proposed neural architecture search (NAS) algorithms adopt neural predictors to accelerate the architecture search.
How to obtain a neural predictor with high prediction accuracy using a small amount of training data is a central problem for neural predictor-based NAS.
We devise two self-supervised learning methods to pre-train the architecture embedding part of neural predictors.
We achieve state-of-the-art performance on the NASBench-101 and NASBench-201 benchmarks when integrating the pre-trained neural predictors with an evolutionary NAS algorithm.
- Score: 9.038625856798227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently proposed neural architecture search (NAS) algorithms adopt neural
predictors to accelerate the architecture search. The capability of neural
predictors to accurately predict the performance metrics of neural architectures
is critical to NAS, and the acquisition of training datasets for neural
predictors is time-consuming. How to obtain a neural predictor with high
prediction accuracy using a small amount of training data is a central problem
for neural predictor-based NAS. Here, we first design a new architecture
encoding scheme that overcomes the drawbacks of existing vector-based
architecture encoding schemes to calculate the graph edit distance of neural
architectures. To enhance the predictive performance of neural predictors, we
devise two self-supervised learning methods from different perspectives to
pre-train the architecture embedding part of neural predictors to generate a
meaningful representation of neural architectures. The first one is to train a
carefully designed two-branch graph neural network model to predict the graph
edit distance of two input neural architectures. The second method is inspired
by the prevalent contrastive learning paradigm, and we present a new contrastive
learning algorithm that utilizes a central feature vector as a proxy to
contrast positive pairs against negative pairs. Experimental results illustrate
that the pre-trained neural predictors can achieve comparable or superior
performance compared with their supervised counterparts using several times fewer
training samples. We achieve state-of-the-art performance on the NASBench-101
and NASBench-201 benchmarks when integrating the pre-trained neural predictors
with an evolutionary NAS algorithm.
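The abstract does not spell out the encoding scheme, so purely as an illustration of the graph-edit-distance (GED) pretext target, the sketch below represents a NASBench-101-style cell as an adjacency matrix plus a list of node operations and computes GED with networkx's generic solver. The helper names and toy cells are assumptions for illustration, not the paper's encoding.

```python
import networkx as nx

def cell_to_graph(adjacency, ops):
    """Turn an adjacency matrix plus per-node operation labels into a DAG."""
    g = nx.DiGraph()
    for i, op in enumerate(ops):
        g.add_node(i, op=op)
    n = len(ops)
    for i in range(n):
        for j in range(n):
            if adjacency[i][j]:
                g.add_edge(i, j)
    return g

def architecture_ged(cell_a, cell_b):
    """Exact GED; two nodes match only if they carry the same operation."""
    g_a, g_b = cell_to_graph(*cell_a), cell_to_graph(*cell_b)
    return nx.graph_edit_distance(
        g_a, g_b, node_match=lambda u, v: u["op"] == v["op"])

# Two toy 4-node cells that differ in a single operation.
cell_1 = ([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]],
          ["input", "conv3x3", "conv1x1", "output"])
cell_2 = ([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]],
          ["input", "maxpool3x3", "conv1x1", "output"])
print(architecture_ged(cell_1, cell_2))  # 1.0: one node substitution
```

In the paper's first pretext task, a two-branch GNN embeds both input cells and regresses a distance of this kind, so architectures that are few edits apart end up close in embedding space.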
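The abstract likewise gives only a one-line description of the contrastive method, so the following PyTorch sketch is one plausible reading under stated assumptions: the central feature vector of each positive pair is taken to be the normalized mean of its two embeddings, and each embedding is pulled toward its own pair's center while being pushed away from the centers of the other pairs in the batch, InfoNCE-style. The loss form, names, and temperature are assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def central_contrastive_loss(emb_a, emb_b, temperature=0.1):
    """emb_a, emb_b: [batch, dim] embeddings of two views of the same
    architectures (e.g., from the two branches of the GNN encoder)."""
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)
    # Central feature vector of each positive pair acts as the proxy.
    centers = F.normalize((emb_a + emb_b) / 2, dim=1)   # [batch, dim]
    logits_a = emb_a @ centers.t() / temperature        # [batch, batch]
    logits_b = emb_b @ centers.t() / temperature
    # Positive logit sits on the diagonal (own center); the rest are negatives.
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return (F.cross_entropy(logits_a, targets) +
            F.cross_entropy(logits_b, targets)) / 2

# Usage with random tensors standing in for GNN outputs:
loss = central_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```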
Related papers
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel graph neural network (GNN) predictor for neural architecture search (NAS).
This predictor renders neural architectures into vector representations by combining both the conventional and inverse graph views.
The experimental results showcase a significant improvement in prediction accuracy, with a 3% to 16% increase in Kendall tau correlation.
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architecture.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction [37.357949900603295]
We propose a neural architecture representation model that can be used to estimate attributes holistically.
Experiment results show that our proposed framework can be used to predict the latency and accuracy attributes of both cell architectures and whole deep neural networks.
arXiv Detail & Related papers (2022-11-15T10:15:21Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- NPENAS: Neural Predictor Guided Evolution for Neural Architecture Search [9.038625856798227]
We propose a neural predictor guided evolutionary algorithm to enhance the exploration ability of evolutionary algorithms (EAs) for neural architecture search (NAS); a generic predictor-guided loop is sketched after this list.
NPENAS-BO and NPENAS-NP outperform most existing NAS algorithms.
arXiv Detail & Related papers (2020-03-28T17:56:31Z)
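Neither the main paper's abstract nor the NPENAS summary spells out how the predictor and the evolutionary loop interact, so the following is a generic, hedged sketch of the usual pattern: mutate the current best architectures, rank all children cheaply with the predictor, and spend the expensive true evaluations only on the top-ranked few. Every name here (`mutate`, `predictor`, `evaluate_accuracy`, the toy search space) is a hypothetical stand-in, not either paper's algorithm.

```python
import random

def evolutionary_nas(population, mutate, predictor, evaluate_accuracy,
                     generations=10, children_per_parent=8, eval_budget=5):
    """Generic predictor-guided evolutionary search (schematic)."""
    # True accuracies of everything actually evaluated so far.
    history = {arch: evaluate_accuracy(arch) for arch in population}
    for _ in range(generations):
        # Mutate the current best architectures to get candidate children.
        parents = sorted(history, key=history.get, reverse=True)[:eval_budget]
        candidates = [mutate(p) for p in parents
                      for _ in range(children_per_parent)]
        # Cheap step: rank all candidates with the neural predictor ...
        ranked = sorted(candidates, key=predictor, reverse=True)
        # ... expensive step: truly evaluate only the top-ranked few.
        for arch in ranked[:eval_budget]:
            history[arch] = evaluate_accuracy(arch)
    return max(history, key=history.get)

# Toy usage: an "architecture" is a tuple of ops; fitness counts conv3x3 ops,
# and the "predictor" is just that fitness plus noise.
OPS = ["conv3x3", "conv1x1", "maxpool3x3"]
def toy_mutate(arch):
    i = random.randrange(len(arch))
    return arch[:i] + (random.choice(OPS),) + arch[i + 1:]
toy_eval = lambda arch: arch.count("conv3x3")
toy_pred = lambda arch: arch.count("conv3x3") + random.gauss(0.0, 0.1)
seed = [tuple(random.choice(OPS) for _ in range(5)) for _ in range(10)]
print(evolutionary_nas(seed, toy_mutate, toy_pred, toy_eval))
```

The better the predictor ranks candidates, the fewer true evaluations the loop needs, which is why the pre-training described above matters.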