Optimizing Neural Architecture Search using Limited GPU Time in a
Dynamic Search Space: A Gene Expression Programming Approach
- URL: http://arxiv.org/abs/2005.07669v1
- Date: Fri, 15 May 2020 17:32:30 GMT
- Authors: Jeovane Honorio Alves, Lucas Ferrari de Oliveira
- Abstract summary: We propose an evolutionary neural architecture search approach for the efficient discovery of convolutional models.
With its efficient search environment and phenotype representation, Gene Expression Programming is adapted to generate the networks' cells.
Our proposal achieved results comparable to state-of-the-art manually designed convolutional networks and NAS-generated ones, even beating similarly constrained evolutionary NAS works.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient identification of people and objects, segmentation of
regions of interest, and extraction of relevant data from images, text,
audio and video have advanced considerably in recent years, with deep
learning methods, aided by recent improvements in computational resources,
contributing greatly to this progress. Despite this outstanding potential,
developing efficient architectures and modules requires expert knowledge
and a large investment of resource time. In this paper, we propose an
evolutionary neural architecture search approach for the efficient
discovery of convolutional models in a dynamic search space, within only
24 GPU hours. With its efficient search environment and phenotype
representation, Gene Expression Programming is adapted to generate the
networks' cells. Despite the limited GPU time and broad search space, our
proposal achieved results comparable to state-of-the-art manually designed
convolutional networks and NAS-generated ones, even beating similarly
constrained evolutionary NAS works. The best cells from different runs
achieved stable results, with a mean error of 2.82% on the CIFAR-10 dataset
(where the best model achieved an error of 2.67%) and 18.83% on CIFAR-100
(best model at 18.16%). For ImageNet in the mobile setting, our best model
achieved top-1 and top-5 errors of 29.51% and 10.37%, respectively.
Although evolutionary NAS works have been reported to require a
considerable amount of GPU time for architecture search, our approach
obtained promising results in little time, encouraging further experiments
in evolutionary NAS to improve search and network representation.
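To make the cell-generation idea concrete, below is a minimal, hypothetical sketch of Karva-style Gene Expression Programming decoding, where a fixed-length linear genome is read breadth-first into an expression tree whose functions combine feature maps and whose terminals are candidate layers. The symbol set, head/tail lengths and operation names are illustrative assumptions, not the paper's exact encoding.

```python
# Hypothetical sketch of GEP (Karva) decoding for a convolutional cell.
# The operation vocabulary below is an assumption, not the paper's exact set.
from collections import deque

FUNCTIONS = {"+": 2}  # binary combiner, e.g. element-wise addition of feature maps
TERMINALS = ["conv3x3", "conv5x5", "sep3x3", "maxpool", "identity"]

def decode_karva(genome):
    """Breadth-first decoding of a linear genome into an expression tree.

    Nodes are [symbol, children]; terminals have no children. Unused tail
    symbols are simply ignored, as is normal in GEP.
    """
    symbols = deque(genome)
    root = [symbols.popleft(), []]
    frontier = deque([root])
    while frontier and symbols:
        node = frontier.popleft()
        for _ in range(FUNCTIONS.get(node[0], 0)):
            child = [symbols.popleft(), []]
            node[1].append(child)
            frontier.append(child)
    return root

def render(node):
    symbol, children = node
    if not children:
        return symbol
    return "(" + f" {symbol} ".join(render(c) for c in children) + ")"

# Head of length 3, tail of length head*(max_arity-1)+1 = 4 (all terminals).
genome = ["+", "+", "conv3x3", "sep3x3", "maxpool", "conv5x5", "identity"]
print(render(decode_karva(genome)))  # ((sep3x3 + maxpool) + conv3x3)
```

Because mutating or recombining the linear genome always yields another decodable genome, the search environment stays cheap to manipulate, which is what the phenotype representation above is meant to illustrate.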
Related papers
- A Pairwise Comparison Relation-assisted Multi-objective Evolutionary Neural Architecture Search Method with Multi-population Mechanism [58.855741970337675]
Neural architecture search (NAS) enables researchers to automatically explore vast search spaces and find efficient neural networks.
NAS suffers from a key bottleneck, i.e., numerous architectures need to be evaluated during the search process.
We propose SMEM-NAS, a pairwise comparison relation-assisted multi-objective evolutionary algorithm based on a multi-population mechanism.
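A rough sketch of the pairwise-comparison idea: instead of regressing each architecture's absolute accuracy, a surrogate learns which of two encodings wins, and candidates are ranked by predicted tournament wins so that only promising ones are fully evaluated. The feature encoding and helpers below are assumptions, not the paper's implementation.

```python
# Schematic sketch of a pairwise-comparison surrogate for NAS (assumed design).

def pairwise_dataset(evaluated):
    """Build (features, label) pairs from architectures with measured accuracy.

    evaluated: list of (feature_vector, accuracy) tuples; the label says
    whether the first architecture of the pair beats the second.
    """
    data = []
    for xa, ya in evaluated:
        for xb, yb in evaluated:
            if ya != yb:
                data.append((xa + xb, int(ya > yb)))  # concatenated features
    return data

def rank_by_tournament(candidates, predict_win):
    """Order candidates by how many pairwise duels the surrogate predicts they win."""
    scores = [sum(predict_win(a + b) for b in candidates if b is not a)
              for a in candidates]
    order = sorted(range(len(candidates)), key=lambda i: scores[i], reverse=True)
    return [candidates[i] for i in order]
```

Only the top-ranked candidates would then be trained and measured, which is one way to relieve the evaluation bottleneck mentioned above.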
arXiv Detail & Related papers (2024-07-22T12:46:22Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- $\alpha$NAS: Neural Architecture Search using Property Guided Synthesis [1.2746672439030722]
We develop techniques that enable efficient neural architecture search (NAS) in a significantly larger design space.
Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance.
We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties.
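A schematic sketch of property-guided mutation under stated assumptions: a mutation is kept only when the mutated program still satisfies an abstract property of its parent (here, just the output shape); the `infer_shape` method and the mutation operator are hypothetical.

```python
# Hypothetical sketch: accept a mutation only if it preserves an abstract
# program property (the output shape) computed by abstract interpretation.

def output_shape(program, input_shape):
    """Propagate shapes through the program's ops (each op is assumed to
    expose an infer_shape method)."""
    shape = input_shape
    for op in program:
        shape = op.infer_shape(shape)
    return shape

def guided_mutate(program, mutate, input_shape, max_tries=20):
    """Resample mutations until the parent's shape property is preserved."""
    target = output_shape(program, input_shape)
    for _ in range(max_tries):
        child = mutate(program)
        if output_shape(child, input_shape) == target:  # property preserved
            return child
    return program  # fall back to the parent if no valid mutation is found
```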
arXiv Detail & Related papers (2022-05-08T21:48:03Z)
- Guided Evolution for Neural Architecture Search [1.0499611180329804]
We propose a novel approach for guided evolutionary Neural Architecture Search (NAS).
The rationale behind G-EA is to explore the search space by generating and evaluating several architectures in each generation.
G-EA forces exploitation of the most performant networks by descendant generation while at the same time forcing exploration by parent mutation.
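A condensed, hypothetical sketch of that generate-evaluate-descend loop is below; `random_arch`, `mutate` and `score` (e.g., a cheap proxy evaluation) are assumed helpers, and the population sizes are arbitrary.

```python
# Hypothetical sketch of a guided-evolution loop: exploit elites through
# descendants, explore the rest through parent mutation.

def guided_evolution(random_arch, mutate, score, generations=10, pool=20, k=4):
    population = [random_arch() for _ in range(pool)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        elites = population[:k]
        # Exploitation: descendants of the most performant networks.
        children = [mutate(p) for p in elites for _ in range(pool // k)]
        # Exploration: mutated versions of the remaining parents.
        explorers = [mutate(p) for p in population[k:]]
        # Survivor selection keeps the population size fixed.
        population = sorted(elites + children + explorers,
                            key=score, reverse=True)[:pool]
    return max(population, key=score)
```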
arXiv Detail & Related papers (2021-10-28T15:43:20Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With a carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- EEEA-Net: An Early Exit Evolutionary Neural Architecture Search [6.569256728493014]
We search for Convolutional Neural Network (CNN) architectures suitable for an on-device processor with limited computing resources.
A new algorithm, entitled Early Exit Population Initialisation (EE-PI), is developed for the Evolutionary Algorithm (EA).
EEEA-Net achieved the lowest error rate among state-of-the-art NAS models, with 2.46% for CIFAR-10, 15.02% for CIFAR-100, and 23.8% for the ImageNet dataset.
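A hedged sketch of the early-exit initialisation idea: candidate architectures are resampled until they fit a parameter budget, so the whole starting population is cheap. `random_arch`, `count_params` and the budget values are assumptions.

```python
# Hypothetical sketch of Early Exit Population Initialisation (EE-PI):
# only architectures under a parameter budget enter the initial population.

def ee_pi(random_arch, count_params, pop_size=20, max_params=4e6, max_tries=100):
    population = []
    while len(population) < pop_size:
        for _ in range(max_tries):
            arch = random_arch()
            if count_params(arch) <= max_params:  # early exit: budget satisfied
                population.append(arch)
                break
        else:
            raise RuntimeError("could not satisfy the parameter budget")
    return population
```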
arXiv Detail & Related papers (2021-08-13T10:23:19Z)
- DrNAS: Dirichlet Neural Architecture Search [88.56953713817545]
We treat the continuously relaxed architecture mixing weights as random variables, modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
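The sampling mechanism can be sketched in a few lines of PyTorch: the mixing weights over one edge's candidate operations are drawn with `rsample()`, whose pathwise derivative lets gradients reach the learned concentration parameters. The shapes, the softplus link and the stand-in op outputs are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of Dirichlet-relaxed mixing weights with pathwise gradients.
import torch

num_ops = 5                                      # candidate operations on one edge
beta = torch.ones(num_ops, requires_grad=True)   # learnable concentration params

# softplus keeps the Dirichlet concentrations strictly positive
dist = torch.distributions.Dirichlet(torch.nn.functional.softplus(beta))
weights = dist.rsample()                         # pathwise (reparameterised) sample

op_outputs = torch.randn(num_ops, 8)             # stand-in for each op's feature map
mixed = (weights.unsqueeze(1) * op_outputs).sum(dim=0)  # convex combination

mixed.sum().backward()                           # gradients flow back into beta
print(beta.grad.shape)                           # torch.Size([5])
```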
arXiv Detail & Related papers (2020-06-18T08:23:02Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under the given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
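A schematic sketch of the dynamic distribution pruning loop, under stated assumptions: one categorical distribution per decision is updated from the scores of sampled architectures and periodically loses its least probable candidate. The `score` function, update rule and constants are hypothetical.

```python
# Hypothetical sketch of dynamic distribution pruning over categorical choices.
import random

def ddp_search(choices_per_edge, score, epochs=30, samples=8, prune_every=5):
    # probs[e][c]: probability of candidate operation c on edge e
    probs = [{c: 1.0 / len(cands) for c in cands} for cands in choices_per_edge]
    for epoch in range(1, epochs + 1):
        for _ in range(samples):
            arch = [random.choices(list(p), weights=list(p.values()))[0]
                    for p in probs]
            s = score(arch)                      # assumed to return, e.g., accuracy
            for p, c in zip(probs, arch):
                p[c] += 0.1 * s                  # reward the sampled choices
        for p in probs:                          # renormalise each distribution
            total = sum(p.values())
            for c in p:
                p[c] /= total
        if epoch % prune_every == 0:             # dynamic pruning step
            for p in probs:
                if len(p) > 1:
                    p.pop(min(p, key=p.get))     # drop least probable candidate
    return [max(p, key=p.get) for p in probs]    # most probable architecture
```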