$\alpha$NAS: Neural Architecture Search using Property Guided Synthesis
- URL: http://arxiv.org/abs/2205.03960v1
- Date: Sun, 8 May 2022 21:48:03 GMT
- Title: $\alpha$NAS: Neural Architecture Search using Property Guided Synthesis
- Authors: Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy
- Abstract summary: We develop techniques that enable efficient neural architecture search (NAS) in a significantly larger design space.
Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance.
We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties.
- Score: 1.2746672439030722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few years, neural architecture search (NAS) has become an
increasingly important tool within the deep learning community. Despite the
many recent successes of NAS, current approaches still fall far short of the
dream of automating an entire neural network architecture design from scratch.
Most existing approaches require highly structured design spaces formulated
manually by domain experts. In this work, we develop techniques that enable
efficient NAS in a significantly larger design space. To accomplish this, we
propose to perform NAS in an abstract search space of program properties. Our
key insights are as follows: (1) the abstract search space is significantly
smaller than the original search space, and (2) architectures with similar
program properties also have similar performance; thus, we can search more
efficiently in the abstract search space. To enable this approach, we also
propose an efficient synthesis procedure, which accepts a set of promising
program properties, and returns a satisfying neural architecture. We implement
our approach, $\alpha$NAS, within an evolutionary framework, where the
mutations are guided by the program properties. Starting with a ResNet-34
model, $\alpha$NAS produces a model with slightly improved accuracy on CIFAR-10
but 96% fewer parameters. On ImageNet, $\alpha$NAS is able to improve over
Vision Transformer (30% fewer FLOPS and parameters), ResNet-50 (23% fewer
FLOPS, 14% fewer parameters), and EfficientNet (7% fewer FLOPS and parameters)
without any degradation in accuracy.
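The property-guided evolutionary loop described in the abstract can be sketched in a few lines. This is a toy illustration under heavy assumptions: an "architecture" is just a list of layer widths, the abstract "program properties" are simply (depth, total width), and the fitness function is a stand-in; none of these names or choices come from the paper.

```python
import random

def properties(arch):
    # Abstract a concrete architecture into its "program properties".
    return (len(arch), sum(arch))

def mutate_properties(props, rng):
    # Perturb in the abstract property space rather than editing layers
    # directly; keep width >= depth so a satisfying architecture exists
    # (every layer needs width >= 1).
    depth, width = props
    depth = max(1, depth + rng.choice([-1, 0, 1]))
    width = max(depth, width + rng.choice([-8, 0, 8]))
    return depth, width

def synthesize(props, rng):
    # Return any concrete architecture satisfying the requested properties:
    # split `width` units into `depth` positive segments.
    depth, width = props
    cuts = sorted(rng.sample(range(1, width), depth - 1))
    bounds = [0] + cuts + [width]
    return [bounds[i + 1] - bounds[i] for i in range(depth)]

def fitness(arch):
    # Toy objective: prefer deep models with fewer total units.
    return len(arch) - 0.01 * sum(arch)

def alpha_nas(seed_arch, generations=50, rng=None):
    # Evolutionary search: mutate properties, synthesize a satisfying
    # child, and keep it only if fitness improves.
    rng = rng or random.Random(0)
    best = seed_arch
    for _ in range(generations):
        child = synthesize(mutate_properties(properties(best), rng), rng)
        if fitness(child) > fitness(best):
            best = child
    return best
```

The key structural point mirrored here is that mutation operates on properties and a separate synthesis step recovers a concrete architecture, so the search space the loop explores is the (much smaller) property space.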
Related papers
- Delta-NAS: Difference of Architecture Encoding for Predictor-based Evolutionary Neural Architecture Search [5.1331676121360985]
We craft an algorithm with the capability to perform fine-grain NAS at a low cost.
We propose projecting the problem to a lower dimensional space through predicting the difference in accuracy of a pair of similar networks.
arXiv Detail & Related papers (2024-11-21T02:43:32Z)
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- Lightweight Neural Architecture Search for Temporal Convolutional Networks at the Edge [21.72253397805102]
This work focuses in particular on Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing.
We propose the first NAS tool that explicitly targets the optimization of the architectural parameters peculiar to TCNs.
We test the proposed NAS on four real-world, edge-relevant tasks, involving audio and bio-signals.
arXiv Detail & Related papers (2023-01-24T19:47:40Z)
- When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search [117.89827740405694]
A key challenge in neural architecture search (NAS) is how to explore the huge search space wisely.
We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures.
TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, with a test accuracy of 94.37%, in four GPU hours.
arXiv Detail & Related papers (2022-04-11T07:34:21Z)
- TND-NAS: Towards Non-differentiable Objectives in Progressive Differentiable NAS Framework [6.895590095853327]
Differentiable architecture search has gradually become the mainstream research topic in the field of Neural Architecture Search (NAS).
Recent differentiable NAS also aims at further improving the search performance and reducing the GPU-memory consumption.
We propose TND-NAS, which combines the high efficiency of the differentiable NAS framework with compatibility for non-differentiable metrics in multi-objective NAS.
arXiv Detail & Related papers (2021-11-06T14:19:36Z)
- NAS-FCOS: Efficient Search for Object Detection Architectures [113.47766862146389]
We propose an efficient method to obtain better object detectors by searching for the feature pyramid network (FPN) and the prediction head of a simple anchor-free object detector.
With carefully designed search space, search algorithms, and strategies for evaluating network quality, we are able to find top-performing detection architectures within 4 days using 8 V100 GPUs.
arXiv Detail & Related papers (2021-10-24T12:20:04Z)
- L$^{2}$NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning [23.25155249879658]
Differentiable neural architecture search (NAS) has achieved remarkable results in deep neural network design.
We show that L$^{2}$NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as in the DARTS and Once-for-All search spaces.
arXiv Detail & Related papers (2021-09-25T19:26:30Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of 96.53% vs. 97.22% is achieved on the CIFAR-10 dataset, but with a significantly compressed model, and a 40% faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- BNAS: An Efficient Neural Architecture Search Approach Using Broad Scalable Architecture [62.587982139871976]
We propose Broad Neural Architecture Search (BNAS), in which we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN).
BNAS delivers a search cost of 0.19 days, which is 2.37x less expensive than ENAS, the best-ranked among reinforcement learning-based NAS approaches.
arXiv Detail & Related papers (2020-01-18T15:07:55Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
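The sampling-prune-update cycle described above can be sketched as a toy loop. This is an illustrative sketch, not DDPNAS itself: the candidate op set, the scoring proxy, and the multiplicative update rule are all assumptions made for the example.

```python
import random

# Candidate operations per layer (illustrative set).
OPS = ["conv3x3", "conv5x5", "skip", "maxpool"]

def score(arch):
    # Stand-in reward: pretend conv3x3 is the best op for every layer.
    return sum(1.0 for op in arch if op == "conv3x3")

def ddp_search(num_layers=4, epochs=30, samples=8, prune_below=0.05, rng=None):
    rng = rng or random.Random(0)
    # One categorical distribution (op -> probability) per layer.
    dists = [{op: 1.0 / len(OPS) for op in OPS} for _ in range(num_layers)]
    for _ in range(epochs):
        # Sample architectures from the joint categorical distribution.
        batch = [
            [rng.choices(list(d), weights=list(d.values()))[0] for d in dists]
            for _ in range(samples)
        ]
        best = max(batch, key=score)
        for dist, op in zip(dists, best):
            dist[op] *= 1.2  # reward the ops chosen by the best sample
            # Renormalize, then dynamically prune ops whose probability
            # has collapsed, shrinking the search space.
            total = sum(dist.values())
            for k in list(dist):
                dist[k] /= total
                if dist[k] < prune_below and len(dist) > 1:
                    del dist[k]
            total = sum(dist.values())
            for k in dist:
                dist[k] /= total
    # Read out the final architecture as the argmax op per layer.
    return [max(d, key=d.get) for d in dists]
```

The point of the pruning step is that the per-layer distributions shrink as search proceeds, so later epochs sample from a progressively smaller space.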
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.