PONAS: Progressive One-shot Neural Architecture Search for Very
Efficient Deployment
- URL: http://arxiv.org/abs/2003.05112v2
- Date: Thu, 9 Apr 2020 05:27:40 GMT
- Title: PONAS: Progressive One-shot Neural Architecture Search for Very
Efficient Deployment
- Authors: Sian-Yao Huang and Wei-Ta Chu
- Abstract summary: We propose Progressive One-shot Neural Architecture Search (PONAS) that combines advantages of progressive NAS and one-shot methods.
PONAS is able to find the architecture of a specialized network in around 10 seconds.
In ImageNet classification, 75.2% top-1 accuracy can be obtained, which is comparable with the state of the art.
- Score: 9.442139459221783
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We achieve very efficient deep learning model deployment by designing
neural network architectures to fit different hardware constraints. Given a
constraint, most neural architecture search (NAS) methods either sample a set
of sub-networks according to a pre-trained accuracy predictor, or adopt the
evolutionary algorithm to evolve specialized networks from the supernet. Both
approaches are time consuming. Here our key idea for very efficient deployment
is, when searching the architecture space, constructing a table that stores the
validation accuracy of all candidate blocks at all layers. For a stricter
hardware constraint, the architecture of a specialized network can be very
efficiently determined based on this table by picking the best candidate blocks
that yield the least accuracy loss. To accomplish this idea, we propose
Progressive One-shot Neural Architecture Search (PONAS) that combines
advantages of progressive NAS and one-shot methods. In PONAS, we propose a
two-stage training scheme, including the meta training stage and the
fine-tuning stage, to make the search process efficient and stable. During
search, we evaluate candidate blocks in different layers and construct the
accuracy table that is to be used in deployment. Comprehensive experiments
verify that PONAS is extremely flexible, and is able to find the architecture of a
specialized network in around 10 seconds. In ImageNet classification, 75.2%
top-1 accuracy can be obtained, which is comparable with the state of the art.
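The deployment idea described in the abstract (a table of per-layer, per-block validation accuracies that is consulted under a hardware constraint) can be illustrated with a short, hypothetical Python sketch. The table contents, the latency estimates, and the greedy "least accuracy loss per latency saved" rule below are illustrative assumptions, not the paper's exact selection procedure:

```python
# Minimal sketch of table-based deployment; all data and the greedy rule are
# illustrative assumptions, not the authors' implementation.

def specialize(acc, lat, latency_budget):
    """Pick one candidate block per layer using a precomputed accuracy table.

    acc[i][b] -- validation-accuracy proxy of block b at layer i (the accuracy table)
    lat[i][b] -- estimated latency of block b at layer i
    Start from the per-layer best blocks, then trade accuracy for latency
    greedily until the budget is met.
    """
    choice = [max(acc[i], key=acc[i].get) for i in range(len(acc))]
    total = sum(lat[i][b] for i, b in enumerate(choice))
    while total > latency_budget:
        best_swap = None  # (accuracy loss per latency saved, layer, candidate block)
        for i, b in enumerate(choice):
            for cand, cand_lat in lat[i].items():
                saved = lat[i][b] - cand_lat
                if saved <= 0:
                    continue
                loss = acc[i][b] - acc[i][cand]   # accuracy sacrificed by the swap
                score = loss / saved
                if best_swap is None or score < best_swap[0]:
                    best_swap = (score, i, cand)
        if best_swap is None:                     # budget unreachable with this table
            break
        _, i, cand = best_swap
        total -= lat[i][choice[i]] - lat[i][cand]
        choice[i] = cand
    return choice

# Toy example: two layers, two candidate blocks per layer.
acc = [{"mbconv3": 0.74, "mbconv6": 0.75}, {"mbconv3": 0.73, "mbconv6": 0.76}]
lat = [{"mbconv3": 2.0, "mbconv6": 4.0}, {"mbconv3": 2.5, "mbconv6": 5.0}]
print(specialize(acc, lat, latency_budget=7.0))   # e.g. ['mbconv3', 'mbconv6']
```

Because the table is precomputed during search, specialization for a new constraint reduces to this kind of table lookup, which is why a network can be determined in seconds rather than by re-running search or evolution.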
Related papers
- GeNAS: Neural Architecture Search with Better Generalization [14.92869716323226]
Recent neural architecture search (NAS) approaches rely on validation loss or accuracy to find the superior network for the target data.
In this paper, we investigate a new neural architecture search measure for excavating architectures with better generalization.
arXiv Detail & Related papers (2023-05-15T12:44:54Z)
- OFA$^2$: A Multi-Objective Perspective for the Once-for-All Neural Architecture Search [79.36688444492405]
Once-for-All (OFA) is a Neural Architecture Search (NAS) framework designed to address the problem of searching efficient architectures for devices with different resource constraints.
We take the search for efficiency one step further by explicitly treating the search stage as a multi-objective optimization problem.
arXiv Detail & Related papers (2023-03-23T21:30:29Z)
- DAS: Neural Architecture Search via Distinguishing Activation Score [21.711985665733653]
Neural Architecture Search (NAS) is an automatic technique that can search for well-performing architectures for a specific task.
We propose a dataset called Darts-training-bench (DTB), which fills the gap that existing datasets contain no training states of architectures.
Our proposed method achieves 1.04$\times$ - 1.56$\times$ improvements on NAS-Bench-101, Network Design Spaces, and the proposed DTB.
arXiv Detail & Related papers (2022-12-23T04:02:46Z)
- Pruning-as-Search: Efficient Neural Architecture Search via Channel Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method that searches out the desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior art by around $1.0\%$ top-1 accuracy on the ImageNet-1000 classification task.
arXiv Detail & Related papers (2022-06-02T17:58:54Z)
- AceNAS: Learning to Rank Ace Neural Architectures with Weak Supervision of Weight Sharing [6.171090327531059]
We introduce Learning to Rank methods to select the best (ace) architectures from a space.
We also propose to leverage weak supervision from weight sharing by pretraining architecture representation on weak labels obtained from the super-net.
Experiments on NAS benchmarks and large-scale search spaces demonstrate that our approach outperforms SOTA with a significantly reduced search cost.
arXiv Detail & Related papers (2021-08-06T08:31:42Z)
- OPANAS: One-Shot Path Aggregation Network Architecture Search for Object Detection [82.04372532783931]
Recently, neural architecture search (NAS) has been exploited to design feature pyramid networks (FPNs).
We propose a novel One-Shot Path Aggregation Network Architecture Search (OPANAS) algorithm, which significantly improves both searching efficiency and detection accuracy.
arXiv Detail & Related papers (2021-03-08T01:48:53Z)
- Weak NAS Predictors Are All You Need [91.11570424233709]
Recent predictor-based NAS approaches attempt to solve the problem with two key steps: sampling some architecture-performance pairs and fitting a proxy accuracy predictor.
We shift the paradigm from finding a complicated predictor that covers the whole architecture space to a set of weaker predictors that progressively move towards the high-performance sub-space.
Our method requires fewer samples to find the top-performing architectures on NAS-Bench-101 and NAS-Bench-201, and it achieves state-of-the-art ImageNet performance on the NASNet search space.
arXiv Detail & Related papers (2021-02-21T01:58:43Z)
- Smooth Variational Graph Embeddings for Efficient Neural Architecture Search [41.62970837629573]
We propose a two-sided variational graph autoencoder, which allows us to smoothly encode and accurately reconstruct neural architectures from various search spaces.
We evaluate the proposed approach on neural architectures defined by the ENAS approach, the NAS-Bench-101 and the NAS-Bench-201 search spaces.
arXiv Detail & Related papers (2020-10-09T17:05:41Z)
- DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search [76.9225014200746]
Efficient search is a core issue in Neural Architecture Search (NAS).
We present DA-NAS that can directly search the architecture for large-scale target tasks while allowing a large candidate set in a more efficient manner.
It is 2x faster than previous methods while the accuracy is currently state-of-the-art, at 76.2% under a small FLOPs constraint.
arXiv Detail & Related papers (2020-03-27T17:55:21Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints. (A rough sketch of this sample-and-prune loop follows this entry.)
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
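The DDPNAS entry above describes a loop of sampling architectures from a joint categorical distribution, dynamically pruning the search space, and updating the distribution every few epochs. Below is a minimal, hypothetical Python sketch of that loop; the `evaluate` oracle, the averaging-based pruning rule, and all names are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of dynamic distribution pruning (assumed details throughout).
import random

def ddp_search(num_layers, candidates, epochs=30, prune_every=5, evaluate=None):
    """Keep a categorical distribution over candidate ops per layer, sample
    architectures, and every few epochs drop the worst-scoring candidate at
    each layer, renormalizing the remaining probabilities."""
    probs = [{c: 1.0 / len(candidates) for c in candidates} for _ in range(num_layers)]
    scores = [{c: [] for c in candidates} for _ in range(num_layers)]
    for epoch in range(1, epochs + 1):
        # Sample one architecture from the joint categorical distribution.
        arch = [random.choices(list(p), weights=p.values())[0] for p in probs]
        reward = evaluate(arch)  # e.g. validation accuracy of the sampled sub-net
        for layer, op in enumerate(arch):
            scores[layer][op].append(reward)
        if epoch % prune_every == 0:
            for layer, p in enumerate(probs):
                if len(p) <= 1:
                    continue
                # Prune the candidate with the lowest average reward so far
                # (unobserved candidates default to 0 in this toy rule).
                worst = min(p, key=lambda c: sum(scores[layer][c]) / max(len(scores[layer][c]), 1))
                del p[worst]
                total = sum(p.values())
                for c in p:
                    p[c] /= total
    return [max(p, key=p.get) for p in probs]

# Toy usage with a random "accuracy" oracle standing in for supernet evaluation.
print(ddp_search(4, ["conv3x3", "conv5x5", "skip"], evaluate=lambda a: random.random()))
```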
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.