Binarized Neural Architecture Search for Efficient Object Recognition
- URL: http://arxiv.org/abs/2009.04247v1
- Date: Tue, 8 Sep 2020 15:51:23 GMT
- Title: Binarized Neural Architecture Search for Efficient Object Recognition
- Authors: Hanlin Chen, Li'an Zhuo, Baochang Zhang, Xiawu Zheng, Jianzhuang Liu,
Rongrong Ji, David Doermann, Guodong Guo
- Abstract summary: Binarized neural architecture search (BNAS) produces extremely compressed models to reduce the huge computational cost on embedded devices for edge computing.
An accuracy of $96.53\%$ vs. $97.22\%$ is achieved on the CIFAR-10 dataset, but with a significantly compressed model and a $40\%$ faster search than the state-of-the-art PC-DARTS.
- Score: 120.23378346337311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional neural architecture search (NAS) has a significant impact in
computer vision by automatically designing network architectures for various
tasks. In this paper, binarized neural architecture search (BNAS), with a
search space of binarized convolutions, is introduced to produce extremely
compressed models to reduce the huge computational cost on embedded devices for
edge computing. The BNAS calculation is more challenging than NAS due to the
learning inefficiency caused by the optimization requirements and the huge
architecture space, and due to the performance loss when handling wild data in
various computing applications. To address these issues, we introduce operation
space reduction and channel sampling into BNAS to significantly reduce the cost
of searching. This is accomplished through a performance-based strategy that is
robust to wild data, which is further used to abandon less promising
operations. Furthermore, we introduce the Upper Confidence Bound (UCB) to solve
1-bit BNAS. Two optimization methods for binarized neural networks are used to
validate the effectiveness of our BNAS. Extensive experiments demonstrate that
the proposed BNAS achieves a comparable performance to NAS on both CIFAR and
ImageNet databases. An accuracy of $96.53\%$ vs. $97.22\%$ is achieved on the
CIFAR-10 dataset, but with a significantly compressed model, and a $40\%$
faster search than the state-of-the-art PC-DARTS. On the wild face recognition
task, our binarized models achieve a performance similar to their corresponding
full-precision models.
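As a reading aid, here is a minimal sketch of how the performance-based strategy and the UCB criterion described in the abstract could be organized for a single edge of a search cell. The class and the candidate operation pool are illustrative assumptions, not the authors' code; the random reward stands in for the measured validation performance of a sampled binarized sub-network.

```python
# Sketch: UCB-based operation selection plus performance-based operation
# space reduction on one edge of the search cell. Names are illustrative.
import math
import random


class EdgeOperationSelector:
    """Tracks per-operation statistics on one edge of the search cell."""

    def __init__(self, operations, exploration=1.0):
        self.operations = list(operations)
        self.exploration = exploration
        self.counts = {op: 0 for op in self.operations}
        self.mean_reward = {op: 0.0 for op in self.operations}

    def ucb_score(self, op):
        total = sum(self.counts.values()) + 1
        if self.counts[op] == 0:
            return float("inf")  # force every operation to be tried at least once
        bonus = self.exploration * math.sqrt(2.0 * math.log(total) / self.counts[op])
        return self.mean_reward[op] + bonus

    def sample(self):
        """Pick the operation with the highest UCB score for this edge."""
        return max(self.operations, key=self.ucb_score)

    def update(self, op, reward):
        """Update the running mean reward after evaluating a sampled sub-network."""
        self.counts[op] += 1
        self.mean_reward[op] += (reward - self.mean_reward[op]) / self.counts[op]

    def prune(self, keep):
        """Operation space reduction: abandon the least promising operations."""
        ranked = sorted(self.operations, key=lambda o: self.mean_reward[o], reverse=True)
        self.operations = ranked[:keep]


# Toy usage: in a real search the "reward" would be the validation accuracy of
# the binarized sub-network containing the sampled operation.
ops = ["none", "skip", "bin_conv_3x3", "bin_conv_5x5", "max_pool_3x3"]
edge = EdgeOperationSelector(ops)
for step in range(50):
    op = edge.sample()
    reward = random.random()  # placeholder for measured performance
    edge.update(op, reward)
    if step in (20, 40):
        edge.prune(keep=max(2, len(edge.operations) - 1))
print("surviving operations:", edge.operations)
```

Pruning the lowest-mean operations at fixed intervals mirrors the operation space reduction, while the UCB bonus keeps rarely tried operations from being discarded too early.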
Related papers
- Delta-NAS: Difference of Architecture Encoding for Predictor-based Evolutionary Neural Architecture Search [5.1331676121360985]
We craft an algorithm capable of performing fine-grained NAS at a low cost.
We propose projecting the problem to a lower dimensional space through predicting the difference in accuracy of a pair of similar networks.
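A hedged illustration of the pairwise idea: rather than predicting absolute accuracy, a predictor can regress the accuracy difference of two similar architectures from the (sparse) difference of their encodings. The encoding, the synthetic data, and the linear least-squares fit below are placeholder assumptions, not Delta-NAS itself.

```python
# Sketch: fit a predictor of accuracy *differences* between similar
# architectures from the difference of their encodings.
import numpy as np

rng = np.random.default_rng(0)

def encode(arch):
    """Hypothetical one-hot-style encoding of an architecture (list of op ids)."""
    num_ops = 5
    vec = np.zeros(len(arch) * num_ops)
    for i, op in enumerate(arch):
        vec[i * num_ops + op] = 1.0
    return vec

# Synthetic training pairs: similar architectures differ in one position,
# so the difference of encodings is sparse.
pairs, deltas = [], []
true_w = rng.normal(size=encode([0] * 8).shape)  # stand-in "ground truth"
for _ in range(200):
    a = rng.integers(0, 5, size=8)
    b = a.copy()
    b[rng.integers(0, 8)] = rng.integers(0, 5)   # mutate one position
    diff = encode(b) - encode(a)
    pairs.append(diff)
    deltas.append(true_w @ diff + rng.normal(scale=0.01))

X, y = np.stack(pairs), np.array(deltas)
w, *_ = np.linalg.lstsq(X, y, rcond=None)        # fit the difference predictor
print("fit error:", float(np.mean((X @ w - y) ** 2)))
```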
arXiv Detail & Related papers (2024-11-21T02:43:32Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- $\alpha$NAS: Neural Architecture Search using Property Guided Synthesis [1.2746672439030722]
We develop techniques that enable efficient neural architecture search (NAS) in a significantly larger design space.
Our key insights are as follows: (1) the abstract search space is significantly smaller than the original search space, and (2) architectures with similar program properties also have similar performance.
We implement our approach, $\alpha$NAS, within an evolutionary framework, where the mutations are guided by the program properties.
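A rough sketch of an evolutionary loop in which mutated candidates are screened by cheap abstract properties before any expensive evaluation, loosely in the spirit of $\alpha$NAS; the depth/size properties and the toy fitness function are stand-ins for the paper's program properties and trained accuracy.

```python
# Sketch: property-guided evolutionary search over toy architectures
# (lists of layer widths). Everything here is a placeholder.
import random

def properties(arch):
    """Cheap abstract properties of an architecture: depth and coarse size bucket."""
    return (len(arch), sum(arch) // 64)

def fitness(arch):
    """Placeholder for validation accuracy after training."""
    return -abs(sum(arch) - 512) - 3 * abs(len(arch) - 8)

def mutate(arch):
    arch = list(arch)
    arch[random.randrange(len(arch))] = random.choice([16, 32, 64, 128])
    if random.random() < 0.2:
        arch.append(random.choice([16, 32, 64]))
    return arch

population = [[32] * 6 for _ in range(8)]
for _ in range(100):
    parent = max(random.sample(population, 3), key=fitness)  # tournament selection
    child = mutate(parent)
    # Property-guided step: only keep children whose abstract properties stay
    # close to those of a well-performing parent.
    if properties(child)[0] <= properties(parent)[0] + 1:
        population.append(child)
        population.sort(key=fitness, reverse=True)
        population = population[:8]
print("best architecture:", max(population, key=fitness))
```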
arXiv Detail & Related papers (2022-05-08T21:48:03Z)
- BaLeNAS: Differentiable Architecture Search via the Bayesian Learning Rule [95.56873042777316]
Differentiable Architecture Search (DARTS) has received massive attention in recent years, mainly because it significantly reduces the computational cost.
This paper formulates the neural architecture search as a distribution learning problem through relaxing the architecture weights into Gaussian distributions.
We demonstrate how the differentiable NAS benefits from Bayesian principles, enhancing exploration and improving stability.
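To make the distribution-learning view concrete, the following toy sketch relaxes each architecture weight to a Gaussian and samples it with the reparameterization trick, so the per-edge softmax mixture varies between draws; the shapes and the omitted update rule are assumptions, since BaLeNAS derives its updates from the Bayesian learning rule.

```python
# Sketch: architecture weights relaxed to Gaussians; sampling them yields
# different operation mixtures per draw (exploration).
import numpy as np

rng = np.random.default_rng(0)
num_edges, num_ops = 4, 5
mu = np.zeros((num_edges, num_ops))
log_sigma = np.full((num_edges, num_ops), -1.0)

def sample_mixing_weights():
    eps = rng.standard_normal(mu.shape)
    alpha = mu + np.exp(log_sigma) * eps         # reparameterized sample
    e = np.exp(alpha - alpha.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # per-edge softmax over ops

# Two draws give different mixtures on the first edge, unlike a point
# estimate of alpha as used in DARTS.
print(sample_mixing_weights()[0])
print(sample_mixing_weights()[0])
```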
arXiv Detail & Related papers (2021-11-25T18:13:42Z)
- L$^{2}$NAS: Learning to Optimize Neural Architectures via Continuous-Action Reinforcement Learning [23.25155249879658]
Differentiable architecture search (DARTS) has achieved remarkable results in deep neural network design.
We show that L$^{2}$NAS achieves state-of-the-art results on the NAS-Bench-201 benchmark as well as the DARTS and Once-for-All search spaces.
arXiv Detail & Related papers (2021-09-25T19:26:30Z)
- Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration [68.6505473346005]
We propose a memory-efficient hierarchical NAS method (HiNAS) for image denoising and image super-resolution tasks.
With a single GTX 1080Ti GPU, it takes only about 1 hour to search for the denoising network on BSD500 and 3.5 hours to search for the super-resolution structure on DIV2K.
arXiv Detail & Related papers (2020-12-24T12:06:17Z)
- AdvantageNAS: Efficient Neural Architecture Search with Credit Assignment [23.988393741948485]
We propose a novel search strategy for one-shot and sparse propagation NAS, namely AdvantageNAS.
AdvantageNAS is a gradient-based approach that improves the search efficiency by introducing credit assignment in gradient estimation for architecture updates.
Experiments on the NAS-Bench-201 and PTB datasets show that AdvantageNAS discovers an architecture with higher performance under a limited time budget.
arXiv Detail & Related papers (2020-12-11T05:45:03Z)
- CP-NAS: Child-Parent Neural Architecture Search for Binary Neural Networks [27.867108193391633]
We propose a 1-bit convolutional neural network (CNN) to reduce the computation and memory cost of Neural Architecture Search (NAS).
A Child-Parent (CP) model is introduced to a differentiable NAS to search the binarized architecture (Child) under the supervision of a full-precision model (Parent).
It achieves an accuracy of $95.27\%$ on CIFAR-10 and $64.3\%$ on ImageNet with binarized weights and activations, and a $30\%$ faster search than prior art.
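A minimal PyTorch sketch of the child-parent idea: a child with sign-binarized weights (trained through a straight-through estimator) is pulled toward a full-precision parent's outputs. The tiny fully connected models and the MSE distillation loss are illustrative choices, not the exact CP-NAS objective.

```python
# Sketch: binarized Child supervised by a full-precision Parent.
import torch
import torch.nn as nn


class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # pass gradients only inside [-1, 1]


class BinaryLinear(nn.Module):
    """Linear layer whose weights are binarized in the forward pass."""

    def __init__(self, d_in, d_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) * 0.1)

    def forward(self, x):
        return x @ BinarizeSTE.apply(self.weight).t()


parent = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10)).eval()
child = nn.Sequential(BinaryLinear(16, 32), nn.ReLU(), BinaryLinear(32, 10))
opt = torch.optim.Adam(child.parameters(), lr=1e-2)

x = torch.randn(64, 16)
for _ in range(200):
    with torch.no_grad():
        target = parent(x)                       # full-precision supervision
    loss = nn.functional.mse_loss(child(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final distillation loss:", loss.item())
```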
arXiv Detail & Related papers (2020-04-30T19:09:55Z)
- BNAS: An Efficient Neural Architecture Search Approach Using Broad Scalable Architecture [62.587982139871976]
We propose Broad Neural Architecture Search (BNAS), in which we elaborately design a broad scalable architecture dubbed Broad Convolutional Neural Network (BCNN).
BNAS delivers a search cost of 0.19 days, which is 2.37x less expensive than ENAS, the best-ranked reinforcement learning-based NAS approach.
arXiv Detail & Related papers (2020-01-18T15:07:55Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures under given constraints.
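The following toy loop sketches the sampling-pruning cycle described above: keep one categorical distribution per architectural decision, sample architectures from the joint distribution, update the probabilities from observed rewards, and periodically drop the weakest surviving choice. The reward function and the update rule are placeholders for the actual trained-accuracy feedback.

```python
# Sketch: dynamic distribution pruning over a joint categorical distribution.
import numpy as np

rng = np.random.default_rng(0)
num_decisions, num_choices = 6, 4
probs = np.full((num_decisions, num_choices), 1.0 / num_choices)
alive = np.ones((num_decisions, num_choices), dtype=bool)

def sample_arch():
    return np.array([rng.choice(num_choices, p=probs[d]) for d in range(num_decisions)])

def reward(arch):
    return float(np.sum(arch == 2))  # placeholder: pretend choice 2 is best everywhere

for epoch in range(30):
    batch = [sample_arch() for _ in range(16)]
    scores = np.array([reward(a) for a in batch])
    # Reinforce choices that appear in above-average architectures.
    for arch, s in zip(batch, scores):
        for d, c in enumerate(arch):
            probs[d, c] += 0.05 * (s - scores.mean())
    probs = np.clip(probs, 1e-6, None) * alive
    probs /= probs.sum(axis=1, keepdims=True)
    # Dynamic pruning every few epochs: drop the weakest surviving choice.
    if epoch % 10 == 9:
        for d in range(num_decisions):
            if alive[d].sum() > 1:
                weakest = np.argmin(np.where(alive[d], probs[d], np.inf))
                alive[d, weakest] = False
        probs *= alive
        probs /= probs.sum(axis=1, keepdims=True)

print("most likely choice per decision:", probs.argmax(axis=1))
```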
arXiv Detail & Related papers (2019-05-28T06:35:52Z)