CP-NAS: Child-Parent Neural Architecture Search for Binary Neural
Networks
- URL: http://arxiv.org/abs/2005.00057v2
- Date: Sun, 17 May 2020 15:38:02 GMT
- Title: CP-NAS: Child-Parent Neural Architecture Search for Binary Neural
Networks
- Authors: Li'an Zhuo, Baochang Zhang, Hanlin Chen, Linlin Yang, Chen Chen,
Yanjun Zhu and David Doermann
- Abstract summary: We use 1-bit convolutional neural networks (CNNs) to reduce the computation and memory cost of Neural Architecture Search (NAS).
A Child-Parent (CP) model is introduced into a differentiable NAS to search the binarized architecture (Child) under the supervision of a full-precision model (Parent).
CP-NAS achieves an accuracy of $95.27\%$ on CIFAR-10 and $64.3\%$ on ImageNet with binarized weights and activations, and a $30\%$ faster search than prior arts.
- Score: 27.867108193391633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural architecture search (NAS) proves to be among the best approaches for
many tasks by generating an application-adaptive neural architecture, which is
still challenged by high computational cost and memory consumption. At the same
time, 1-bit convolutional neural networks (CNNs) with binarized weights and
activations show their potential for resource-limited embedded devices. One
natural approach is to use 1-bit CNNs to reduce the computation and memory cost
of NAS by taking advantage of the strengths of each in a unified framework. To
this end, a Child-Parent (CP) model is introduced into a differentiable NAS to
search the binarized architecture (Child) under the supervision of a
full-precision model (Parent). In the search stage, the Child-Parent model uses
an indicator computed from the child and parent models' accuracies to evaluate
the performance of candidate operations and abandon those with less potential.
In the training stage, a kernel-level CP loss is introduced to optimize the
binarized network.
Extensive experiments demonstrate that the proposed CP-NAS achieves accuracy
comparable to traditional NAS on both the CIFAR and ImageNet databases. It
achieves an accuracy of $95.27\%$ on CIFAR-10 and $64.3\%$ on ImageNet with
binarized weights and activations, together with a $30\%$ faster search than
prior arts.
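
The abstract describes the two CP ingredients only at a high level. The
following is a minimal sketch, not the authors' implementation: it assumes
PyTorch, and the names `cp_indicator`, `prune_operations`, `kernel_cp_loss`
and the weights `beta` and `lam` are illustrative choices rather than the
paper's definitions.

```python
import torch.nn.functional as F


def cp_indicator(child_acc: float, parent_acc: float, beta: float = 1.0) -> float:
    """Search-stage score for a candidate operation: reward the binarized
    child's own accuracy and penalize its gap to the full-precision parent
    (an assumed combination of the two accuracies)."""
    return child_acc - beta * max(parent_acc - child_acc, 0.0)


def prune_operations(op_scores: dict, keep_ratio: float = 0.5) -> list:
    """Keep the highest-scoring operations on an edge, abandon the rest."""
    ranked = sorted(op_scores, key=op_scores.get, reverse=True)
    return ranked[:max(1, int(len(ranked) * keep_ratio))]


def kernel_cp_loss(child_logits, targets, child_kernels, parent_kernels,
                   lam: float = 0.1):
    """Training-stage loss: task cross-entropy plus a kernel-level term that
    pulls the child's binarized kernels toward the parent's full-precision
    kernels (MSE alignment is an assumption)."""
    ce = F.cross_entropy(child_logits, targets)
    align = sum(F.mse_loss(c, p.detach())
                for c, p in zip(child_kernels, parent_kernels))
    return ce + lam * align
```

In use, both models would be evaluated on a held-out batch, each candidate
operation scored with `cp_indicator`, and the lowest-scoring operations
dropped from the search space before the next search epoch.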
Related papers
- DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z)
- DCP-NAS: Discrepant Child-Parent Neural Architecture Search for 1-bit CNNs [53.82853297675979]
1-bit convolutional neural networks (CNNs) with binary weights and activations show their potential for resource-limited embedded devices.
One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS.
We introduce Discrepant Child-Parent Neural Architecture Search (DCP-NAS) to efficiently search 1-bit CNNs.
arXiv Detail & Related papers (2023-06-27T11:28:29Z)
- Towards Self-supervised and Weight-preserving Neural Architecture Search [38.497608743382145]
We propose the self-supervised and weight-preserving neural architecture search (SSWP-NAS) as an extension of the current NAS framework.
Experiments show that the architectures searched by the proposed framework achieve state-of-the-art accuracy on CIFAR-10, CIFAR-100, and ImageNet datasets.
arXiv Detail & Related papers (2022-06-08T18:48:05Z)
- Evolutionary Neural Cascade Search across Supernetworks [68.8204255655161]
We introduce ENCAS - Evolutionary Neural Cascade Search.
ENCAS can be used to search over multiple pretrained supernetworks.
We test ENCAS on common computer vision benchmarks.
arXiv Detail & Related papers (2022-03-08T11:06:01Z)
- Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective [88.39981851247727]
We propose a novel framework called training-free neural architecture search (TE-NAS).
TE-NAS ranks architectures by analyzing the spectrum of the neural tangent kernel (NTK) and the number of linear regions in the input space.
We show that: (1) these two measurements imply the trainability and expressivity of a neural network; (2) they strongly correlate with the network's test accuracy.
arXiv Detail & Related papers (2021-02-23T07:50:44Z)
- Binarized Neural Architecture Search for Efficient Object Recognition [120.23378346337311]
Binarized neural architecture search (BNAS) produces extremely compressed models to reduce huge computational cost on embedded devices for edge computing.
An accuracy of $96.53\%$ vs. $97.22\%$ is achieved on the CIFAR-10 dataset, but with a significantly compressed model, and a $40\%$ faster search than the state-of-the-art PC-DARTS.
arXiv Detail & Related papers (2020-09-08T15:51:23Z)
- Fast Neural Network Adaptation via Parameter Remapping and Architecture Search [35.61441231491448]
Deep neural networks achieve remarkable performance in many computer vision tasks.
Most state-of-the-art (SOTA) semantic segmentation and object detection approaches reuse neural network architectures designed for image classification as the backbone.
One major challenge, though, is that ImageNet pre-training of the search space representation incurs a huge computational cost.
In this paper, we propose a Fast Neural Network Adaptation (FNA) method, which can adapt both the architecture and parameters of a seed network.
arXiv Detail & Related papers (2020-01-08T13:45:15Z)
- DDPNAS: Efficient Neural Architecture Search via Dynamic Distribution Pruning [135.27931587381596]
We propose an efficient and unified NAS framework termed DDPNAS via dynamic distribution pruning.
In particular, we first sample architectures from a joint categorical distribution. Then the search space is dynamically pruned and its distribution is updated every few epochs.
With the proposed efficient network generation method, we directly obtain the optimal neural architectures on given constraints.
arXiv Detail & Related papers (2019-05-28T06:35:52Z)
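
The one-sentence DDPNAS summary above glosses over the search loop. Below is a
minimal sketch, assuming NumPy and a generic reward signal (e.g. validation
accuracy), of what dynamic distribution pruning can look like; the update
rule, `lr`, and `threshold` are illustrative assumptions, not the paper's
exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_architecture(probs):
    """Draw one operation index per edge from the per-edge categoricals."""
    return [rng.choice(len(p), p=p) for p in probs]


def update_distribution(probs, samples, rewards, lr=0.5):
    """Shift probability mass toward operations that earned higher rewards."""
    new_probs = []
    for e, p in enumerate(probs):
        score = np.zeros(len(p))
        for arch, r in zip(samples, rewards):
            score[arch[e]] += r          # credit the op chosen on edge e
        if score.sum() > 0:
            p = (1 - lr) * p + lr * score / score.sum()
        new_probs.append(p / p.sum())
    return new_probs


def prune_distribution(probs, threshold=0.05):
    """Every few epochs, drop candidates whose probability fell below threshold."""
    pruned = []
    for p in probs:
        p = np.where(p < threshold, 0.0, p)
        pruned.append(p / p.sum())
    return pruned


# Example: two edges with three candidate ops each, uniform to start.
probs = [np.full(3, 1 / 3), np.full(3, 1 / 3)]
samples = [sample_architecture(probs) for _ in range(8)]
rewards = [float(rng.random()) for _ in samples]   # stand-in for accuracy
probs = prune_distribution(update_distribution(probs, samples, rewards))
```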
This list is automatically generated from the titles and abstracts of the papers on this site.