Lite-HRNet: A Lightweight High-Resolution Network
- URL: http://arxiv.org/abs/2104.06403v1
- Date: Tue, 13 Apr 2021 17:59:31 GMT
- Title: Lite-HRNet: A Lightweight High-Resolution Network
- Authors: Changqian Yu, Bin Xiao, Changxin Gao, Lu Yuan, Lei Zhang, Nong Sang,
Jingdong Wang
- Abstract summary: We present an efficient high-resolution network, Lite-HRNet, for human pose estimation.
We find that heavily-used pointwise (1x1) convolutions in shuffle blocks become the computational bottleneck.
We introduce a lightweight unit, conditional channel weighting, to replace costly pointwise (1x1) convolutions in shuffle blocks.
- Score: 97.17242913089464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an efficient high-resolution network, Lite-HRNet, for human pose
estimation. We start by simply applying the efficient shuffle block in
ShuffleNet to HRNet (high-resolution network), yielding stronger performance
over popular lightweight networks, such as MobileNet, ShuffleNet, and Small
HRNet.
We find that the heavily-used pointwise (1x1) convolutions in shuffle blocks
become the computational bottleneck. We introduce a lightweight unit,
conditional channel weighting, to replace costly pointwise (1x1) convolutions
in shuffle blocks. The complexity of channel weighting is linear w.r.t. the
number of channels and lower than the quadratic time complexity of pointwise
convolutions. Our solution learns the weights from all the channels and over
multiple resolutions that are readily available in the parallel branches in
HRNet. It uses the weights as the bridge to exchange information across
channels and resolutions, compensating for the role played by the pointwise (1x1)
convolution. Lite-HRNet demonstrates superior results on human pose estimation
over popular lightweight networks. Moreover, Lite-HRNet can be easily applied
to the semantic segmentation task in the same lightweight manner. The code and
models are publicly available at https://github.com/HRNet/Lite-HRNet.
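The cost argument above can be illustrated with a minimal NumPy sketch (this is a toy illustration of the complexity trade-off, not the authors' implementation; in the paper the channel weights are computed from features across channels and resolutions):

```python
import numpy as np

def pointwise_conv(x, w):
    """1x1 convolution: every output channel mixes every input channel.
    x: (C_in, H, W), w: (C_out, C_in).
    Cost per pixel ~ C_out * C_in multiplications (quadratic in channels)."""
    return np.einsum('oc,chw->ohw', w, x)

def channel_weighting(x, weights):
    """Channel weighting: scale each channel by a learned scalar.
    Cost per pixel ~ C multiplications (linear in channels)."""
    return x * weights[:, None, None]

# Toy multiplication counts per pixel for C = 64 channels:
C = 64
print(C * C)  # pointwise conv: 4096
print(C)      # channel weighting: 64
```

With a 1x1 convolution the per-pixel cost grows quadratically as channels increase, while the per-channel weighting stays linear, which is why replacing the pointwise convolutions shrinks the bottleneck.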
Related papers
- Slimmable Pruned Neural Networks [1.8275108630751844]
The accuracy of each sub-network on S-Net is inferior to that of individually trained networks of the same size.
We propose Slimmable Pruned Neural Networks (SP-Net) which has sub-network structures learned by pruning.
SP-Net can be combined with any kind of channel pruning methods and does not require any complicated processing or time-consuming architecture search like NAS models.
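The weight-sharing idea behind slimmable sub-networks can be sketched as follows (a hypothetical toy, not SP-Net's actual method: a narrower sub-network simply reuses the leading slice of the full layer's weights):

```python
import numpy as np

def subnet_forward(x, w, width):
    """Weight-shared width evaluation, slimmable-style:
    a sub-network of the given output width reuses the first `width`
    rows of the full layer's weight matrix.
    x: (C_in,), w: (C_out_max, C_in)."""
    return w[:width, :] @ x

# Full layer: 4 output channels, 3 input channels.
w = np.arange(12.0).reshape(4, 3)
x = np.array([1.0, 1.0, 1.0])
print(subnet_forward(x, w, 2))  # narrow sub-network: only 2 output channels
```

Because all widths share one weight tensor, each candidate width can be scored without training a separate network, which is the property slimmable and pruned variants exploit.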
arXiv Detail & Related papers (2022-12-07T02:54:15Z) - Lightweight and Progressively-Scalable Networks for Semantic
Segmentation [100.63114424262234]
Multi-scale learning frameworks have been regarded as a capable class of models to boost semantic segmentation.
In this paper, we thoroughly analyze the design of convolutional blocks and the ways of interactions across multiple scales.
We devise Lightweight and Progressively-Scalable Networks (LPS-Net), which expand the network complexity in a greedy manner.
arXiv Detail & Related papers (2022-07-27T16:00:28Z) - Searching for Network Width with Bilaterally Coupled Network [75.43658047510334]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
We propose the first open-source width benchmark on macro structures, named Channel-Bench-Macro, for better comparison of width search algorithms.
arXiv Detail & Related papers (2022-03-25T15:32:46Z) - ShuffleBlock: Shuffle to Regularize Deep Convolutional Neural Networks [35.67192058479252]
This paper studies the operation of channel shuffle as a regularization technique in deep convolutional networks.
We show that while randomly shuffling entire channels during training drastically reduces performance, randomly shuffling small patches significantly improves it.
The ShuffleBlock module is easy to implement and improves the performance of several baseline networks on the task of image classification on CIFAR and ImageNet datasets.
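The deterministic channel shuffle used in ShuffleNet-style shuffle blocks (the building block the Lite-HRNet abstract above refers to) is commonly written as a reshape-transpose-reshape; a minimal NumPy sketch, assuming the standard formulation:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so that subsequent grouped
    convolutions can mix information between groups.
    x: (C, H, W) with C divisible by `groups`."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# Channels 0..5 with 2 groups are interleaved as 0,3,1,4,2,5.
x = np.arange(6.0).reshape(6, 1, 1)
print(channel_shuffle(x, 2).ravel().tolist())  # [0.0, 3.0, 1.0, 4.0, 2.0, 5.0]
```

Note this deterministic shuffle differs from the random patch shuffling studied as a regularizer in the ShuffleBlock paper summarized above.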
arXiv Detail & Related papers (2021-06-17T10:23:00Z) - BCNet: Searching for Network Width with Bilaterally Coupled Network [56.14248440683152]
We introduce a new supernet called Bilaterally Coupled Network (BCNet) to address this issue.
In BCNet, each channel is fairly trained and responsible for the same amount of network widths, thus each network width can be evaluated more accurately.
Our method achieves state-of-the-art or competing performance over other baseline methods.
arXiv Detail & Related papers (2021-05-21T18:54:03Z) - Locally Free Weight Sharing for Network Width Search [55.155969155967284]
Searching for network width is an effective way to slim deep neural networks with hardware budgets.
We propose a locally free weight sharing strategy (CafeNet) to better evaluate each width.
Our method can further boost the benchmark NAS network EfficientNet-B0 by 0.41% via searching its width more delicately.
arXiv Detail & Related papers (2021-02-10T04:36:09Z) - EfficientNet-eLite: Extremely Lightweight and Efficient CNN Models for
Edge Devices by Network Candidate Search [13.467017642143583]
We propose a novel Network Candidate Search (NCS) method to study the trade-off between resource usage and performance.
In our experiment, we collect candidate CNN models from EfficientNet-B0 scaled down in varied ways through width, depth, input resolution, and compound scaling down.
To further support CNN edge applications with Application-Specific Integrated Circuits (ASICs), we adjust the architectures of EfficientNet-eLite to build a more hardware-friendly version, EfficientNet-HF.
arXiv Detail & Related papers (2020-09-16T01:11:10Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.