BiHRNet: A Binary high-resolution network for Human Pose Estimation
- URL: http://arxiv.org/abs/2311.10296v1
- Date: Fri, 17 Nov 2023 03:01:37 GMT
- Title: BiHRNet: A Binary high-resolution network for Human Pose Estimation
- Authors: Zhicheng Zhang, Xueyao Sun, Yonghao Dang, Jianqin Yin
- Abstract summary: We propose a binary human pose estimator named BiHRNet, whose weights and activations are expressed as $\pm$1.
BiHRNet retains the keypoint extraction ability of HRNet while using fewer computing resources by adopting binary neural network (BNN) techniques.
We show BiHRNet achieves a PCKh of 87.9 on the MPII dataset, which outperforms all binary pose estimation networks.
- Score: 11.250422970707415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Pose Estimation (HPE) plays a crucial role in computer vision
applications. However, it is difficult to deploy state-of-the-art models on
resource-limited devices due to the high computational costs of the networks. In
this work, a binary human pose estimator named BiHRNet(Binary HRNet) is
proposed, whose weights and activations are expressed as $\pm$1. BiHRNet
retains the keypoint extraction ability of HRNet, while using fewer computing
resources by adopting binary neural network (BNN) techniques. In order to reduce the
accuracy drop caused by network binarization, two categories of techniques are
proposed in this work. For optimizing the training process for binary pose
estimator, we propose a new loss function combining KL divergence loss with
AWing loss, which makes the binary network obtain more comprehensive output
distribution from its real-valued counterpart to reduce information loss caused
by binarization. For designing more binarization-friendly structures, we
propose a new information reconstruction bottleneck called IR Bottleneck to
retain more information in the initial stage of the network. In addition, we
also propose a multi-scale basic block called MS-Block for information
retention. Our method has a lower computation cost with only a small precision drop.
Experimental results demonstrate that BiHRNet achieves a PCKh of 87.9 on the
MPII dataset, which outperforms all binary pose estimation networks. On the
challenging COCO dataset, the proposed method enables the binary neural
network to achieve 70.8 mAP, which is better than most tested lightweight
full-precision networks.
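The two ingredients the abstract describes, mapping weights/activations to $\pm$1 and distilling the real-valued network's output distribution into the binary one via a KL term, can be sketched roughly as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation: the AWing regression term of the combined loss and the straight-through gradient machinery are omitted, and all function names are ours. A simple PCKh helper is included since the paper reports that metric.

```python
import numpy as np

def binarize(w):
    """Map real-valued weights/activations to {-1, +1} (forward pass only).

    During training, BNNs typically pair this sign step with a
    straight-through estimator so gradients can flow; omitted here.
    """
    b = np.sign(np.asarray(w, dtype=float))
    b[b == 0] = 1.0  # common convention: send exact zeros to +1
    return b

def kl_heatmap_loss(teacher_logits, student_logits):
    """KL divergence between the real-valued teacher's heatmap
    distribution and the binary student's, after a softmax over all
    spatial locations. In the paper this KL term is combined with an
    AWing regression loss; only the KL part is sketched here.
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p = softmax(np.asarray(teacher_logits, dtype=float).ravel())
    q = softmax(np.asarray(student_logits, dtype=float).ravel())
    return float(np.sum(p * (np.log(p) - np.log(q))))

def pckh(pred, gt, head_sizes, thresh=0.5):
    """PCKh: percentage of keypoints whose distance to ground truth is
    within `thresh` times the per-image head segment length.

    pred, gt: arrays of shape (N, K, 2); head_sizes: shape (N,).
    """
    d = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=-1)  # (N, K)
    correct = d <= thresh * np.asarray(head_sizes)[:, None]
    return 100.0 * correct.mean()
```

Note that the KL term is zero exactly when the binary student reproduces the teacher's spatial distribution, which is the sense in which distillation reduces the information loss caused by binarization.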
Related papers
- BiDense: Binarization for Dense Prediction [62.70804353158387]
BiDense is a generalized binary neural network (BNN) designed for efficient and accurate dense prediction tasks.
BiDense incorporates two key techniques: the Distribution-adaptive Binarizer (DAB) and the Channel-adaptive Full-precision Bypass (CFB)
arXiv Detail & Related papers (2024-11-15T16:46:04Z) - ReActXGB: A Hybrid Binary Convolutional Neural Network Architecture for Improved Performance and Computational Efficiency [0.0]
We propose a hybrid model named ReActXGB, where we replace the fully convolutional layer of ReActNet-A with XGBoost.
This modification aims to narrow the performance gap between BCNNs and real-valued networks while maintaining lower computational costs.
arXiv Detail & Related papers (2024-05-11T16:38:50Z) - BiFSMNv2: Pushing Binary Neural Networks for Keyword Spotting to Real-Network Performance [54.214426436283134]
Deep neural networks, such as the Deep-FSMN, have been widely studied for keyword spotting (KWS) applications.
We present a strong yet efficient binary neural network for KWS, namely BiFSMNv2, pushing it to the real-network accuracy performance.
We highlight that benefiting from the compact architecture and optimized hardware kernel, BiFSMNv2 can achieve an impressive 25.1x speedup and 20.2x storage-saving on edge hardware.
arXiv Detail & Related papers (2022-11-13T18:31:45Z) - IR2Net: Information Restriction and Information Recovery for Accurate Binary Neural Networks [24.42067007684169]
Weight and activation binarization can efficiently compress deep neural networks and accelerate model inference, but cause severe accuracy degradation.
We propose IR$^2$Net to stimulate the potential of BNNs and improve the network accuracy by restricting the input information and recovering the feature information.
Experimental results demonstrate that our approach still achieves comparable accuracy even with $\sim$10x floating-point operations (FLOPs) reduction for ResNet-18.
arXiv Detail & Related papers (2022-10-06T02:03:26Z) - Distribution-sensitive Information Retention for Accurate Binary Neural Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
We deploy DIR-Net on real-world resource-limited devices, achieving 11.1x storage saving and 5.4x speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z) - High-Capacity Expert Binary Networks [56.87581500474093]
Network binarization is a promising hardware-aware direction for creating efficient deep models.
Despite its memory and computational advantages, reducing the accuracy gap between binary models and their real-valued counterparts remains an unsolved challenging research problem.
We propose Expert Binary Convolution, which, for the first time, tailors conditional computing to binary networks by learning to select one data-specific expert binary filter at a time conditioned on input features.
arXiv Detail & Related papers (2020-10-07T17:58:10Z) - ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-07T02:12:02Z) - Exploring the Connection Between Binary and Spiking Neural Networks [1.329054857829016]
We bridge the recent algorithmic progress in training Binary Neural Networks and Spiking Neural Networks.
We show that training Spiking Neural Networks in the extreme quantization regime results in near full precision accuracies on large-scale datasets.
arXiv Detail & Related papers (2020-02-24T03:46:51Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all listed papers) and is not responsible for any consequences.