E2FIF: Push the limit of Binarized Deep Imagery Super-resolution using
End-to-end Full-precision Information Flow
- URL: http://arxiv.org/abs/2207.06893v1
- Date: Thu, 14 Jul 2022 13:24:27 GMT
- Title: E2FIF: Push the limit of Binarized Deep Imagery Super-resolution using
End-to-end Full-precision Information Flow
- Authors: Zhiqiang Lang, Lei Zhang, Wei Wei
- Abstract summary: Binary neural network (BNN) provides a promising solution to deploy parameter-intensive deep single image super-resolution (SISR) models onto real devices with limited storage and computational resources.
To achieve comparable performance with the full-precision counterpart, most existing BNNs for SISR mainly focus on compensating the information loss incurred by binarizing weights and activations in the network.
We propose to introduce a full-precision skip connection or its variant over each binarized convolution layer across the entire network, which can increase the forward expressive capability and the accuracy of the back-propagated gradients.
- Score: 16.84357146564702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary neural network (BNN) provides a promising solution to deploy
parameter-intensive deep single image super-resolution (SISR) models onto real
devices with limited storage and computational resources. To achieve performance
comparable to the full-precision counterpart, most existing BNNs for SISR
mainly focus on compensating the information loss incurred by binarizing
weights and activations in the network through better approximations to the
binarized convolution. In this study, we revisit the difference between BNNs
and their full-precision counterparts and argue that the key to good
generalization performance of BNNs lies in preserving a complete full-precision
information flow as well as an accurate gradient flow passing through each
binarized convolution layer. Inspired by this, we propose to introduce a
full-precision skip connection or its variant over each binarized convolution
layer across the entire network, which can increase the forward expressive
capability and the accuracy of the back-propagated gradients, thus enhancing the
generalization performance. More importantly, such a scheme is applicable to
any existing BNN backbones for SISR without introducing any additional
computation cost. To verify its efficacy, we evaluate it using four different
backbones for SISR on four benchmark datasets and report clearly superior
performance over existing BNNs and even some 4-bit competitors.
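The sketch below illustrates the core idea in a minimal PyTorch form: a full-precision identity skip connection wrapped around a binarized convolution, with a straight-through estimator for the sign function. The module names (SignSTE, BinaryConv2d, E2FIFBlock) and layer details are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a binarized conv layer wrapped by a full-precision
# skip connection (illustrative, not the official E2FIF code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Binarize to {-1, +1}; pass gradients through where |x| <= 1."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

class BinaryConv2d(nn.Module):
    """Convolution with binarized weights and activations."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01)
        self.padding = kernel_size // 2

    def forward(self, x):
        bw = SignSTE.apply(self.weight)
        bx = SignSTE.apply(x)
        return F.conv2d(bx, bw, padding=self.padding)

class E2FIFBlock(nn.Module):
    """Binarized conv plus a full-precision identity skip: the real-valued
    input bypasses the binarized path, so a full-precision information
    flow (and its gradient) is preserved across the layer."""
    def __init__(self, channels):
        super().__init__()
        self.conv = BinaryConv2d(channels)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        return self.bn(self.conv(x)) + x  # full-precision skip connection

# usage sketch:
# x = torch.randn(1, 64, 48, 48)
# y = E2FIFBlock(64)(x)
```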
Related papers
- BiDense: Binarization for Dense Prediction [62.70804353158387]
BiDense is a generalized binary neural network (BNN) designed for efficient and accurate dense prediction tasks.
BiDense incorporates two key techniques: the Distribution-adaptive Binarizer (DAB) and the Channel-adaptive Full-precision Bypass (CFB).
arXiv Detail & Related papers (2024-11-15T16:46:04Z)
- ZOBNN: Zero-Overhead Dependable Design of Binary Neural Networks with Deliberately Quantized Parameters [0.0]
In this paper, we introduce a third advantage of very low-precision neural networks: improved fault-tolerance.
We investigate the impact of memory faults on state-of-the-art binary neural networks (BNNs) through comprehensive analysis.
We propose a technique to improve BNN dependability by restricting the range of float parameters through a novel deliberately uniform quantization.
arXiv Detail & Related papers (2024-07-06T05:31:11Z)
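As a rough illustration of restricting the range of the remaining float parameters via uniform quantization (the ZOBNN entry above), a minimal sketch follows; the clipping bound and number of levels are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of "deliberately uniform quantization" of a BNN's remaining
# float parameters: clamp each value to a fixed range and snap it to a
# uniform grid, so a memory fault can only move it within that range.
import torch

def deliberate_uniform_quantize(param: torch.Tensor,
                                bound: float = 4.0,
                                levels: int = 256) -> torch.Tensor:
    step = 2.0 * bound / (levels - 1)
    clamped = param.clamp(-bound, bound)
    return (clamped / step).round() * step

# usage sketch: quantize e.g. batch-norm scales after training
# bn.weight.data = deliberate_uniform_quantize(bn.weight.data)
```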
- ReActXGB: A Hybrid Binary Convolutional Neural Network Architecture for Improved Performance and Computational Efficiency [0.0]
We propose a hybrid model named ReActXGB, where we replace the fully convolutional layer of ReActNet-A with XGBoost.
This modification aims to narrow the performance gap between BCNNs and real-valued networks while maintaining lower computational costs.
arXiv Detail & Related papers (2024-05-11T16:38:50Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Elastic-Link for Binarized Neural Network [9.83865304744923]
"Elastic-Link" (EL) module enrich information flow within a BNN by adaptively adding real-valued input features to the subsequent convolutional output features.
EL produces a significant improvement on the challenging large-scale ImageNet dataset.
With the integration of ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
arXiv Detail & Related papers (2021-12-19T13:49:29Z)
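To make the Elastic-Link idea above concrete, here is a minimal hedged sketch: a learned channel-wise gate re-injects the real-valued input features into the convolutional output. The gating design and module names are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of an Elastic-Link-style block: the real-valued input is
# added back to the output of the (nominally binarized) convolution
# through a learned channel-wise gate.
import torch
import torch.nn as nn

class ElasticLinkBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Stand-in for a binarized convolution; a real BNN would binarize
        # weights and activations here.
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        # Squeeze-and-excitation-style channel gate (an assumption).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.bn(self.conv(x))
        # Adaptively add real-valued input features to the conv output.
        return out + self.gate(x) * x
```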
- Distribution-sensitive Information Retention for Accurate Binary Neural Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
We deploy our DIR-Net on real-world resource-limited devices, achieving 11.1 times storage savings and a 5.4 times speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z)
- Fully Quantized Image Super-Resolution Networks [81.75002888152159]
We propose a Fully Quantized image Super-Resolution framework (FQSR) to jointly optimize efficiency and accuracy.
We apply our quantization scheme on multiple mainstream super-resolution architectures, including SRResNet, SRGAN and EDSR.
Our FQSR with low-bit quantization achieves performance on par with the full-precision counterparts on five benchmark datasets.
arXiv Detail & Related papers (2020-11-29T03:53:49Z)
- Distillation Guided Residual Learning for Binary Convolutional Neural Networks [83.6169936912264]
It is challenging to bridge the performance gap between a Binary CNN (BCNN) and a Floating-point CNN (FCNN).
We observe that this performance gap leads to substantial residuals between the intermediate feature maps of the BCNN and the FCNN.
To minimize the performance gap, we enforce the BCNN to produce intermediate feature maps similar to those of the FCNN.
This training strategy, i.e., optimizing each binary convolutional block with a block-wise distillation loss derived from the FCNN, leads to a more effective optimization of the BCNN.
arXiv Detail & Related papers (2020-07-10T07:55:39Z)
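As a rough sketch of the block-wise distillation strategy described above, the function below adds a per-block MSE term between the binary student's and the full-precision teacher's feature maps to the ordinary task loss; the loss form and weighting are assumptions, not the paper's exact objective.

```python
# Sketch of a block-wise distillation loss: per-block MSE between student
# (binary) and teacher (full-precision) feature maps, added to the task loss.
import torch
import torch.nn.functional as F

def blockwise_distillation_loss(student_feats, teacher_feats, task_loss, alpha=1.0):
    """student_feats / teacher_feats: lists of per-block feature maps with
    matching shapes; task_loss: the ordinary training loss (scalar tensor)."""
    distill = sum(F.mse_loss(s, t.detach())
                  for s, t in zip(student_feats, teacher_feats))
    return task_loss + alpha * distill
```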
- ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap with real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
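For context on the "generalized activation functions" mentioned in the ReActNet entry above, the following is a hedged sketch of sign and PReLU activations with learnable per-channel shifts, in the spirit of RSign/RPReLU; the exact parameterization may differ from the paper, and the straight-through estimator needed to train through the sign is omitted for brevity.

```python
# Hedged sketch of ReActNet-style generalized activations with learnable
# per-channel shifts (not the official implementation).
import torch
import torch.nn as nn

class RSign(nn.Module):
    """sign(x - alpha) with a learnable per-channel threshold alpha."""
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        return torch.sign(x - self.alpha)

class RPReLU(nn.Module):
    """PReLU with learnable per-channel shifts before and after the activation."""
    def __init__(self, channels):
        super().__init__()
        self.shift_in = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.shift_out = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.prelu = nn.PReLU(channels)

    def forward(self, x):
        return self.prelu(x - self.shift_in) + self.shift_out
```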