Basic Binary Convolution Unit for Binarized Image Restoration Network
- URL: http://arxiv.org/abs/2210.00405v1
- Date: Sun, 2 Oct 2022 01:54:40 GMT
- Title: Basic Binary Convolution Unit for Binarized Image Restoration Network
- Authors: Bin Xia, Yulun Zhang, Yitong Wang, Yapeng Tian, Wenming Yang, Radu
Timofte, and Luc Van Gool
- Abstract summary: In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for image restoration tasks.
Based on our findings and analyses, we design a simple yet efficient basic binary convolution unit (BBCU).
Our BBCU significantly outperforms other BNNs and lightweight models, which shows that BBCU can serve as a basic unit for binarized IR networks.
- Score: 146.0988597062618
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lighter and faster image restoration (IR) models are crucial for the
deployment on resource-limited devices. Binary neural network (BNN), one of the
most promising model compression methods, can dramatically reduce the
computations and parameters of full-precision convolutional neural networks
(CNN). However, BNNs have different properties from full-precision CNNs, so
the experience gained from designing CNNs hardly transfers to BNNs. In
this study, we reconsider components in binary convolution, such as residual
connection, BatchNorm, activation function, and structure, for IR tasks. We
conduct systematic analyses to explain each component's role in binary
convolution and discuss the pitfalls. Specifically, we find that residual
connection can reduce the information loss caused by binarization; BatchNorm
can solve the value range gap between residual connection and binary
convolution; and the position of the activation function dramatically affects the
performance of BNN. Based on our findings and analyses, we design a simple yet
efficient basic binary convolution unit (BBCU). Furthermore, we divide IR
networks into four parts and specially design variants of BBCU for each part to
explore the benefit of binarizing these parts. We conduct experiments on
different IR tasks, and our BBCU significantly outperforms other BNNs and
lightweight models, which shows that BBCU can serve as a basic unit for
binarized IR networks. All codes and models will be released.
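The abstract's findings (a full-precision residual connection to limit binarization loss, and BatchNorm to close the value-range gap between the residual and the binary convolution) can be sketched as a forward-only toy unit in plain Python. The 1-D convolution, zero padding, and exact ordering of operations below are illustrative assumptions, not the paper's actual BBCU design:

```python
def binarize(xs):
    # Sign binarization to {-1, +1}; training would additionally need a
    # straight-through estimator, which this forward-only sketch omits.
    return [1.0 if v >= 0 else -1.0 for v in xs]

def batchnorm(xs, eps=1e-5):
    # Normalizes the binary-conv output so its value range matches the
    # full-precision residual branch (the "value range gap" in the abstract).
    m = sum(xs) / len(xs)
    var = sum((v - m) ** 2 for v in xs) / len(xs)
    return [(v - m) / (var + eps) ** 0.5 for v in xs]

def binary_conv1d(xs, w_real):
    # 1-D binary convolution: input and weights are both binarized, so each
    # output is a sum of +/-1 products (XNOR + popcount on real hardware).
    xb, wb = binarize(xs), binarize(w_real)
    k, pad = len(wb), len(w_real) // 2
    padded = [0.0] * pad + xb + [0.0] * pad
    return [sum(padded[i + j] * wb[j] for j in range(k))
            for i in range(len(xs))]

def bbcu_like_unit(xs, w_real):
    # Hypothetical ordering suggested by the abstract: binary conv ->
    # BatchNorm (range alignment) -> full-precision residual addition.
    out = batchnorm(binary_conv1d(xs, w_real))
    return [o + x for o, x in zip(out, xs)]

y = bbcu_like_unit([0.3, -1.2, 0.7, 0.1, -0.5, 2.0], [0.4, -0.1, 0.9])
print(len(y))
```

Because the convolution sees only +/-1 values, its raw outputs are integers in [-k, k], while the residual branch carries small full-precision values; the BatchNorm step rescales the former toward the latter before the addition, which is the range-alignment role the abstract attributes to it.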
Related papers
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies [52.691032025163175]
Existing Binary Neural Networks (BNNs) operate mainly on local convolutions with a binarization function.
We present new designs of binary neural modules, which outperform prior binary neural modules by a large margin.
arXiv Detail & Related papers (2022-09-03T11:51:04Z)
- Sparsifying Binary Networks [3.8350038566047426]
Binary neural networks (BNNs) have demonstrated their ability to solve complex tasks with accuracy comparable to full-precision deep neural networks (DNNs).
Despite recent improvements, they suffer from a fixed and limited compression factor that may prove insufficient for devices with very limited resources.
We propose sparse binary neural networks (SBNNs), a novel model and training scheme which introduces sparsity in BNNs and a new quantization function for binarizing the network's weights.
arXiv Detail & Related papers (2022-07-11T15:54:41Z)
- Network Binarization via Contrastive Learning [16.274341164897827]
We establish a novel contrastive learning framework for training Binary Neural Networks (BNNs).
Mutual information (MI) is introduced as the metric to measure the information shared between binary and full-precision (FP) activations.
Results show that our method can be implemented as a pile-up module on existing state-of-the-art binarization methods.
arXiv Detail & Related papers (2022-07-06T21:04:53Z)
- BCNN: Binary Complex Neural Network [16.82755328827758]
Binarized neural networks, or BNNs, show great promise in edge-side applications with resource limited hardware.
We introduce complex representations into BNNs and propose the binary complex neural network (BCNN).
BCNN improves BNN by strengthening its learning capability through complex representation and extending its applicability to complex-valued input data.
arXiv Detail & Related papers (2021-03-28T03:35:24Z)
- Self-Distribution Binary Neural Networks [18.69165083747967]
We study binary neural networks (BNNs), in which both the weights and activations are binary (i.e., 1-bit representation).
We propose the Self-Distribution Binary Neural Network (SD-BNN).
Experiments on CIFAR-10 and ImageNet datasets show that the proposed SD-BNN consistently outperforms the state-of-the-art (SOTA) BNNs.
arXiv Detail & Related papers (2021-03-03T13:39:52Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- SoFAr: Shortcut-based Fractal Architectures for Binary Convolutional Neural Networks [7.753767947048147]
We propose two Shortcut-based Fractal Architectures (SoFAr) specifically designed for Binary Convolutional Neural Networks (BCNNs).
Our proposed SoFAr combines the adoption of shortcuts and the fractal architectures in one unified model, which is helpful in the training of BCNNs.
Results show that our proposed SoFAr achieves better accuracy compared with shortcut-based BCNNs.
arXiv Detail & Related papers (2020-09-11T10:00:47Z)
- Distillation Guided Residual Learning for Binary Convolutional Neural Networks [83.6169936912264]
It is challenging to bridge the performance gap between Binary CNNs (BCNNs) and Floating-point CNNs (FCNNs).
We observe that this performance gap leads to substantial residuals between the intermediate feature maps of the BCNN and the FCNN.
To minimize the performance gap, we enforce BCNN to produce similar intermediate feature maps with the ones of FCNN.
This training strategy, i.e., optimizing each binary convolutional block with a block-wise distillation loss derived from the FCNN, leads to a more effective optimization of the BCNN.
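The block-wise training strategy above, matching each binary block's feature maps to the full-precision network's, can be sketched as a sum of per-block losses. The MSE form and the flattened list-of-lists feature representation are illustrative assumptions, not the paper's exact loss:

```python
def blockwise_distillation_loss(bcnn_feats, fcnn_feats):
    # bcnn_feats / fcnn_feats: one flattened feature map per binary block and
    # its full-precision counterpart. Summing a per-block MSE term encourages
    # the BCNN to reproduce the FCNN's intermediate features block by block.
    total = 0.0
    for fb, ff in zip(bcnn_feats, fcnn_feats):
        total += sum((a - b) ** 2 for a, b in zip(fb, ff)) / len(fb)
    return total

# Identical feature maps give zero loss; any mismatch adds a positive term.
print(blockwise_distillation_loss([[1.0, -1.0]], [[1.0, -1.0]]))
```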
arXiv Detail & Related papers (2020-07-10T07:55:39Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.