AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
- URL: http://arxiv.org/abs/2208.08084v1
- Date: Wed, 17 Aug 2022 05:43:33 GMT
- Title: AdaBin: Improving Binary Neural Networks with Adaptive Binary Sets
- Authors: Zhijun Tu, Xinghao Chen, Pengju Ren, Yunhe Wang
- Abstract summary: This paper studies Binary Neural Networks (BNNs), in which weights and activations are both binarized into 1-bit values.
We present a simple yet effective approach called AdaBin to adaptively obtain the optimal binary sets.
Experimental results on benchmark models and datasets demonstrate that the proposed AdaBin is able to achieve state-of-the-art performance.
- Score: 27.022212653067367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies Binary Neural Networks (BNNs), in which weights and
activations are both binarized into 1-bit values, thus greatly reducing memory
usage and computational complexity. Since modern deep neural networks adopt
sophisticated designs with complex architectures for the sake of accuracy, the
distributions of their weights and activations are highly diverse. Therefore,
the conventional sign function is not well suited to effectively binarizing the
full-precision values in BNNs. To this end, we present a simple yet
effective approach called AdaBin to adaptively obtain the optimal binary sets
$\{b_1, b_2\}$ ($b_1, b_2\in \mathbb{R}$) of weights and activations for each
layer instead of a fixed set (i.e., $\{-1, +1\}$). In this way, the proposed
method can better fit different distributions and increase the representation
ability of binarized features. In practice, we use the center position and
distance of 1-bit values to define a new binary quantization function. For the
weights, we propose an equalization method that aligns the symmetric center of
the binary distribution with that of the real-valued distribution and minimizes
the Kullback-Leibler divergence between them. Meanwhile, we introduce a
gradient-based optimization method to obtain these two parameters for
activations, which are
jointly trained in an end-to-end manner. Experimental results on benchmark
models and datasets demonstrate that the proposed AdaBin is able to achieve
state-of-the-art performance. For instance, we obtain 66.4% Top-1 accuracy on
ImageNet with the ResNet-18 architecture and 69.4 mAP on PASCAL VOC with
SSD300.
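To make the parameterization concrete, below is a minimal PyTorch-style sketch of a binarizer whose output set is $\{b_1, b_2\} = \{\beta - \alpha, \beta + \alpha\}$, i.e. defined by a center and a half-distance as the abstract describes. The class name, the scalar per-layer parameters, and the straight-through gradient are illustrative assumptions, not AdaBin's exact quantizer.

```python
import torch


def sign_ste(x: torch.Tensor) -> torch.Tensor:
    # sign() with a straight-through gradient (identity in the backward pass)
    return x + (torch.sign(x) - x).detach()


class AdaptiveBinarizer(torch.nn.Module):
    """Binarize inputs into a learnable set {beta - alpha, beta + alpha}.

    beta acts as the center position of the binary set and alpha as half the
    distance between its two values; both are trained end-to-end, as the
    abstract describes for activations.  (Hypothetical sketch only.)
    """

    def __init__(self):
        super().__init__()
        self.beta = torch.nn.Parameter(torch.zeros(1))   # center of the binary set
        self.alpha = torch.nn.Parameter(torch.ones(1))   # half-distance between b1 and b2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.beta + self.alpha * sign_ste(x - self.beta)


# usage: binarize a batch of pre-activations
quantizer = AdaptiveBinarizer()
x_bin = quantizer(torch.randn(4, 8))   # values lie in {beta - alpha, beta + alpha}
```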
Related papers
- Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, the Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
arXiv Detail & Related papers (2023-05-17T15:36:08Z)
- Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z)
- Partial Binarization of Neural Networks for Budget-Aware Efficient Learning [10.613066533991292]
Binarization is a powerful compression technique for neural networks.
We propose a controlled approach to partial binarization, creating a budgeted binary neural network (B2NN) with our MixBin strategy.
arXiv Detail & Related papers (2022-11-12T20:30:38Z)
- Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies [52.691032025163175]
Existing Binary Neural Networks (BNNs) operate mainly on local convolutions with a binarization function.
We present new designs of binary neural modules, which outperform leading binary neural modules by a large margin.
arXiv Detail & Related papers (2022-09-03T11:51:04Z)
- Bimodal Distributed Binarized Neural Networks [3.0778860202909657]
Binarization techniques, however, suffer from non-negligible performance degradation compared to their full-precision counterparts.
We propose a Bi-Modal Distributed binarization method that imposes a bi-modal distribution on the network weights via kurtosis regularization.
arXiv Detail & Related papers (2022-04-05T06:07:05Z)
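As a rough illustration of kurtosis regularization, the sketch below pulls the standardized fourth moment of a weight tensor toward a bi-modal target; the target value, loss form, and function name are assumptions for illustration, not the paper's exact regularizer.

```python
import torch


def kurtosis_regularizer(w: torch.Tensor, target: float = 1.0) -> torch.Tensor:
    """Penalize deviation of the weight kurtosis from a target value.

    Kurtosis is the standardized fourth moment E[((w - mu) / sigma)^4]; a
    symmetric two-spike (bi-modal) distribution has kurtosis 1, so pulling
    the kurtosis toward 1 nudges weights toward a binarization-friendly
    shape.  Typically added to the task loss with a small weight.
    """
    mu = w.mean()
    sigma = w.std() + 1e-8                        # guard against zero variance
    kurt = torch.mean(((w - mu) / sigma) ** 4)
    return (kurt - target) ** 2
```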
- Exact Backpropagation in Binary Weighted Networks with Group Weight Transformations [0.0]
Quantization-based model compression serves as a high-performing and fast approach to inference.
Models that constrain the weights to binary values enable efficient implementation of the ubiquitous dot product.
arXiv Detail & Related papers (2021-07-03T10:29:34Z)
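As a toy illustration of why binary weights make the dot product cheap (not taken from this paper), the sketch below computes the dot product of two {-1, +1} vectors with XOR and popcount instead of multiplications.

```python
import numpy as np


def binary_dot(a_bits: np.ndarray, b_bits: np.ndarray) -> int:
    """Dot product of two {-1, +1} vectors stored as {0, 1} bit arrays.

    Matching bits contribute +1 and mismatching bits -1, so the dot product
    equals n - 2 * (#mismatches), i.e. it reduces to XOR plus popcount.
    """
    n = a_bits.size
    mismatches = np.count_nonzero(a_bits ^ b_bits)
    return n - 2 * mismatches


# sanity check against the ordinary floating-point dot product
a = np.random.randint(0, 2, 64)
b = np.random.randint(0, 2, 64)
assert binary_dot(a, b) == int(np.dot(2 * a - 1, 2 * b - 1))
```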
- A Bop and Beyond: A Second Order Optimizer for Binarized Neural Networks [0.0]
Optimization of Binary Neural Networks (BNNs) relies on approximating the real-valued weights with their binarized representations.
In this paper, we take an approach parallel to Adam, which also uses the second raw moment estimate to normalize the first raw moment before comparing it with the threshold.
We present two versions of the proposed optimizer: a biased one and a bias-corrected one, each with its own applications.
arXiv Detail & Related papers (2021-04-11T22:20:09Z)
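The update below is a rough sketch of this idea, assuming a Bop-style rule in which a binary weight is flipped when its gradient momentum, normalized by the second raw moment as in Adam, exceeds a threshold and agrees in sign with the weight; the class name, hyperparameters, and defaults are placeholders, not the paper's exact algorithm.

```python
import torch


class SecondOrderBopSketch:
    """Bop-style binary-weight optimizer sketch with Adam-like normalization."""

    def __init__(self, params, gamma=1e-3, sigma=1e-2, tau=1e-6, eps=1e-8):
        self.params = list(params)                            # binary weights in {-1, +1}
        self.gamma, self.sigma, self.tau, self.eps = gamma, sigma, tau, eps
        self.m = [torch.zeros_like(p) for p in self.params]   # first raw moment
        self.v = [torch.zeros_like(p) for p in self.params]   # second raw moment

    @torch.no_grad()
    def step(self):
        for p, m, v in zip(self.params, self.m, self.v):
            if p.grad is None:
                continue
            g = p.grad
            m.mul_(1 - self.gamma).add_(g, alpha=self.gamma)      # EMA of gradients
            v.mul_(1 - self.sigma).add_(g * g, alpha=self.sigma)  # EMA of squared gradients
            m_hat = m / (v.sqrt() + self.eps)                     # normalized first moment
            flip = (m_hat.abs() > self.tau) & (torch.sign(m_hat) == torch.sign(p))
            p[flip] = -p[flip]                                    # flip selected binary weights
```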
- Binarization Methods for Motor-Imagery Brain-Computer Interface Classification [18.722731794073756]
We propose methods for transforming real-valued weights to binary numbers for efficient inference.
By tuning the dimension of the binary embedding, we achieve almost the same accuracy in 4-class MI ($\leq$1.27% lower) compared to models with float16 weights.
Our method replaces the fully connected layer of CNNs with a binary augmented memory using bipolar random projection.
arXiv Detail & Related papers (2020-10-14T12:28:18Z)
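For intuition only, the following sketch shows a bipolar random projection that maps a real-valued feature vector to a {-1, +1} embedding; the function name and dimensions are made up here, and the paper's binary augmented-memory classifier involves more than this single step.

```python
import numpy as np


def bipolar_random_projection(x: np.ndarray, dim: int, seed: int = 0) -> np.ndarray:
    """Project a real-valued feature vector onto a {-1, +1} embedding.

    A fixed random bipolar matrix projects x to `dim` dimensions and sign()
    binarizes the result; such embeddings can be stored and compared with
    cheap bitwise operations instead of full-precision arithmetic.
    """
    rng = np.random.default_rng(seed)
    proj = rng.choice([-1.0, 1.0], size=(dim, x.size))   # bipolar projection matrix
    return np.sign(proj @ x)


features = np.random.randn(128)                  # e.g. features feeding the final layer
embedding = bipolar_random_projection(features, dim=1024)
```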
- Distillation Guided Residual Learning for Binary Convolutional Neural Networks [83.6169936912264]
It is challenging to bridge the performance gap between a Binary CNN (BCNN) and a Floating-point CNN (FCNN).
We observe that this performance gap leads to substantial residuals between the intermediate feature maps of the BCNN and the FCNN.
To minimize the performance gap, we enforce the BCNN to produce intermediate feature maps similar to those of the FCNN.
This training strategy, i.e., optimizing each binary convolutional block with a block-wise distillation loss derived from the FCNN, leads to a more effective optimization of the BCNN.
arXiv Detail & Related papers (2020-07-10T07:55:39Z)
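A minimal sketch of such a block-wise distillation objective, assuming matching feature-map shapes per block and a plain mean-squared-error residual; the paper's actual loss may be weighted or normalized differently.

```python
import torch
import torch.nn.functional as F


def blockwise_distillation_loss(bcnn_feats, fcnn_feats):
    """Sum per-block losses pushing the binary network's intermediate feature
    maps toward those of the full-precision network.

    bcnn_feats / fcnn_feats: lists of tensors, one per convolutional block,
    with matching shapes.  The full-precision features are detached so the
    gradient only updates the binary network.
    """
    return sum(F.mse_loss(b, f.detach()) for b, f in zip(bcnn_feats, fcnn_feats))
```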
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose the use of evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
- Training Binary Neural Networks with Real-to-Binary Convolutions [52.91164959767517]
We show how to train binary networks to within a few percentage points of their full-precision counterpart.
We show how to build a strong baseline, which already achieves state-of-the-art accuracy.
We show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet.
arXiv Detail & Related papers (2020-03-25T17:54:38Z)