Elastic-Link for Binarized Neural Network
- URL: http://arxiv.org/abs/2112.10149v1
- Date: Sun, 19 Dec 2021 13:49:29 GMT
- Title: Elastic-Link for Binarized Neural Network
- Authors: Jie Hu, Wu Ziheng, Vince Tan, Zhilin Lu, Mengze Zeng, Enhua Wu
- Abstract summary: The "Elastic-Link" (EL) module enriches information flow within a BNN by adaptively adding real-valued input features to the subsequent convolutional output features.
EL produces a significant improvement on the challenging large-scale ImageNet dataset.
With the integration of ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
- Score: 9.83865304744923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has shown that Binarized Neural Networks (BNNs) are able to
greatly reduce computational costs and memory footprints, facilitating model
deployment on resource-constrained devices. However, in comparison to their
full-precision counterparts, BNNs suffer from severe accuracy degradation.
Research aiming to reduce this accuracy gap has thus far largely focused on
specific network architectures with few or no 1x1 convolutional layers, for
which standard binarization methods do not work well. Because 1x1 convolutions
are common in the design of modern architectures (e.g. GoogleNet, ResNet,
DenseNet), it is crucial to develop a method to binarize them effectively for
BNNs to be more widely adopted. In this work, we propose an "Elastic-Link" (EL)
module to enrich information flow within a BNN by adaptively adding real-valued
input features to the subsequent convolutional output features. The proposed EL
module is easily implemented and can be used in conjunction with other methods
for BNNs. We demonstrate that adding EL to BNNs produces a significant
improvement on the challenging large-scale ImageNet dataset. For example, we
raise the top-1 accuracy of binarized ResNet26 from 57.9% to 64.0%. EL also
aids convergence in the training of binarized MobileNet, for which a top-1
accuracy of 56.4% is achieved. Finally, with the integration of ReActNet, it
yields a new state-of-the-art result of 71.9% top-1 accuracy.
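To make the mechanism concrete, the following PyTorch sketch shows an Elastic-Link-style block that adds a learnably gated copy of the real-valued input features to the output of a binarized 1x1 convolution. The class names, the per-channel gate, and the clipped straight-through binarization are illustrative assumptions rather than the paper's exact formulation.

```python
# A minimal sketch, assuming a per-channel learnable gate and clipped
# straight-through binarization; names are hypothetical and the paper's
# exact EL formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a clipped straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()


class BinaryConv1x1(nn.Module):
    """1x1 convolution whose weights and activations are binarized."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 1, 1) * 0.01)

    def forward(self, x):
        xb = BinarizeSTE.apply(x)
        wb = BinarizeSTE.apply(self.weight)
        return F.conv2d(xb, wb)


class ElasticLinkBlock(nn.Module):
    """Adds a gated copy of the real-valued input features to the binary
    convolution's output, keeping real-valued information in the flow."""

    def __init__(self, channels):
        super().__init__()
        self.bconv = BinaryConv1x1(channels, channels)
        self.bn = nn.BatchNorm2d(channels)
        # Learnable per-channel gate on the real-valued shortcut (assumption).
        self.gate = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        return self.bn(self.bconv(x)) + self.gate * x


# Example usage:
# block = ElasticLinkBlock(64)
# y = block(torch.randn(2, 64, 8, 8))  # y has shape (2, 64, 8, 8)
```

In this sketch the gate is initialized to one so the real-valued shortcut is active from the start of training; the paper may use a different parameterization of the adaptive link.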
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - ReActXGB: A Hybrid Binary Convolutional Neural Network Architecture for Improved Performance and Computational Efficiency [0.0]
We propose a hybrid model named ReActXGB, where we replace the fully convolutional layer of ReActNet-A with XGBoost.
This modification aims to narrow the performance gap between BCNNs and real-valued networks while maintaining lower computational costs.
arXiv Detail & Related papers (2024-05-11T16:38:50Z) - Boosting Binary Neural Networks via Dynamic Thresholds Learning [21.835748440099586]
We introduce DySign to reduce information loss and boost the representational capacity of BNNs.
For DCNNs, DyBCNNs based on two backbones achieve 71.2% and 67.4% top-1 accuracy on the ImageNet dataset.
For ViTs, DyCCT demonstrates the superiority of the convolutional embedding layer in fully binarized ViTs and reaches 56.1% on the ImageNet dataset.
arXiv Detail & Related papers (2022-11-04T07:18:21Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Dynamic Binary Neural Network by learning channel-wise thresholds [9.432747511001246]
We propose a dynamic BNN (DyBNN) that incorporates dynamic learnable channel-wise thresholds for the Sign function and shift parameters for PReLU; a minimal sketch of this mechanism appears after the related-papers list below.
DyBNN built on the two ReActNet backbones (MobileNetV1 and ResNet18) achieves 71.2% and 67.4% top-1 accuracy on the ImageNet dataset.
arXiv Detail & Related papers (2021-10-08T17:41:36Z) - Self-Distribution Binary Neural Networks [18.69165083747967]
We study binary neural networks (BNNs), in which both the weights and activations are binary (i.e., 1-bit representations).
We propose the Self-Distribution Binary Neural Network (SD-BNN).
Experiments on CIFAR-10 and ImageNet datasets show that the proposed SD-BNN consistently outperforms the state-of-the-art (SOTA) BNNs.
arXiv Detail & Related papers (2021-03-03T13:39:52Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills from real-valued networks to binary networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - FTBNN: Rethinking Non-linearity for 1-bit CNNs and Going Beyond [23.5996182207431]
We show that the binarized convolution process exhibits increasing linearity as it minimizes such error, which in turn hampers the BNN's discriminative ability.
We re-investigate and tune appropriate non-linear modules to resolve this contradiction, leading to a strong baseline that achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-10-19T08:11:48Z) - Distillation Guided Residual Learning for Binary Convolutional Neural
Networks [83.6169936912264]
It is challenging to bridge the performance gap between a Binary CNN (BCNN) and a Floating-point CNN (FCNN).
We observe that this performance gap leads to substantial residuals between the intermediate feature maps of the BCNN and the FCNN.
To minimize the performance gap, we force the BCNN to produce intermediate feature maps similar to those of the FCNN.
This training strategy, i.e., optimizing each binary convolutional block with a block-wise distillation loss derived from the FCNN, leads to more effective optimization of the BCNN; a minimal sketch of such a loss appears after the list below.
arXiv Detail & Related papers (2020-07-10T07:55:39Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features from the original full-precision networks into high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
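For the dynamic-threshold entries above (DySign and DyBNN), a minimal sketch of channel-wise learnable thresholds for the Sign activation and a shifted PReLU is given below; the module names, initialization, and straight-through estimator are assumptions and may differ from the papers' exact methods.

```python
# A minimal sketch, assuming learnable per-channel thresholds for the Sign
# activation and a per-channel shift before PReLU; names and initialization
# are hypothetical.
import torch
import torch.nn as nn


class ChannelwiseSign(nn.Module):
    """y_c = sign(x_c - t_c) with a learnable threshold t_c per channel,
    trained with a clipped straight-through estimator."""

    def __init__(self, channels):
        super().__init__()
        self.threshold = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        shifted = x - self.threshold
        binary = torch.sign(shifted)
        ste = shifted.clamp(-1.0, 1.0)
        # Forward pass uses the binary values; gradients flow through `ste`.
        return (binary - ste).detach() + ste


class ShiftedPReLU(nn.Module):
    """PReLU applied after a learnable per-channel shift."""

    def __init__(self, channels):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.prelu = nn.PReLU(channels)

    def forward(self, x):
        return self.prelu(x - self.shift)
```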
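For the Distillation Guided Residual Learning entry above, a block-wise feature-distillation loss between a binary student and its full-precision teacher can be sketched as follows; the MSE distance and uniform block weights are assumptions.

```python
# A minimal sketch of a block-wise feature-distillation loss between a binary
# student network and a full-precision teacher; the MSE distance and uniform
# block weights are assumptions.
import torch
import torch.nn.functional as F


def blockwise_distillation_loss(student_feats, teacher_feats, weights=None):
    """student_feats / teacher_feats: lists of per-block feature maps with
    matching shapes; returns the (optionally weighted) summed MSE."""
    if weights is None:
        weights = [1.0] * len(student_feats)
    loss = torch.zeros((), device=student_feats[0].device)
    for w, s, t in zip(weights, student_feats, teacher_feats):
        loss = loss + w * F.mse_loss(s, t.detach())
    return loss
```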