INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold
- URL: http://arxiv.org/abs/2204.07439v3
- Date: Thu, 19 Oct 2023 15:26:56 GMT
- Title: INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold
- Authors: Changhun Lee, Hyungjun Kim, Eunhyeok Park, Jae-Joon Kim
- Abstract summary: We propose a novel BNN design called Binary Neural Network with INSTAnce-aware threshold (INSTA-BNN).
INSTA-BNN controls the quantization threshold dynamically in an input-dependent or instance-aware manner.
Our study shows that INSTA-BNN outperforms the baseline by 3.0% and 2.8% on the ImageNet classification task with comparable computing cost.
- Score: 16.890849856271185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Binary Neural Networks (BNNs) have emerged as a promising solution for
reducing the memory footprint and compute costs of deep neural networks, but
they suffer from quality degradation due to limited representational freedom, as
activations and weights are constrained to binary values. To compensate for the
accuracy drop, we propose a novel BNN design called Binary Neural Network with
INSTAnce-aware threshold (INSTA-BNN), which controls the quantization threshold
dynamically in an input-dependent or instance-aware manner. According to our
observation, higher-order statistics can be a representative metric to estimate
the characteristics of the input distribution. INSTA-BNN is designed to adjust
the threshold dynamically considering various information, including
higher-order statistics, but it is also optimized judiciously to realize
minimal overhead on a real device. Our extensive study shows that INSTA-BNN
outperforms the baseline by 3.0% and 2.8% on the ImageNet classification task
with comparable computing cost, achieving 68.5% and 72.2% top-1 accuracy on
ResNet-18 and MobileNetV1 based models, respectively.
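As a rough illustration of the instance-aware threshold idea, the sketch below adjusts a per-channel binarization threshold using a per-instance higher-order statistic of the input. It is a minimal PyTorch sketch under assumed design choices (a skewness-like normalized third moment, learnable mixing parameters, and a straight-through surrogate), not the authors' exact modules.

```python
import torch
import torch.nn as nn

class InstanceAwareBinaryAct(nn.Module):
    """Binarize activations with an instance-aware, per-channel threshold.

    Minimal sketch only: the statistic, the learnable mixing parameters,
    and the straight-through surrogate are illustrative assumptions.
    """

    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.base_threshold = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.stat_scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        # per-instance, per-channel statistics over the spatial dimensions
        mu = x.mean(dim=(2, 3), keepdim=True)
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        # higher-order statistic summarizing the input distribution
        skew = ((x - mu) ** 3).mean(dim=(2, 3), keepdim=True) / (var + self.eps) ** 1.5
        threshold = self.base_threshold + self.stat_scale * skew

        hard = torch.sign(x - threshold)              # forward: binary output
        soft = torch.clamp(x - threshold, -1.0, 1.0)  # backward: straight-through surrogate
        return soft + (hard - soft).detach()
```

In a full model, a module like this would stand in for the plain Sign activation in front of each binary convolution.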
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z) - Boosting Binary Neural Networks via Dynamic Thresholds Learning [21.835748440099586]
We introduce DySign to reduce information loss and boost the representational capacity of BNNs.
For DCNNs, DyBCNNs based on two backbones achieve 71.2% and 67.4% top-1 accuracy on the ImageNet dataset.
For ViTs, DyCCT demonstrates the benefit of the convolutional embedding layer in fully binarized ViTs and reaches 56.1% top-1 accuracy on the ImageNet dataset.
arXiv Detail & Related papers (2022-11-04T07:18:21Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
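The bilinear relationship mentioned above refers to the coupling between binary weights and their per-channel scale factors in the reconstruction w ≈ α · sign(w). The snippet below only shows these two coupled variables with the classical closed-form scale (mean of absolute weights); it does not reproduce RBONN's recurrent bilinear optimization.

```python
import torch

def binarize_with_scale(w):
    """Per-output-channel binarization w_hat = alpha * sign(w).

    Shows the two coupled variables (binary weights and scale factors);
    alpha here is the classical closed-form mean(|w|), not the recurrent
    bilinear optimization of the cited paper.
    """
    alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)  # (C_out, 1, 1, 1)
    return alpha * torch.sign(w), alpha

w = torch.randn(8, 4, 3, 3)          # conv weights: 8 output, 4 input channels
w_hat, alpha = binarize_with_scale(w)
```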
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
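The summary above does not detail the algorithm; the sketch below only shows generic ingredients of DNN-to-SNN conversion that such methods tune: per-layer firing thresholds calibrated from observed DNN activations, and an integrate-and-fire neuron whose firing rate approximates the original activation. Both functions are illustrative assumptions, not the cited paper's training procedure.

```python
import torch

@torch.no_grad()
def calibrate_thresholds(layer_activations, percentile=99.9):
    """One firing threshold per layer from a calibration batch of DNN activations."""
    # a high percentile instead of the max makes the threshold robust to outliers
    return [torch.quantile(a.flatten(), percentile / 100.0).item() for a in layer_activations]

def if_firing_rate(x, threshold, timesteps):
    """Integrate-and-fire neuron with reset-by-subtraction; rate * threshold ~ ReLU(x)."""
    membrane, spikes = 0.0, 0
    for _ in range(timesteps):
        membrane += x                 # constant input current at every timestep
        if membrane >= threshold:
            spikes += 1
            membrane -= threshold     # soft reset keeps the residual charge
    return spikes / timesteps
```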
arXiv Detail & Related papers (2021-12-22T18:47:45Z) - Elastic-Link for Binarized Neural Network [9.83865304744923]
The "Elastic-Link" (EL) module enriches information flow within a BNN by adaptively adding real-valued input features to the subsequent convolutional output features.
EL produces a significant improvement on the challenging large-scale ImageNet dataset.
With the integration of ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
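A rough sketch of the idea: a binary convolution block whose real-valued input is mixed back into the output through a learnable per-channel gate. The gating form, the plain sign binarizer, and the layer layout are assumptions, not the exact EL module.

```python
import torch
import torch.nn as nn

class ElasticLinkBlock(nn.Module):
    """Binary conv block that adaptively re-injects its real-valued input."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        # one learnable gate per channel controls how much real-valued
        # input is mixed back into the binary-path output
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        out = self.bn(self.conv(torch.sign(x)))    # binary activation path (no STE in this sketch)
        return out + torch.sigmoid(self.gate) * x  # adaptively add real-valued input features
```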
arXiv Detail & Related papers (2021-12-19T13:49:29Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
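To make the compression idea concrete, the sketch below stores each 3x3 binary kernel as an index into a small codebook of binary kernels, so a kernel costs log2(K) bits rather than 9. The codebook size, the per-channel scale, and the reconstruction step are assumptions; the kernel-aware training framework itself is not reproduced.

```python
import torch
import torch.nn.functional as F

def subbit_conv2d(x, codebook, indices, scale):
    """Convolution whose 3x3 binary kernels are picked from a small codebook.

    codebook: (K, 3, 3) tensor with values in {-1, +1}
    indices:  (C_out, C_in) integer tensor selecting one codeword per kernel
    scale:    (C_out,) per-output-channel scaling factor
    """
    weight = codebook[indices]                  # (C_out, C_in, 3, 3)
    weight = weight * scale.view(-1, 1, 1, 1)   # rescale the binary kernels
    return F.conv2d(x, weight, padding=1)

# tiny usage example with random data
codebook = torch.randint(0, 2, (16, 3, 3)).float() * 2 - 1  # 16 binary codewords
indices = torch.randint(0, 16, (8, 4))                      # 8 output, 4 input channels
scale = torch.rand(8)
y = subbit_conv2d(torch.randn(1, 4, 32, 32), codebook, indices, scale)
```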
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Dynamic Binary Neural Network by learning channel-wise thresholds [9.432747511001246]
We propose a dynamic BNN (DyBNN) incorporating dynamic learnable channel-wise thresholds for the Sign function and shift parameters for PReLU.
DyBNN based on the two ReActNet backbones (MobileNetV1 and ResNet18) achieves 71.2% and 67.4% top-1 accuracy on the ImageNet dataset.
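A minimal sketch of an input-dependent, channel-wise Sign threshold, assuming a squeeze-and-excitation style branch over globally pooled features; the cited paper's exact branch design may differ, and the learnable PReLU shift is omitted for brevity.

```python
import torch
import torch.nn as nn

class DynamicSign(nn.Module):
    """Sign activation whose per-channel threshold depends on the input."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # per-instance, per-channel threshold from globally pooled features
        thr = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        hard = torch.sign(x - thr)
        soft = torch.clamp(x - thr, -1.0, 1.0)  # straight-through surrogate
        return soft + (hard - soft).detach()
```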
arXiv Detail & Related papers (2021-10-08T17:41:36Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
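The guidance on the final prediction distribution can be written as a standard teacher-student divergence; below is a generic distillation loss, assuming a temperature-scaled KL term, which may differ from the cited paper's exact formulation.

```python
import torch.nn.functional as F

def distribution_distill_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between teacher and student prediction distributions.

    Generic sketch of guiding a binary student with a real-valued teacher
    on the final prediction distribution.
    """
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits / t, dim=1)
    # batchmean gives the proper per-sample KL scaling
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)
```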
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We also design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
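For reference, a generic bit-wise formulation of a ternary inner product is shown below, encoding each ternary vector as two bitmasks (+1 positions and -1 positions). This baseline costs four AND-plus-popcount terms; the cited paper's contribution is a smarter encoding that roughly halves that cost, which is not reproduced here. Requires Python 3.10+ for int.bit_count().

```python
def ternary_dot_bitwise(pos_a, neg_a, pos_b, neg_b):
    """Inner product of two ternary vectors encoded as +1/-1 bitmasks."""
    return (
        (pos_a & pos_b).bit_count() + (neg_a & neg_b).bit_count()
        - (pos_a & neg_b).bit_count() - (neg_a & pos_b).bit_count()
    )

# example: a = (+1, 0, -1, +1), b = (-1, +1, -1, 0)  ->  a.b = -1 + 0 + 1 + 0 = 0
a_pos, a_neg = 0b1001, 0b0100   # bit i set where a[i] == +1 / a[i] == -1
b_pos, b_neg = 0b0010, 0b0101
assert ternary_dot_bitwise(a_pos, a_neg, b_pos, b_neg) == 0
```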
arXiv Detail & Related papers (2020-08-12T04:26:18Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their accuracy is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features of the original full-precision networks into high-dimensional quantization features.
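A rough sketch of the widening idea: project features into a wider space with a 1x1 convolution, quantize there, and squeeze back to the original width. The expansion ratio, the plain sign quantizer, and the layer placement are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class WidenQuantizeSqueeze(nn.Module):
    """Project features to a wider space before quantization, then squeeze back."""

    def __init__(self, channels, expansion=4):
        super().__init__()
        wide = channels * expansion
        self.widen = nn.Conv2d(channels, wide, kernel_size=1, bias=False)
        self.conv = nn.Conv2d(wide, wide, kernel_size=3, padding=1, bias=False)
        self.squeeze = nn.Conv2d(wide, channels, kernel_size=1, bias=False)

    def forward(self, x):
        h = self.widen(x)
        h = torch.sign(h)   # quantize in the widened space (no STE in this sketch)
        h = self.conv(h)
        return self.squeeze(h)
```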
arXiv Detail & Related papers (2020-02-03T04:11:13Z)