A comprehensive review of Binary Neural Network
- URL: http://arxiv.org/abs/2110.06804v1
- Date: Mon, 11 Oct 2021 22:44:15 GMT
- Title: A comprehensive review of Binary Neural Network
- Authors: Chunyu Yuan and Sos S. Agaian
- Abstract summary: The Binary Neural Network (BNN) method is an extreme application of convolutional neural network (CNN) parameter quantization.
Recent developments in BNN have produced many algorithms and solutions that help close the accuracy gap with full-precision models.
- Score: 2.918940961856197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Binary Neural Network (BNN) method is an extreme application of
convolutional neural network (CNN) parameter quantization. As opposed to the
original CNN methods, which employ floating-point computation with
full-precision weights and activations, BNNs use 1-bit activations and weights.
With BNNs, a significant amount of storage, network complexity, and energy
consumption can be reduced, and neural networks can be implemented more
efficiently in embedded applications. Unfortunately, binarization causes severe
information loss. A gap
still exists between full-precision CNN models and their binarized
counterparts. Recent developments in BNN research have produced many algorithms
and solutions that help narrow this gap. This article provides a full
overview of recent developments in BNN. The present paper focuses exclusively
on networks with 1-bit activations and weights, as opposed to previous surveys
in which low-bit works are mixed in. In this paper, we conduct a complete
investigation of BNN's development from their predecessors to the latest BNN
algorithms and techniques, presenting a broad design pipeline, and discussing
each module's variants. Along the way, this paper examines BNN (a) purpose:
their early successes and challenges; (b) BNN optimization: selected
representative works that contain key optimization techniques; (c) deployment:
open-source frameworks for BNN modeling and development; (d) terminal:
efficient computing architectures and devices for BNN and (e) applications:
diverse applications with BNN. Moreover, this paper discusses potential
directions and future research opportunities for the latest BNN algorithms and
techniques.
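The 1-bit computation the abstract describes can be made concrete with a minimal sketch (an illustrative NumPy example, not code from the paper): weights and activations are binarized with the sign function, and the dot product is then computable from packed bits via the standard XNOR-popcount identity, dot(w, a) = 2 * popcount(XNOR(w, a)) - n for {-1, +1} vectors of length n.

```python
import numpy as np

def binarize(x):
    """Binarize to {-1, +1} with the sign function (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def pack(v):
    """Pack a {-1, +1} vector into an integer bit mask (+1 -> bit 1)."""
    return sum(1 << i for i, b in enumerate(v) if b == 1)

def xnor_popcount_dot(w_bits, a_bits, n):
    """1-bit dot product over packed bits:
    dot(w, a) = 2 * popcount(XNOR(w, a)) - n."""
    xnor = ~(w_bits ^ a_bits)                     # bitwise XNOR
    matches = bin(xnor & ((1 << n) - 1)).count("1")  # popcount of n low bits
    return 2 * matches - n

w = binarize(np.array([0.7, -1.2, 0.1, -0.3]))   # -> [ 1, -1,  1, -1]
a = binarize(np.array([-0.5, -2.0, 0.9, 0.4]))   # -> [-1, -1,  1,  1]
n = len(w)
# The bitwise result matches the ordinary integer dot product.
assert xnor_popcount_dot(pack(w), pack(a), n) == int(np.dot(w, a))
```

This replacement of multiply-accumulate with XNOR and popcount is the source of the storage and energy savings the abstract mentions.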
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Projected Stochastic Gradient Descent with Quantum Annealed Binary Gradients
We present QP-SBGD, a novel layer-wise optimiser tailored towards training neural networks with binary weights.
BNNs reduce the computational requirements and energy consumption of deep learning models with minimal loss in accuracy.
Our algorithm is implemented layer-wise, making it suitable to train larger networks on resource-limited quantum hardware.
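The general idea of training binary weights by projection can be sketched as follows (a generic projected SGD illustration, not the QP-SBGD algorithm itself, which casts the projection as a problem suitable for quantum annealers): a latent real-valued copy of the weights takes the gradient step, and the binary weights used by the network are the projection of that copy onto {-1, +1}.

```python
import numpy as np

def projected_sgd_step(w_latent, grad, lr=0.1):
    """One generic projected-SGD step for binary weights:
    update the latent real-valued weights, then project each
    coordinate onto the nearest point of {-1, +1}."""
    w_latent = w_latent - lr * grad                 # standard SGD update
    w_binary = np.where(w_latent >= 0, 1.0, -1.0)   # projection onto {-1, +1}
    return w_latent, w_binary

w = np.array([0.2, -0.4, 0.05])
g = np.array([1.0, -1.0, -1.0])
w, w_bin = projected_sgd_step(w, g)   # latent: [0.1, -0.3, 0.15]
# w_bin is [1.0, -1.0, 1.0]: the binary weights the network would use.
```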
arXiv Detail & Related papers (2023-10-23T17:32:38Z)
- Binary domain generalization for sparsifying binary neural networks
Binary neural networks (BNNs) are an attractive solution for developing and deploying deep neural network (DNN)-based applications on resource-constrained devices.
Weight pruning of BNNs leads to performance degradation, which suggests that the standard binarization domain of BNNs is not well adapted for the task.
This work proposes a novel, more general binary domain that extends the standard binary domain and is more robust to pruning techniques.
arXiv Detail & Related papers (2023-06-23T14:32:16Z)
- Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
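The straight-through estimator (STE) underlying approaches like the one above can be sketched as follows (the generic clipped-identity STE, not the permutation variant proposed in the paper): the forward pass applies the non-differentiable sign function, while the backward pass passes the incoming gradient through unchanged wherever the real-valued weight lies within a clipping interval.

```python
import numpy as np

def ste_forward(w_real):
    """Forward pass: binarize with the sign function."""
    return np.where(w_real >= 0, 1.0, -1.0)

def ste_backward(grad_out, w_real, clip=1.0):
    """Backward pass: treat sign as identity inside [-clip, clip],
    so the gradient is passed straight through there and zeroed
    outside (the common clipped-identity surrogate)."""
    return grad_out * (np.abs(w_real) <= clip)

w = np.array([0.3, -1.5, -0.2])
g = np.array([0.5, 0.5, 0.5])
# Gradient flows for w[0] and w[2] (inside the clip range), not w[1].
assert list(ste_backward(g, w)) == [0.5, 0.0, 0.5]
```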
arXiv Detail & Related papers (2023-03-25T13:53:02Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for BNN verification.
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
arXiv Detail & Related papers (2022-03-11T01:11:29Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- BCNN: Binary Complex Neural Network [16.82755328827758]
Binarized neural networks, or BNNs, show great promise in edge-side applications with resource limited hardware.
We introduce complex representation into BNNs and propose the binary complex neural network (BCNN).
BCNN improves BNN by strengthening its learning capability through complex representation and extending its applicability to complex-valued input data.
arXiv Detail & Related papers (2021-03-28T03:35:24Z)
- BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized Neural Networks [7.844146033635129]
We study verification problems for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks.
Our approach is to encode BNNs into Binary Decision Diagrams (BDDs), which is done by exploiting the internal structure of the BNNs.
Based on the encoding, we develop a quantitative verification framework for BNNs where precise and comprehensive analysis of BNNs can be performed.
arXiv Detail & Related papers (2021-03-12T12:02:41Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.