BoolNet: Minimizing The Energy Consumption of Binary Neural Networks
- URL: http://arxiv.org/abs/2106.06991v1
- Date: Sun, 13 Jun 2021 13:37:31 GMT
- Title: BoolNet: Minimizing The Energy Consumption of Binary Neural Networks
- Authors: Nianhui Guo, Joseph Bethge, Haojin Yang, Kai Zhong, Xuefei Ning,
Christoph Meinel and Yu Wang
- Abstract summary: We propose a novel BNN architecture without most commonly used 32-bit components: BoolNet.
We show that BoolNet can achieve 4.6x energy reduction coupled with 1.2% higher accuracy than the commonly used BNN architecture Bi-RealNet.
- Score: 21.78874958231522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works on Binary Neural Networks (BNNs) have made promising progress in
narrowing the accuracy gap of BNNs to their 32-bit counterparts. However, the
accuracy gains are often based on specialized model designs using additional
32-bit components. Furthermore, almost all previous BNNs use 32-bit for feature
maps and the shortcuts enclosing the corresponding binary convolution blocks,
which helps to effectively maintain the accuracy, but is not friendly to
hardware accelerators with limited memory, energy, and computing resources.
Thus, we raise the following question: How can accuracy and energy consumption
be balanced in a BNN design? We extensively study this fundamental
problem in this work and propose a novel BNN architecture without most commonly
used 32-bit components: BoolNet. Experimental results on ImageNet
demonstrate that BoolNet can achieve 4.6x energy reduction coupled with 1.2%
higher accuracy than the commonly used BNN architecture Bi-RealNet. Code and
trained models are available at: https://github.com/hpi-xnor/BoolNet.
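For context on the shortcut pattern the abstract criticizes, below is a minimal PyTorch sketch (not the authors' code; module names are illustrative) of a Bi-RealNet-style block: the convolution runs on binarized inputs and weights, but the residual shortcut carries the feature map in 32-bit, which is exactly the memory- and energy-hungry component BoolNet removes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Binarize to {-1, +1} in the forward pass; clipped straight-through
    estimator (identity inside [-1, 1]) for the gradient."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()

def binarize(x):
    return SignSTE.apply(x)

class BinaryConv2d(nn.Conv2d):
    """Convolution whose inputs and weights are both binarized."""
    def forward(self, x):
        return F.conv2d(binarize(x), binarize(self.weight), None,
                        self.stride, self.padding)

class BiRealStyleBlock(nn.Module):
    """The pattern BoolNet avoids: the conv itself is 1-bit, but the
    shortcut keeps the feature map in 32-bit floating point."""
    def __init__(self, channels):
        super().__init__()
        self.conv = BinaryConv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):                 # x: full-precision feature map
        return self.bn(self.conv(x)) + x  # 32-bit shortcut around the 1-bit conv
```

Every `+ x` here moves a full-precision tensor through memory; BoolNet's design instead keeps the information flow between blocks 1-bit.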
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Basic Binary Convolution Unit for Binarized Image Restoration Network [146.0988597062618]
In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for image restoration tasks.
Based on our findings and analyses, we design a simple yet efficient basic binary convolution unit (BBCU; see the sketch below).
Our BBCU significantly outperforms other BNNs and lightweight models, which shows that BBCU can serve as a basic unit for binarized IR networks.
arXiv Detail & Related papers (2022-10-02T01:54:40Z)
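A hedged sketch of the BBCU idea from the entry above, assuming the unit combines the listed components (binary convolution, BatchNorm, a learnable activation, and a residual connection) in the conventional order; the paper's exact arrangement may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    """Sign with a straight-through (identity) gradient."""
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()

class BasicBinaryUnitSketch(nn.Module):
    """One plausible ordering of the components the BBCU paper studies:
    binary conv -> BatchNorm -> learnable activation, plus a residual."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3))
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.PReLU(channels)

    def forward(self, x):
        out = F.conv2d(binarize(x), binarize(self.weight), padding=1)
        return self.act(self.bn(out)) + x  # residual keeps information flowing
```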
- Towards Accurate Binary Neural Networks via Modeling Contextual Dependencies [52.691032025163175]
Existing Binary Neural Networks (BNNs) operate mainly on local convolutions with a binarization function.
We present new designs of binary neural modules that enable BNNs to learn effective contextual dependencies, outperforming leading binary models by a large margin.
arXiv Detail & Related papers (2022-09-03T11:51:04Z)
- Elastic-Link for Binarized Neural Network [9.83865304744923]
The "Elastic-Link" (EL) module enriches information flow within a BNN by adaptively adding real-valued input features to the subsequent convolutional output features (see the sketch below).
EL produces a significant improvement on the challenging large-scale ImageNet dataset.
With the integration of ReActNet, it yields a new state-of-the-art result of 71.9% top-1 accuracy.
arXiv Detail & Related papers (2021-12-19T13:49:29Z)
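A hedged sketch of the Elastic-Link idea from the entry above: mix the real-valued input features back into the binary convolution's output. The channel-wise sigmoid gate here is an assumption standing in for the paper's adaptive mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    """Sign with a straight-through gradient (as in the first sketch)."""
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()

class ElasticLinkSketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3))
        # Learnable channel-wise gate (an assumption; the paper adapts this).
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        out = F.conv2d(binarize(x), binarize(self.weight), padding=1)
        # Adaptively re-inject real-valued input features into the output.
        return out + torch.sigmoid(self.gate) * x
```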
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs (see the sketch below).
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
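A hedged sketch of the sub-bit idea above, assuming each 3x3 binary kernel is selected from a small shared codebook so that a kernel costs log2(M) < 9 bits. The hard argmax selection with a straight-through gradient and all names are illustrative, not the paper's kernel-aware optimization framework.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()  # sign with straight-through gradient

class SubBitConvSketch(nn.Module):
    """Every (out, in) kernel slot picks one binary 3x3 codeword from a
    shared codebook of M entries, so it stores log2(M) bits instead of 9."""
    def __init__(self, in_ch, out_ch, codebook_size=32):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(codebook_size, 3, 3))
        self.logits = nn.Parameter(torch.randn(out_ch, in_ch, codebook_size))

    def forward(self, x):
        soft = F.softmax(self.logits, dim=-1)
        hard = F.one_hot(self.logits.argmax(-1), self.codebook.size(0)).float()
        sel = soft + (hard - soft).detach()  # straight-through selection
        weight = torch.einsum('oik,khw->oihw', sel, binarize(self.codebook))
        return F.conv2d(binarize(x), weight, padding=1)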
- A comprehensive review of Binary Neural Network [2.918940961856197]
The Binary Neural Network (BNN) method is an extreme application of convolutional neural network (CNN) parameter quantization.
Recent developments in BNNs have led to many algorithms and solutions that help address the accuracy loss such extreme quantization causes.
arXiv Detail & Related papers (2021-10-11T22:44:15Z) - BCNN: Binary Complex Neural Network [16.82755328827758]
Binarized neural networks, or BNNs, show great promise in edge-side applications with resource-limited hardware.
We introduce complex representations into BNNs and propose the Binary Complex Neural Network (BCNN).
BCNN improves the BNN by strengthening its learning capability through complex representations and extending its applicability to complex-valued input data (see the sketch below).
arXiv Detail & Related papers (2021-03-28T03:35:24Z)
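A hedged sketch of a binary complex convolution as the BCNN entry above suggests: the complex product is expanded into four real binary convolutions, following (a+ib)(c+id) = (ac-bd) + i(ad+bc). Layer shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()  # sign with straight-through gradient

class BinaryComplexConvSketch(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.wr = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3))  # real part
        self.wi = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3))  # imag part

    def forward(self, xr, xi):
        xr, xi = binarize(xr), binarize(xi)
        wr, wi = binarize(self.wr), binarize(self.wi)
        conv = lambda a, w: F.conv2d(a, w, padding=1)
        real = conv(xr, wr) - conv(xi, wi)  # Re[(xr + i*xi) * (wr + i*wi)]
        imag = conv(xr, wi) + conv(xi, wr)  # Im[(xr + i*xi) * (wr + i*wi)]
        return real, imag
```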
- AdderNet and its Minimalist Hardware Design for Energy-Efficient Artificial Intelligence [111.09105910265154]
We present a novel minimalist hardware architecture using the adder convolutional neural network (AdderNet), whose core operation is sketched below.
In practice, the whole AdderNet achieves a 16% enhancement in speed.
We conclude that AdderNet is able to surpass all the other competitors.
arXiv Detail & Related papers (2021-01-25T11:31:52Z)
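The core AdderNet operation replaces the multiply-accumulate correlation with a negative L1 distance between filter and input patch, so the layer needs only additions and subtractions. A minimal sketch (illustrative, not the authors' CUDA/hardware implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdderConv2dSketch(nn.Module):
    """out[f, p] = -sum |patch_p - filter_f| : no multiplications needed."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * k * k))
        self.k, self.padding = k, padding

    def forward(self, x):
        n, _, h, w = x.shape
        patches = F.unfold(x, self.k, padding=self.padding)  # (N, C*k*k, L)
        diff = patches.unsqueeze(1) - self.weight[None, :, :, None]
        out = -diff.abs().sum(dim=2)                         # (N, out_ch, L)
        return out.view(n, -1, h, w)  # spatial size preserved for k=3, pad=1
```

This unfold-based form is memory-hungry; it exists only to show the arithmetic that makes the minimalist hardware design possible.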
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that, through careful design of the models and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks (see the sketch below).
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
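A hedged sketch of one such binarization strategy for a GCN-style layer: binarize node features and weights (so the feature transform becomes XNOR-friendly) while keeping the neighborhood aggregation in full precision. This split is an assumption; the paper evaluates several strategies.

```python
import torch
import torch.nn as nn

def binarize(x):
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()  # sign with straight-through gradient

class BinaryGCNLayerSketch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim))

    def forward(self, h, adj_norm):
        # Binary feature transform; aggregation stays full-precision.
        msg = binarize(h) @ binarize(self.weight)
        return torch.relu(adj_norm @ msg)  # adj_norm: normalized adjacency
```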
- MeliusNet: Can Binary Neural Networks Achieve MobileNet-level Accuracy? [12.050205584630922]
Binary Neural Networks (BNNs) are neural networks which use binary weights and activations instead of the typical 32-bit floating point values.
In this paper, we present an architectural approach: MeliusNet. It alternates between a DenseBlock, which increases the feature capacity, and our proposed ImprovementBlock, which increases the feature quality (see the sketch below).
arXiv Detail & Related papers (2020-01-16T16:56:10Z)
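A hedged sketch of one MeliusNet Dense/Improvement pair as described above: the DenseBlock concatenates newly computed binary-conv features (more capacity), and the ImprovementBlock adds a binary-conv residual onto those new channels only (better quality). Channel counts and ordering are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    s = torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))
    return x + (s - x).detach()  # sign with straight-through gradient

def bin_conv(x, weight):
    return F.conv2d(binarize(x), binarize(weight), padding=1)

class MeliusPairSketch(nn.Module):
    def __init__(self, channels, growth=64):
        super().__init__()
        self.w_dense = nn.Parameter(torch.randn(growth, channels, 3, 3))
        self.w_improve = nn.Parameter(
            torch.randn(growth, channels + growth, 3, 3))
        self.growth = growth

    def forward(self, x):
        x = torch.cat([x, bin_conv(x, self.w_dense)], dim=1)  # DenseBlock: widen
        delta = bin_conv(x, self.w_improve)                   # ImprovementBlock
        head, tail = x[:, :-self.growth], x[:, -self.growth:]
        return torch.cat([head, tail + delta], dim=1)         # refine new channels
```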