Exploiting Kernel Compression on BNNs
- URL: http://arxiv.org/abs/2212.00608v1
- Date: Thu, 1 Dec 2022 16:05:10 GMT
- Title: Exploiting Kernel Compression on BNNs
- Authors: Franyell Silfa, Jose Maria Arnau, Antonio González
- Abstract summary: In this work, we observe that the number of unique sequences representing a set of weights is typically low.
We propose a clustering scheme to identify the most common sequences of bits and replace the less common ones with some similar common sequences.
Our experimental results show that our technique can reduce memory requirements by 1.32x and improve performance by 1.35x.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Binary Neural Networks (BNNs) are showing tremendous success on realistic
image classification tasks. Notably, their accuracy is similar to the
state-of-the-art accuracy obtained by full-precision models tailored to edge
devices. In this regard, BNNs are very amenable to edge devices since they
employ a single bit to store each input and weight, and thus their storage
requirements are low. Also, BNN computations are mainly done using xnor and
pop-count operations, which are implemented very efficiently with simple
hardware structures. Nonetheless, supporting BNNs efficiently on mobile CPUs is
far from trivial since their benefits are hindered by frequent memory accesses
far from trivial since their benefits are hindered by frequent memory accesses
to load weights and inputs.
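The xnor/pop-count claim above is easy to make concrete. Below is a minimal Python sketch (not taken from the paper) of a binarized dot product over weights and inputs packed 64 values per machine word; the packing convention (+1 stored as bit 1, -1 as bit 0), the 64-bit word size, and the zero-filled padding are illustrative assumptions.

```python
# Minimal sketch of a BNN dot product via xnor + pop-count.
# Assumptions (not from the paper): +1 is stored as bit 1, -1 as bit 0,
# 64 values per word, and padding bits beyond n are zero in both operands.

def popcount(x: int) -> int:
    """Number of set bits in a 64-bit word."""
    return bin(x & 0xFFFFFFFFFFFFFFFF).count("1")

def bnn_dot(packed_w: list[int], packed_x: list[int], n: int) -> int:
    """Dot product of n binarized (+1/-1) values packed 64 per word.

    xnor(w, x) is 1 exactly where the signs agree, so the dot product is
    (#agreements) - (#disagreements) = 2 * popcount(xnor) - n.
    """
    agree = 0
    for w, x in zip(packed_w, packed_x):
        agree += popcount(~(w ^ x))  # xnor = NOT(xor)
    agree -= 64 * len(packed_w) - n  # discount the zero padding bits
    return 2 * agree - n
```

A single xnor plus pop-count here stands in for 64 multiply-accumulate operations, which is why the remaining cost on a mobile CPU is dominated by the memory accesses needed to fetch the packed words.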
In BNNs, a weight or an input is stored using one bit, and, to increase
storage and computation efficiency, several of them are packed together as a
sequence of bits. In this work, we observe that the number of unique sequences
representing a set of weights is typically low. Also, we have seen that during
the evaluation of a BNN layer, a small group of unique sequences is employed
more frequently than others. Accordingly, we propose exploiting this
observation by using Huffman Encoding to encode the bit sequences and then
using an indirection table to decode them during the BNN evaluation. Also, we
propose a clustering scheme to identify the most common sequences of bits and
replace the less common ones with some similar common sequences. Hence, we
decrease the storage requirements and memory accesses since common sequences
are encoded with fewer bits.
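The two ideas in this paragraph can be sketched in a few lines of Python. The sketch below is an illustration under assumptions rather than the paper's implementation: rare packed weight sequences are remapped onto their most similar frequent ones (Hamming distance is assumed as the similarity measure), and the surviving sequences are Huffman-encoded so that the most frequent ones receive the shortest codes; the resulting code-to-sequence table plays the role of the indirection table consulted during decoding.

```python
# Sketch (not the paper's implementation) of clustering rare bit sequences
# onto frequent ones and Huffman-encoding the result.
from collections import Counter
import heapq

def cluster_sequences(seqs, num_keep):
    """Map every packed sequence onto one of the num_keep most common ones."""
    counts = Counter(seqs)
    common = [s for s, _ in counts.most_common(num_keep)]
    def nearest(s):
        if s in common:
            return s
        # Replace a rare sequence with the most similar common one
        # (similarity = Hamming distance, an assumption for illustration).
        return min(common, key=lambda c: bin(s ^ c).count("1"))
    return [nearest(s) for s in seqs]

def huffman_table(seqs):
    """Build a {sequence: code bitstring} table from sequence frequencies."""
    counts = Counter(seqs)
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        n0, _, t0 = heapq.heappop(heap)
        n1, _, t1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t0.items()}  # left branch gets bit 0
        merged.update({s: "1" + c for s, c in t1.items()})  # right branch gets bit 1
        heapq.heappush(heap, (n0 + n1, tie, merged))
        tie += 1
    return heap[0][2]

# Toy example: five 8-bit packed weight sequences from one layer.
weights = [0b10110010, 0b10110010, 0b10110011, 0b01001100, 0b10110010]
clustered = cluster_sequences(weights, num_keep=2)
table = huffman_table(clustered)
encoded = "".join(table[s] for s in clustered)
```

In this toy example the five 8-bit sequences collapse to two unique values, each encoded with a single bit, so 40 bits of packed weights shrink to 5 code bits plus a small indirection table; the skew in sequence frequencies observed by the authors is what makes this trade-off worthwhile at scale.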
We extend a mobile CPU by adding a small hardware structure that can
efficiently cache and decode the compressed sequence of bits. We evaluate our
scheme using the ReActNet model with the ImageNet dataset. Our experimental
results show that our technique can reduce memory requirements by 1.32x and
improve performance by 1.35x.
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - Compacting Binary Neural Networks by Sparse Kernel Selection [58.84313343190488]
This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed.
We develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords.
Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets.
arXiv Detail & Related papers (2023-03-25T13:53:02Z) - Fast matrix multiplication for binary and ternary CNNs on ARM CPU [0.9135092203041721]
We propose fast algorithms of ternary, ternary-binary, and binary matrix multiplication for mobile devices with ARM architecture.
Our algorithms can be used to implement inference of convolutional and fully connected layers of TNNs, TBNs, and BNNs.
We evaluate them experimentally on ARM Cortex-A73 CPU and compare their inference speed to efficient implementations of full-precision, 8-bit, and 4-bit quantized matrix multiplications.
arXiv Detail & Related papers (2022-05-18T14:52:34Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potentials of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Accelerating Binarized Neural Networks via Bit-Tensor-Cores in Turing
GPUs [15.02711144514149]
Binarized neural networks (BNNs) have tremendous speedups over conventional deep neural networks.
We show that the latest tensor cores in NVIDIA Turing GPUs have started to experimentally support bit computation.
Our BTC-BNN design can process ImageNet at a rate of 5.6K images per second, 77% faster than the state of the art.
arXiv Detail & Related papers (2020-06-30T07:32:02Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.