RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of
Quantized CNNs
- URL: http://arxiv.org/abs/2301.06193v1
- Date: Sun, 15 Jan 2023 21:27:35 GMT
- Title: RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of
Quantized CNNs
- Authors: André Santos, João Dinis Ferreira, Onur Mutlu, Gabriel Falcao
- Abstract summary: Convolutional Neural Networks (CNNs) have become the standard class of deep neural network for image processing, classification and segmentation tasks.
RedBit is an open-source framework that provides a transparent, easy-to-use interface to evaluate the effectiveness of different algorithms on network accuracy.
- Score: 9.807687918954763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Convolutional Neural Networks (CNNs) have become the
standard class of deep neural network for image processing, classification and
segmentation tasks. However, the large strides in accuracy obtained by CNNs
have been derived from increasing the complexity of network topologies, which
incurs sizeable performance and energy penalties in the training and inference
of CNNs. Many recent works have validated the effectiveness of parameter
quantization, which consists in reducing the bit width of the network's
parameters, to enable the attainment of considerable performance and energy
efficiency gains without significantly compromising accuracy. However, it is
difficult to compare the relative effectiveness of different quantization
methods. To address this problem, we introduce RedBit, an open-source framework
that provides a transparent, extensible and easy-to-use interface to evaluate
the effectiveness of different algorithms and parameter configurations on
network accuracy. We use RedBit to perform a comprehensive survey of five
state-of-the-art quantization methods applied to the MNIST, CIFAR-10 and
ImageNet datasets. We evaluate a total of 2300 individual bit width
combinations, independently tuning the width of the network's weight and input
activation parameters, from 32 bits down to 1 bit (e.g., 8/8, 2/2, 1/32, 1/1,
for weights/activations). Upwards of 20000 hours of computing time in a pool of
state-of-the-art GPUs were used to generate all the results in this paper. For
1-bit quantization, the accuracy losses for the MNIST, CIFAR-10 and ImageNet
datasets range between [0.26%, 0.79%], [9.74%, 32.96%] and [10.86%, 47.36%]
top-1, respectively. We actively encourage the reader to download the source
code and experiment with RedBit, and to submit their own observed results to
our public repository, available at https://github.com/IT-Coimbra/RedBit.
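As a rough illustration of the weight/activation bit-width sweeps described above, the sketch below fake-quantizes a layer's weights and input activations independently with a generic symmetric uniform quantizer. This is not RedBit's code, nor any of the surveyed algorithms; the function and tensor names (quantize_uniform, w, a) are hypothetical and chosen only for this example.

```python
import torch

def quantize_uniform(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Fake-quantize x on a symmetric uniform grid with the given bit width."""
    if bits >= 32:                     # treat 32 bits as full precision
        return x
    if bits == 1:                      # 1 bit: binarize to {-scale, +scale}
        scale = x.abs().mean()
        return scale * torch.sign(x)
    qmax = 2 ** (bits - 1) - 1         # e.g. 127 for 8 bits, 1 for 2 bits
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return scale * torch.round(x / scale).clamp(-qmax, qmax)

# Hypothetical 2/8 configuration (2-bit weights, 8-bit activations), one of the
# independently tuned weight/activation combinations the paper sweeps.
w = torch.randn(64, 32)                # weights of a small linear layer
a = torch.relu(torch.randn(8, 32))     # non-negative input activations
out = quantize_uniform(a, 8) @ quantize_uniform(w, 2).t()
```

Under this kind of scheme, each weight/activation pair (8/8, 2/2, 1/32, 1/1, ...) corresponds to one evaluated configuration; a framework such as RedBit automates training and accuracy measurement across the full grid.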
Related papers
- OMPQ: Orthogonal Mixed Precision Quantization [64.59700856607017]
Mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization.
We propose to optimize a proxy metric, network orthogonality, which is highly correlated with the loss of the integer programming problem.
This approach reduces the search time and required data amount by orders of magnitude, with little compromise on quantization accuracy.
arXiv Detail & Related papers (2021-09-16T10:59:33Z) - HANT: Hardware-Aware Network Transformation [82.54824188745887]
We propose hardware-aware network transformation (HANT)
HANT replaces inefficient operations with more efficient alternatives using a neural architecture search-like approach.
Our results on accelerating the EfficientNet family show that HANT can accelerate them by up to 3.6x with 0.4% drop in the top-1 accuracy on the ImageNet dataset.
arXiv Detail & Related papers (2021-07-12T18:46:34Z) - PocketNet: A Smaller Neural Network for 3D Medical Image Segmentation [0.0]
We derive a new CNN architecture called PocketNet that achieves comparable segmentation results to conventional CNNs while using less than 3% of the number of parameters.
arXiv Detail & Related papers (2021-04-21T20:10:30Z) - Fixed-point Quantization of Convolutional Neural Networks for Quantized
Inference on Embedded Platforms [0.9954382983583577]
We propose a method to optimally quantize the weights, biases and activations of each layer of a pre-trained CNN.
We find that layer-wise quantization of parameters significantly helps in this process.
arXiv Detail & Related papers (2021-02-03T17:05:55Z) - FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We further design an implementation-dependent ternary quantization algorithm to mitigate the performance gap (a generic ternary-quantization sketch is given after this list).
arXiv Detail & Related papers (2020-08-12T04:26:18Z) - Towards Lossless Binary Convolutional Neural Networks Using Piecewise
Approximation [4.023728681102073]
Binary CNNs can significantly reduce the number of arithmetic operations and the size of memory storage.
However, the accuracy degradation of single and multiple binary CNNs is unacceptable for modern architectures.
We propose a Piecewise Approximation scheme for multiple binary CNNs which lessens accuracy loss by approximating full precision weights and activations.
arXiv Detail & Related papers (2020-08-08T13:32:33Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace conventional ReLU with Bounded ReLU, having found that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 of the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z) - Cross-filter compression for CNN inference acceleration [4.324080238456531]
We propose a new cross-filter compression method that can provide $\sim 32\times$ memory savings and a $122\times$ speed up in convolution operations.
Our method, based on Binary-Weight and XNOR-Net separately, is evaluated on the CIFAR-10 and ImageNet datasets.
arXiv Detail & Related papers (2020-05-18T19:06:14Z) - Improved Residual Networks for Image and Video Recognition [98.10703825716142]
Residual networks (ResNets) represent a powerful type of convolutional neural network (CNN) architecture.
We show consistent improvements in accuracy and learning convergence over the baseline.
Our proposed approach allows us to train extremely deep networks, while the baseline shows severe optimization issues.
arXiv Detail & Related papers (2020-04-10T11:09:50Z) - Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantization neural networks (QNNs) are very attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)