F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
- URL: http://arxiv.org/abs/2202.05239v1
- Date: Thu, 10 Feb 2022 18:48:56 GMT
- Title: F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
- Authors: Qing Jin, Jian Ren, Richard Zhuang, Sumant Hanumante, Zhengang Li,
Zhiyu Chen, Yanzhi Wang, Kaiyuan Yang, Sergey Tulyakov
- Abstract summary: We present F8Net, a novel quantization framework consisting of only fixed-point 8-bit multiplication.
Our approach achieves performance comparable to or better than that of existing quantization techniques.
- Score: 47.403304754934155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network quantization is a promising compression technique to reduce
memory footprint and save energy consumption, potentially leading to real-time
inference. However, there is a performance gap between quantized and
full-precision models. To reduce it, existing quantization approaches require
high-precision INT32 or full-precision multiplication during inference for
scaling or dequantization. This introduces a noticeable cost in terms of
memory, speed, and required energy. To tackle these issues, we present F8Net, a
novel quantization framework consisting of only fixed-point 8-bit
multiplication. To derive our method, we first discuss the advantages of
fixed-point multiplication with different formats of fixed-point numbers and
study the statistical behavior of the associated fixed-point numbers. Second,
based on the statistical and algorithmic analysis, we apply different
fixed-point formats for weights and activations of different layers. We
introduce a novel algorithm to automatically determine the right format for
each layer during training. Third, we analyze a previous quantization algorithm
-- parameterized clipping activation (PACT) -- and reformulate it using
fixed-point arithmetic. Finally, we unify the recently proposed method for
quantization fine-tuning and our fixed-point approach to show the potential of
our method. We verify F8Net on ImageNet for MobileNet V1/V2 and ResNet18/50.
Our approach achieves comparable or better performance, not only relative to existing
quantization techniques that use INT32 multiplication or floating-point arithmetic,
but also relative to the full-precision counterparts, achieving state-of-the-art
results.
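To make the idea concrete, below is a minimal NumPy sketch of fixed-point 8-bit-only multiplication: weights and activations are stored as 8-bit codes with their own fractional lengths, and requantization becomes a bit shift rather than an INT32 or floating-point rescaling. The fractional lengths, shapes, and function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def quantize_fixed_point(x, word_len=8, frac_len=5):
    """Round a float tensor to signed fixed-point codes with `frac_len`
    fractional bits out of `word_len` total bits (illustrative choice)."""
    qmin, qmax = -(2 ** (word_len - 1)), 2 ** (word_len - 1) - 1
    codes = np.clip(np.round(x * 2 ** frac_len), qmin, qmax)
    return codes.astype(np.int32)  # represented value = codes / 2**frac_len

def fixed_point_linear(act_codes, w_codes, frac_act, frac_w, frac_out):
    """8-bit-only linear layer: integer accumulate, then a right shift.
    The product carries frac_act + frac_w fractional bits; shifting by
    (frac_act + frac_w - frac_out) rescales it to the output format
    without any floating-point or INT32 scale factor."""
    acc = act_codes @ w_codes              # integer accumulation
    shift = frac_act + frac_w - frac_out   # power-of-two requantization
    return np.right_shift(acc, shift)      # output fixed-point codes

# Toy usage with assumed fractional lengths
rng = np.random.default_rng(0)
a = quantize_fixed_point(rng.random((1, 8)), frac_len=6)
w = quantize_fixed_point(0.5 * rng.standard_normal((8, 4)), frac_len=7)
y = fixed_point_linear(a, w, frac_act=6, frac_w=7, frac_out=6)
print(y / 2 ** 6)  # decode output codes back to real values
```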
Related papers
- Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting [13.270381125055275]
We propose a coarse & fine weight splitting (CFWS) method to reduce the quantization error of weights.
We develop an improved KL metric to determine optimal quantization scales for activations.
For example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss.
arXiv Detail & Related papers (2023-12-17T02:31:20Z)
- FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search [50.07268323597872]
We propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models.
With integer models, we increase the accuracy of ResNet-18 on ImageNet by 1.31% and of ResNet-50 by 0.90% at equivalent model cost relative to previous methods.
For the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98% compared to prior state-of-the-art FP8 models.
arXiv Detail & Related papers (2023-08-07T04:17:19Z)
- Effective and Fast: A Novel Sequential Single Path Search for Mixed-Precision Quantization [45.22093693422085]
Mixed-precision quantization models can assign different bit-precisions to layers according to their sensitivity, achieving strong performance.
However, quickly determining the bit-precision of each layer in a deep neural network under given constraints is a difficult problem.
We propose a novel sequential single path search (SSPS) method for mixed-precision quantization.
arXiv Detail & Related papers (2021-03-04T09:15:08Z)
- HAWQV3: Dyadic Neural Network Quantization [73.11579145354801]
Current low-precision quantization algorithms often have the hidden cost of conversion back and forth from floating point to quantized integer values.
We present HAWQV3, a novel mixed-precision integer-only quantization framework.
arXiv Detail & Related papers (2020-11-20T23:51:43Z)
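HAWQV3's dyadic arithmetic expresses each requantization scale as a dyadic number b / 2^c (b and c integers), so rescaling reduces to an integer multiply and a bit shift. A minimal sketch of that approximation with made-up numbers, not the paper's implementation:

```python
def dyadic_approx(scale: float, shift_bits: int = 16) -> tuple[int, int]:
    """Approximate a positive float scale as b / 2**c with integers b, c.
    shift_bits controls the precision of the approximation (assumed value)."""
    c = shift_bits
    b = round(scale * (1 << c))
    return b, c

def dyadic_requantize(acc: int, b: int, c: int) -> int:
    """Requantize an integer accumulator with integer-only arithmetic:
    acc * scale  ~  (acc * b) >> c."""
    return (acc * b) >> c

# Example: a float requantization scale of ~0.0123
b, c = dyadic_approx(0.0123)
print(dyadic_requantize(5000, b, c))  # ~ 61, close to int(5000 * 0.0123)
print(int(5000 * 0.0123))
```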
- Optimal Quantization for Batch Normalization in Neural Network Deployments and Beyond [18.14282813812512]
Batch Normalization (BN) poses a challenge for Quantized Neural Networks (QNNs).
We propose a novel method to quantize BN by converting an affine transformation of two floating points to a fixed-point operation with shared quantized scale.
Our method is verified by layer-level experiments on the CIFAR and ImageNet datasets.
arXiv Detail & Related papers (2020-08-30T09:33:29Z)
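The general folding step behind this kind of BN quantization is to collapse BN into a single affine y = a*x + b and then represent a and b in a shared fixed-point format, so inference needs only an integer multiply-add and one shift. A hedged sketch under assumed bit widths, not the paper's exact formulation:

```python
import numpy as np

def fold_bn(gamma, beta, mean, var, eps=1e-5):
    """Fold BN's y = gamma * (x - mean) / sqrt(var + eps) + beta
    into a single affine y = a * x + b."""
    a = gamma / np.sqrt(var + eps)
    b = beta - a * mean
    return a, b

def to_shared_fixed_point(a, b, frac_len=12):
    """Represent the folded affine with a shared fractional length so the
    whole transform becomes an integer multiply-add plus one right shift."""
    a_q = np.round(a * 2 ** frac_len).astype(np.int64)
    b_q = np.round(b * 2 ** frac_len).astype(np.int64)
    return a_q, b_q, frac_len

# Usage on toy per-channel BN statistics (made-up values)
gamma, beta = np.array([1.2, 0.8]), np.array([0.1, -0.3])
mean, var = np.array([0.5, -0.2]), np.array([0.25, 0.09])
a, b = fold_bn(gamma, beta, mean, var)
a_q, b_q, f = to_shared_fixed_point(a, b)
x_int = np.array([30, -17])           # quantized input codes
y_int = (a_q * x_int + b_q) >> f      # integer-only BN
print(y_int, a * x_int + b)           # compare against the float affine
```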
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to eliminate floating-point computation.
Our AQD achieves comparable or even better performance than the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a Bounded ReLU and find that the accuracy decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but with only 1/4 of the memory cost, and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z)
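A bounded activation gives quantization a fixed, data-independent range, which is what makes a static integer activation quantizer possible. A small illustrative sketch; the bound of 6 and the unsigned 8-bit format are assumptions:

```python
import numpy as np

def bounded_relu(x, bound=6.0):
    """Clip activations to [0, bound] so the quantization range is fixed."""
    return np.clip(x, 0.0, bound)

def quantize_activation(x, bound=6.0, bits=8):
    """Uniformly quantize the bounded activation to unsigned `bits`-bit codes."""
    levels = 2 ** bits - 1
    scale = bound / levels
    codes = np.round(bounded_relu(x, bound) / scale).astype(np.uint8)
    return codes, scale          # real value ~ codes * scale

x = np.array([-1.0, 0.5, 3.2, 9.0])
codes, scale = quantize_activation(x)
print(codes, codes * scale)      # 9.0 saturates at the bound of 6
```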
- Accelerating Neural Network Inference by Overflow Aware Quantization [16.673051600608535]
The inherently heavy computation of deep neural networks prevents their widespread application.
We propose an overflow-aware quantization method based on a trainable, adaptive fixed-point representation.
With the proposed method, we are able to fully utilize the computing power to minimize the quantization loss and obtain optimized inference performance.
arXiv Detail & Related papers (2020-05-27T11:56:22Z)
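One way to see the overflow constraint is to bound the worst-case accumulation and choose the finest fixed-point format that still fits the accumulator. The sketch below uses a crude static bound with an assumed 16-bit accumulator; the paper's method learns the representation during training, which this does not reproduce:

```python
import numpy as np

def max_frac_len_without_overflow(w, acc_bits=16, act_bits=8, weight_bits=8):
    """Pick the largest weight fractional length such that the worst-case
    accumulation sum(|w_q|) * act_code_max still fits in a signed
    `acc_bits` accumulator (a crude static bound, for illustration)."""
    acc_limit = 2 ** (acc_bits - 1) - 1
    act_code_max = 2 ** act_bits - 1                  # unsigned activation codes
    for frac_len in range(weight_bits - 1, -1, -1):   # try the finest format first
        w_codes = np.round(np.abs(w) * 2 ** frac_len)
        worst_case = int(np.sum(w_codes)) * act_code_max
        if worst_case <= acc_limit:
            return frac_len
    return 0

# Toy weight vector feeding one accumulator (made-up values)
w = np.array([0.02, -0.05, 0.01, 0.03])
print(max_frac_len_without_overflow(w))   # largest safe fractional length
```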
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are attractive to industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.