QDrop: Randomly Dropping Quantization for Extremely Low-bit
Post-Training Quantization
- URL: http://arxiv.org/abs/2203.05740v1
- Date: Fri, 11 Mar 2022 04:01:53 GMT
- Title: QDrop: Randomly Dropping Quantization for Extremely Low-bit
Post-Training Quantization
- Authors: Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu
- Abstract summary: Post-training quantization (PTQ) has attracted much attention as a way to produce efficient neural networks without lengthy retraining.
In this study, we confirm for the first time that properly incorporating activation quantization into the PTQ reconstruction benefits the final accuracy.
Based on this conclusion, a simple yet effective approach dubbed QDROP is proposed, which randomly drops the quantization of activations during PTQ.
- Score: 54.44028700760694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, post-training quantization (PTQ) has attracted much attention as a
way to produce efficient neural networks without lengthy retraining. Despite its low
cost, current PTQ works tend to fail under the extremely low-bit setting. In
this study, we confirm for the first time that properly incorporating activation
quantization into the PTQ reconstruction benefits the final accuracy. To understand
the underlying reason, we establish a theoretical framework indicating that the
flatness of the optimized low-bit model on calibration and test data is crucial.
Based on this conclusion, we propose a simple yet effective approach dubbed QDROP,
which randomly drops the quantization of activations during PTQ. Extensive
experiments on various tasks including computer vision (image classification,
object detection) and natural language processing (text classification and
question answering) prove its superiority. With QDROP, the limit of PTQ is pushed
down to 2-bit activations for the first time, and the accuracy boost can be up to
51.49%. Without bells and whistles, QDROP establishes a new state of the art for
PTQ. Our code is available at https://github.com/wimh966/QDrop and has been
integrated into MQBench (https://github.com/ModelTC/MQBench).
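Below is a minimal sketch of the core idea described in the abstract: during PTQ, each activation element is randomly kept at full precision instead of being quantized. This is only an illustration, not the authors' implementation (see the QDrop/MQBench repositories above for that); the per-tensor min-max fake quantizer, the drop probability of 0.5, and the helper names are assumptions made here for the example.

```python
import torch


def fake_quantize(x: torch.Tensor, n_bits: int = 2) -> torch.Tensor:
    """Per-tensor asymmetric uniform fake quantization (quantize, then dequantize)."""
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = torch.round(-x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale


def qdrop_activation(x: torch.Tensor, n_bits: int = 2, p_drop: float = 0.5) -> torch.Tensor:
    """Element-wise: keep the full-precision value with probability p_drop,
    otherwise use its quantized counterpart (a sketch of the "random drop" idea)."""
    x_q = fake_quantize(x, n_bits)
    keep_fp = torch.bernoulli(torch.full_like(x, p_drop))  # 1 -> keep FP, 0 -> quantized
    return keep_fp * x + (1.0 - keep_fp) * x_q


if __name__ == "__main__":
    torch.manual_seed(0)
    act = torch.randn(4, 8)          # stand-in for a block's input activations
    print(qdrop_activation(act))     # random mixture of 2-bit-quantized and FP values
```

In the paper this mixing is applied during the PTQ reconstruction phase, so the optimized low-bit weights see partly quantized and partly full-precision activations; the sketch above only shows the per-tensor operation itself.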
Related papers
- CBQ: Cross-Block Quantization for Large Language Models [66.82132832702895]
Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) with ultra-low costs.
We propose CBQ, a cross-block reconstruction-based PTQ method for LLMs.
CBQ employs cross-block dependency within its reconstruction scheme, establishing long-range dependencies across multiple blocks to minimize error accumulation.
arXiv Detail & Related papers (2023-12-13T07:56:27Z)
- Designing strong baselines for ternary neural network quantization through support and mass equalization [7.971065005161565]
Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision.
Their computational burden can be dramatically reduced by quantizing floating-point values to ternary values.
We show experimentally that our approach significantly improves the performance of ternary quantization across a variety of scenarios.
arXiv Detail & Related papers (2023-06-30T07:35:07Z)
- Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance [53.45700148820669]
Post-training quantization (PTQ) is a popular method for compressing deep neural networks (DNNs) without modifying their original architecture or training procedures.
Despite its effectiveness and convenience, the reliability of PTQ methods in the presence of extreme cases such as distribution shift and data noise remains largely unexplored.
This paper first investigates this problem on various commonly-used PTQ methods.
arXiv Detail & Related papers (2023-03-23T02:55:50Z)
- RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers [2.114921680609289]
We propose RepQ-ViT, a novel PTQ framework for vision transformers (ViTs).
RepQ-ViT decouples the quantization and inference processes.
It can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level.
arXiv Detail & Related papers (2022-12-16T02:52:37Z)
- RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization [4.8018862391424095]
We introduce a Power-of-Two post-training quantization (PTQ) method for deep neural networks that meets hardware requirements.
We propose a novel Power-of-Two PTQ framework, dubbed RAPQ, which dynamically adjusts the Power-of-Two scales of the whole network (a toy illustration of the Power-of-Two constraint itself appears after this list).
We are the first to propose PTQ for the more constrained but hardware-friendly Power-of-Two quantization and prove that it can achieve nearly the same accuracy as SOTA PTQ methods.
arXiv Detail & Related papers (2022-04-26T14:02:04Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- A White Paper on Neural Network Quantization [20.542729144379223]
We introduce state-of-the-art algorithms for mitigating the impact of quantization noise on the network's performance.
We consider two main classes of algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).
arXiv Detail & Related papers (2021-06-15T17:12:42Z)
- BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction [29.040991149922615]
We study the challenging task of neural network quantization without end-to-end retraining, called Post-training Quantization (PTQ).
We propose a novel PTQ framework, dubbed BRECQ, which pushes the limits of bitwidth in PTQ down to INT2 for the first time.
For the first time, we prove that, without bells and whistles, PTQ can attain 4-bit ResNet and MobileNetV2 accuracy comparable with QAT while enjoying 240 times faster production of quantized models.
arXiv Detail & Related papers (2021-02-10T13:46:16Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- ZeroQ: A Novel Zero Shot Quantization Framework [83.63606876854168]
Quantization is a promising approach for reducing the inference time and memory footprint of neural networks.
Existing zero-shot quantization methods use different heuristics to operate without the original training data, but they result in poor performance.
Here, we propose ZeroQ, a novel zero-shot quantization framework to address this.
arXiv Detail & Related papers (2020-01-01T23:58:26Z)
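As referenced in the RAPQ entry above, Power-of-Two quantization restricts the quantization scale to a value of the form 2^k, so rescaling can be implemented with bit shifts in hardware. The sketch below only illustrates that constraint; the min-max scale selection, 4-bit default, and helper names are assumptions of this example, not RAPQ's actual algorithm (which adjusts the Power-of-Two scales across the whole network).

```python
import torch


def power_of_two_scale(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Derive a min-max scale, then snap it to the nearest power of two."""
    qmax = 2 ** (n_bits - 1) - 1                       # symmetric signed range
    raw_scale = x.abs().max().clamp(min=1e-8) / qmax
    return 2.0 ** torch.round(torch.log2(raw_scale))   # nearest 2^k, shift-friendly


def po2_fake_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Symmetric uniform fake quantization with a power-of-two scale."""
    scale = power_of_two_scale(x, n_bits)
    qmax = 2 ** (n_bits - 1) - 1
    q = torch.clamp(torch.round(x / scale), -qmax - 1, qmax)
    return q * scale


if __name__ == "__main__":
    torch.manual_seed(0)
    w = torch.randn(16)
    print(power_of_two_scale(w))     # always of the form 2**k, e.g. 0.25 or 0.5
    print(po2_fake_quantize(w))      # values snapped to the power-of-two grid
```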