QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization
- URL: http://arxiv.org/abs/2203.05740v1
- Date: Fri, 11 Mar 2022 04:01:53 GMT
- Title: QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization
- Authors: Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu
- Abstract summary: Post-training quantization (PTQ) has drawn much attention for producing efficient neural networks without long retraining.
In this study, we confirm for the first time that properly incorporating activation quantization into the PTQ reconstruction benefits the final accuracy.
Based on this conclusion, a simple yet effective approach dubbed QDROP is proposed, which randomly drops the quantization of activations during PTQ.
- Score: 54.44028700760694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, post-training quantization (PTQ) has drawn much attention for
producing efficient neural networks without long retraining. Despite its low
cost, current PTQ works tend to fail under the extremely low-bit setting. In
this study, we confirm for the first time that properly incorporating activation
quantization into the PTQ reconstruction benefits the final accuracy. To deeply
understand the inherent reason, a theoretical framework is established,
indicating that the flatness of the optimized low-bit model on calibration and
test data is crucial. Based on this conclusion, a simple yet effective approach
dubbed QDROP is proposed, which randomly drops the quantization of
activations during PTQ. Extensive experiments on various tasks including
computer vision (image classification, object detection) and natural language
processing (text classification and question answering) prove its superiority.
With QDROP, the limit of PTQ is pushed to the 2-bit activation for the first
time and the accuracy boost can be up to 51.49%. Without bells and whistles,
QDROP establishes a new state of the art for PTQ. Our code is available at
https://github.com/wimh966/QDrop and has been integrated into MQBench
(https://github.com/ModelTC/MQBench).
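The mechanism described in the abstract is simple enough to sketch. Below is a minimal PyTorch illustration of randomly dropping activation quantization during PTQ; the per-tensor uniform fake-quantizer, the element-wise drop mask, and the 0.5 drop probability are illustrative assumptions rather than the authors' released implementation (the official code lives in the repository linked above).

```python
import torch
import torch.nn as nn


class QDropActQuant(nn.Module):
    """Illustrative activation fake-quantizer with random quantization drop.

    During PTQ reconstruction (module in training mode), each activation
    element keeps its full-precision value with probability ``drop_prob``;
    otherwise it is fake-quantized. At inference everything is quantized.
    """

    def __init__(self, n_bits: int = 2, drop_prob: float = 0.5):
        super().__init__()
        self.n_bits = n_bits
        self.drop_prob = drop_prob

    def fake_quant(self, x: torch.Tensor) -> torch.Tensor:
        # Simple per-tensor uniform (asymmetric) fake quantization.
        # A straight-through estimator would be needed for gradient-based
        # reconstruction; it is omitted here for brevity.
        q_max = 2 ** self.n_bits - 1
        scale = (x.max() - x.min()).clamp(min=1e-8) / q_max
        zero_point = torch.round(-x.min() / scale)
        x_int = torch.clamp(torch.round(x / scale) + zero_point, 0, q_max)
        return (x_int - zero_point) * scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_q = self.fake_quant(x)
        if self.training:
            # Random drop: keep the full-precision value where the mask is True.
            keep_fp = torch.rand_like(x) < self.drop_prob
            return torch.where(keep_fp, x, x_q)
        return x_q
```

Gating on `self.training` mirrors the abstract's description: quantization is dropped only while the PTQ reconstruction runs, whereas deployment always sees fully quantized activations.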
Related papers
- PTQ1.61: Push the Real Limit of Extremely Low-Bit Post-Training Quantization Methods for Large Language Models [64.84734437930362]
Large Language Models (LLMs) suffer severe performance degradation when facing extremely low-bit (sub 2-bit) quantization.
We propose an extremely low-bit PTQ method called PTQ1.61, which enables weight quantization to 1.61-bit for the first time.
Experiments indicate our PTQ1.61 achieves state-of-the-art performance in extremely low-bit quantization.
arXiv Detail & Related papers (2025-02-18T08:04:58Z)
- ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals [10.860081994662645]
Post-training quantization of large language models (LLMs) holds promise for reducing the prohibitive computational cost at inference time.
We propose ResQ, a PTQ method that pushes the state of the art further.
We demonstrate that ResQ outperforms recent uniform and mixed precision PTQ methods on a variety of benchmarks.
arXiv Detail & Related papers (2024-12-18T22:01:55Z)
- EfQAT: An Efficient Framework for Quantization-Aware Training [20.47826378511535]
Quantization-aware training (QAT) schemes have been shown to achieve near-full precision accuracy.
Post-training quantization (PTQ) schemes do not involve training and are therefore computationally cheap.
We propose EfQAT, which generalizes both schemes by optimizing only a subset of the parameters of a quantized model.
arXiv Detail & Related papers (2024-11-17T11:06:36Z)
- Towards Accurate Post-Training Quantization of Vision Transformers via Error Reduction [48.740630807085566]
Post-training quantization (PTQ) for vision transformers (ViTs) has received increasing attention from both academic and industrial communities.
Current methods fail to account for the complex interactions between quantized weights and activations, resulting in significant quantization errors and suboptimal performance.
This paper presents ERQ, an innovative two-step PTQ method specifically crafted to reduce quantization errors arising from activation and weight quantization sequentially.
arXiv Detail & Related papers (2024-07-09T12:06:03Z)
- Benchmarking the Reliability of Post-training Quantization: a Particular Focus on Worst-case Performance [53.45700148820669]
Post-training quantization (PTQ) is a popular method for compressing deep neural networks (DNNs) without modifying their original architecture or training procedures.
Despite its effectiveness and convenience, the reliability of PTQ methods in the presence of some extreme cases such as distribution shift and data noise remains largely unexplored.
This paper first investigates this problem on various commonly-used PTQ methods.
arXiv Detail & Related papers (2023-03-23T02:55:50Z)
- RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization [4.8018862391424095]
We introduce a Power-of-Two post-training quantization (PTQ) method for deep neural networks that meets hardware requirements.
We propose a novel Power-of-Two PTQ framework, dubbed RAPQ, which dynamically adjusts the Power-of-Two scales of the whole network.
We are the first to propose PTQ for the more constrained but hardware-friendly Power-of-Two quantization and prove that it can achieve nearly the same accuracy as SOTA PTQ methods.
arXiv Detail & Related papers (2022-04-26T14:02:04Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- A White Paper on Neural Network Quantization [20.542729144379223]
We introduce state-of-the-art algorithms for mitigating the impact of quantization noise on the network's performance.
We consider two main classes of algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).
arXiv Detail & Related papers (2021-06-15T17:12:42Z)
- BRECQ: Pushing the Limit of Post-Training Quantization by Block Reconstruction [29.040991149922615]
We study the challenging task of neural network quantization without end-to-end retraining, called Post-training Quantization (PTQ).
We propose a novel PTQ framework, dubbed BRECQ, which pushes the limits of bitwidth in PTQ down to INT2 for the first time.
For the first time we prove that, without bells and whistles, PTQ can attain 4-bit ResNet and MobileNetV2 comparable with QAT and enjoy 240 times faster production of quantized models.
arXiv Detail & Related papers (2021-02-10T13:46:16Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)