RepQ-ViT: Scale Reparameterization for Post-Training Quantization of
Vision Transformers
- URL: http://arxiv.org/abs/2212.08254v2
- Date: Mon, 7 Aug 2023 03:00:41 GMT
- Title: RepQ-ViT: Scale Reparameterization for Post-Training Quantization of
Vision Transformers
- Authors: Zhikai Li, Junrui Xiao, Lianwei Yang, and Qingyi Gu
- Abstract summary: We propose RepQ-ViT, a novel PTQ framework for vision transformers (ViTs).
RepQ-ViT decouples the quantization and inference processes.
It can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level.
- Score: 2.114921680609289
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-training quantization (PTQ), which only requires a tiny dataset for
calibration without end-to-end retraining, is a light and practical model
compression technique. Recently, several PTQ schemes for vision transformers
(ViTs) have been presented; unfortunately, they typically suffer from
non-trivial accuracy degradation, especially in low-bit cases. In this paper,
we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale
reparameterization, to address the above issues. RepQ-ViT decouples the
quantization and inference processes, where the former employs complex
quantizers and the latter employs scale-reparameterized simplified quantizers.
This ensures both accurate quantization and efficient inference, which
distinguishes it from existing approaches that sacrifice quantization
performance to meet the target hardware. More specifically, we focus on two
components with extreme distributions: post-LayerNorm activations with severe
inter-channel variation and post-Softmax activations with power-law features,
and initially apply channel-wise quantization and log$\sqrt{2}$ quantization,
respectively. Then, we reparameterize the scales to hardware-friendly
layer-wise quantization and log2 quantization for inference, at only a slight
cost in accuracy or computation. Extensive experiments are conducted on
multiple vision tasks with different model variants, proving that RepQ-ViT,
without hyperparameters and expensive reconstruction procedures, can outperform
existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ
of ViTs to a usable level. Code is available at
https://github.com/zkkli/RepQ-ViT.
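To make the scale-reparameterization idea in the abstract concrete, the sketch below shows how per-channel quantization scales of a post-LayerNorm activation could be folded into the LayerNorm affine parameters and the following linear layer, leaving an equivalent layer-wise quantizer for inference, together with a toy log$\sqrt{2}$ quantizer for post-Softmax scores. This is a minimal illustration under simplifying assumptions (symmetric quantization, no zero-points, a single calibration batch); the function names and calibration procedure are hypothetical and are not taken from the official RepQ-ViT code.

```python
# Minimal sketch of scale reparameterization in the spirit of RepQ-ViT.
# Assumptions: symmetric quantization (no zero-points), toy calibration.
import torch
import torch.nn as nn


@torch.no_grad()
def reparameterize_ln_scales(ln: nn.LayerNorm, fc: nn.Linear,
                             s_channel: torch.Tensor) -> torch.Tensor:
    """Fold per-channel quantization scales of a post-LayerNorm activation
    into the LayerNorm affine parameters and the following Linear layer,
    so a single layer-wise scale can be used at inference time.

    s_channel: per-channel scales [C] from a channel-wise quantizer during
    calibration. Returns the shared layer-wise scale s_tilde.
    """
    s_tilde = s_channel.mean()      # target hardware-friendly layer-wise scale
    r = s_channel / s_tilde         # per-channel variation factors

    # x_tilde = x / r: absorb 1/r into LayerNorm's affine parameters.
    ln.weight.div_(r)
    ln.bias.div_(r)

    # Keep the network output unchanged: W @ x == (W * r) @ x_tilde.
    fc.weight.mul_(r)               # scales each input column of W
    return s_tilde


def log_sqrt2_quantize(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Toy log-sqrt(2) quantizer for post-Softmax scores in (0, 1]."""
    # -log_{sqrt(2)}(x) = -2 * log2(x)
    q = torch.clamp(torch.round(-2 * torch.log2(x.clamp_min(1e-12))),
                    0, 2 ** n_bits - 1)
    return 2.0 ** (-q / 2)          # dequantized value on a sqrt(2) grid


if __name__ == "__main__":
    ln, fc = nn.LayerNorm(768), nn.Linear(768, 3072)
    x = torch.randn(4, 197, 768)                   # dummy ViT token embeddings
    y_ref = fc(ln(x))

    # Toy 4-bit symmetric channel-wise scales from one calibration batch.
    s_channel = ln(x).abs().amax(dim=(0, 1)) / 7
    s_layer = reparameterize_ln_scales(ln, fc, s_channel)

    y_rep = fc(ln(x))                              # equal up to float error
    print(torch.allclose(y_ref, y_rep, atol=1e-4), float(s_layer))
```

The full method additionally handles asymmetric zero-points (which in general also requires a bias adjustment in the following layer) and reparameterizes the log$\sqrt{2}$ quantizer into a base-2 quantizer whose dequantization reduces to bit shifts; see the repository linked above for the authors' implementation.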