Sharpness-aware Quantization for Deep Neural Networks
- URL: http://arxiv.org/abs/2111.12273v5
- Date: Tue, 21 Mar 2023 10:31:46 GMT
- Title: Sharpness-aware Quantization for Deep Neural Networks
- Authors: Jing Liu, Jianfei Cai, Bohan Zhuang
- Abstract summary: Sharpness-Aware Quantization (SAQ) is a novel method to explore the effect of Sharpness-Aware Minimization (SAM) on model compression.
We show that SAQ improves the generalization performance of the quantized models, yielding the SOTA results in uniform quantization.
- Score: 45.150346855368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network quantization is a dominant paradigm of model compression. However,
the abrupt changes in quantized weights during training often lead to severe
loss fluctuations and result in a sharp loss landscape, making the gradients
unstable and thus degrading the performance. Recently, Sharpness-Aware
Minimization (SAM) has been proposed to smooth the loss landscape and improve
the generalization performance of the models. Nevertheless, directly applying
SAM to the quantized models can lead to perturbation mismatch or diminishment
issues, resulting in suboptimal performance. In this paper, we propose a novel
method, dubbed Sharpness-Aware Quantization (SAQ), to explore, for the first
time, the effect of SAM in model compression, and in quantization in particular.
Specifically, we first provide a unified view of quantization and SAM by
treating them as introducing quantization noises and adversarial perturbations
to the model weights, respectively. According to whether the noise and
perturbation terms depend on each other, SAQ can be formulated into three
cases, which are analyzed and compared comprehensively. Furthermore, by
introducing an efficient training strategy, SAQ only incurs a little additional
training overhead compared with the default optimizer (e.g., SGD or AdamW).
Extensive experiments on both convolutional neural networks and Transformers
across various datasets (i.e., ImageNet, CIFAR-10/100, Oxford Flowers-102,
Oxford-IIIT Pets) show that SAQ improves the generalization performance of the
quantized models, yielding the SOTA results in uniform quantization. For
example, on ImageNet, SAQ outperforms AdamW by 1.2% on the Top-1 accuracy for
4-bit ViT-B/16. Our 4-bit ResNet-50 surpasses the previous SOTA method by 0.9%
on the Top-1 accuracy.
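The abstract's unified view reads quantized training as optimizing L(Q(w)) = L(w + n), where n = Q(w) - w is the quantization noise, while SAM adds a worst-case weight perturbation of radius rho. The sketch below (not the authors' code) shows one way such a sharpness-aware step could look for a model whose weights are fake-quantized in the forward pass; the uniform quantizer, the QuantLinear wrapper, and the choice of perturbing the latent full-precision weights are illustrative assumptions, and SAQ itself distinguishes three cases of how the noise and perturbation terms interact.

```python
# Minimal sketch, assuming a straight-through uniform quantizer and a SAM-style
# perturbation of the latent weights; not the authors' implementation.
import torch
import torch.nn.functional as F

def fake_quantize(w, num_bits=4):
    """Uniform fake quantization with a straight-through estimator (illustrative)."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max() / qmax + 1e-8
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # gradients flow to the latent weights

class QuantLinear(torch.nn.Linear):
    """Linear layer whose weights are quantized on the fly in the forward pass."""
    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight), self.bias)

def sharpness_aware_quantized_step(model, loss_fn, x, y, optimizer, rho=0.05):
    # 1) Gradients of the quantized model at the current latent weights.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) SAM perturbation e = rho * g / ||g||, applied to the latent weights
    #    (only one of the possible noise/perturbation couplings).
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)

    # 3) Gradients at the perturbed (and re-quantized) weights.
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # 4) Restore the weights and update with the sharpness-aware gradients.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

For instance, a toy model such as torch.nn.Sequential(QuantLinear(784, 256), torch.nn.ReLU(), QuantLinear(256, 10)) could be trained with SGD or AdamW using this step; the efficient training strategy the abstract mentions for cutting the cost of the extra forward/backward pass is not reflected here.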
Related papers
- PassionSR: Post-Training Quantization with Adaptive Scale in One-Step Diffusion based Image Super-Resolution [87.89013794655207]
Diffusion-based image super-resolution (SR) models have shown superior performance at the cost of multiple denoising steps.
We propose PassionSR, a novel post-training quantization approach with adaptive scale for one-step diffusion (OSD) image SR.
Our PassionSR achieves significant advantages over recent leading low-bit quantization methods for image SR.
arXiv Detail & Related papers (2024-11-26T04:49:42Z) - 2DQuant: Low-bit Post-Training Quantization for Image Super-Resolution [83.09117439860607]
Low-bit quantization has become widespread for compressing image super-resolution (SR) models for edge deployment.
It is notorious that low-bit quantization degrades the accuracy of SR models compared to their full-precision (FP) counterparts.
We present a dual-stage low-bit post-training quantization (PTQ) method for image super-resolution, namely 2DQuant, which achieves efficient and accurate SR under low-bit quantization.
arXiv Detail & Related papers (2024-06-10T06:06:11Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models [14.07649230604283]
We propose low complexity changes to the quantization aware training (QAT) process to improve model accuracy.
With the improved accuracy, it becomes possible to exploit some of the other benefits of noise-based QAT.
arXiv Detail & Related papers (2023-05-24T19:45:56Z) - Q-Diffusion: Quantizing Diffusion Models [52.978047249670276]
Post-training quantization (PTQ) is considered a go-to compression method for other tasks.
We propose a novel PTQ method specifically tailored towards the unique multi-timestep pipeline and model architecture.
We show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance.
arXiv Detail & Related papers (2023-02-08T19:38:59Z) - Error-aware Quantization through Noise Tempering [43.049102196902844]
Quantization-aware training (QAT) optimizes model parameters with respect to the end task while simulating quantization error.
In this work, we incorporate exponentially decaying quantization-error-aware noise together with a learnable scale of task loss gradient to approximate the effect of a quantization operator (a hedged sketch of this idea appears after this list).
Our method obtains state-of-the-art top-1 classification accuracy for uniform (non-mixed-precision) quantization, outperforming previous methods by 0.5-1.2% absolute.
arXiv Detail & Related papers (2022-12-11T20:37:50Z) - Mixed-Precision Inference Quantization: Radically Towards Faster
inference speed, Lower Storage requirement, and Lower Loss [4.877532217193618]
Existing quantization techniques rely heavily on experience and "fine-tuning" skills.
This study provides a methodology for obtaining a mixed-precision quantization model with a lower loss than the full-precision model.
In particular, we will demonstrate that neural networks with massive identity mappings are resistant to the quantization method.
arXiv Detail & Related papers (2022-07-20T10:55:34Z) - 4-bit Conformer with Native Quantization Aware Training for Speech
Recognition [13.997832593421577]
We propose to develop 4-bit ASR models with native quantization aware training, which leverages native integer operations to effectively optimize both training and inference.
We conducted two experiments on state-of-the-art Conformer-based ASR models to evaluate our proposed quantization technique.
For the first time, we investigated and revealed the viability of 4-bit quantization on a practical ASR system trained with large-scale datasets.
arXiv Detail & Related papers (2022-03-29T23:57:15Z) - Efficient Sharpness-aware Minimization for Improved Training of Neural
Networks [146.2011175973769]
This paper proposes the Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance.
ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection.
We show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM enhances the efficiency over SAM, reducing the required extra computation from 100% to 40% vis-a-vis base optimizers.
arXiv Detail & Related papers (2021-10-07T02:20:37Z) - Learnable Companding Quantization for Accurate Low-bit Neural Networks [3.655021726150368]
Quantizing deep neural networks is an effective method for reducing memory consumption and improving inference speed.
It is still hard for extremely low-bit models to achieve accuracy comparable with that of full-precision models.
We propose learnable companding quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and 4-bit models.
arXiv Detail & Related papers (2021-03-12T09:06:52Z)
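As a companion to the Error-aware Quantization entry above, here is a hedged sketch of the general noise-tempering idea: the weights are perturbed by noise shaped by their current quantization error, and that noise is annealed over training so the model gradually converges to hard quantization. The exponential decay schedule and the blending rule are assumptions for illustration, and the learnable scale of the task loss gradient is omitted.

```python
# Illustrative sketch of error-aware noise tempering; not the paper's exact formulation.
import torch

def tempered_quantize(w, step, num_bits=4, decay=1e-4):
    """Blend hard fake quantization with quantization-error-shaped noise that decays over training."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max() / qmax + 1e-8
    w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
    quant_error = (w_q - w).detach()                   # per-element rounding error
    t = float(torch.exp(torch.tensor(-decay * step)))  # anneals from 1 toward 0
    noise = t * quant_error * torch.rand_like(w)       # error-aware, decaying noise
    hard = w + (w_q - w).detach()                      # straight-through hard quantization
    # Early in training the weights mostly see soft, noisy quantization;
    # late in training the output converges to hard quantization.
    return (1.0 - t) * hard + t * (w + noise)
```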
This list is automatically generated from the titles and abstracts of the papers on this site.