Outlier Suppression+: Accurate quantization of large language models by
equivalent and optimal shifting and scaling
- URL: http://arxiv.org/abs/2304.09145v3
- Date: Mon, 23 Oct 2023 08:48:31 GMT
- Title: Outlier Suppression+: Accurate quantization of large language models by
equivalent and optimal shifting and scaling
- Authors: Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong,
Jinyang Guo, Xianglong Liu
- Abstract summary: Post-training quantization of transformer language models faces challenges due to the existence of detrimental outliers in activations.
We propose the Outlier Suppression+ (OS+) framework, which combines channel-wise shifting to remove asymmetry with channel-wise scaling to reduce outlier concentration.
We show that these operations can be seamlessly migrated into subsequent modules while maintaining equivalence.
- Score: 44.60348333479704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Post-training quantization (PTQ) of transformer language models faces
significant challenges due to the existence of detrimental outliers in
activations. We observe that these outliers are concentrated in specific
channels and are asymmetric across channels. To address this issue, we propose
the Outlier Suppression+ (OS+) framework, which contains channel-wise shifting
for asymmetry and channel-wise scaling for concentration. We first show that
these operations can be seamlessly migrated into subsequent modules while
maintaining equivalence. Second, we propose a fast and stable scheme to
calculate effective shifting and scaling values. The channel-wise shifting
aligns the center of each channel to remove outlier asymmetry. The channel-wise
scaling quantitatively evaluates the changes brought by migration and
quantization to better balance the quantization burden. We validate OS+ under
both standard and fine-grained quantization settings with models including
BERT, OPT, BLOOM, BLOOMZ, and LLaMA. Comprehensive results across various tasks
demonstrate the superiority of our approach. In particular, with standard
quantization, OS+ achieves near-floating-point performance for both small
models and large language models at 8-bit and 6-bit. In addition, we establish
a new state-of-the-art for 4-bit BERT with a 15.5% improvement. Our code is
available at https://github.com/ModelTC/Outlier_Suppression_Plus.
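
As a rough illustration of the shift-and-scale migration described in the abstract, the PyTorch sketch below folds a channel-wise shift and scale of the input activations into the following nn.Linear so that the output is unchanged. This is a minimal sketch under simplifying assumptions: the helper name migrate_shift_scale and the toy choice of shift/scale values are illustrative only and are not OS+'s calibration scheme, which computes effective shifting and scaling values to balance the quantization burden.

```python
import torch
import torch.nn as nn


@torch.no_grad()
def migrate_shift_scale(linear: nn.Linear, shift: torch.Tensor, scale: torch.Tensor) -> None:
    """Fold a channel-wise shift/scale of the *input* activations into `linear`
    so that the end-to-end function is unchanged:
        x_hat = (x - shift) / scale        # quantization-friendly activations
        linear'(x_hat) == linear(x)        # equivalence after migration
    `shift` and `scale` have shape [in_features]; `scale` must be nonzero.
    (Illustrative helper, not the authors' implementation.)
    """
    # b' = b + W @ shift  (absorbs the subtracted channel centers)
    extra = linear.weight @ shift
    if linear.bias is None:
        linear.bias = nn.Parameter(torch.zeros(linear.out_features,
                                               dtype=linear.weight.dtype))
    linear.bias.add_(extra)
    # W'[:, j] = W[:, j] * scale[j]  (undoes the division by scale)
    linear.weight.mul_(scale.view(1, -1))


# Toy usage: shift by the per-channel center, scale by the per-channel magnitude.
torch.manual_seed(0)
x = torch.randn(16, 8)
x[:, 3] += 20.0                                   # synthetic outlier channel
lin = nn.Linear(8, 4)
ref = lin(x)

shift = (x.max(0).values + x.min(0).values) / 2   # channel centers (asymmetry)
scale = (x - shift).abs().max(0).values.clamp(min=1e-5)  # channel magnitudes

migrate_shift_scale(lin, shift, scale)
x_hat = (x - shift) / scale                       # what would actually be quantized
assert torch.allclose(lin(x_hat), ref, atol=1e-4)
```

The assertion checks the equivalence claim on random data: feeding the shifted-and-scaled activations to the migrated layer reproduces the original output, so quantization can then operate on the better-behaved x_hat instead of the outlier-heavy x.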
Related papers
- OutlierTune: Efficient Channel-Wise Quantization for Large Language Models [24.645237670811476]
OutlierTune is an efficient per-channel post-training quantization method for the activations of large language models.
The proposed framework is easy to implement and hardware-efficient, introducing almost no additional computational overhead during inference (a generic per-channel quantization sketch appears after this list).
arXiv Detail & Related papers (2024-06-27T02:02:26Z)
- Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization [62.15918574997175]
It is known that language models contain outlier channels whose values are on average orders of magnitude higher than those of other channels.
We propose a strategy which regularizes a layer's inputs via quantization-aware training (QAT) and its outputs via activation kurtosis regularization.
We show that regularizing both the inputs and outputs is crucial for preventing the model from "migrating" the difficulty of input quantization to the weights.
arXiv Detail & Related papers (2024-04-04T17:25:30Z)
- QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models [44.515165695546614]
Quantization-Aware Training (QAT) offers a solution, but its extensive training costs make Post-Training Quantization (PTQ) a more practical approach for Large Language Models (LLMs).
We propose QLLM, an accurate and efficient low-bitwidth PTQ method designed for LLMs.
arXiv Detail & Related papers (2023-10-12T05:25:49Z)
- Rethinking Channel Dimensions to Isolate Outliers for Low-bit Weight Quantization of Large Language Models [7.485068491216164]
Large Language Models (LLMs) have recently demonstrated remarkable success across various tasks.
Weight-only quantization can be a promising approach, but sub-4-bit quantization remains a challenge due to large-magnitude activation outliers.
We propose per-IC quantization, a simple yet effective method that creates quantization groups within each input channel.
arXiv Detail & Related papers (2023-09-27T09:48:31Z)
- Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models [57.933500846742234]
Recent work recognizes that structured outliers are the critical bottleneck for quantization performance.
We propose an outlier suppression framework including two components: Gamma Migration and Token-Wise Clipping.
This framework effectively suppresses the outliers and can be used in a plug-and-play mode.
arXiv Detail & Related papers (2022-09-27T12:05:59Z)
- ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers [70.76313507550684]
We propose a content-based sparse attention method, as an alternative to dense self-attention.
Specifically, we cluster and then aggregate key and value tokens, as a content-based method of reducing the total token count.
The resulting clustered-token sequence retains the semantic diversity of the original signal, but can be processed at a lower computational cost.
arXiv Detail & Related papers (2022-08-28T04:18:27Z)
- Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [73.29587731448345]
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations.
First, to obtain low bit-width weights, most existing methods derive them by quantizing the full-precision network weights.
Second, to obtain low bit-width activations, existing works consider all channels equally.
arXiv Detail & Related papers (2020-12-26T15:21:18Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves performance comparable to or even better than its full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
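
Several entries above (OutlierTune's per-channel activation quantization and the per-IC grouping of "Rethinking Channel Dimensions") revolve around giving each channel or channel group its own quantization parameters. The sketch below illustrates only that generic idea; it is not the algorithm of any listed paper, and the helper name fake_quant_per_channel and the toy data are assumptions made for illustration.

```python
import torch


def fake_quant_per_channel(x: torch.Tensor, n_bits: int = 8, dim: int = -1) -> torch.Tensor:
    """Symmetric fake quantization with one step size per channel along `dim`.
    A generic sketch of the per-channel idea, not any listed paper's exact
    algorithm: giving each channel its own step size keeps a single outlier
    channel from inflating the quantization error of all the others.
    """
    qmax = 2 ** (n_bits - 1) - 1
    reduce_dims = [d for d in range(x.dim()) if d != dim % x.dim()]
    step = x.abs().amax(dim=reduce_dims, keepdim=True).clamp(min=1e-8) / qmax
    return (x / step).round().clamp(-qmax - 1, qmax) * step


# Compare 8-bit per-tensor vs. per-channel error on activations with one outlier channel.
torch.manual_seed(0)
a = torch.randn(256, 8)
a[:, 5] *= 100.0                                  # synthetic outlier channel
step_t = a.abs().max() / 127                      # single per-tensor step size
per_tensor = (a / step_t).round().clamp(-128, 127) * step_t
per_channel = fake_quant_per_channel(a, n_bits=8, dim=1)
print("per-tensor  MAE:", (a - per_tensor).abs().mean().item())
print("per-channel MAE:", (a - per_channel).abs().mean().item())
```

On this toy input, the per-channel variant yields a much lower mean absolute error because the outlier channel no longer dictates the step size used for every other channel.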
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.