FPTQ: Fine-grained Post-Training Quantization for Large Language Models
- URL: http://arxiv.org/abs/2308.15987v1
- Date: Wed, 30 Aug 2023 12:18:18 GMT
- Title: FPTQ: Fine-grained Post-Training Quantization for Large Language Models
- Authors: Qingyuan Li, Yifan Zhang, Liang Li, Peng Yao, Bo Zhang, Xiangxiang
Chu, Yerui Sun, Li Du, Yuchen Xie
- Abstract summary: We propose a novel W4A8 post-training quantization method for available open-source LLMs.
We obtain the state-of-the-art W4A8 quantized performance on BLOOM, LLaMA, and LLaMA-2 on standard benchmarks.
- Score: 28.11564378745513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the era of large-scale language models, the substantial parameter size
poses significant challenges for deployment. Quantization has emerged as the
mainstream compression technique to tackle this issue, centered mainly on two
recipes: W8A8 and W4A16 (i.e., weights and activations in those bit widths). In
this study, we propose a novel W4A8 post-training quantization method for the
available open-source LLMs, which combines the advantages of both recipes: the
I/O benefit of 4-bit weight quantization and the acceleration of 8-bit matrix
computation. Nevertheless, W4A8 is prone to notorious performance degradation.
As a remedy, we apply layerwise activation quantization strategies, featuring a
novel logarithmic equalization for the most intractable layers, and combine
them with fine-grained weight quantization. Without bells and whistles, we
eliminate the need for further fine-tuning and obtain state-of-the-art W4A8
quantized performance on BLOOM, LLaMA, and LLaMA-2 on standard benchmarks. We
confirm that W4A8 quantization is achievable for the deployment of large
language models, fostering their widespread real-world application.
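To make the recipe concrete, below is a minimal NumPy sketch of the two ingredients named above: fine-grained (per-group) 4-bit weight quantization and a logarithmic activation equalization folded into the weights before 8-bit activation quantization. The function names, group size, and the exact equalization formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quantize_weights_w4_groupwise(W, group_size=128):
    """Fine-grained (per-group) symmetric 4-bit weight quantization:
    each group of `group_size` consecutive input channels gets its own scale."""
    out_f, in_f = W.shape
    Wg = W.reshape(out_f, in_f // group_size, group_size)
    scale = np.abs(Wg).max(axis=-1, keepdims=True) / 7.0   # int4 range [-8, 7]
    q = np.clip(np.round(Wg / scale), -8, 7)
    return (q * scale).reshape(out_f, in_f)                # dequantized view

def log_equalize_activations(X, eps=1e-5):
    """Illustrative logarithmic equalization (an assumption, not the paper's
    exact formula): damp each channel so its post-equalization range grows
    only logarithmically with the original dynamic range."""
    ch_max = np.abs(X).max(axis=0) + eps
    s = np.maximum(ch_max / np.log2(2.0 + ch_max), 1.0)
    return X / s, s

def quantize_activations_a8(X):
    """Per-tensor symmetric 8-bit activation quantization."""
    scale = np.abs(X).max() / 127.0
    return np.clip(np.round(X / scale), -128, 127) * scale

# Toy check on a single linear layer with one outlier activation channel.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 512)); X[:, 3] *= 50.0
W = rng.normal(size=(256, 512))
Xe, s = log_equalize_activations(X)
Wq = quantize_weights_w4_groupwise(W * s)   # fold the equalization into the weights
Y = quantize_activations_a8(Xe) @ Wq.T
print("mean abs error vs. full precision:", np.abs(Y - X @ W.T).mean())
```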
Related papers
- "Give Me BF16 or Give Me Death"? Accuracy-Performance Trade-Offs in LLM Quantization [67.3213104337679]
We evaluate popular quantization formats across academic benchmarks and real-world tasks.
We find that W4A16 offers the best cost-efficiency for synchronous deployments, as well as for asynchronous deployment on mid-tier architectures.
arXiv Detail & Related papers (2024-11-04T18:21:59Z)
- Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization [62.15918574997175]
It is known that language models contain outlier channels whose values are, on average, orders of magnitude larger than those of other channels.
We propose a strategy which regularizes a layer's inputs via quantization-aware training (QAT) and its outputs via activation kurtosis regularization.
We show that regularizing both the inputs and outputs is crucial for preventing a model from "migrating" the difficulty of input quantization to the weights.
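A minimal sketch of such a kurtosis penalty follows (the Gaussian target of 3, the squared penalty, and the weighting are assumptions; the paper's exact regularizer may differ):

```python
import torch

def kurtosis(x, eps=1e-6):
    """Sample kurtosis of a flattened activation tensor (a Gaussian is ~3)."""
    x = x.flatten().float()
    z = (x - x.mean()) / (x.std() + eps)
    return z.pow(4).mean()

def kurtosis_penalty(activations, target=3.0):
    """Push layer outputs toward Gaussian-like tails, discouraging the extreme
    outlier channels that make activation quantization difficult."""
    return sum((kurtosis(a) - target) ** 2 for a in activations)

# Illustrative use inside a quantization-aware training step:
#   loss = task_loss + lam * kurtosis_penalty(collected_layer_outputs)
```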
arXiv Detail & Related papers (2024-04-04T17:25:30Z)
- AffineQuant: Affine Transformation Quantization for Large Language Models [58.45460102764]
Post-Training Quantization (PTQ) has emerged as a subject of considerable interest due to its compression efficiency and cost-effectiveness in the context of training.
Existing PTQ methods for Large-scale Language Models (LLMs) limit the optimization scope to scaling transformations between pre- and post-quantization weights.
In this paper, we advocate direct optimization using equivalent affine transformations in PTQ (AffineQuant).
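The equivalence that AffineQuant builds on can be shown in a few lines: folding an invertible matrix A into a layer as (X A^{-1})(A W) leaves the full-precision output unchanged, while the choice of A changes how well A W quantizes. The sketch below only contrasts the identity with a diagonal (scaling-only) A; AffineQuant's actual contribution, optimizing a full affine matrix, is not reproduced here.

```python
import numpy as np

def quantize_sym(M, bits=4):
    """Symmetric per-tensor round-to-nearest quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(M).max() / qmax
    return np.clip(np.round(M / scale), -qmax - 1, qmax) * scale

def affine_equivalent_error(X, W, A):
    """Output error when an invertible A is folded into the layer:
    Y = (X A^{-1}) (A W), which equals X W exactly before quantization."""
    Y_ref = X @ W
    Y_q = (X @ np.linalg.inv(A)) @ quantize_sym(A @ W)
    return np.abs(Y_ref - Y_q).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64))
W = rng.normal(size=(64, 64)) * np.geomspace(0.01, 10.0, 64)[:, None]  # uneven rows
diag_A = np.diag(1.0 / np.abs(W).max(axis=1))   # scaling-only transformation
print("identity A:", affine_equivalent_error(X, W, np.eye(64)))
print("diagonal A:", affine_equivalent_error(X, W, diag_A))
# Optimizing a full (non-diagonal) A can reduce this error further.
```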
arXiv Detail & Related papers (2024-03-19T08:40:21Z)
- Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge [45.690907522226794]
Large Language Models (LLMs) stand out for their impressive performance in intricate language modeling tasks.
Recent works show that 8-bit or lower weight quantization is feasible with minimal impact on end-to-end task performance.
We propose Agile-Quant, an activation-guided quantization framework for popular Large Language Models.
arXiv Detail & Related papers (2023-12-09T22:12:52Z)
- Enhancing Computation Efficiency in Large Language Models through Weight and Activation Quantization [12.655230451207956]
This paper focuses on post-training quantization (PTQ) in Large Language Models (LLMs).
We present two innovative techniques: activation-quantization-aware scaling (AQAS) and sequence-length-aware calibration (SLAC).
We demonstrate that our techniques significantly boost task accuracies to levels comparable with full-precision models.
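The summary does not spell out the AQAS formula, so the sketch below is only a stand-in for the idea it names: when searching a per-channel smoothing scale, score candidates by the error of the product with both activations and weights quantized, not by the weight error alone. The grid search, the SmoothQuant-style scale parameterization, and the bit widths are assumptions.

```python
import numpy as np

def quantize_sym(M, bits):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(M).max() / qmax
    return np.clip(np.round(M / scale), -qmax - 1, qmax) * scale

def search_scale(X, W, alphas=np.linspace(0.0, 1.0, 11)):
    """Pick the smoothing strength alpha that minimizes the error of the
    jointly quantized product (8-bit activations, 4-bit weights)."""
    Y_ref = X @ W
    x_max = np.abs(X).max(axis=0) + 1e-5
    w_max = np.abs(W).max(axis=1) + 1e-5
    best = (None, np.inf)
    for a in alphas:
        s = x_max ** a / w_max ** (1 - a)        # per-channel migration scale
        Y = quantize_sym(X / s, 8) @ quantize_sym(W * s[:, None], 4)
        err = np.abs(Y_ref - Y).mean()
        if err < best[1]:
            best = (a, err)
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64)); X[:, 5] *= 40.0   # one outlier activation channel
W = rng.normal(size=(64, 64))
print("best alpha, error:", search_scale(X, W))
```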
arXiv Detail & Related papers (2023-11-09T06:19:51Z)
- LLM-FP4: 4-Bit Floating-Point Quantized Transformers [38.23587031169402]
We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values.
Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions.
Our method, for the first time, can quantize both weights and activations in LLaMA-13B to only 4 bits and achieves an average score of 63.1.
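A small sketch of why the floating-point format helps: snap values to a non-uniform 4-bit floating-point grid (an E2M1-style grid is assumed here; the paper's exact format and exponent bias may differ) and compare the error against uniform INT4 on a heavy-tailed sample.

```python
import numpy as np

# Representable magnitudes of an E2M1-style 4-bit float (one common FP4 variant).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4(x):
    """Per-tensor FP4: scale into the grid's range, then snap each value to the
    nearest representable floating-point magnitude (non-uniform levels)."""
    scale = np.abs(x).max() / FP4_GRID[-1]
    mag = np.abs(x) / scale
    idx = np.abs(mag[..., None] - FP4_GRID).argmin(axis=-1)
    return np.sign(x) * FP4_GRID[idx] * scale

def quantize_int4(x):
    """Per-tensor uniform INT4 for comparison."""
    scale = np.abs(x).max() / 7.0
    return np.clip(np.round(x / scale), -8, 7) * scale

# FP4's denser levels near zero suit bell-shaped / long-tailed distributions.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096) * np.exp(rng.standard_normal(4096))  # heavy-tailed
print("fp4 mse :", np.mean((x - quantize_fp4(x)) ** 2))
print("int4 mse:", np.mean((x - quantize_int4(x)) ** 2))
```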
arXiv Detail & Related papers (2023-10-25T17:59:32Z)
- QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4 bits.
We provide GPU kernels matching the QUIK format with highly efficient layer-wise runtimes, leading to practical end-to-end throughput improvements of up to 3.4x.
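A toy sketch of the hybrid idea (the outlier-selection rule, split size, and bit widths below are illustrative, not QUIK's kernel layout): keep the few input channels with the largest activation range in higher precision and quantize the rest to 4 bits.

```python
import numpy as np

def quantize_sym(M, bits):
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(M).max() / qmax
    return np.clip(np.round(M / scale), -qmax - 1, qmax) * scale

def hybrid_matmul(X, W, n_outliers=8):
    """Route outlier input channels through a full-precision path and the bulk
    of the computation through a 4-bit x 4-bit path."""
    order = np.argsort(-np.abs(X).max(axis=0))          # channels by activation range
    hot, cold = order[:n_outliers], order[n_outliers:]
    Y = quantize_sym(X[:, cold], 4) @ quantize_sym(W[cold, :], 4)
    Y += X[:, hot] @ W[hot, :]                          # higher-precision outlier path
    return Y

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 256)); X[:, [3, 17]] *= 60.0   # two outlier channels
W = rng.normal(size=(256, 256))
print("mean abs error:", np.abs(hybrid_matmul(X, W) - X @ W).mean())
```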
arXiv Detail & Related papers (2023-10-13T17:15:05Z)
- Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM [6.85331857224501]
Large Language Models (LLMs) pose significant hardware challenges related to memory requirements and computational ability.
There are two mainstream quantization schemes for LLMs: coarse-grained (e.g., channel-wise) quantization and fine-grained (e.g., group-wise) quantization.
We introduce Dual Grained Quantization (DGQ), a novel A8W4 quantization scheme for LLMs that maintains superior performance while ensuring fast inference speed.
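A rough sketch of one way such a dual-grained scale hierarchy can look: a fine-grained integer scale per group stacked on a coarse-grained floating-point scale per output channel, so the per-group factors stay cheap to fold into integer kernels. The exact scale formats and ranges below are assumptions, not DGQ's kernel design.

```python
import numpy as np

def dual_grained_quantize(W, group_size=64):
    """4-bit weights with a per-group integer scale (1..127) on top of a
    per-output-channel floating-point scale; returns the dequantized view."""
    out_f, in_f = W.shape
    Wg = W.reshape(out_f, in_f // group_size, group_size)
    ch_scale = np.abs(W).max(axis=1, keepdims=True) / (127.0 * 7.0)       # coarse, fp
    grp_scale = np.clip(np.round(np.abs(Wg).max(-1, keepdims=True)
                                 / (7.0 * ch_scale[..., None])), 1, 127)  # fine, int
    q = np.clip(np.round(Wg / (grp_scale * ch_scale[..., None])), -8, 7)
    return (q * grp_scale * ch_scale[..., None]).reshape(out_f, in_f)

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 512)) * np.geomspace(0.01, 10.0, 128)[:, None]
print("mean abs error:", np.abs(W - dual_grained_quantize(W)).mean())
```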
arXiv Detail & Related papers (2023-10-07T14:50:28Z)
- OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models [57.27101446992148]
Large language models (LLMs) have revolutionized natural language processing tasks.
Recent post-training quantization (PTQ) methods are effective in reducing the memory footprint and improving the computational efficiency of LLMs.
We introduce an omnidirectionally calibrated quantization technique (OmniQuant) for LLMs, which achieves good performance in diverse quantization settings.
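OmniQuant's calibration tunes quantization parameters rather than the weights themselves; the sketch below shows only one such ingredient, a learnable clipping ratio for weight quantization trained with a straight-through estimator against the layer's full-precision output. The sigmoid parameterization, loss, and optimizer settings are illustrative assumptions.

```python
import torch

def lwc_quantize(W, gamma, bits=4):
    """Learnable weight clipping (sketch): sigmoid(gamma) in (0, 1) shrinks the
    clipping range before uniform quantization; the straight-through estimator
    lets gradients reach gamma through the scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = torch.sigmoid(gamma) * W.abs().max() / qmax
    q = torch.clamp(torch.round(W / scale), -qmax - 1, qmax)
    return (q.detach() + W / scale - (W / scale).detach()) * scale

torch.manual_seed(0)
W, X = torch.randn(256, 256), torch.randn(64, 256)   # weights + calibration inputs
gamma = torch.zeros(1, requires_grad=True)           # sigmoid(0) = 0.5 to start
opt = torch.optim.Adam([gamma], lr=0.05)
for _ in range(100):                                 # minimize layer output error
    loss = (X @ lwc_quantize(W, gamma).T - X @ W.T).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("learned clipping ratio:", float(torch.sigmoid(gamma)))
```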
arXiv Detail & Related papers (2023-08-25T02:28:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.