Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM
Inference?
- URL: http://arxiv.org/abs/2310.05079v2
- Date: Sat, 21 Oct 2023 12:38:52 GMT
- Title: Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM
Inference?
- Authors: Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George A. Constantinides,
and Yiren Zhao
- Abstract summary: We study the statistical and learning properties of Large language models (LLMs).
We adapt block quantisations for LLMs, a family of methods that share scaling factors across packed numbers.
Our nearly-lossless quantised 6-bit LLMs achieve a $19\times$ higher arithmetic density and $5\times$ memory density than the float32 baseline.
- Score: 21.243853199880807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The inference of Large language models (LLMs) requires immense computation
and memory resources. To curtail these costs, quantisation has emerged as a
promising solution, but existing LLM quantisation mainly focuses on 8-bit. In
this work, we explore the statistical and learning properties of the LLM layer
and attribute the bottleneck of LLM quantisation to numerical scaling offsets.
To address this, we adapt block quantisations for LLMs, a family of methods
that share scaling factors across packed numbers. Block quantisations
efficiently reduce the numerical scaling offsets solely from an arithmetic
perspective, without additional treatments in the computational path. Our
nearly-lossless quantised 6-bit LLMs achieve a $19\times$ higher arithmetic
density and $5\times$ memory density than the float32 baseline, surpassing the
prior art 8-bit quantisation by $2.5\times$ in arithmetic density and
$1.2\times$ in memory density, without requiring any data calibration or
re-training. We also share our insights into sub-8-bit LLM quantisation,
including the mismatch between activation and weight distributions, optimal
fine-tuning strategies, and a lower quantisation granularity inherent in the
statistical properties of LLMs. The latter two tricks enable nearly-lossless
4-bit LLMs on downstream tasks. Our code is open-sourced.
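To make the core idea concrete, below is a minimal NumPy sketch of generic block quantisation, in which each block of packed values shares one scaling factor derived from its own maximum magnitude. The block size, bit-width, and symmetric max-abs scaling are illustrative assumptions, not the paper's exact arithmetic or released code.

```python
import numpy as np

def block_quantise(x, block_size=16, n_bits=6):
    """Quantise a 1-D tensor in blocks that share a single scaling factor.

    Generic symmetric block-quantisation sketch (illustrative, not the
    paper's implementation): each block of `block_size` values is scaled
    by its own max-abs value, rounded onto a signed `n_bits` integer grid,
    and de-quantised for inspection.
    """
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 31 for 6-bit signed
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block_size                      # pad so blocks divide evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    scales = np.abs(blocks).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                         # avoid division by zero
    q = np.clip(np.round(blocks / scales), -qmax - 1, qmax).astype(np.int8)

    dequant = (q.astype(np.float32) * scales).reshape(-1)[:len(x)]
    return q, scales, dequant

# Example: two blocks whose magnitudes differ by three orders of magnitude.
x = np.concatenate([np.random.randn(16) * 0.01, np.random.randn(16) * 10.0])
q, scales, x_hat = block_quantise(x, block_size=16, n_bits=6)
print("max abs error per block:",
      np.abs(x - x_hat).reshape(-1, 16).max(axis=1))
```

Because each block carries its own scale, a block of small values is not forced onto the coarse grid implied by a distant large-valued block; informally, this is the arithmetic-level effect the abstract describes as reducing numerical scaling offsets.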
Related papers
- SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models [67.67135738642547]
Post-training quantization (PTQ) is a powerful compression technique investigated in large language models (LLMs).
Existing PTQ methods are not ideal in terms of accuracy and efficiency, especially at bit-widths below 4.
This paper presents a Salience-Driven Mixed-Precision Quantization scheme for LLMs, namely SliM-LLM.
arXiv Detail & Related papers (2024-05-23T16:21:48Z) - QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving [52.31791050376249]
Quantization can accelerate large language model (LLM) inference.
Existing INT4 quantization methods suffer from significant runtime overhead when dequantizing weights or partial sums.
We introduce QoQ, a W4A8KV4 quantization algorithm with 4-bit weight, 8-bit activation, and 4-bit KV cache.
QServe improves the maximum achievable serving throughput of Llama-3-8B by 1.2x on A100, 1.4x on L40S; and of Qwen1.5-72B by 2.4x on A100, 3.5x on L40S.
arXiv Detail & Related papers (2024-05-07T17:59:30Z) - An Empirical Study of LLaMA3 Quantization: From LLMs to MLLMs [54.91212829143966]
This study explores LLaMA3's capabilities when quantized to low bit-width.
We evaluate 10 existing post-training quantization and LoRA fine-tuning methods on LLaMA3, spanning 1-8 bits and diverse datasets.
Our experimental results indicate that LLaMA3 still suffers non-negligible degradation in linguistic and visual contexts.
arXiv Detail & Related papers (2024-04-22T10:03:03Z) - FlattenQuant: Breaking Through the Inference Compute-bound for Large
Language Models with Per-tensor Quantization [6.931020818874328]
We introduce FlattenQuant, a method that significantly reduces the maximum value of a tensor by flattening its large channels, enabling low-bit per-tensor quantization with minimal accuracy loss.
Our work achieves up to 2$\times$ speedup and 2.3$\times$ memory reduction for LLMs with negligible loss in accuracy.
arXiv Detail & Related papers (2024-02-28T02:00:34Z) - OneBit: Towards Extremely Low-bit Large Language Models [66.29839811207617]
This paper boldly quantizes the weight matrices of LLMs to 1-bit, paving the way for the extremely low bit-width deployment of LLMs.
Experiments indicate that OneBit achieves good performance (at least 81% of the non-quantized performance on LLaMA models) with robust training processes.
arXiv Detail & Related papers (2024-02-17T14:26:57Z) - BiLLM: Pushing the Limit of Post-Training Quantization for LLMs [53.31402059062365]
BiLLM is a groundbreaking 1-bit post-training quantization scheme tailored for pretrained large language models.
It achieves, for the first time, high-accuracy inference (e.g. 8.41 perplexity on LLaMA2-70B) with only 1.08-bit weights across various LLM families.
arXiv Detail & Related papers (2024-02-06T09:26:34Z) - Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM [6.85331857224501]
Large Language Models (LLMs) pose significant hardware challenges related to memory requirements and computational ability.
There are two mainstream quantization schemes for LLMs: coarse-grained (e.g., channel-wise) quantization and fine-grained (e.g., group-wise) quantization; an illustrative sketch contrasting these granularities appears after this list.
We introduce Dual Grained Quantization (DGQ), a novel A8W4 quantization for LLM that maintains superior performance while ensuring fast inference speed.
arXiv Detail & Related papers (2023-10-07T14:50:28Z) - FPTQ: Fine-grained Post-Training Quantization for Large Language Models [28.11564378745513]
We propose a novel W4A8 post-training quantization method for the available open-sourced LLMs.
We obtain the state-of-the-art W4A8 quantized performance on BLOOM, LLaMA, and LLaMA-2 on standard benchmarks.
arXiv Detail & Related papers (2023-08-30T12:18:18Z) - OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models [57.27101446992148]
Large language models (LLMs) have revolutionized natural language processing tasks.
Recent post-training quantization (PTQ) methods are effective in reducing memory footprint and improving the computational efficiency of LLMs.
We introduce an Omnidirectionally calibrated Quantization technique for LLMs, which achieves good performance in diverse quantization settings.
arXiv Detail & Related papers (2023-08-25T02:28:35Z) - SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models [14.929695160346276]
Large language models (LLMs) show excellent performance but are compute- and memory-intensive.
We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization solution.
We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy.
arXiv Detail & Related papers (2022-11-18T18:59:33Z)
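Several entries above turn on quantization granularity: FlattenQuant targets per-tensor quantization, DGQ contrasts channel-wise with group-wise scaling, and the main paper argues for block-level scaling factors. The sketch below is a hedged, illustrative comparison rather than any of these papers' actual algorithms: it measures the round-trip error of one weight matrix under per-tensor, per-channel, and group-wise scaling, with a made-up matrix, bit-width, and group size.

```python
import numpy as np

def quantise_with_scales(w, scales, n_bits=4):
    """Symmetric round-to-nearest quantisation given broadcastable scales."""
    qmax = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(w / scales), -qmax - 1, qmax)
    return q * scales                                  # de-quantised reconstruction

def granularity_error(w, n_bits=4, group_size=32):
    qmax = 2 ** (n_bits - 1) - 1
    out = {}

    # Per-tensor: one scale for the whole matrix (coarsest granularity).
    s = np.abs(w).max() / qmax
    out["per-tensor"] = np.abs(w - quantise_with_scales(w, s, n_bits)).mean()

    # Per-channel: one scale per output channel (row).
    s = np.abs(w).max(axis=1, keepdims=True) / qmax
    out["per-channel"] = np.abs(w - quantise_with_scales(w, s, n_bits)).mean()

    # Group-wise / block: one scale per `group_size` weights within a row.
    g = w.reshape(w.shape[0], -1, group_size)
    s = np.abs(g).max(axis=2, keepdims=True) / qmax
    rec = quantise_with_scales(g, s, n_bits).reshape(w.shape)
    out["group-wise"] = np.abs(w - rec).mean()
    return out

# Illustrative weight matrix with a few outlier channels.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)
w[:4] *= 20.0                                          # outlier rows inflate the per-tensor scale
print(granularity_error(w, n_bits=4, group_size=32))
```

On matrices with outlier channels, the per-tensor error is typically dominated by the single global scale, while per-channel and group-wise scaling keep the damage local to the blocks that actually contain outliers; this is the trade-off the granularity discussion above refers to.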