GPTVQ: The Blessing of Dimensionality for LLM Quantization
- URL: http://arxiv.org/abs/2402.15319v1
- Date: Fri, 23 Feb 2024 13:39:16 GMT
- Title: GPTVQ: The Blessing of Dimensionality for LLM Quantization
- Authors: Mart van Baalen, Andrey Kuzmin, Markus Nagel, Peter Couperus, Cedric
Bastoul, Eric Mahurin, Tijmen Blankevoort, Paul Whatmough
- Abstract summary: We show that the size versus accuracy trade-off of neural network quantization can be significantly improved by increasing the quantization dimensionality.
We propose the GPTVQ method, a new fast method for post-training vector quantization (VQ) that scales well to Large Language Models (LLMs).
Our method interleaves quantization of one or more columns with updates to the remaining unquantized weights, using information from the Hessian of the per-layer output reconstruction MSE.
- Score: 16.585681547799762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we show that the size versus accuracy trade-off of neural
network quantization can be significantly improved by increasing the
quantization dimensionality. We propose the GPTVQ method, a new fast method for
post-training vector quantization (VQ) that scales well to Large Language
Models (LLMs). Our method interleaves quantization of one or more columns with
updates to the remaining unquantized weights, using information from the
Hessian of the per-layer output reconstruction MSE. Quantization codebooks are
initialized using an efficient data-aware version of the EM algorithm. The
codebooks are then updated, and further compressed by using integer
quantization and SVD-based compression. GPTVQ establishes a new state of the
art in the size versus accuracy trade-off on a wide range of LLMs such as
Llama-v2 and Mistral. Furthermore, our method is efficient: on a single H100 it
takes between 3 and 11 hours to process a Llama-v2-70B model, depending on the
quantization setting. Lastly, with on-device timings for VQ decompression on a
mobile CPU we show that VQ leads to improved latency compared to using a 4-bit
integer format.
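To make the recipe above concrete, the following is a minimal, illustrative Python sketch of the general idea: quantize the weights of one column with a small vector codebook, then fold the resulting error back into the not-yet-quantized columns using the inverse Hessian of the layer's input. All names are ours; plain k-means stands in for the paper's data-aware EM initialization, and the codebook update and compression steps are omitted, so this is a sketch of the idea rather than the authors' implementation.

```python
# Illustrative sketch only: column-wise vector quantization with GPTQ-style
# Hessian error feedback. Plain k-means replaces the data-aware EM
# initialization; codebook update/compression steps are omitted.
import numpy as np

def kmeans(points, k, iters=25, seed=0):
    """Plain k-means over points of shape (n, d); returns a (k, d) codebook."""
    rng = np.random.default_rng(seed)
    codebook = points[rng.choice(len(points), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((points[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for c in range(k):
            members = points[assign == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook

def vq_quantize_layer(W, H, vq_dim=2, bits=3, damp=0.01):
    """W: (rows, cols) weights; H: (cols, cols) Hessian proxy X @ X.T from calibration data."""
    W = W.astype(np.float64).copy()
    rows, cols = W.shape
    assert rows % vq_dim == 0, "sketch assumes rows divide evenly into vq_dim-sized vectors"
    k = 2 ** (bits * vq_dim)                   # e.g. 3 bits/weight with 2-d vectors -> 64 centroids
    Hinv = np.linalg.inv(H + damp * np.mean(np.diag(H)) * np.eye(cols))
    for j in range(cols):
        col = W[:, j]
        vecs = col.reshape(-1, vq_dim)         # group this column's weights into d-dim vectors
        codebook = kmeans(vecs, min(k, len(vecs)))
        d2 = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        q = codebook[d2.argmin(axis=1)].reshape(-1)
        err = (col - q) / Hinv[j, j]
        W[:, j] = q
        if j + 1 < cols:                       # fold the error into the unquantized columns
            W[:, j + 1:] -= np.outer(err, Hinv[j, j + 1:])
    return W
```

In the method itself the codebooks are subsequently updated and compressed further with integer quantization and SVD; the sketch stops after the assignment step.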
Related papers
- Residual vector quantization for KV cache compression in large language model [2.3094645821058735]
KV cache compression methods have mainly relied on scalar quantization techniques to reduce the memory requirements during decoding.
In this work, we apply residual vector quantization, which has been widely used for high-fidelity audio compression, to compress the KV cache in large language models (LLMs).
The codebook is learned with an exponential moving average, and there are no other learnable parameters, including the input and output projections normally used in a vector quantization setup.
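As an illustration of the mechanism described above, here is a minimal residual VQ sketch with an exponential-moving-average codebook update; the class, the two-stage setup, and the decay value are our own hypothetical choices, not taken from the paper.

```python
# Hypothetical sketch of residual vector quantization with EMA codebook updates;
# simplified relative to the paper (no cluster-size tracking, no KV-cache plumbing).
import numpy as np

class ResidualVQ:
    def __init__(self, dim, codebook_size, num_stages, decay=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.decay = decay
        self.codebooks = rng.normal(size=(num_stages, codebook_size, dim))

    def encode(self, x):
        """x: (n, dim). Each stage quantizes the residual left by the previous stage."""
        residual = x.astype(np.float64).copy()
        recon = np.zeros_like(residual)
        codes = []
        for codebook in self.codebooks:
            d2 = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            idx = d2.argmin(axis=1)
            codes.append(idx)
            recon += codebook[idx]
            residual -= codebook[idx]
        return codes, recon

    def ema_update(self, x):
        """Move each used code toward the mean of the residual vectors assigned to it."""
        residual = x.astype(np.float64).copy()
        for codebook in self.codebooks:
            d2 = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            idx = d2.argmin(axis=1)
            for c in np.unique(idx):
                mean_member = residual[idx == c].mean(axis=0)
                codebook[c] = self.decay * codebook[c] + (1 - self.decay) * mean_member
            residual -= codebook[idx]

# Illustrative sizes: two stages of 256 codes give two 8-bit indices per 64-dim vector.
rvq = ResidualVQ(dim=64, codebook_size=256, num_stages=2)
codes, recon = rvq.encode(np.random.randn(1024, 64))
```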
arXiv Detail & Related papers (2024-10-21T07:20:41Z) - VQ4DiT: Efficient Post-Training Vector Quantization for Diffusion Transformers [7.369445527610879]
Diffusion Transformer models (DiTs) have transitioned the network architecture from traditional UNets to transformers, demonstrating exceptional capabilities in image generation.
Vector quantization (VQ) can decompose model weight into a codebook and assignments, allowing extreme weight quantization and significantly reducing memory usage.
We propose VQ4DiT, a fast post-training vector quantization method for DiTs. Experiments show that VQ4DiT establishes a new state-of-the-art in model size and performance trade-offs, quantizing weights to 2-bit precision while retaining acceptable image generation quality.
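The codebook-plus-assignments decomposition mentioned above can be illustrated with plain k-means over weight sub-vectors. The sketch below shows only this generic decomposition and the resulting storage cost; it does not implement VQ4DiT's calibration of codebooks and assignments.

```python
# Generic weight -> (codebook, assignments) decomposition via k-means.
# Illustrative only; VQ4DiT additionally calibrates codebooks and assignments.
import numpy as np

def decompose_weights(W, vq_dim=4, bits=2, iters=20, seed=0):
    """Return a (k, vq_dim) codebook and one integer assignment per sub-vector."""
    rng = np.random.default_rng(seed)
    vecs = W.reshape(-1, vq_dim)               # assumes W.size is divisible by vq_dim
    k = 2 ** (bits * vq_dim)                   # 2 bits/weight with 4-d vectors -> 256 codes
    codebook = vecs[rng.choice(len(vecs), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assignments = d2.argmin(axis=1)
        for c in range(k):
            members = vecs[assignments == c]
            if len(members):
                codebook[c] = members.mean(axis=0)
    return codebook, assignments.astype(np.uint8)

W = np.random.randn(64, 256).astype(np.float32)
codebook, assignments = decompose_weights(W)
W_hat = codebook[assignments].reshape(W.shape)  # dequantized weights
```

With 4-dimensional sub-vectors and 256 codes, every weight costs 8/4 = 2 index bits plus the amortized codebook itself, which is how VQ reaches the extreme bit-widths mentioned above.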
arXiv Detail & Related papers (2024-08-30T09:15:54Z) - GPTQT: Quantize Large Language Models Twice to Push the Efficiency [1.3149617027696827]
This paper introduces a new post-training quantization method, GPTQT, to reduce memory usage and enhance processing speed.
Practice has shown that minimizing the quantization error of the weights alone is ineffective and leads to overfitting.
GPTQT employs a progressive two-step approach: it initially quantizes weights with linear quantization to a relatively high bit-width, then converts the obtained integer weights to lower-bit binary coding.
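A hedged sketch of such a two-step scheme is shown below: first uniform ("linear") quantization to a relatively high bit-width, then greedy multi-bit binary coding of the result (w ~ sum_i alpha_i * b_i with b_i in {-1, +1}). The coding step here is the standard greedy construction, used only to illustrate the progression; it is not claimed to be GPTQT's exact algorithm, and all names are ours.

```python
# Two-step sketch: (1) uniform ("linear") quantization to a higher bit-width,
# then (2) greedy multi-bit binary coding of the resulting values.
import numpy as np

def linear_quantize(w, bits=8):
    """Uniform (asymmetric) quantization followed by dequantization."""
    scale = (w.max() - w.min()) / (2 ** bits - 1)
    zero = w.min()
    q = np.round((w - zero) / scale)
    return q * scale + zero

def binary_code(w, bits=3):
    """Greedy binary coding: w ~= sum_i alpha_i * b_i with b_i in {-1, +1}."""
    residual = w.astype(np.float64).copy()
    alphas, signs = [], []
    for _ in range(bits):
        b = np.sign(residual)
        b[b == 0] = 1.0
        a = np.abs(residual).mean()            # least-squares scale for a +-1 code
        alphas.append(a)
        signs.append(b)
        residual -= a * b
    return np.array(alphas), np.array(signs)

w = np.random.randn(4096)
alphas, signs = binary_code(linear_quantize(w, bits=8), bits=3)
w_hat = (alphas[:, None] * signs).sum(axis=0)  # 3-bit binary-coded reconstruction
```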
arXiv Detail & Related papers (2024-07-03T08:08:01Z) - QTIP: Quantization with Trellises and Incoherence Processing [29.917017118524246]
Post-training quantization (PTQ) reduces the memory footprint of LLMs.
Recent state-of-the-art PTQ approaches use vector quantization (VQ) to quantize multiple weights at once.
We introduce QTIP, which instead uses trellis coded quantization (TCQ) to achieve ultra-high-dimensional quantization.
arXiv Detail & Related papers (2024-06-17T06:03:13Z) - Extreme Compression of Large Language Models via Additive Quantization [59.3122859349777]
Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach from information retrieval to LLM compression.
We provide fast GPU and CPU implementations of AQLM for token generation, which enable us to match or outperform optimized FP16 implementations for speed.
arXiv Detail & Related papers (2024-01-11T18:54:44Z) - SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for generative inference with LLMs is memory bandwidth rather than compute, particularly for single-batch inference.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format.
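Both ideas can be sketched in a few lines: a dense-and-sparse split that keeps the largest-magnitude weights in a small sparse side structure, and a 1-D sensitivity-weighted k-means whose centroids become the non-uniform quantization levels. The sensitivity proxy, thresholds, and helper names below are our own illustrative choices, not SqueezeLLM's implementation.

```python
# Illustrative sketch of (i) a dense-and-sparse outlier split and
# (ii) sensitivity-weighted 1-D k-means for non-uniform quantization levels.
import numpy as np

def dense_sparse_split(W, outlier_pct=0.5):
    """Keep the top outlier_pct% of weights by magnitude in a sparse side structure."""
    thresh = np.percentile(np.abs(W), 100 - outlier_pct)
    mask = np.abs(W) > thresh
    outlier_idx = np.flatnonzero(mask)         # sparse storage: flat indices + values
    outlier_val = W.reshape(-1)[outlier_idx]
    dense = np.where(mask, 0.0, W)
    return dense, outlier_idx, outlier_val

def weighted_kmeans_levels(w, sensitivity, bits=3, iters=30):
    """1-D weighted k-means; the centroids become the non-uniform quantization levels."""
    k = 2 ** bits
    levels = np.quantile(w, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        assign = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
        for c in range(k):
            m = assign == c
            if m.any():
                levels[c] = np.average(w[m], weights=sensitivity[m] + 1e-12)
    return levels, assign

W = np.random.randn(256, 256)
dense, out_idx, out_val = dense_sparse_split(W)
flat = dense.reshape(-1)
sens = flat ** 2                               # crude stand-in for a second-order sensitivity score
levels, assign = weighted_kmeans_levels(flat, sens, bits=3)
W_q = levels[assign].reshape(W.shape)          # 3-bit non-uniform dense part
W_q.reshape(-1)[out_idx] = out_val             # splice the full-precision outliers back in
```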
arXiv Detail & Related papers (2023-06-13T08:57:54Z) - Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We recommend two different methods: one based on post-training quantization, and a second in which the codebook is found during the training of the AE.
arXiv Detail & Related papers (2022-07-13T08:52:13Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models
for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Towards Efficient Post-training Quantization of Pre-trained Language
Models [85.68317334241287]
We study post-training quantization (PTQ) of PLMs, and propose module-wise quantization error minimization (MREM), an efficient solution to mitigate these issues.
Experiments on GLUE and SQuAD benchmarks show that our proposed PTQ solution not only performs close to QAT, but also enjoys significant reductions in training time, memory overhead, and data consumption.
arXiv Detail & Related papers (2021-09-30T12:50:06Z) - OMPQ: Orthogonal Mixed Precision Quantization [64.59700856607017]
Mixed precision quantization takes advantage of hardware's multiple bit-width arithmetic operations to unleash the full potential of network quantization.
We propose to optimize a proxy metric, the concept of network orthogonality, which is highly correlated with the loss of the integer programming problem.
This approach reduces the search time and required data amount by orders of magnitude, with little compromise on quantization accuracy.
arXiv Detail & Related papers (2021-09-16T10:59:33Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.