PCDVQ: Enhancing Vector Quantization for Large Language Models via Polar Coordinate Decoupling
- URL: http://arxiv.org/abs/2506.05432v2
- Date: Thu, 26 Jun 2025 06:17:49 GMT
- Title: PCDVQ: Enhancing Vector Quantization for Large Language Models via Polar Coordinate Decoupling
- Authors: Yuxuan Yue, Zukang Xu, Zhihang Yuan, Dawei Yang, Jianlong Wu, Liqiang Nie
- Abstract summary: Vector Quantization (VQ) serves as a prevalent solution to this issue for its extremely low bit-width (even 2-bit) and considerable accuracy. This paper proposes Polar Coordinate Decoupled Vector Quantization (PCDVQ), an effective and efficient VQ framework. Experimental results show that PCDVQ outperforms baseline methods at the 2-bit level by at least 1.5% zero-shot accuracy.
- Score: 53.91873442457923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) face significant challenges in edge deployment due to their massive parameter scale. Vector Quantization (VQ), a clustering-based quantization method, is a prevalent solution to this issue thanks to its extremely low bit-width (even 2-bit) and considerable accuracy. Since a vector is a quantity that has both direction and magnitude, existing VQ works typically quantize the two in a coupled manner. However, we find that direction is significantly more sensitive to quantization than magnitude. For instance, when separately clustering the directions and magnitudes of weight vectors in LLaMA-2-7B, the zero-shot accuracy drops are 46.5% and 2.3%, respectively. This gap widens further as the number of clustering centers is reduced. Moreover, Euclidean distance, the common metric for assessing vector similarity in current VQ works, places greater emphasis on reducing magnitude error. This property runs contrary to the finding above and unavoidably leads to larger quantization errors. To these ends, this paper proposes Polar Coordinate Decoupled Vector Quantization (PCDVQ), an effective and efficient VQ framework consisting of two key modules: 1) Polar Coordinate Decoupling (PCD), which transforms vectors into their polar coordinate representations and performs independent quantization of the direction and magnitude parameters; and 2) Distribution Aligned Codebook Construction (DACC), which optimizes the direction and magnitude codebooks in accordance with the source distribution. Experimental results show that PCDVQ outperforms baseline methods at the 2-bit level by at least 1.5% zero-shot accuracy, establishing a novel paradigm for accurate and highly compressed LLMs.
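The core idea above — split each weight vector into a magnitude and a unit direction, then quantize the two independently against separate codebooks — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the codebooks are tiny hand-made stand-ins for learned cluster centers, and `pcd_quantize` is a hypothetical helper name.

```python
import math

def decouple(vec):
    """Split a vector into its magnitude and unit direction (polar form)."""
    r = math.sqrt(sum(x * x for x in vec))
    d = [x / r for x in vec] if r > 0 else [0.0] * len(vec)
    return r, d

def nearest(codebook, key):
    """Pick the codeword minimizing the given error function."""
    return min(codebook, key=key)

def pcd_quantize(vec, mag_codebook, dir_codebook):
    """Quantize magnitude and direction independently, then recombine."""
    r, d = decouple(vec)
    r_q = nearest(mag_codebook, key=lambda c: abs(c - r))
    # Direction similarity: maximize cosine, i.e. minimize negative dot product.
    d_q = nearest(dir_codebook, key=lambda c: -sum(a * b for a, b in zip(c, d)))
    return [r_q * x for x in d_q]

# Toy 2-D example: magnitude codebook {1, 2}, four axis-aligned unit directions.
mags = [1.0, 2.0]
dirs = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
print(pcd_quantize([1.9, 0.2], mags, dirs))  # [2.0, 0.0]
```

Because the two codebooks are indexed separately, direction can be given far more codewords than magnitude, which matches the sensitivity gap the abstract reports.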
Related papers
- The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm [52.89358421626026]
GPTQ emerged as one of the standard methods for one-shot post-training quantization at LLM scale. We show that GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem.
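For intuition on the Babai connection, here is a minimal sketch of nearest-plane rounding for the special case of an upper-triangular lattice basis (a general basis would need a Gram-Schmidt step first). The function name and example are illustrative, not code from the paper:

```python
def babai_nearest_plane(B, t):
    """Babai's nearest-plane rounding for an UPPER-TRIANGULAR basis B
    (rows are basis vectors): back-substitute and round, last coordinate first."""
    n = len(B)
    c = [0] * n
    r = list(t)
    for i in range(n - 1, -1, -1):
        c[i] = round(r[i] / B[i][i])
        r = [ri - c[i] * bi for ri, bi in zip(r, B[i])]
    # Reassemble the lattice point from the rounded coefficients.
    return [sum(c[i] * B[i][j] for i in range(n)) for j in range(n)]

# Lattice basis rows (2,1) and (0,3); target (2.2, 3.4).
print(babai_nearest_plane([[2, 1], [0, 3]], [2.2, 3.4]))  # [2, 4]
```

The one-coordinate-at-a-time rounding with error fed back into the remaining coordinates is the structural parallel to GPTQ's column-by-column update.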
arXiv Detail & Related papers (2025-07-24T16:22:18Z) - TurboQuant: Online Vector Quantization with Near-optimal Distortion Rate [13.14434628836727]
Vector quantization aims to quantize high-dimensional Euclidean vectors while minimizing distortion in their geometric structure. We propose TurboQuant to address both mean-squared error (MSE) and inner product distortion. Our data-oblivious algorithms, suitable for online applications, achieve near-optimal distortion rates.
arXiv Detail & Related papers (2025-04-28T15:05:35Z) - PolarQuant: Leveraging Polar Transformation for Efficient Key Cache Quantization and Decoding Acceleration [26.972039704548184]
Quantizing the KV cache to lower bit widths is an effective way to reduce computational costs. Previous methods struggle with quantizing key vectors due to outliers, resulting in excessive overhead. We propose a novel quantization approach called PolarQuant, which efficiently addresses the outlier challenge.
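The polar idea can be illustrated by pairing up dimensions and converting each (x, y) pair to (r, theta): outliers inflate the radius, but the angle stays in a bounded range that is friendly to uniform quantization. A hedged sketch of that transform, not PolarQuant's actual procedure:

```python
import math

def to_polar_pairs(key):
    """Group consecutive dimensions into (x, y) pairs and convert to (r, theta).
    theta is always in [-pi, pi], regardless of how extreme the values are."""
    polar = []
    for i in range(0, len(key), 2):
        x, y = key[i], key[i + 1]
        polar.append((math.hypot(x, y), math.atan2(y, x)))
    return polar

def quantize_theta(theta, bits=3):
    """Uniformly quantize an angle in [-pi, pi] to 2**bits levels."""
    step = 2 * math.pi / (2 ** bits)
    return round(theta / step) * step

pairs = to_polar_pairs([3.0, 4.0, -1.0, 0.0])
print(pairs[0])  # (5.0, 0.927...)
```

An outlier only widens the dynamic range of r; theta's range is fixed, so a small uniform grid suffices for it.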
arXiv Detail & Related papers (2025-02-01T18:59:03Z) - Pushing the Limits of Large Language Model Quantization via the Linearity Theorem [71.3332971315821]
We present a "linearity theorem" establishing a direct relationship between the layer-wise $\ell_2$ reconstruction error and the model perplexity increase due to quantization.
This insight enables two novel applications: (1) a simple data-free LLM quantization method using Hadamard rotations and MSE-optimal grids, dubbed HIGGS, and (2) an optimal solution to the problem of finding non-uniform per-layer quantization levels.
arXiv Detail & Related papers (2024-11-26T15:35:44Z) - Pyramid Vector Quantization for LLMs [8.779688608449902]
We propose Pyramid Vector Quantization (PVQ) for large language models. PVQ uses a fixed integer lattice on the sphere by projecting points onto the 1-sphere, which allows for efficient encoding and decoding without requiring an explicit codebook in memory. We achieve state-of-the-art quantization performance with a Pareto-optimal trade-off between performance and bits per weight and per activation, compared to existing methods.
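A classic pyramid-lattice search (Fischer-style), which the PVQ family builds on, can be sketched as: scale the vector so its L1 norm equals K, round to integers, then greedily repair the norm. This is an assumed illustration of the general technique, not the paper's exact method:

```python
def pvq_quantize(vec, K):
    """Map vec to an integer point on the pyramid {x : sum(|x_i|) == K}."""
    s = sum(abs(v) for v in vec) or 1.0
    scaled = [K * v / s for v in vec]
    q = [round(v) for v in scaled]
    # Greedy repair: nudge coordinates until the L1 norm hits K exactly.
    while sum(abs(x) for x in q) > K:
        # Shrink the coordinate whose reduction costs the least error.
        i = max((j for j in range(len(q)) if q[j] != 0),
                key=lambda j: abs(q[j]) - abs(scaled[j]))
        q[i] -= 1 if q[i] > 0 else -1
    while sum(abs(x) for x in q) < K:
        # Grow the coordinate that is furthest below its scaled target.
        i = max(range(len(q)), key=lambda j: abs(scaled[j]) - abs(q[j]))
        q[i] += 1 if scaled[i] >= 0 else -1
    return q

print(pvq_quantize([0.9, -0.1, 0.0], 4))  # [4, 0, 0]
```

Because every valid point satisfies a fixed L1 constraint, points can be enumerated combinatorially and stored as an index, which is why no explicit codebook is needed in memory.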
arXiv Detail & Related papers (2024-10-22T11:57:32Z) - QERA: an Analytical Framework for Quantization Error Reconstruction [12.110441045050223]
There is increasing interest in quantizing weights to extremely low precision while offsetting the resulting error with low-rank, high-precision error reconstruction terms. The combination of quantization and low-rank approximation is now popular in adapter-based, parameter-efficient fine-tuning methods. We formulate an analytical framework, named Quantization Error Reconstruction Analysis (QERA), and offer a closed-form solution to the problem.
arXiv Detail & Related papers (2024-10-08T13:37:34Z) - SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models [63.118592279833656]
Post-training quantization (PTQ) is an effective technique for compressing large language models (LLMs). We propose SliM-LLM, a salience-driven mixed-precision quantization framework that allocates bit-widths group-wise. Experiments show that SliM-LLM achieves superior performance across various LLMs at low bit-widths.
arXiv Detail & Related papers (2024-05-23T16:21:48Z) - CBQ: Cross-Block Quantization for Large Language Models [66.82132832702895]
Post-training quantization (PTQ) has played a key role in compressing large language models (LLMs) at ultra-low cost. We propose CBQ, a cross-block reconstruction-based PTQ method for LLMs. CBQ employs a cross-block reconstruction scheme that establishes long-range dependencies across multiple blocks to minimize error accumulation.
arXiv Detail & Related papers (2023-12-13T07:56:27Z) - Sliced Wasserstein Estimation with Control Variates [47.18652387199418]
Sliced Wasserstein (SW) distances between two probability measures are defined as the expectation of the Wasserstein distance between two one-dimensional projections.
Due to the intractability of the expectation, Monte Carlo integration is performed to estimate the value of the SW distance.
Despite having various variants, there has been no prior work that improves the Monte Carlo estimation scheme for the SW distance.
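The Monte Carlo scheme described above — average a one-dimensional Wasserstein distance over random projection directions — is short enough to sketch directly (a plain estimator without the control variates this paper contributes):

```python
import math
import random

def wasserstein_1d(a, b, p=2):
    """p-Wasserstein distance between equal-size 1-D samples: sort and match."""
    return (sum(abs(x - y) ** p for x, y in zip(sorted(a), sorted(b)))
            / len(a)) ** (1 / p)

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Plain Monte Carlo estimate of the sliced Wasserstein distance."""
    rng = random.Random(seed)
    d = len(X[0])
    total = 0.0
    for _ in range(n_proj):
        # Draw a random direction uniformly on the sphere via Gaussian sampling.
        v = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]
        proj = lambda P: [sum(c * x for c, x in zip(v, p)) for p in P]
        total += wasserstein_1d(proj(X), proj(Y)) ** 2
    return math.sqrt(total / n_proj)

X = [[0.0, 0.0], [1.0, 1.0]]
print(sliced_wasserstein(X, X))                              # 0.0
print(sliced_wasserstein(X, [[5.0, 0.0], [6.0, 1.0]]) > 0)   # True
```

The estimator's variance comes from the random directions, which is exactly the part a control variate would target.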
arXiv Detail & Related papers (2023-04-30T06:03:17Z) - Quantum Gate Generation in Two-Level Open Quantum Systems by Coherent and Incoherent Photons Found with Gradient Search [77.34726150561087]
We consider an environment formed by incoherent photons as a resource for controlling open quantum systems via an incoherent control.
We exploit a coherent control in the Hamiltonian and an incoherent control in the dissipator which induces the time-dependent decoherence rates.
arXiv Detail & Related papers (2023-02-28T07:36:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.