QuantEase: Optimization-based Quantization for Language Models
- URL: http://arxiv.org/abs/2309.01885v2
- Date: Fri, 1 Dec 2023 07:04:05 GMT
- Title: QuantEase: Optimization-based Quantization for Language Models
- Authors: Kayhan Behdin, Ayan Acharya, Aman Gupta, Qingquan Song, Siyu Zhu,
Sathiya Keerthi, Rahul Mazumder
- Abstract summary: This work introduces QuantEase, a layer-wise Post-Training Quantization (PTQ) framework for Large Language Models (LLMs), drawing from recent advances.
Our CD-based approach features straightforward updates, relying solely on matrix and vector operations.
We also explore an outlier-aware variant of our approach, allowing for retaining significant weights (outliers) with complete precision.
- Score: 17.333778751252392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rising popularity of Large Language Models (LLMs), there has been an
increasing interest in compression techniques that enable their efficient
deployment. This study focuses on the Post-Training Quantization (PTQ) of LLMs.
Drawing from recent advances, our work introduces QuantEase, a layer-wise
quantization framework where individual layers undergo separate quantization.
The problem is framed as a discrete-structured non-convex optimization,
prompting the development of algorithms rooted in Coordinate Descent (CD)
techniques. These CD-based methods provide high-quality solutions to the
complex non-convex layer-wise quantization problems. Notably, our CD-based
approach features straightforward updates, relying solely on matrix and vector
operations, circumventing the need for matrix inversion or decomposition. We
also explore an outlier-aware variant of our approach, allowing for retaining
significant weights (outliers) with complete precision. Our proposal attains
state-of-the-art performance in terms of perplexity and zero-shot accuracy in
empirical evaluations across various LLMs and datasets, with relative
improvements up to 15% over methods such as GPTQ. Leveraging careful linear
algebra optimizations, QuantEase can quantize models like Falcon-180B on a
single NVIDIA A100 GPU in $\sim$3 hours. Particularly noteworthy is our
outlier-aware algorithm's capability to achieve near or sub-3-bit quantization
of LLMs with an acceptable drop in accuracy, obviating the need for non-uniform
quantization or grouping techniques, improving upon methods such as SpQR by up
to two times in terms of perplexity.
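To make the flavor of the coordinate descent update concrete, below is a minimal NumPy sketch of a CD pass over one layer's quantization problem. The per-row min-max grid, the function names, and the sweep schedule are illustrative assumptions rather than QuantEase's exact algorithm; the point is that each coordinate update needs only vector operations against a precomputed Hessian H = X^T X, with no matrix inversion or decomposition.

```python
import numpy as np

def quantize_rtn(w, scale, zero, nbits):
    """Round-to-nearest onto a uniform per-row grid (illustrative)."""
    q = np.clip(np.round(w / scale) + zero, 0, 2**nbits - 1)
    return scale * (q - zero)

def cd_layer_quantize(W, X, nbits=4, n_sweeps=3):
    """Hypothetical coordinate-descent pass for layer-wise PTQ.

    W: (d_out, d_in) weights of one linear layer.
    X: (n, d_in) calibration activations for that layer.
    Approximately minimizes ||X W.T - X Q.T||_F^2 over grid-constrained Q,
    one input coordinate at a time.
    """
    H = X.T @ X                                   # layer-wise "Hessian", computed once
    Q = W.astype(np.float64).copy()

    # simple per-row min-max grid; the real calibration scheme may differ
    w_min = W.min(axis=1, keepdims=True)
    w_max = W.max(axis=1, keepdims=True)
    scale = np.maximum((w_max - w_min) / (2**nbits - 1), 1e-12)
    zero = np.round(-w_min / scale)

    E = (Q - W) @ H                               # gradient-like residual, kept up to date
    for _ in range(n_sweeps):
        for j in range(W.shape[1]):               # cycle over input coordinates
            hjj = H[j, j]
            if hjj <= 0:
                continue
            # unconstrained minimizer of the quadratic in coordinate j, then project onto the grid
            target = Q[:, j] - E[:, j] / hjj
            new_col = quantize_rtn(target[:, None], scale, zero, nbits)[:, 0]
            delta = new_col - Q[:, j]
            E += np.outer(delta, H[j, :])         # rank-one correction: vectors only, no inversion
            Q[:, j] = new_col
    return Q
```

The outlier-aware variant described above would additionally exempt a small set of significant weights from the grid projection and keep them in full precision; how that set is chosen is exactly where such variants differ.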
Related papers
- Q-VLM: Post-training Quantization for Large Vision-Language Models [73.19871905102545]
We propose a post-training quantization framework of large vision-language models (LVLMs) for efficient multi-modal inference.
We mine the cross-layer dependency that significantly influences discretization errors of the entire vision-language model, and embed this dependency into the optimal quantization strategy.
Experimental results demonstrate that our method compresses memory by 2.78x and increases generation speed by 1.44x on the 13B LLaVA model without performance degradation.
arXiv Detail & Related papers (2024-10-10T17:02:48Z) - Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling [0.0]
This study proposes a different approach that integrates gradient-based updates through continuous relaxation, combined with Quasi-Quantum Annealing (QQA).
Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers.
arXiv Detail & Related papers (2024-09-02T12:55:27Z) - LeanQuant: Accurate and Scalable Large Language Model Quantization with Loss-error-aware Grid [36.33062038680275]
Large language models (LLMs) have shown immense potential across various domains.
Post-training quantization has emerged as a promising technique to reduce memory requirements and decoding latency.
We propose LeanQuant, a novel quantization method that is accurate, versatile, and scalable.
arXiv Detail & Related papers (2024-07-14T00:23:51Z) - CLAQ: Pushing the Limits of Low-Bit Post-Training Quantization for LLMs [44.03692512352445]
Column-Level Adaptive weight Quantization (CLAQ) is a novel and effective framework for the quantization of Large Language Models (LLMs).
In this paper, we present the CLAQ framework by introducing three different types of adaptive strategies for LLM quantization.
Experiments on various mainstream open source LLMs including LLaMA-1, LLaMA-2 and Yi demonstrate that our methods achieve the state-of-the-art results across different bit settings.
arXiv Detail & Related papers (2024-05-27T14:49:39Z) - LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit [55.73370804397226]
Quantization, a key compression technique, can effectively mitigate these demands by compressing and accelerating large language models.
We present LLMC, a plug-and-play compression toolkit, to fairly and systematically explore the impact of quantization.
Powered by this versatile toolkit, our benchmark covers three key aspects: calibration data, algorithms (three strategies), and data formats.
arXiv Detail & Related papers (2024-05-09T11:49:05Z) - Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve pruning of 80% of parameters while retaining 93.43% of the original performance without any calibration data (a plain rank-k truncation sketch appears after this list).
arXiv Detail & Related papers (2024-02-26T05:51:47Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - SqueezeLLM: Dense-and-Sparse Quantization [80.32162537942138]
The main bottleneck for generative inference with LLMs is memory bandwidth, rather than compute, for single-batch inference.
We introduce SqueezeLLM, a post-training quantization framework that enables lossless compression to ultra-low precisions of up to 3-bit.
Our framework incorporates two novel ideas: (i) sensitivity-based non-uniform quantization, which searches for the optimal bit precision assignment based on second-order information; and (ii) the Dense-and-Sparse decomposition that stores outliers and sensitive weight values in an efficient sparse format (a minimal sketch of such a split appears after this list).
arXiv Detail & Related papers (2023-06-13T08:57:54Z) - QuantNAS for super resolution: searching for efficient
quantization-friendly architectures against quantization noise [19.897685398009912]
We propose a novel quantization-aware procedure, QuantNAS.
We use entropy regularization, quantization noise, and Adaptive Deviation for Quantization (ADQ) module to enhance the search procedure.
The proposed procedure is 30% faster than direct weight quantization and is more stable.
arXiv Detail & Related papers (2022-08-31T13:12:16Z) - Faster One-Sample Stochastic Conditional Gradient Method for Composite
Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
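The Data-free Joint Rank-k Approximation entry above compresses parameter matrices through low-rank structure without any calibration data. As a reference point only, here is a plain per-matrix truncated-SVD sketch; the joint and denoising aspects of that paper are not modeled, and the choice of rank k is an assumption.

```python
import numpy as np

def rank_k_approx(W, k):
    """Truncated-SVD rank-k approximation of a single weight matrix.
    Stores U_k (d_out, k) and V_k (k, d_in) instead of W (d_out, d_in)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_k = U[:, :k] * s[:k]        # fold singular values into the left factor
    V_k = Vt[:k, :]
    return U_k, V_k               # W is approximated by U_k @ V_k
```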
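The SqueezeLLM entry, like QuantEase's outlier-aware variant, keeps a small set of outlier weights in full precision while quantizing the rest. Below is a minimal sketch of such a dense-and-sparse split; the magnitude-based outlier criterion, the 0.5% budget, and the per-row uniform 3-bit grid are assumptions for illustration, not either paper's exact recipe.

```python
import numpy as np
from scipy.sparse import csr_matrix

def dense_and_sparse_split(W, outlier_pct=0.5, nbits=3):
    """Split W into a low-bit dense part plus a full-precision sparse part.

    The largest-magnitude `outlier_pct` percent of entries are stored exactly
    in a sparse matrix; all other entries are quantized on a per-row uniform grid.
    """
    thresh = np.percentile(np.abs(W), 100.0 - outlier_pct)
    outlier_mask = np.abs(W) > thresh
    sparse_part = csr_matrix(np.where(outlier_mask, W, 0.0))        # exact outliers

    dense = np.where(outlier_mask, 0.0, W)                          # remaining weights
    w_min = dense.min(axis=1, keepdims=True)
    w_max = dense.max(axis=1, keepdims=True)
    scale = np.maximum((w_max - w_min) / (2**nbits - 1), 1e-12)
    q = np.clip(np.round((dense - w_min) / scale), 0, 2**nbits - 1)
    dense_dequant = np.where(outlier_mask, 0.0, w_min + scale * q)  # grid values, zero at outlier slots

    # The layer weight is approximated by dense_dequant + sparse_part.toarray().
    return dense_dequant, sparse_part
```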
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.