INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error
Correction through Low-Rank Adaptation
- URL: http://arxiv.org/abs/2306.08162v1
- Date: Tue, 13 Jun 2023 22:25:35 GMT
- Title: INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error
Correction through Low-Rank Adaptation
- Authors: Yuji Chai, John Gkountouras, Glenn G. Ko, David Brooks, Gu-Yeon Wei
- Abstract summary: We introduce a method that dramatically reduces fine-tuning VRAM requirements and rectifies quantization errors in quantized Large Language Models.
Our method reduces the memory requirements by up to 5.6 times, which enables fine-tuning a 7 billion parameter Large Language Model (LLM) on consumer laptops.
- Score: 5.837035655563323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method that dramatically reduces fine-tuning VRAM requirements
and rectifies quantization errors in quantized Large Language Models. First, we
develop an extremely memory-efficient fine-tuning (EMEF) method for quantized
models using Low-Rank Adaptation (LoRA), and drawing upon it, we construct an
error-correcting algorithm designed to minimize errors induced by the
quantization process. Our method reduces the memory requirements by up to 5.6
times, which enables fine-tuning a 7 billion parameter Large Language Model
(LLM) on consumer laptops. At the same time, we propose a Low-Rank Error
Correction (LREC) method that exploits the added LoRA layers to narrow the
gap between the quantized model and its floating-point counterpart. Our error
correction framework leads to a fully functional INT2 quantized LLM with the
capacity to generate coherent English text. To the best of our knowledge, this
is the first INT2 Large Language Model to reach this level of performance. The
overhead of our method is merely a 1.05 times increase in
model size, which translates to an effective precision of INT2.1. Also, our
method readily generalizes to other quantization standards, such as INT3, INT4,
and INT8, restoring their lost performance, which marks a significant milestone
in the field of model quantization. The strategies delineated in this paper
hold promising implications for the future development and optimization of
quantized models, marking a pivotal shift in the landscape of low-resource
machine learning computations.
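As a rough sketch of the two ideas above (our own simplification, not the authors' released code): the base weights are quantized to INT2 and frozen, the LoRA factors stay in higher precision and are the only trainable parameters, and those same factors can then be fit so that the quantized layer reproduces the outputs of its floating-point counterpart, which is how the quantization error gets corrected. The 1.05 times size overhead corresponds to roughly 2 x 1.05 = 2.1 effective bits per weight, hence the name INT2.1. The sketch below assumes PyTorch; quantize_int2, QuantizedLoRALinear, the rank, and the toy per-tensor quantizer are illustrative choices, not the paper's.

import torch
import torch.nn as nn

def quantize_int2(w: torch.Tensor) -> torch.Tensor:
    """Toy symmetric round-to-nearest 2-bit quantizer (illustrative only;
    the paper's actual quantization scheme may differ)."""
    scale = w.abs().max() / 2.0
    q = torch.clamp(torch.round(w / scale), min=-2, max=1)  # four INT2 levels
    return q * scale  # kept dense here; a real kernel would pack 2-bit codes

class QuantizedLoRALinear(nn.Module):
    """Frozen quantized weight plus trainable low-rank (LoRA) factors."""
    def __init__(self, weight: torch.Tensor, rank: int = 16):
        super().__init__()
        self.register_buffer("w_q", quantize_int2(weight))   # frozen INT2 base
        out_f, in_f = weight.shape
        self.lora_a = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # trainable
        self.lora_b = nn.Parameter(torch.zeros(out_f, rank))        # trainable

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantized base path plus low-rank correction path.
        return x @ self.w_q.T + (x @ self.lora_a.T) @ self.lora_b.T

# LREC-style error correction (our reading of the abstract): fit the LoRA
# factors so the quantized layer reproduces the full-precision outputs.
w_fp = torch.randn(512, 512)                      # stand-in full-precision weight
layer = QuantizedLoRALinear(w_fp, rank=16)
opt = torch.optim.Adam([layer.lora_a, layer.lora_b], lr=1e-3)
for _ in range(200):
    x = torch.randn(64, 512)                      # stand-in calibration batch
    loss = ((layer(x) - x @ w_fp.T) ** 2).mean()  # gap vs. float counterpart
    opt.zero_grad()
    loss.backward()
    opt.step()

Because only the small LoRA factors carry gradients and optimizer state while the INT2 base weights stay frozen, the fine-tuning memory footprint shrinks dramatically, which is presumably what lets a 7 billion parameter model be fine-tuned on a consumer laptop.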
Related papers
- LeanQuant: Accurate Large Language Model Quantization with Loss-Error-Aware Grid [36.33062038680275]
Large language models (LLMs) have numerous applications across various domains.
Weight quantization is an effective technique for reducing the decoding latency and memory requirements of LLMs.
We propose LeanQuant, which learns a loss-error-aware quantization grid by leveraging the inverse diagonal Hessian.
arXiv Detail & Related papers (2024-07-14T00:23:51Z)
- TernaryLLM: Ternarized Large Language Model [29.29122031050894]
Large language models (LLMs) have achieved remarkable performance on Natural Language Processing (NLP) tasks.
We introduce Dual Learnable Ternarization (DLT), which enables both scales and shifts to be learnable.
We also propose Outlier-Friendly Feature Knowledge Distillation (OFF) to recover the information lost in extremely low-bit quantization.
arXiv Detail & Related papers (2024-06-11T11:40:12Z)
- Characterizing the Accuracy - Efficiency Trade-off of Low-rank Decomposition in Language Models [1.530997923234786]
Large language models (LLMs) have emerged and demonstrated general problem-solving capabilities with a single model.
We formalize the low-rank decomposition design space and show that it is enormous.
Results show that we can achieve a 9% model size reduction with minimal accuracy drops.
arXiv Detail & Related papers (2024-05-10T17:40:02Z)
- LQER: Low-Rank Quantization Error Reconstruction for LLMs [13.205129808742862]
We introduce Low-rank Quantization Error Reduction (LQER), which combines quantization and low-rank approximation to recover the model capability; a generic sketch of this shared low-rank error-correction formulation appears after this list.
Unlike existing methods, the computation pattern of LQER eliminates the need for specialized Scatter and Gather processes.
Our W4A8 LLMs achieve near-lossless performance on six popular downstream tasks, while using 1.36 times fewer hardware resources than the leading state-of-the-art method.
arXiv Detail & Related papers (2024-02-04T10:59:52Z)
- Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z)
- LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models [104.23434818428062]
We focus on the scenario where quantization and LoRA fine-tuning are applied together on a pre-trained model.
We propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework.
Experiments show that our method is highly effective and outperforms existing quantization methods.
arXiv Detail & Related papers (2023-10-12T18:34:08Z)
- QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models [85.02796681773447]
We propose a quantization-aware low-rank adaptation (QA-LoRA) algorithm.
The motivation lies in the imbalanced degrees of freedom of quantization and adaptation.
QA-LoRA is easily implemented with a few lines of code.
arXiv Detail & Related papers (2023-09-26T07:22:23Z)
- FineQuant: Unlocking Efficiency with Fine-Grained Weight-Only Quantization for LLMs [9.072821427818557]
Large Language Models (LLMs) have achieved state-of-the-art performance across various language tasks but pose challenges for practical deployment.
We propose an efficient weight-only quantization method that reduces memory consumption and accelerates inference for LLMs.
We evaluate our approach on large-scale open source models such as OPT-175B and internal MoE models, showcasing minimal accuracy loss while achieving up to 3.65 times higher throughput.
arXiv Detail & Related papers (2023-08-16T23:57:41Z)
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation [93.01964988474755]
AutoMQM is a prompting technique which asks large language models to identify and categorize errors in translations.
We study the impact of labeled data through in-context learning and finetuning.
We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores.
arXiv Detail & Related papers (2023-08-14T17:17:21Z)
- Rethinking Masked Language Modeling for Chinese Spelling Correction [70.85829000570203]
We study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model.
We find that fine-tuning BERT tends to over-fit the error model while under-fitting the language model, resulting in poor generalization to out-of-distribution error patterns.
We demonstrate that a very simple strategy, randomly masking 20% of non-error tokens from the input sequence during fine-tuning, is sufficient for learning a much better language model without sacrificing the error model.
arXiv Detail & Related papers (2023-05-28T13:19:12Z)
- Model-Agnostic Multitask Fine-tuning for Few-shot Vision-Language Transfer Learning [59.38343286807997]
We propose Model-Agnostic Multitask Fine-tuning (MAMF) for vision-language models on unseen tasks.
Compared with model-agnostic meta-learning (MAML), MAMF discards the bi-level optimization and uses only first-order gradients.
We show that MAMF consistently outperforms the classical fine-tuning method for few-shot transfer learning on five benchmark datasets.
arXiv Detail & Related papers (2022-03-09T17:26:53Z)
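A common thread running through several entries above (LQER, LoftQ, and the LREC method of this paper) is to treat the quantization error itself as a matrix to be approximated at low rank. In generic notation of our own, not taken from any single paper, the objective is roughly

    \min_{A \in \mathbb{R}^{d_{\text{out}} \times r},\; B \in \mathbb{R}^{r \times d_{\text{in}}}} \left\| W - \bigl( Q(W) + A B \bigr) \right\|_F^2, \qquad r \ll \min(d_{\text{in}}, d_{\text{out}}),

where W is the full-precision weight, Q(W) its quantized version, and AB the rank-r correction. Individual methods differ in whether they minimize this weight error, an activation-aware output error, or a downstream loss, and in whether A and B are obtained in closed form (for example via an SVD of the error) or trained by gradient descent.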
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.