Sparse Fine-tuning for Inference Acceleration of Large Language Models
- URL: http://arxiv.org/abs/2310.06927v2
- Date: Fri, 13 Oct 2023 13:47:44 GMT
- Title: Sparse Fine-tuning for Inference Acceleration of Large Language Models
- Authors: Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh
- Abstract summary: We consider the problem of accurate sparse fine-tuning of large language models (LLMs).
We perform a detailed study of distillation-type losses, determining an L2-based distillation approach we term SquareHead.
For MPT text generation, we show for the first time that sparse fine-tuning can reach 75% sparsity without accuracy drops.
- Score: 48.285897264669984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of accurate sparse fine-tuning of large language
models (LLMs), that is, fine-tuning pretrained LLMs on specialized tasks, while
inducing sparsity in their weights. On the accuracy side, we observe that
standard loss-based fine-tuning may fail to recover accuracy, especially at
high sparsities. To address this, we perform a detailed study of
distillation-type losses, determining an L2-based distillation approach we term
SquareHead, which enables accurate recovery even at higher sparsities, across
all model types. On the practical efficiency side, we show that sparse LLMs can
be executed with speedups by taking advantage of sparsity, for both CPU and GPU
runtimes. While the standard approach is to leverage sparsity for computational
reduction, we observe that in the case of memory-bound LLMs sparsity can also
be leveraged for reducing memory bandwidth. We exhibit end-to-end results
showing speedups due to sparsity, while recovering accuracy, on T5 (language
translation), Whisper (speech translation), and open GPT-type models (MPT for text
generation). For MPT text generation, we show for the first time that sparse
fine-tuning can reach 75% sparsity without accuracy drops, provide notable
end-to-end speedups for both CPU and GPU inference, and highlight that sparsity
is also compatible with quantization approaches. Models and software for
reproducing our results are provided in Section 6.
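The abstract identifies SquareHead only as an L2-based distillation loss and does not spell out its exact form above. A minimal sketch of such an objective, assuming squared-error matching of paired teacher/student intermediate representations with per-layer normalization and a single mixing weight alpha against the task loss (both illustrative assumptions, not the paper's stated recipe):
```python
# Illustrative L2-based feature-distillation objective in the spirit of
# SquareHead; the per-layer normalization and the mixing weight `alpha`
# are assumptions for this sketch, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def l2_feature_distillation_loss(student_feats, teacher_feats,
                                 student_logits, labels, alpha=1.0):
    """Task loss plus normalized squared error over matched intermediate
    representations of a dense teacher and a sparse student."""
    distill = 0.0
    for fs, ft in zip(student_feats, teacher_feats):
        ft = ft.detach()  # the teacher only provides targets
        # Normalize by the teacher's feature norm so each layer contributes
        # on a comparable scale (an assumption of this sketch).
        distill = distill + (fs - ft).pow(2).sum() / (ft.pow(2).sum() + 1e-6)
    task = F.cross_entropy(student_logits, labels)
    return task + alpha * distill

# Toy usage with random tensors standing in for per-layer feature maps.
feats_s = [torch.randn(4, 128) for _ in range(3)]
feats_t = [f + 0.1 * torch.randn_like(f) for f in feats_s]
logits, labels = torch.randn(4, 10), torch.randint(0, 10, (4,))
print(l2_feature_distillation_loss(feats_s, feats_t, logits, labels).item())
```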
Related papers
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs [2.7624021966289605]
Large Language Models (LLMs) have revolutionized natural language understanding and generation tasks.
However, LLMs suffer from high memory consumption and slow inference times due to their large parameter sizes.
This paper introduces SLiM, a novel approach for compressing LLMs using a one-shot Quantized Sparse Plus Low-rank Approximation.
arXiv Detail & Related papers (2024-10-12T18:36:07Z)
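The summary states that SLiM builds a one-shot quantized sparse-plus-low-rank approximation but gives no procedure above. The sketch below shows one generic way to realize a sparse-plus-low-rank split (magnitude thresholding for the sparse part, truncated SVD of the residual for the low-rank part), with quantization omitted; the threshold rule and rank are assumptions, not SLiM's method.
```python
# Generic sparse + low-rank split of a weight matrix, sketched with NumPy.
# SLiM's actual one-shot procedure (including quantization) is not described
# above, so treat this only as an illustration of the idea.
import numpy as np

def sparse_plus_low_rank(W, sparsity=0.5, rank=8):
    """Approximate W ~= S + U @ V with S sparse and U @ V low-rank."""
    # Keep the largest-magnitude entries, zero out the rest.
    thresh = np.quantile(np.abs(W), sparsity)
    S = np.where(np.abs(W) >= thresh, W, 0.0)
    # Fit a low-rank correction to whatever the sparse part missed.
    U, s, Vt = np.linalg.svd(W - S, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]
    V_r = Vt[:rank, :]
    return S, U_r, V_r

W = np.random.randn(256, 256).astype(np.float32)
S, U_r, V_r = sparse_plus_low_rank(W)
err = np.linalg.norm(W - (S + U_r @ V_r)) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```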
- Efficient Arbitrary Precision Acceleration for Large Language Models on GPU Tensor Cores [3.6385567224218556]
Large language models (LLMs) have been widely applied but face challenges in efficient inference.
We introduce a novel bipolar-INT data format that facilitates parallel computing and supports symmetric quantization.
We implement an arbitrary precision matrix multiplication scheme that decomposes and recovers at the bit level, enabling flexible precision.
arXiv Detail & Related papers (2024-09-26T14:17:58Z)
- Search for Efficient Large Language Models [52.98684997131108]
Large Language Models (LLMs) have long held sway in the realms of artificial intelligence research.
Weight pruning, quantization, and distillation have been embraced to compress LLMs, targeting memory reduction and inference acceleration.
Most model compression techniques concentrate on weight optimization, overlooking the exploration of optimal architectures.
arXiv Detail & Related papers (2024-09-25T21:32:12Z)
- Characterizing the Accuracy-Efficiency Trade-off of Low-rank Decomposition in Language Models [1.401463252785724]
Low-rank decomposition can be a promising direction for LLM-based applications that require real-time service at scale.
We formalize the low-rank decomposition design space and show that the decomposition design space is enormous.
Our results show that we can achieve a 9% model size reduction with minimal accuracy drops.
arXiv Detail & Related papers (2024-05-10T17:40:02Z)
- Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z)
- DB-LLM: Accurate Dual-Binarization for Efficient LLMs [83.70686728471547]
Large language models (LLMs) have significantly advanced the field of natural language processing.
Existing ultra-low-bit quantization always causes severe accuracy drops.
We propose a novel Dual-Binarization method for LLMs, namely DB-LLM.
arXiv Detail & Related papers (2024-02-19T09:04:30Z)
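The summary does not define the dual binarization itself. Purely as an illustration, and not as DB-LLM's actual formulation, one can approximate a weight vector by the sum of two scaled {-1, +1} components fitted greedily:
```python
# Illustrative dual binarization of a weight vector: approximate w with the
# sum of two scaled {-1, +1} vectors. This is a generic sketch; DB-LLM's
# actual method is not described in the summary above.
import numpy as np

def dual_binarize(w):
    """Return (a1, b1, a2, b2) such that w ~= a1 * b1 + a2 * b2."""
    b1 = np.sign(w)
    b1[b1 == 0] = 1.0
    a1 = np.abs(w).mean()              # L2-optimal scale for b1 = sign(w)
    residual = w - a1 * b1
    b2 = np.sign(residual)
    b2[b2 == 0] = 1.0
    a2 = np.abs(residual).mean()
    return a1, b1, a2, b2

w = np.random.randn(4096).astype(np.float32)
a1, b1, a2, b2 = dual_binarize(w)
err = np.linalg.norm(w - (a1 * b1 + a2 * b2)) / np.linalg.norm(w)
print(f"relative error with two binary components: {err:.3f}")
```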
- E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity [6.434967516411846]
We introduce the information entropy of hidden state features into a pruning metric design, namely E-Sparse.
E-Sparse uses this information richness as a measure of channel importance, and further incorporates several novel techniques to put it into effect.
E-Sparse can significantly speed up the model inference over the dense model (up to 1.53X) and obtain significant memory saving (up to 43.52%), with acceptable accuracy loss.
arXiv Detail & Related papers (2023-10-24T15:27:15Z)
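The summary names the ingredients (channel-level information richness folded into the pruning metric, applied as N:M sparsity) but not the exact score. In the toy sketch below, the score is assumed to be weight magnitude scaled by the entropy of each input channel's activations, pruned in a 2:4 pattern; this is a stand-in for, not a reproduction of, the E-Sparse metric.
```python
# Toy 2:4 (N:M) pruning with an entropy-weighted saliency score. The actual
# E-Sparse metric is not given above; here each weight's score is |w| times
# the entropy of its input channel's activation histogram.
import numpy as np

def channel_entropy(acts, bins=32):
    """Shannon entropy of each input channel's activation histogram."""
    ent = np.empty(acts.shape[1])
    for c in range(acts.shape[1]):
        hist, _ = np.histogram(acts[:, c], bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        ent[c] = -(p * np.log(p)).sum()
    return ent

def prune_n_m(W, acts, n=2, m=4):
    """Zero all but the n highest-scoring weights in each group of m per row."""
    score = np.abs(W) * channel_entropy(acts)[None, :]   # broadcast over rows
    W_pruned = W.copy()
    for r in range(W.shape[0]):
        for start in range(0, W.shape[1], m):
            group = score[r, start:start + m]
            drop = np.argsort(group)[:max(len(group) - n, 0)]  # lowest scores
            W_pruned[r, start + drop] = 0.0
    return W_pruned

W = np.random.randn(8, 16).astype(np.float32)
acts = np.random.randn(128, 16).astype(np.float32)   # calibration activations
print(prune_n_m(W, acts))
```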
- QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
arXiv Detail & Related papers (2023-10-13T17:15:05Z)
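The summary says most weights and activations are cast to 4 bits via a hybrid strategy; the outlier handling behind that strategy is not described here, so the sketch below shows only plain per-row symmetric INT4 quantization and dequantization as a baseline for what the 4-bit cast involves.
```python
# Plain symmetric 4-bit quantize/dequantize of a tensor. QUIK's hybrid scheme
# (which keeps some weights/activations in higher precision) is not detailed
# above; this only illustrates the basic INT4 mapping.
import numpy as np

def quantize_int4_symmetric(x, axis=-1):
    """Per-row symmetric quantization to integers in [-8, 7]."""
    scale = np.max(np.abs(x), axis=axis, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

W = np.random.randn(16, 64).astype(np.float32)
q, s = quantize_int4_symmetric(W)
err = np.abs(W - dequantize(q, s)).max()
print(f"max absolute quantization error: {err:.4f}")
```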
- Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt [96.24800696597707]
We introduce a new perspective to optimize this trade-off by prompting compressed models.
We propose a soft prompt learning method where we expose the compressed model to the prompt learning process.
Our experimental analysis suggests our soft prompt strategy greatly improves the performance of the 8x compressed LLaMA-7B model.
arXiv Detail & Related papers (2023-05-17T20:45:13Z)
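The summary describes learning soft prompts while the compressed model stays fixed. A generic sketch of that mechanism, assuming trainable prompt embeddings are prepended to the token embeddings and only the prompt parameters are optimized, could look like this:
```python
# Generic soft-prompt module: trainable embeddings prepended to the token
# embeddings of a frozen (e.g. compressed) model. The prompt length and the
# way the frozen model consumes the result are illustrative assumptions.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, prompt_len: int, hidden_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

# Only the soft prompt is optimized; the compressed model's weights stay frozen.
soft_prompt = SoftPrompt(prompt_len=16, hidden_dim=768)
token_embeds = torch.randn(2, 10, 768)            # stand-in for embed(input_ids)
inputs_with_prompt = soft_prompt(token_embeds)    # shape (2, 26, 768)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```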
This list is automatically generated from the titles and abstracts of the papers on this site.