Sparse Fine-tuning for Inference Acceleration of Large Language Models
- URL: http://arxiv.org/abs/2310.06927v2
- Date: Fri, 13 Oct 2023 13:47:44 GMT
- Title: Sparse Fine-tuning for Inference Acceleration of Large Language Models
- Authors: Eldar Kurtic, Denis Kuznedelev, Elias Frantar, Michael Goin, Dan Alistarh
- Abstract summary: We consider the problem of accurate sparse fine-tuning of large language models (LLMs).
We perform a detailed study of distillation-type losses, determining an L2-based distillation approach we term SquareHead.
For MPT text generation, we show for the first time that sparse fine-tuning can reach 75% sparsity without accuracy drops.
- Score: 48.285897264669984
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of accurate sparse fine-tuning of large language
models (LLMs), that is, fine-tuning pretrained LLMs on specialized tasks, while
inducing sparsity in their weights. On the accuracy side, we observe that
standard loss-based fine-tuning may fail to recover accuracy, especially at
high sparsities. To address this, we perform a detailed study of
distillation-type losses, determining an L2-based distillation approach we term
SquareHead which enables accurate recovery even at higher sparsities, across
all model types. On the practical efficiency side, we show that sparse LLMs can
be executed with speedups by taking advantage of sparsity, for both CPU and GPU
runtimes. While the standard approach is to leverage sparsity for computational
reduction, we observe that in the case of memory-bound LLMs sparsity can also
be leveraged for reducing memory bandwidth. We exhibit end-to-end results
showing speedups due to sparsity, while recovering accuracy, on T5 (language
translation), Whisper (speech translation), and an open GPT-type model (MPT,
for text generation). For MPT text generation, we show for the first time that sparse
fine-tuning can reach 75% sparsity without accuracy drops, provide notable
end-to-end speedups for both CPU and GPU inference, and highlight that sparsity
is also compatible with quantization approaches. Models and software for
reproducing our results are provided in Section 6.
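The abstract describes SquareHead as an L2-based distillation loss. As an illustrative sketch only (the normalization and per-layer averaging below are assumptions, not the authors' exact implementation), the idea can be expressed as a per-layer squared error between teacher and student feature maps, normalized by the teacher's feature norm so each layer contributes on a comparable scale:

```python
def squarehead_loss(teacher_feats, student_feats, eps=1e-8):
    """Sketch of an L2-based, layer-wise distillation loss in the spirit
    of SquareHead (hypothetical formulation, not the paper's code).

    teacher_feats, student_feats: lists with one entry per layer, each a
    flat list of floats (e.g. flattened hidden states for a batch).
    """
    assert len(teacher_feats) == len(student_feats)
    total = 0.0
    for t, s in zip(teacher_feats, student_feats):
        # squared error between teacher and student features at this layer
        sq_err = sum((ti - si) ** 2 for ti, si in zip(t, s))
        # normalize by the teacher's squared norm for scale invariance
        t_norm = sum(ti ** 2 for ti in t)
        total += sq_err / (t_norm + eps)
    return total / len(teacher_feats)
```

In practice such a term would be computed over hidden-state tensors in a deep-learning framework and combined with the standard task loss during sparse fine-tuning.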
Related papers
- Optimization-based Structural Pruning for Large Language Models without Back-Propagation [57.9629676017527]
We propose an optimization-based structural pruning method for large language models (LLMs).
Our method learns the pruning masks in a probabilistic space directly by optimizing the loss of the pruned model.
Our method runs in 2.7 hours with around 35GB of memory for 13B models on a single A100 GPU, and our pruned models outperform the state of the art in perplexity.
arXiv Detail & Related papers (2024-06-15T09:31:03Z)
- TernaryLLM: Ternarized Large Language Model [29.29122031050894]
Large language models (LLMs) have achieved remarkable performance on Natural Language Processing (NLP) tasks.
We introduce Dual Learnable Ternarization (DLT), which enables both scales and shifts to be learnable.
We also propose Outlier-Friendly Feature Knowledge Distillation (OFF) to recover the information lost in extremely low-bit quantization.
arXiv Detail & Related papers (2024-06-11T11:40:12Z)
- Characterizing the Accuracy - Efficiency Trade-off of Low-rank Decomposition in Language Models [1.530997923234786]
Large language models (LLMs) have emerged as general problem solvers within a single model.
We formalize the low-rank decomposition design space and show that the decomposition design space is enormous.
Results show that we can achieve a 9% model size reduction with minimal accuracy drops.
arXiv Detail & Related papers (2024-05-10T17:40:02Z)
- Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z)
- DB-LLM: Accurate Dual-Binarization for Efficient LLMs [83.70686728471547]
Large language models (LLMs) have significantly advanced the field of natural language processing.
Existing ultra-low-bit quantization always causes severe accuracy drops.
We propose a novel Dual-Binarization method for LLMs, namely DB-LLM.
arXiv Detail & Related papers (2024-02-19T09:04:30Z)
- E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity [6.434967516411846]
We introduce the information entropy of hidden state features into a pruning metric design, namely E-Sparse.
E-Sparse employs the information richness to leverage the channel importance, and further incorporates several novel techniques to put it into effect.
E-Sparse can significantly speed up the model inference over the dense model (up to 1.53X) and obtain significant memory saving (up to 43.52%), with acceptable accuracy loss.
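The N:M pattern used by E-Sparse keeps N nonzero weights in every consecutive group of M. As a minimal sketch of the pattern itself (using plain magnitude scoring; E-Sparse's entropy-based importance metric is not reproduced here):

```python
def nm_prune(weights, n=2, m=4):
    """Keep the n largest-magnitude weights in each consecutive group
    of m, zeroing the rest. Illustrative sketch of the N:M sparsity
    pattern; E-Sparse would replace abs() with its entropy-based score.
    """
    assert len(weights) % m == 0
    out = []
    for i in range(0, len(weights), m):
        group = weights[i:i + m]
        # indices of the n entries with the largest magnitude
        keep = sorted(range(m), key=lambda j: abs(group[j]),
                      reverse=True)[:n]
        out.extend(group[j] if j in keep else 0.0 for j in range(m))
    return out
```

The 2:4 case of this pattern is what hardware such as NVIDIA Ampere GPUs can accelerate directly, which is why N:M sparsity maps to real speedups rather than only parameter-count reduction.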
arXiv Detail & Related papers (2023-10-24T15:27:15Z)
- One-Shot Sensitivity-Aware Mixed Sparsity Pruning for Large Language Models [42.95555008229016]
We propose a method based on Hessian sensitivity-aware mixed sparsity pruning to prune LLMs to at least 50% sparsity without the need of any retraining.
The advantages of the proposed method exhibit even more when the sparsity is extremely high.
arXiv Detail & Related papers (2023-10-14T05:43:09Z)
- QUIK: Towards End-to-End 4-Bit Inference on Generative Large Language Models [57.04178959678024]
We show that the majority of inference computations for large generative models can be performed with both weights and activations being cast to 4 bits.
We achieve this via a hybrid quantization strategy called QUIK, which compresses most of the weights and activations to 4-bit.
We provide GPU kernels matching the QUIK format with highly-efficient layer-wise runtimes, which lead to practical end-to-end throughput improvements of up to 3.4x.
arXiv Detail & Related papers (2023-10-13T17:15:05Z)
- Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM Inference with Transferable Prompt [96.24800696597707]
We introduce a new perspective to optimize this trade-off by prompting compressed models.
We propose a soft prompt learning method where we expose the compressed model to the prompt learning process.
Our experimental analysis suggests our soft prompt strategy greatly improves the performance of the 8x compressed LLaMA-7B model.
arXiv Detail & Related papers (2023-05-17T20:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.