ComPEFT: Compression for Communicating Parameter Efficient Updates via
Sparsification and Quantization
- URL: http://arxiv.org/abs/2311.13171v1
- Date: Wed, 22 Nov 2023 05:28:59 GMT
- Title: ComPEFT: Compression for Communicating Parameter Efficient Updates via
Sparsification and Quantization
- Authors: Prateek Yadav, Leshem Choshen, Colin Raffel, Mohit Bansal
- Abstract summary: We present ComPEFT, a novel method for compressing fine-tuning residuals (task vectors) of PEFT based models.
In extensive evaluation across T5, T0, and LLaMA-based models with 200M - 65B parameters, ComPEFT achieves compression ratios of 8x - 50x.
- Score: 100.90624220423634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter-efficient fine-tuning (PEFT) techniques make it possible to
efficiently adapt a language model to create "expert" models that specialize to
new tasks or domains. Recent techniques in model merging and compositional
generalization leverage these expert models by dynamically composing modules to
improve zero/few-shot generalization. Despite the efficiency of PEFT methods,
the size of expert models can make it onerous to retrieve expert models per
query over high-latency networks like the Internet or serve multiple experts on
a single GPU. To address these issues, we present ComPEFT, a novel method for
compressing fine-tuning residuals (task vectors) of PEFT based models. ComPEFT
employs sparsification and ternary quantization to reduce the size of the PEFT
module without performing any additional retraining while preserving or
enhancing model performance. In extensive evaluation across T5, T0, and
LLaMA-based models with 200M - 65B parameters, ComPEFT achieves compression
ratios of 8x - 50x. In particular, we show that ComPEFT improves with scale -
stronger models exhibit higher compressibility and better performance. For
example, we show that ComPEFT applied to LLaMA outperforms QLoRA by 4.16% on
MMLU with a storage size reduction of up to 26x. In addition, we show that the
compressed experts produced by ComPEFT maintain few-shot compositional
generalization capabilities, facilitate efficient communication and
computation, and exhibit enhanced performance when merged. Lastly, we provide
an analysis of different method components, compare it with other PEFT methods,
and test ComPEFT's efficacy for compressing the residual of full-finetuning.
Our code is available at https://github.com/prateeky2806/compeft.
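To make the described procedure concrete, below is a minimal PyTorch sketch of the general idea in the abstract: magnitude-based sparsification of the fine-tuning residual (task vector) followed by ternary quantization into a sign pattern plus a single scale. The function names, the default density, and the choice of per-tensor scale are illustrative assumptions, not ComPEFT's exact recipe; consult the linked repository for the authors' implementation.

```python
import torch

def compress_task_vector(finetuned: torch.Tensor, base: torch.Tensor, density: float = 0.1):
    """Sparsify-then-ternarize a fine-tuning residual (task vector).

    `density` is the fraction of entries kept after magnitude pruning; the
    per-tensor scale (mean magnitude of the surviving entries) is an assumed,
    simple choice rather than ComPEFT's exact calibration.
    """
    delta = finetuned - base                                   # task vector
    k = max(1, int(density * delta.numel()))
    # Magnitude-based sparsification: keep the k largest-magnitude entries.
    threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
    mask = delta.abs() >= threshold
    # Ternary quantization: only signs {-1, 0, +1} plus one scalar scale survive.
    signs = torch.where(mask, torch.sign(delta), torch.zeros_like(delta))
    scale = delta[mask].abs().mean() if mask.any() else delta.new_zeros(())
    return signs.to(torch.int8), scale                         # compact expert

def apply_compressed_expert(base: torch.Tensor, signs: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximate expert by adding the ternary residual back."""
    return base + scale * signs.to(base.dtype)
```

Since the compressed update is only an int8 sign pattern plus one scalar per tensor, it is cheap to transmit and to add back onto the base weights at load time, which is consistent with the communication and merging benefits described above.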
Related papers
- Preserving Pre-trained Representation Space: On Effectiveness of Prefix-tuning for Large Multi-modal Models [24.62337386603331]
Large Multi-modal Models (LMMs) are revolutionizing the way machines interact with the world.
To adapt LMMs for downstream tasks, parameter-efficient fine-tuning (PEFT) has gained popularity.
This paper examines the strengths and weaknesses of each tuning strategy, shifting attention away from the efficiency typically associated with these approaches.
arXiv Detail & Related papers (2024-10-29T07:55:50Z) - MoDeGPT: Modular Decomposition for Large Language Model Compression [59.361006801465344]
This paper introduces Modular Decomposition (MoDeGPT), a novel structured compression framework.
MoDeGPT partitions the Transformer block into modules comprised of matrix pairs and reduces the hidden dimensions.
Our experiments show MoDeGPT, without backward propagation, matches or surpasses previous structured compression methods.
arXiv Detail & Related papers (2024-08-19T01:30:14Z) - Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning [17.032155725171958]
We propose the Light-PEFT framework, which includes two methods: Masked Early Pruning of the Foundation Model and Multi-Granularity Early Pruning of PEFT.
Compared to utilizing the PEFT method directly, Light-PEFT achieves training and inference speedup, reduces memory usage, and maintains comparable performance.
arXiv Detail & Related papers (2024-06-06T07:03:29Z) - A Provably Effective Method for Pruning Experts in Fine-tuned Sparse Mixture-of-Experts [49.394145046409044]
This paper provides the first provably efficient technique for pruning experts in finetuned MoE models.
We theoretically prove that prioritizing the pruning of experts with a smaller change in the router's l2 norm from the pretrained model guarantees the preservation of test accuracy.
Although our theoretical analysis is centered on binary classification tasks with a simplified MoE architecture, our expert pruning method is verified on large vision MoE models.
arXiv Detail & Related papers (2024-05-26T17:52:58Z) - Context-PEFT: Efficient Multi-Modal, Multi-Task Fine-Tuning [12.648711621637663]
This paper introduces a novel Parameter-Efficient Fine-Tuning (PEFT) framework for multi-modal, multi-task transfer learning with pre-trained language models.
We propose Context-PEFT, which learns different groups of adaptor parameters based on the token's domain.
Our method is evaluated on the COCO captioning task, where it outperforms full fine-tuning under similar data constraints.
arXiv Detail & Related papers (2023-12-14T13:00:24Z) - Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning
for Versatile Multimodal Modeling [42.42235704360381]
Large language models (LLMs) and vision language models (VLMs) demonstrate excellent performance on a wide range of tasks.
Their large scale makes it impossible to adapt and deploy fully specialized models for every task of interest.
In this work, we describe AdaLink as a non-intrusive PEFT technique that achieves competitive performance.
arXiv Detail & Related papers (2023-10-18T16:43:08Z) - Compress, Then Prompt: Improving Accuracy-Efficiency Trade-off of LLM
Inference with Transferable Prompt [96.24800696597707]
We introduce a new perspective to optimize this trade-off by prompting compressed models.
We propose a soft prompt learning method where we expose the compressed model to the prompt learning process.
Our experimental analysis suggests our soft prompt strategy greatly improves the performance of the 8x compressed LLaMA-7B model.
arXiv Detail & Related papers (2023-05-17T20:45:13Z) - Empirical Analysis of the Strengths and Weaknesses of PEFT Techniques
for LLMs [1.867982979635437]
We provide a benchmark of various PEFT techniques and evaluate model performance across different data scales.
Contrary to popular belief, we empirically find that PEFT techniques converge more slowly than full tuning in low-data scenarios.
We further optimize these PEFT techniques by selectively choosing which parts of the model to train, and find that these techniques can be applied with significantly fewer parameters.
arXiv Detail & Related papers (2023-04-28T17:39:49Z) - AutoPEFT: Automatic Configuration Search for Parameter-Efficient
Fine-Tuning [77.61565726647784]
Motivated by advances in neural architecture search, we propose AutoPEFT for automatic PEFT configuration selection.
We show that AutoPEFT-discovered configurations significantly outperform existing PEFT methods and are on par or better than FFT without incurring substantial training efficiency costs.
arXiv Detail & Related papers (2023-01-28T08:51:23Z) - UniPELT: A Unified Framework for Parameter-Efficient Language Model
Tuning [64.638804236566]
We propose a unified framework, UniPELT, which incorporates different PELT methods as submodules and learns to activate the ones that best suit the current data or task setup.
Remarkably, on the GLUE benchmark, UniPELT consistently achieves 1-3pt gains compared to the best individual PELT method that it incorporates and even outperforms fine-tuning under different setups.
arXiv Detail & Related papers (2021-10-14T17:40:08Z)