CPET: Effective Parameter-Efficient Tuning for Compressed Large Language
Models
- URL: http://arxiv.org/abs/2307.07705v2
- Date: Wed, 15 Nov 2023 17:02:17 GMT
- Title: CPET: Effective Parameter-Efficient Tuning for Compressed Large Language
Models
- Authors: Weilin Zhao, Yuxiang Huang, Xu Han, Zhiyuan Liu, Zhengyan Zhang,
Maosong Sun
- Abstract summary: We propose an effective PET framework for compressed large language models (LLMs).
We evaluate the impact of mainstream compression techniques on PET performance.
We then introduce knowledge inheritance and recovery strategies to restore the knowledge loss caused by these compression techniques.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter-efficient tuning (PET) has been widely explored in recent years
because it tunes much fewer parameters (PET modules) than full-parameter
fine-tuning (FT) while still stimulating sufficient knowledge from large
language models (LLMs) for downstream tasks. Moreover, when PET is employed to
serve multiple tasks, different task-specific PET modules can be built on a
frozen LLM, avoiding redundant LLM deployments. Although PET significantly
reduces the cost of tuning and deploying LLMs, its inference still suffers from
the computational bottleneck of LLMs. To address the above issue, we propose an
effective PET framework based on compressed LLMs, named "CPET". In CPET, we
evaluate the impact of mainstream LLM compression techniques on PET performance
and then introduce knowledge inheritance and recovery strategies to restore the
knowledge loss caused by these compression techniques. Our experimental results
demonstrate that, owing to the restoring strategies of CPET, collaborating
task-specific PET modules with a compressed LLM can achieve comparable
performance to collaborating PET modules with the original, uncompressed LLM,
and can outperform directly applying vanilla PET methods to the compressed
LLM.
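As a rough illustration of the PET setting described above (a minimal sketch, not CPET's actual implementation), a LoRA-style PET module adds a small trainable low-rank update on top of a frozen weight matrix, so each downstream task only needs its own tiny module; all names and shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, rank = 1024, 8

# Frozen (possibly compressed) LLM weight matrix: never updated during PET.
W_frozen = rng.standard_normal((d_model, d_model)).astype(np.float32)

# Task-specific LoRA-style PET module: only A and B are trained.
A = rng.standard_normal((d_model, rank)).astype(np.float32) * 0.01
B = np.zeros((rank, d_model), dtype=np.float32)  # zero-init: module starts as a no-op

def pet_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass: frozen weight plus the low-rank task-specific update."""
    return x @ W_frozen + x @ A @ B

# Trainable parameters are a tiny fraction of the frozen ones,
# which is why multiple task modules can share one frozen LLM.
pet_params = A.size + B.size
frozen_params = W_frozen.size
print(f"PET params: {pet_params} ({pet_params / frozen_params:.2%} of frozen)")
```

Because `B` is zero-initialized, the module initially leaves the frozen model's behavior unchanged; training then moves only `A` and `B`, never `W_frozen`.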
Related papers
- HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning [55.88910947643436]
We propose a unified framework for continual learning (CL) with pre-trained models (PTMs) and parameter-efficient tuning (PET)
We present Hierarchical Decomposition PET (HiDe-PET), an approach that explicitly optimizes the objective by incorporating task-specific and task-shared knowledge.
Our approach demonstrates remarkably superior performance over a broad spectrum of recent strong baselines.
arXiv Detail & Related papers (2024-07-07T01:50:25Z) - Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models [79.46938238953916]
Fine-tuning large language models (LLMs) to diverse applications is crucial to meet complex demands.
Recent studies suggest decomposing a fine-tuned LLM into a base model and corresponding delta weights, which are then compressed using low-rank or low-bit approaches to reduce costs.
In this work, we observe that existing low-rank and low-bit compression methods can significantly harm the model performance for task-specific fine-tuned LLMs.
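The low-rank delta-compression idea mentioned above can be sketched as follows (a hedged toy example, not Delta-CoMe's actual method; matrix sizes and the rank are illustrative): the fine-tuning delta is approximated by its top singular components via truncated SVD, trading reconstruction fidelity for storage.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 256

# Toy stand-ins for a base weight and its fine-tuned counterpart.
W_base = rng.standard_normal((d, d)).astype(np.float32)
delta = 0.05 * rng.standard_normal((d, d)).astype(np.float32)
W_finetuned = W_base + delta

# Low-rank compression of the delta via truncated SVD.
rank = 16
U, s, Vt = np.linalg.svd(delta, full_matrices=False)
delta_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Storage drops from d*d floats to roughly 2*d*rank (+ rank singular values).
full_cost = delta.size
lowrank_cost = U[:, :rank].size + s[:rank].size + Vt[:rank].size
print(f"compression ratio: {full_cost / lowrank_cost:.1f}x")
```

The observation in the summary is that for task-specific fine-tuned LLMs, the energy discarded by this kind of truncation (or by low-bit quantization of the delta) can carry a disproportionate share of the task ability, which is why aggressive delta compression can significantly harm performance.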
arXiv Detail & Related papers (2024-06-13T07:57:27Z) - ConPET: Continual Parameter-Efficient Tuning for Large Language Models [65.48107393731861]
Continual learning requires continual adaptation of models to newly emerging tasks.
We propose Continual Parameter-Efficient Tuning (ConPET), a generalizable paradigm for continual task adaptation of large language models.
arXiv Detail & Related papers (2023-09-26T08:52:04Z) - VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity
Control [44.73827206809393]
In vision-and-language (VL), parameter-efficient tuning (PET) techniques are proposed to integrate modular modifications into encoder-decoder PLMs.
We propose a Vision-and-Language Parameter-Efficient Tuning (VL-PET) framework to impose effective control over modular modifications.
arXiv Detail & Related papers (2023-08-18T20:18:30Z) - Exploring the Impact of Model Scaling on Parameter-Efficient Tuning [100.61202305296275]
Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs).
In small PLMs, there are usually noticeable performance differences among PET methods.
We introduce a more flexible PET method called Arbitrary PET (APET) method.
arXiv Detail & Related papers (2023-06-04T10:10:54Z) - A Unified Continual Learning Framework with General Parameter-Efficient
Tuning [56.250772378174446]
"Pre-training $\rightarrow$ downstream adaptation" presents both new opportunities and challenges for Continual Learning.
We position prompting as one instantiation of PET, and propose a unified CL framework, dubbed as Learning-Accumulation-Ensemble (LAE)
PET, e.g., using Adapter, LoRA, or Prefix, can adapt a pre-trained model to downstream tasks with fewer parameters and resources.
arXiv Detail & Related papers (2023-03-17T15:52:45Z)
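The Adapter modules named in the LAE summary follow a bottleneck pattern distinct from LoRA: down-project, apply a nonlinearity, up-project, and add the result back to the hidden state. A minimal sketch (illustrative only, not LAE's implementation; dimensions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
d_model, d_bottleneck = 512, 32

# Bottleneck adapter weights: the only trainable parameters for this layer.
W_down = rng.standard_normal((d_model, d_bottleneck)).astype(np.float32) * 0.02
W_up = np.zeros((d_bottleneck, d_model), dtype=np.float32)  # zero-init: starts as identity

def adapter(h: np.ndarray) -> np.ndarray:
    """Down-project, ReLU, up-project, then residual add."""
    z = np.maximum(h @ W_down, 0.0)  # ReLU in the bottleneck
    return h + z @ W_up              # residual keeps the frozen path intact

adapter_params = W_down.size + W_up.size
print(f"adapter params per layer: {adapter_params}")
```

As with LoRA and Prefix tuning, the frozen backbone is untouched; per-layer adapter cost scales with `2 * d_model * d_bottleneck` rather than `d_model**2`.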