ConPET: Continual Parameter-Efficient Tuning for Large Language Models
- URL: http://arxiv.org/abs/2309.14763v1
- Date: Tue, 26 Sep 2023 08:52:04 GMT
- Title: ConPET: Continual Parameter-Efficient Tuning for Large Language Models
- Authors: Chenyang Song, Xu Han, Zheni Zeng, Kuai Li, Chen Chen, Zhiyuan Liu,
Maosong Sun and Tao Yang
- Abstract summary: Continual learning requires continual adaptation of models
to newly emerging tasks. We propose Continual Parameter-Efficient Tuning
(ConPET), a generalizable paradigm for continual task adaptation of large
language models.
- Score: 65.48107393731861
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continual learning necessitates the continual adaptation of models to newly
emerging tasks while minimizing the catastrophic forgetting of old ones. This
is extremely challenging for large language models (LLMs) with vanilla
full-parameter tuning due to high computation costs, memory consumption, and
the forgetting issue. Inspired by the success of parameter-efficient tuning (PET),
we propose Continual Parameter-Efficient Tuning (ConPET), a generalizable
paradigm for continual task adaptation of LLMs with task-number-independent
training complexity. ConPET includes two versions with different application
scenarios. First, Static ConPET can adapt former continual learning methods
originally designed for relatively smaller models to LLMs through PET and a
dynamic replay strategy, which largely reduces the tuning costs and alleviates
the over-fitting and forgetting issues. Furthermore, to maintain scalability,
Dynamic ConPET adopts separate PET modules for different tasks and a PET module
selector for dynamic optimal selection. In our extensive experiments, the
adaptation of Static ConPET helps multiple former methods reduce the scale of
tunable parameters by over 3,000 times and surpass the PET-only baseline by at
least 5 points on five smaller benchmarks, while Dynamic ConPET gains its
advantage on the largest dataset. The code and datasets are available at
https://github.com/Raincleared-Song/ConPET.
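As an illustration of the mechanism described in the abstract, below is a minimal, hedged sketch of the Dynamic ConPET idea in PyTorch: one lightweight PET module per task plus a selector that routes each input to a module. The use of a LoRA-style adapter as the PET module, the hidden size, the hard argmax routing, and all class and variable names are illustrative assumptions, not the authors' implementation (which is in the repository linked above).

```python
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """A LoRA-style PET module: frozen base linear plus a low-rank trainable update."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the backbone stays frozen; only LoRA weights train
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # each adapter starts as a zero update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.lora_b(self.lora_a(x))


class DynamicConPETSketch(nn.Module):
    """One PET module per task plus a selector choosing which module handles each input."""

    def __init__(self, hidden: int = 64, num_tasks: int = 3):
        super().__init__()
        shared_base = nn.Linear(hidden, hidden)  # stands in for a frozen LLM layer
        # In a continual setting, a new adapter would be appended for each new task.
        self.pet_modules = nn.ModuleList(
            [LoRAAdapter(shared_base, rank=8) for _ in range(num_tasks)]
        )
        # The selector scores which task-specific module should handle the input;
        # it would be trained to predict the source task of each example.
        self.selector = nn.Linear(hidden, num_tasks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        task_id = self.selector(x).argmax(dim=-1)  # hard per-example selection
        out = torch.empty_like(x)
        for t, module in enumerate(self.pet_modules):
            mask = task_id == t
            if mask.any():
                out[mask] = module(x[mask])
        return out


if __name__ == "__main__":
    model = DynamicConPETSketch()
    features = torch.randn(4, 64)  # stand-in for hidden states of an LLM
    print(model(features).shape)   # torch.Size([4, 64])
```

Static ConPET, by contrast, keeps a single shared PET module and combines it with a dynamic replay of past examples; both variants aim for the task-number-independent training complexity mentioned in the abstract.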
Related papers
- SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models [26.484208658326857]
Continual learning aims to incrementally acquire new concepts in data streams while resisting forgetting previous knowledge.
With the rise of powerful pre-trained models (PTMs), there is a growing interest in training incremental learning systems.
arXiv Detail & Related papers (2024-11-04T15:34:30Z)
- ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z)
- UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory [69.33445217944029]
PETL is an effective strategy for adapting pre-trained models to downstream domains.
Recent PETL works focus on the more valuable characteristic of memory efficiency.
We propose a new memory-efficient PETL strategy, Universal Parallel Tuning (UniPT).
arXiv Detail & Related papers (2023-08-28T05:38:43Z)
- VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control [44.73827206809393]
In vision-and-language (VL), parameter-efficient tuning (PET) techniques are proposed to integrate modular modifications into encoder-decoder PLMs.
We propose a Vision-and-Language Parameter-Efficient Tuning (VL-PET) framework to impose effective control over modular modifications.
arXiv Detail & Related papers (2023-08-18T20:18:30Z)
- Exploring the Impact of Model Scaling on Parameter-Efficient Tuning [100.61202305296275]
Parameter-efficient tuning (PET) methods can effectively drive extremely large pre-trained language models (PLMs) by training only minimal parameters.
In small PLMs, there are usually noticeable performance differences among PET methods.
We introduce a more flexible PET method called Arbitrary PET (APET) method.
arXiv Detail & Related papers (2023-06-04T10:10:54Z)
- A Unified Continual Learning Framework with General Parameter-Efficient Tuning [56.250772378174446]
"Pre-training $rightarrow$ downstream adaptation" presents both new opportunities and challenges for Continual Learning.
We position prompting as one instantiation of PET, and propose a unified CL framework dubbed Learning-Accumulation-Ensemble (LAE).
PET, e.g., using Adapter, LoRA, or Prefix, can adapt a pre-trained model to downstream tasks with fewer parameters and resources.
arXiv Detail & Related papers (2023-03-17T15:52:45Z)
- Sparse Structure Search for Parameter-Efficient Tuning [85.49094523664428]
We show that S$^3$PET surpasses manual and random structures with fewer trainable parameters.
The searched structures preserve more than 99% fine-tuning performance with 0.01% trainable parameters.
arXiv Detail & Related papers (2022-06-15T08:45:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.