PPT: Pre-trained Prompt Tuning for Few-shot Learning
- URL: http://arxiv.org/abs/2109.04332v1
- Date: Thu, 9 Sep 2021 15:11:04 GMT
- Title: PPT: Pre-trained Prompt Tuning for Few-shot Learning
- Authors: Yuxian Gu, Xu Han, Zhiyuan Liu, Minlie Huang
- Abstract summary: Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks.
Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks.
In our work, we find that prompt tuning performs comparably with conventional full-model fine-tuning when downstream data are sufficient, whereas it performs much worse under few-shot learning settings.
- Score: 47.05554619258627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prompts for pre-trained language models (PLMs) have shown remarkable
performance by bridging the gap between pre-training tasks and various
downstream tasks. Among these methods, prompt tuning, which freezes PLMs and
only tunes soft prompts, provides an efficient and effective solution for
adapting large-scale PLMs to downstream tasks. However, prompt tuning is yet to
be fully explored. In our pilot experiments, we find that prompt tuning
performs comparably with conventional full-model fine-tuning when downstream
data are sufficient, whereas it performs much worse under few-shot learning
settings, which may hinder the application of prompt tuning in practice. We
attribute this low performance to the manner of initializing soft prompts.
Therefore, in this work, we propose to pre-train prompts by adding soft prompts
into the pre-training stage to obtain a better initialization. We name this
Pre-trained Prompt Tuning framework "PPT". To ensure the generalization of PPT,
we formulate similar classification tasks into a unified task form and
pre-train soft prompts for this unified task. Extensive experiments show that
tuning pre-trained prompts for downstream tasks can reach or even outperform
full-model fine-tuning under both full-data and few-shot settings. Our approach
is effective and efficient for using large-scale PLMs in practice.
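As a concrete illustration of the setup the abstract describes, below is a minimal PyTorch sketch of soft prompt tuning with a frozen PLM: the backbone is frozen, a short sequence of prompt embeddings is prepended to the input embeddings, and, in the PPT case, those embeddings are initialized from pre-trained prompts rather than randomly. The wrapper class, the `inputs_embeds` calling convention, and the `prompt_init` argument are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class PromptTunedPLM(nn.Module):
    """Wraps a frozen PLM so that only a prepended soft prompt is trainable."""

    def __init__(self, plm, embed, prompt_len=100, prompt_init=None):
        super().__init__()
        self.plm = plm      # frozen backbone, assumed to accept `inputs_embeds`
        self.embed = embed  # the PLM's own input embedding table
        for p in list(self.plm.parameters()) + list(self.embed.parameters()):
            p.requires_grad_(False)

        hidden = embed.embedding_dim
        if prompt_init is not None:
            # PPT: initialize from prompts pre-trained on the unified task.
            self.soft_prompt = nn.Parameter(prompt_init.clone())
        else:
            # Vanilla prompt tuning: random initialization.
            self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask, **kwargs):
        tok = self.embed(input_ids)                                    # (B, L, H)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok], dim=1)                # prepend prompt
        prompt_mask = attention_mask.new_ones(tok.size(0), self.soft_prompt.size(0))
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.plm(inputs_embeds=inputs_embeds,
                        attention_mask=attention_mask, **kwargs)
```

Only `soft_prompt` carries gradients here, so an optimizer would be built from the parameters that still require gradients; PPT's contribution is how that single tensor is initialized before downstream tuning.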
Related papers
- Revisiting the Power of Prompt for Visual Tuning [50.11465784194896]
This study explores how the correlation between prompts and patch tokens evolves over the course of training.
Inspired by the observation that the prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes.
Our method significantly improves adaptation for self-supervised pre-training, achieving task performance gains of at least 10% to 30%.
arXiv Detail & Related papers (2024-02-04T07:49:02Z) - Improving Prompt Tuning with Learned Prompting Layers [12.46460062708119]
We propose a novel framework, Selective Prompt Tuning (SPT).
It learns to select the proper prompt layers by inserting a prompt controlled by a learnable probabilistic gate at each intermediate layer.
We conduct extensive experiments with ten benchmark datasets under the full-data and few-shot scenarios.
arXiv Detail & Related papers (2023-10-31T02:07:51Z) - Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts [97.20933523766182]
Prompt tuning is a parameter-efficient tuning (PETuning) method for utilizing pre-trained models (PTMs)
We present Late Prompt Tuning () that inserts a late prompt into an intermediate layer of the PTM instead of the input layer or all layers.
We show that, can achieve competitive performance to full model tuning and other PETuning methods under both full-data and few-shot scenarios.
arXiv Detail & Related papers (2022-10-20T14:23:52Z) - Multitask Pre-training of Modular Prompt for Chinese Few-Shot Learning [83.10861551885321]
We present Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot learning.
MP2 is a set of combinable prompts pre-trained on 38 Chinese tasks.
We show MP2 significantly outperforms prompt tuning, full model tuning, and prior prompt pre-training methods in few-shot settings.
arXiv Detail & Related papers (2022-10-14T06:43:42Z) - Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances to the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z) - Learning a Better Initialization for Soft Prompts via Meta-Learning [58.53984967461313]
We propose MetaPT (Meta-learned Prompt Tuning) to improve prompt tuning.
We introduce structure into the pre-training data by first clustering it into different auxiliary tasks.
We use these tasks to pre-train prompts with a meta-learning algorithm.
arXiv Detail & Related papers (2022-05-25T03:50:23Z)