SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
- URL: http://arxiv.org/abs/2110.07904v1
- Date: Fri, 15 Oct 2021 07:35:58 GMT
- Title: SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
- Authors: Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, Daniel Cer
- Abstract summary: We propose a novel prompt-based transfer learning approach called SPoT: Soft Prompt Transfer.
We show SPoT significantly boosts the performance of PromptTuning across many tasks.
We also conduct a large-scale study on task transferability with 26 NLP tasks and 160 combinations of source-target tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As pre-trained language models have gotten larger, there has been growing
interest in parameter-efficient methods to apply these models to downstream
tasks. Building on the PromptTuning approach of Lester et al. (2021), which
learns task-specific soft prompts to condition a frozen language model to
perform downstream tasks, we propose a novel prompt-based transfer learning
approach called SPoT: Soft Prompt Transfer. SPoT first learns a prompt on one
or more source tasks and then uses it to initialize the prompt for a target
task. We show that SPoT significantly boosts the performance of PromptTuning
across many tasks. More importantly, SPoT either matches or outperforms
ModelTuning, which fine-tunes the entire model on each individual task, across
all model sizes while being more parameter-efficient (up to 27,000x fewer
task-specific parameters). We further conduct a large-scale study on task
transferability with 26 NLP tasks and 160 combinations of source-target tasks,
and demonstrate that tasks can often benefit each other via prompt transfer.
Finally, we propose a simple yet efficient retrieval approach that interprets
task prompts as task embeddings to identify the similarity between tasks and
predict the most transferable source tasks for a given novel target task.
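The core transfer recipe above can be sketched in a few lines. This is a toy stand-in, not the paper's implementation: the dimensions, the quadratic "task loss," and the training loop are illustrative, whereas actual SPoT tunes prompt token embeddings against a frozen T5 model.

```python
# Minimal sketch of SPoT-style soft prompt transfer (toy stand-in:
# the real method tunes prompts against a frozen T5 backbone).
import numpy as np

PROMPT_LEN, EMBED_DIM = 4, 8  # real prompts are e.g. 100 tokens x model dim

def tune_prompt(prompt, grad_fn, steps=50, lr=0.1):
    """Gradient descent on the prompt alone; the backbone stays frozen."""
    for _ in range(steps):
        prompt = prompt - lr * grad_fn(prompt)
    return prompt

# Stand-in source-task loss: a quadratic bowl around a task-specific optimum.
source_optimum = np.ones((PROMPT_LEN, EMBED_DIM))
grad_source = lambda p: 2.0 * (p - source_optimum)

rng = np.random.default_rng(0)
random_init = rng.normal(size=(PROMPT_LEN, EMBED_DIM))

# Step 1: learn a prompt on the source task.
source_prompt = tune_prompt(random_init, grad_source)

# Step 2 (the core of SPoT): initialize the target-task prompt from the
# source prompt instead of from random or vocabulary embeddings.
target_prompt_init = source_prompt.copy()
```

The only task-specific parameters are the prompt itself, which is why the method stays parameter-efficient across model sizes.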
Related papers
- PECTP: Parameter-Efficient Cross-Task Prompts for Incremental Vision Transformer [76.39111896665585]
Incremental Learning (IL) aims to learn deep models on sequential tasks continually.
Recent large pre-trained models (PTMs) have achieved outstanding performance via prompt techniques in practical IL without storing old samples.
arXiv Detail & Related papers (2024-07-04T10:37:58Z)
- Exploring the Transferability of Visual Prompting for Multimodal Large Language Models [47.162575147632396]
Transferable Visual Prompting (TVP) is a simple and effective approach for generating visual prompts that transfer to different models and improve their performance on downstream tasks after being trained on only one model.
We introduce two strategies to address the issue of cross-model feature corruption of existing visual prompting methods and enhance the transferability of the learned prompts.
arXiv Detail & Related papers (2024-04-17T09:39:07Z)
- Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning [44.43258626098661]
We argue that when we extract knowledge from source tasks via training source prompts, we need to consider this correlation among source tasks for better transfer to target tasks.
We propose a Bayesian approach where we work with the posterior distribution of prompts across source tasks.
We show extensive experimental results on the standard benchmark NLP tasks, where our Bayesian multi-task transfer learning approach outperforms the state-of-the-art methods in many settings.
arXiv Detail & Related papers (2024-02-13T16:57:02Z)
- TransPrompt v2: A Transferable Prompting Framework for Cross-task Text Classification [37.824031151922604]
We propose TransPrompt v2, a novel transferable prompting framework for few-shot learning across similar or distant text classification tasks.
For learning across similar tasks, we employ a multi-task meta-knowledge acquisition (MMA) procedure to train a meta-learner.
For learning across distant tasks, we inject the task type descriptions into the prompt, and capture the intra-type and inter-type prompt embeddings.
arXiv Detail & Related papers (2023-08-29T04:16:57Z)
- Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning [43.639430661322585]
We propose multitask prompt tuning (MPT)
MPT learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts.
We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task.
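The multiplicative low-rank idea can be sketched as follows. Dimensions, initialization, and the rank-1 choice are illustrative assumptions, not the paper's settings:

```python
# Sketch of MPT-style per-task adaptation: a shared prompt is rescaled
# elementwise by a low-rank (here rank-1) matrix u @ v. All values and
# shapes below are illustrative, not the paper's.
import numpy as np

PROMPT_LEN, EMBED_DIM = 4, 8
rng = np.random.default_rng(0)

shared_prompt = rng.normal(size=(PROMPT_LEN, EMBED_DIM))  # learned once, shared

# Per-task parameters: one column and one row vector, not a full prompt.
u = rng.normal(size=(PROMPT_LEN, 1))
v = rng.normal(size=(1, EMBED_DIM))

task_prompt = shared_prompt * (u @ v)  # Hadamard product with a rank-1 update

per_task_params = u.size + v.size        # PROMPT_LEN + EMBED_DIM
full_prompt_params = shared_prompt.size  # PROMPT_LEN * EMBED_DIM
```

The saving is in the per-task parameter count: each new task stores only PROMPT_LEN + EMBED_DIM values instead of a full PROMPT_LEN x EMBED_DIM prompt.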
arXiv Detail & Related papers (2023-03-06T03:25:59Z)
- Multitask Vision-Language Prompt Tuning [103.5967011236282]
We propose multitask vision-language prompt tuning (MV)
MV incorporates cross-task knowledge into prompt tuning for vision-language models.
Results on 20 vision tasks demonstrate that the proposed approach outperforms all single-task baseline prompt tuning methods.
arXiv Detail & Related papers (2022-11-21T18:41:44Z)
- Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances to the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z)
- Attentional Mixtures of Soft Prompt Tuning for Parameter-efficient Multi-task Knowledge Sharing [53.399742232323895]
ATTEMPT is a new modular, multi-task, and parameter-efficient language model (LM) tuning approach.
It combines knowledge transferred across different tasks via a mixture of soft prompts while keeping the original LM unchanged.
It is parameter-efficient (e.g., updates 1,600 times fewer parameters than fine-tuning) and enables multi-task learning and flexible extensions.
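The mixture idea can be sketched as instance-dependent attention over a bank of source-task prompts. The scoring function here is a toy dot product against mean-pooled prompt keys, an assumption for illustration rather than the paper's attention sub-network:

```python
# Sketch of an ATTEMPT-style mixture: an instance-dependent attention
# distribution over K source-task prompts yields one combined prompt.
# The scoring function is a toy dot product, not the paper's sub-network.
import numpy as np

K, PROMPT_LEN, EMBED_DIM = 3, 4, 8
rng = np.random.default_rng(0)

source_prompts = rng.normal(size=(K, PROMPT_LEN, EMBED_DIM))
instance_repr = rng.normal(size=EMBED_DIM)    # e.g. a pooled input encoding
prompt_keys = source_prompts.mean(axis=1)     # (K, EMBED_DIM) summary per prompt

scores = prompt_keys @ instance_repr
weights = np.exp(scores - scores.max())
weights /= weights.sum()                      # softmax attention weights

# Weighted sum of prompts: one mixed prompt per input instance.
mixed_prompt = np.einsum('k,kld->ld', weights, source_prompts)
```

Because only the prompts and the mixing weights are trained, the frozen LM is shared across all tasks.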
arXiv Detail & Related papers (2022-05-24T10:48:33Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
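The retrieval step shared by this line of work (and by SPoT above) reduces to ranking source tasks by embedding similarity. A minimal sketch, where the task names and vectors are made up and a learned prompt would stand in for each random embedding:

```python
# Sketch of task-embedding retrieval: treat each task's learned prompt
# (flattened) as an embedding and rank source tasks by cosine similarity
# to the target task. Task names and vectors here are hypothetical.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
task_embeddings = {
    "mnli": rng.normal(size=32),   # placeholder for a flattened tuned prompt
    "squad": rng.normal(size=32),
}

# Construct a target embedding close to "mnli" so the ranking is predictable.
target_embedding = task_embeddings["mnli"] + 0.01 * rng.normal(size=32)

ranked = sorted(task_embeddings,
                key=lambda t: cosine(task_embeddings[t], target_embedding),
                reverse=True)
best_source = ranked[0]  # most transferable source task under this heuristic
```

The appeal of this design is that the embeddings fall out of prompt tuning for free, so no extra transferability model needs to be trained.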
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.