PERFECT: Prompt-free and Efficient Few-shot Learning with Language
Models
- URL: http://arxiv.org/abs/2204.01172v1
- Date: Sun, 3 Apr 2022 22:31:25 GMT
- Title: PERFECT: Prompt-free and Efficient Few-shot Learning with Language
Models
- Authors: Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh
Saeidi, Lambert Mathias, Veselin Stoyanov, and Majid Yazdani
- Abstract summary: PERFECT is a simple and efficient method for few-shot fine-tuning of PLMs without relying on handcrafted prompts or verbalizers.
We show that manually engineered task prompts can be replaced with task-specific adapters that enable sample-efficient fine-tuning.
Experiments on a wide range of few-shot NLP tasks demonstrate that PERFECT, while being simple and efficient, also outperforms existing state-of-the-art few-shot learning methods.
- Score: 67.3725459417758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current methods for few-shot fine-tuning of pretrained masked language models
(PLMs) require carefully engineered prompts and verbalizers for each new task
to convert examples into a cloze-format that the PLM can score. In this work,
we propose PERFECT, a simple and efficient method for few-shot fine-tuning of
PLMs without relying on any such handcrafting, which is highly effective given
as few as 32 data points. PERFECT makes two key design choices: First, we show
that manually engineered task prompts can be replaced with task-specific
adapters that enable sample-efficient fine-tuning and reduce memory and storage
costs by roughly factors of 5 and 100, respectively. Second, instead of using
handcrafted verbalizers, we learn new multi-token label embeddings during
fine-tuning, which are not tied to the model vocabulary and which allow us to
avoid complex auto-regressive decoding. These embeddings are not only learnable
from limited data but also enable nearly 100x faster training and inference.
Experiments on a wide range of few-shot NLP tasks demonstrate that PERFECT,
while being simple and efficient, also outperforms existing state-of-the-art
few-shot learning methods. Our code is publicly available at
https://github.com/rabeehk/perfect.
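To make the two design choices above concrete, here is a minimal PyTorch sketch of a bottleneck adapter and of multi-token label embeddings scored against the hidden states at the inserted [MASK] positions. This is an illustrative reconstruction, not the released implementation (see the repository above); the module names, the bottleneck width of 64, and the choice of two [MASK] tokens per example are assumptions made here for concreteness.
```python
# Illustrative sketch only (not the official PERFECT code): a task-specific
# bottleneck adapter and learned multi-token label embeddings that replace a
# handcrafted verbalizer. Shapes and names are hypothetical.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small trainable module inserted into each layer of an otherwise frozen PLM."""

    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual form: the frozen representation passes through unchanged.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


class MultiTokenLabelScorer(nn.Module):
    """Learned per-class embeddings, not tied to the vocabulary, scored against
    the encoder's hidden states at the inserted [MASK] positions."""

    def __init__(self, num_classes: int, num_mask_tokens: int, hidden_size: int = 768):
        super().__init__()
        self.label_embeds = nn.Parameter(
            0.02 * torch.randn(num_classes, num_mask_tokens, hidden_size)
        )

    def forward(self, mask_hidden: torch.Tensor) -> torch.Tensor:
        # mask_hidden: (batch, num_mask_tokens, hidden_size); returns (batch, num_classes).
        return torch.einsum("bth,cth->bc", mask_hidden, self.label_embeds)


# Toy usage: hidden states at two [MASK] positions for a batch of four examples.
mask_hidden = torch.randn(4, 2, 768)
scorer = MultiTokenLabelScorer(num_classes=2, num_mask_tokens=2)
logits = scorer(mask_hidden)              # trained with ordinary cross-entropy
adapter = BottleneckAdapter()
adapted = adapter(torch.randn(4, 16, 768))
```
Because the label embeddings are compared directly against encoder states, classification is a single forward pass with no autoregressive decoding, which is consistent with the speedups the abstract reports.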
Related papers
- RIFF: Learning to Rephrase Inputs for Few-shot Fine-tuning of Language Models [4.085425430499285]
We explore the impact of altering the input text of the original task in conjunction with parameter-efficient fine-tuning methods.
To most effectively rewrite the input text, we train a few-shot paraphrase model with a Maximum-Marginal Likelihood objective.
We show that enriching data with paraphrases at train and test time enhances the performance beyond what can be achieved with parameter-efficient fine-tuning alone.
arXiv Detail & Related papers (2024-03-04T17:58:09Z)
- Zero-Shot Learners for Natural Language Understanding via a Unified Multiple Choice Perspective [26.41585967095811]
Zero-shot learning aims to train a model on a given task such that it can address new learning tasks without any additional training.
Our approach converts zero-shot learning into multiple-choice tasks, avoiding problems in commonly used large-scale generative models such as FLAN.
Our approach shows state-of-the-art performance on several benchmarks and produces satisfactory results on tasks such as natural language inference and text classification.
arXiv Detail & Related papers (2022-10-16T17:24:06Z)
- Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances into the prompts.
IPT significantly outperforms task-based prompt learning methods and achieves performance comparable to conventional fine-tuning while tuning only 0.5%-1.5% of the parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z)
- Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning [81.3514358542452]
Few-shot in-context learning (ICL) incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made.
Parameter-efficient fine-tuning offers an alternative paradigm in which a small set of parameters is trained to enable a model to perform the new task (see the sketch after this list).
In this paper, we rigorously compare few-shot ICL and parameter-efficient fine-tuning and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs.
arXiv Detail & Related papers (2022-05-11T17:10:41Z)
- Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning [41.15017636192417]
We present CP-Tuning, the first end-to-end Contrastive Prompt Tuning framework for fine-tuning Language Models.
It integrates a task-invariant continuous prompt encoding technique with fully trainable prompt parameters.
Experiments over a variety of language understanding tasks used in IR systems and different PLMs show that CP-Tuning outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-04-01T02:24:24Z)
- AdaPrompt: Adaptive Model Training for Prompt-based NLP [77.12071707955889]
We propose AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs.
Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings.
In zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.
arXiv Detail & Related papers (2022-02-10T04:04:57Z)
- LiST: Lite Self-training Makes Efficient Few-shot Learners [91.28065455714018]
LiST improves over classic fine-tuning methods by 35% and over prompt-tuning by 6%, with a 96% reduction in the number of trainable parameters, when fine-tuned with no more than 30 labeled examples per target domain.
arXiv Detail & Related papers (2021-10-12T18:47:18Z)
- Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models [48.0311578882384]
Fine-tuning language models (LMs) with training examples and task descriptions has been seen as critical to recent successes in few-shot learning.
We show that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering.
arXiv Detail & Related papers (2021-06-24T23:38:10Z)
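The parameter-efficient alternative to in-context learning described in the "Few-Shot Parameter-Efficient Fine-Tuning" entry above can be pictured with a short sketch: freeze the pretrained model and train only a handful of newly added parameters. The learned-rescaling formulation and every name below are illustrative assumptions, not that paper's exact recipe.
```python
# Minimal sketch of parameter-efficient few-shot fine-tuning: freeze the
# pretrained backbone and train only a small number of injected parameters.
# All names and the rescaling formulation are illustrative assumptions.
import torch
import torch.nn as nn


class LearnedRescaling(nn.Module):
    """Element-wise scaling vector applied to an intermediate activation."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))  # only `dim` trainable values

    def forward(self, activation: torch.Tensor) -> torch.Tensor:
        return activation * self.scale


def trainable_parameters(model: nn.Module):
    """Freeze everything except the injected rescaling vectors."""
    for name, param in model.named_parameters():
        param.requires_grad = "scale" in name
    return [p for p in model.parameters() if p.requires_grad]


# Toy example: a frozen two-layer "backbone" with one rescaling vector inserted.
backbone = nn.Sequential(
    nn.Linear(768, 3072), LearnedRescaling(3072), nn.Linear(3072, 768)
)
params = trainable_parameters(backbone)
print(sum(p.numel() for p in params))  # 3072 trainable values vs. ~4.7M frozen ones
```
With so few values trained and stored per task, adapting to a new task stays cheap, which is the trade-off that entry contrasts with re-processing all training examples at every in-context-learning prediction.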
This list is automatically generated from the titles and abstracts of the papers on this site.