Efficiently Enhancing Zero-Shot Performance of Instruction Following
Model via Retrieval of Soft Prompt
- URL: http://arxiv.org/abs/2210.03029v4
- Date: Mon, 16 Oct 2023 04:57:33 GMT
- Title: Efficiently Enhancing Zero-Shot Performance of Instruction Following
Model via Retrieval of Soft Prompt
- Authors: Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
- Abstract summary: retrieval of soft prompts can efficiently assist hard prompts in zero-shot task generalization.
We train soft prompt embeddings for each prompt through prompt tuning, store samples of the training instances mapped to the prompt embeddings, and retrieve the prompt embedding of the training instance closest to the query instance during inference.
While only adding 0.007% additional parameters, retrieval of soft prompts enhances the performance of T0 on unseen tasks, outperforming it on 10 out of 11 datasets and improving the mean accuracy of T0 on the BIG-bench benchmark by 2.39 percentage points.
- Score: 56.22456716092954
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enhancing the zero-shot performance of instruction-following models requires
heavy computation, either by scaling the total number of training datasets or
the model size. In this work, we explore how retrieval of soft prompts obtained
through prompt tuning can efficiently assist hard prompts in zero-shot task
generalization. Specifically, we train soft prompt embeddings for each prompt
through prompt tuning, store samples of the training instances mapped to
the prompt embeddings, and retrieve the corresponding prompt embedding of the
training instance closest to the query instance during inference. While only
adding 0.007% additional parameters, retrieval of soft prompt enhances the
performance of T0 on unseen tasks by outperforming it on 10 out of 11 datasets
as well as improving the mean accuracy of T0 on the BIG-bench benchmark by 2.39
percentage points. We also report the interesting finding that retrieving source
embeddings trained on similar answer-choice formats matters more than retrieving
those trained on similar task types.
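To make the retrieval step concrete, here is a minimal sketch of the store-and-retrieve logic described above: training-instance embeddings are kept alongside the soft prompt (prompt-tuned embedding matrix) they were trained with, and the soft prompt of the nearest stored instance is returned for a query. This assumes NumPy arrays and a separately computed query embedding; all names are illustrative and this is not the authors' released code.

```python
import numpy as np

class SoftPromptRetriever:
    """Store training-instance embeddings together with the soft prompt each was
    tuned under, and return the soft prompt of the nearest stored instance."""

    def __init__(self, instance_embs, prompt_ids, soft_prompts):
        # instance_embs: (N, d) array, one embedding per stored training instance
        # prompt_ids:    (N,) array, index of the soft prompt each instance was trained with
        # soft_prompts:  dict {prompt_id: (prompt_len, hidden_dim) tuned prompt embedding}
        norms = np.linalg.norm(instance_embs, axis=1, keepdims=True)
        self.instance_embs = instance_embs / norms
        self.prompt_ids = prompt_ids
        self.soft_prompts = soft_prompts

    def retrieve(self, query_emb):
        # Cosine similarity between the query instance and every stored instance.
        q = query_emb / np.linalg.norm(query_emb)
        nearest = int(np.argmax(self.instance_embs @ q))
        return self.soft_prompts[int(self.prompt_ids[nearest])]
```

The retrieved (prompt_len, hidden_dim) matrix would then be prepended to the token embeddings of the hard-prompted input before the frozen model's (e.g. T0's) forward pass.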
Related papers
- Revisiting the Power of Prompt for Visual Tuning [50.11465784194896]
This study explores how the correlation between prompts and patch tokens evolves over the course of training.
Inspired by the observation that the prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes.
The method significantly improves adaptation of self-supervised pretrained models, achieving task performance gains of 10% to 30%.
arXiv Detail & Related papers (2024-02-04T07:49:02Z)
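One way to realize the prototype-based initialization mentioned above is to collect patch-token features from a frozen backbone on downstream images, cluster them, and use the cluster centers as the initial prompt tokens. The sketch below assumes scikit-learn's KMeans and pre-extracted patch features; it illustrates the idea rather than the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def init_prompts_from_prototypes(patch_tokens, num_prompts):
    """Cluster downstream patch-token features and use the centroids as prompt init.

    patch_tokens: (num_tokens, hidden_dim) features gathered by running a frozen
    backbone over a handful of downstream images (feature extraction not shown).
    Returns a (num_prompts, hidden_dim) matrix for initializing the prompt tokens.
    """
    km = KMeans(n_clusters=num_prompts, n_init=10, random_state=0).fit(patch_tokens)
    return km.cluster_centers_.astype(patch_tokens.dtype)

# Example with random stand-in features; a real run would use ViT patch tokens.
prompt_init = init_prompts_from_prototypes(
    np.random.randn(4096, 768).astype(np.float32), num_prompts=16)
```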
- InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding [51.48361798508375]
We develop an information-theoretic framework that formulates soft prompt tuning as maximizing mutual information between prompts and other model parameters.
We show that InfoPrompt can significantly accelerate the convergence of the prompt tuning and outperform traditional prompt tuning methods.
arXiv Detail & Related papers (2023-06-08T04:31:48Z)
- Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models [107.05966685291067]
We propose test-time prompt tuning (TPT) to learn adaptive prompts on the fly with a single test sample.
TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average.
In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
arXiv Detail & Related papers (2022-09-15T17:55:11Z)
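The test-time prompt tuning entry above adapts prompts from a single test sample; a common way to instantiate this is to update only the prompt parameters by minimizing the entropy of the prediction averaged over augmented views of that sample. The sketch below assumes PyTorch and hypothetical `model(prompt, images)` and `augment(image, n)` callables, and is not necessarily TPT's exact objective.

```python
import torch

def test_time_prompt_tuning(model, prompt, image, augment, n_views=32, steps=1, lr=5e-3):
    """Adapt only the soft prompt on one test image (hedged sketch of a TPT-style loop)."""
    prompt = prompt.detach().clone().requires_grad_(True)   # only the prompt is updated
    optimizer = torch.optim.AdamW([prompt], lr=lr)
    views = augment(image, n_views)                          # (n_views, C, H, W)
    for _ in range(steps):
        logits = model(prompt, views)                        # (n_views, num_classes)
        probs = logits.softmax(dim=-1).mean(dim=0)           # average prediction over views
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
        optimizer.zero_grad()
        entropy.backward()                                   # gradient flows only into the prompt
        optimizer.step()
    return prompt.detach()
```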
- Instance-wise Prompt Tuning for Pretrained Language Models [72.74916121511662]
Instance-wise Prompt Tuning (IPT) is the first prompt learning paradigm that injects knowledge from the input data instances into the prompts.
IPT significantly outperforms task-based prompt learning methods, and achieves comparable performance to conventional finetuning with only 0.5% - 1.5% of tuned parameters.
arXiv Detail & Related papers (2022-06-04T10:08:50Z)
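One simple way to inject instance-level knowledge into prompts, in the spirit of the instance-wise prompt tuning entry above, is to generate the soft prompt from a representation of each input with a small trainable network while the backbone stays frozen. This is a hedged PyTorch sketch, not IPT's exact architecture.

```python
import torch
import torch.nn as nn

class InstancePromptGenerator(nn.Module):
    """Map an instance representation to an instance-specific soft prompt."""

    def __init__(self, hidden_dim=768, prompt_len=20):
        super().__init__()
        self.prompt_len = prompt_len
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, prompt_len * hidden_dim),
        )

    def forward(self, instance_repr):
        # instance_repr: (batch, hidden_dim), e.g. a mean-pooled frozen encoding of the input
        batch = instance_repr.size(0)
        prompts = self.net(instance_repr).view(batch, self.prompt_len, -1)
        return prompts  # prepended to the input token embeddings of the frozen PLM
```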
- Prompt Consistency for Zero-Shot Task Generalization [118.81196556175797]
In this paper, we explore methods to utilize unlabeled data to improve zero-shot performance.
Specifically, we take advantage of the fact that multiple prompts can be used to specify a single task, and propose to regularize prompt consistency.
Our approach outperforms the state-of-the-art zero-shot learner, T0, on 9 out of 11 datasets across 4 NLP tasks by up to 10.6 absolute points in terms of accuracy.
arXiv Detail & Related papers (2022-04-29T19:18:37Z)
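The prompt consistency entry above exploits the fact that several prompts describe the same task: on unlabeled inputs, predictions obtained under different prompts can be pushed to agree. Below is a hedged sketch of such a consistency term as a symmetric KL divergence in PyTorch; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def prompt_consistency_loss(logits_a, logits_b):
    """Symmetric KL between the answer distributions produced by two different
    prompts for the same unlabeled input (sketch of consistency regularization)."""
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)
```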
- ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization [15.28478657477945]
We propose ZeroPrompt for zero-shot generalization, focusing on task scaling and zero-shot prompting.
We show that task scaling can substantially improve training efficiency by 30 times in FLOPs.
We also present a prompting method that incorporates a genetic algorithm to automatically search for the best prompt for unseen tasks.
arXiv Detail & Related papers (2022-01-18T12:30:17Z)
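The ZeroPrompt entry above mentions a genetic algorithm for searching prompts for unseen tasks. A toy version of such a search over hard prompt strings is sketched below; `score` and `mutate` are hypothetical task-specific callables (e.g. dev-set accuracy and word-level edits), and this is not the paper's implementation.

```python
import random

def genetic_prompt_search(candidates, score, generations=10, population=20, keep=5, mutate=None):
    """Tiny genetic-algorithm loop over prompt strings (assumes at least `keep` candidates)."""
    pool = random.sample(candidates, min(population, len(candidates)))
    for _ in range(generations):
        ranked = sorted(pool, key=score, reverse=True)[:keep]    # selection
        children = []
        while len(children) < population - keep:
            a, b = random.sample(ranked, 2)
            cut = len(a.split()) // 2                            # crossover: splice two prompts
            child = " ".join(a.split()[:cut] + b.split()[cut:])
            if mutate is not None:
                child = mutate(child)                            # optional mutation step
            children.append(child)
        pool = ranked + children
    return max(pool, key=score)
```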
- PPT: Pre-trained Prompt Tuning for Few-shot Learning [47.05554619258627]
Prompts for pre-trained language models (PLMs) have shown remarkable performance by bridging the gap between pre-training tasks and various downstream tasks.
Among these methods, prompt tuning, which freezes PLMs and only tunes soft prompts, provides an efficient and effective solution for adapting large-scale PLMs to downstream tasks.
In our work, we find that prompt tuning performs comparably with conventional full-model fine-tuning when downstream data are sufficient, whereas it performs much worse under few-shot learning settings.
arXiv Detail & Related papers (2021-09-09T15:11:04Z)
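For reference, the prompt tuning setup this entry builds on freezes the PLM and trains only a small soft prompt prepended to the input embeddings. A minimal PyTorch sketch follows; the `inputs_embeds` keyword mimics HuggingFace-style models and the wiring is illustrative, not PPT's code.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Freeze the PLM and tune only a small soft prompt (plain prompt tuning sketch)."""

    def __init__(self, plm, embed, prompt_len=20):
        # plm:   frozen backbone accepting `inputs_embeds` (HuggingFace-style, assumed)
        # embed: the PLM's input embedding table (nn.Embedding)
        super().__init__()
        self.plm, self.embed = plm, embed
        for p in self.plm.parameters():
            p.requires_grad_(False)                    # the backbone stays frozen
        hidden = embed.embedding_dim
        # The only trainable parameters: prompt_len * hidden values.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, input_ids):
        tok = self.embed(input_ids)                               # (batch, seq, hidden)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        inputs_embeds = torch.cat([prompt, tok], dim=1)           # prepend the soft prompt
        return self.plm(inputs_embeds=inputs_embeds)
```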