Exploring Lottery Prompts for Pre-trained Language Models
- URL: http://arxiv.org/abs/2305.19500v1
- Date: Wed, 31 May 2023 02:17:04 GMT
- Title: Exploring Lottery Prompts for Pre-trained Language Models
- Authors: Yulin Chen, Ning Ding, Xiaobin Wang, Shengding Hu, Hai-Tao Zheng,
Zhiyuan Liu, Pengjun Xie
- Abstract summary: We explore instance-level prompts and their generalizability.
We find that for every instance, there is almost always a lottery prompt that induces the correct prediction from the PLM.
Some strong lottery prompts have high performance over the whole training set.
- Score: 46.66885465183664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistently scaling pre-trained language models (PLMs) imposes substantial
burdens on model adaptation, necessitating more efficient alternatives to
conventional fine-tuning. Given the advantage of prompting in the zero-shot
setting and the observed performance fluctuation among different prompts, we
explore instance-level prompts and their generalizability. By searching
through the prompt space, we first validate the assumption that for every
instance there is almost always a lottery prompt that induces the correct
prediction from the PLM, and that such a prompt can be obtained at low cost
thanks to the inherent abilities of PLMs. Meanwhile, we find that some
strong lottery
prompts have high performance over the whole training set, and they are
equipped with distinguishable linguistic features. Lastly, we attempt to
generalize the searched strong lottery prompts to unseen data with a prompt
ensembling method, without any parameter tuning. Experiments on various
types of NLP classification tasks demonstrate that the proposed method
achieves results comparable to other gradient-free and optimization-free
baselines.
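To make the procedure concrete, below is a minimal sketch of instance-level lottery-prompt search and prompt ensembling with an off-the-shelf masked LM. The three cloze templates, the sentiment verbalizer, the toy training examples, and the choice of bert-base-uncased are illustrative assumptions, not the paper's actual prompt space or experimental setup.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical cloze templates and label words (verbalizer);
# both label words are assumed to be single tokens in the vocabulary.
TEMPLATES = [
    "{x} It was {mask}.",
    "{x} The sentiment is {mask}.",
    "{x} All in all, a {mask} experience.",
]
VERBALIZER = {"negative": "terrible", "positive": "great"}
LABELS = list(VERBALIZER)
LABEL_IDS = torch.tensor(
    [tokenizer.convert_tokens_to_ids(VERBALIZER[label]) for label in LABELS]
)

def predict(template: str, text: str) -> str:
    """Fill the template, read the masked-LM logits at the mask position,
    and return the label whose verbalizer token scores highest."""
    prompt = template.format(x=text, mask=tokenizer.mask_token)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    return LABELS[int(logits[0, mask_pos, LABEL_IDS].argmax())]

# Toy labelled instances standing in for a training set.
train = [
    ("the movie was a delight from start to finish.", "positive"),
    ("a dull, lifeless script with nothing to say.", "negative"),
]

# 1) Lottery-prompt search: for each instance, scan the prompt space
#    until some prompt induces the correct prediction.
for text, gold in train:
    lottery = next((t for t in TEMPLATES if predict(t, text) == gold), None)
    print(f"{gold:>8}: lottery prompt -> {lottery}")

# 2) Strong prompts: rank templates by accuracy over the whole training
#    set, then ensemble the top ones on unseen data by majority vote.
def accuracy(template: str) -> float:
    return sum(predict(template, x) == y for x, y in train) / len(train)

strong = sorted(TEMPLATES, key=accuracy, reverse=True)[:2]

def ensemble_predict(text: str) -> str:
    votes = [predict(t, text) for t in strong]
    return max(set(votes), key=votes.count)

print(ensemble_predict("an absolute waste of two hours."))

In the paper's setting the prompt pool is far larger than three templates, which is what makes a lottery prompt almost always available per instance; the ensemble step then transfers the strong prompts to unseen data without updating any model parameters.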
Related papers
- Large Language Models Prompting With Episodic Memory [53.8690170372303]
We propose PrOmpting with Episodic Memory (POEM), a novel prompt optimization technique that is simple, efficient, and demonstrates strong generalization capabilities.
In the testing phase, we optimize the sequence of examples for each test query by selecting the sequence that yields the highest total rewards from the top-k most similar training examples in the episodic memory.
Our results show that POEM outperforms recent techniques like TEMPERA and RLPrompt by over 5.3% in various text classification tasks.
arXiv Detail & Related papers (2024-08-14T11:19:28Z)
- Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL [29.01858866450715]
We present RLPrompt, which aims to find optimal prompt tokens leveraging soft Q-learning.
While the results show promise, we have observed that the prompts frequently appear unnatural, which impedes their interpretability.
We address this limitation by using sparse Tsallis entropy regularization, a principled approach to filtering out unlikely tokens from consideration.
arXiv Detail & Related papers (2024-07-20T03:10:19Z)
- Approximated Prompt Tuning for Vision-Language Pre-trained Models [54.326232586461614]
In vision-language pre-trained models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks.
We propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning.
arXiv Detail & Related papers (2023-06-27T05:43:47Z)
- Fairness-guided Few-shot Prompting for Large Language Models [93.05624064699965]
In-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats.
We introduce a metric to evaluate the predictive bias of a fixed prompt against labels or given attributes.
We propose a novel search strategy based on greedy search to identify the near-optimal prompt for improving the performance of in-context learning.
arXiv Detail & Related papers (2023-03-23T12:28:25Z)
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked LMs (e.g., BERT) and left-to-right models (e.g., GPTs).
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods.
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
- Contrastive Demonstration Tuning for Pre-trained Language Models [59.90340768724675]
Demonstration examples are crucial to the final performance of prompt-tuning.
The proposed approach can be: (i) Plugged into any previous prompt-tuning approaches; (ii) Extended to widespread classification tasks with a large number of categories.
Experimental results on 16 datasets illustrate that our method integrated with previous approaches LM-BFF and P-tuning can yield better performance.
arXiv Detail & Related papers (2022-04-09T05:30:48Z)
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
arXiv Detail & Related papers (2021-08-24T09:39:35Z)