PTP: Boosting Stability and Performance of Prompt Tuning with
Perturbation-Based Regularizer
- URL: http://arxiv.org/abs/2305.02423v1
- Date: Wed, 3 May 2023 20:30:51 GMT
- Title: PTP: Boosting Stability and Performance of Prompt Tuning with
Perturbation-Based Regularizer
- Authors: Lichang Chen, Heng Huang, Minhao Cheng
- Abstract summary: We introduce perturbation-based regularizers, which can smooth the loss landscape, into prompt tuning.
We design two kinds of perturbation-based regularizers: random-noise-based and adversarial-based.
Our new algorithms improve the state-of-the-art prompt tuning methods by 1.94% and 2.34% on SuperGLUE and FewGLUE benchmarks, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies show that prompt tuning can better leverage the power of large
language models than fine-tuning on downstream natural language understanding
tasks. However, existing prompt tuning methods suffer from training
instability: the variance of scores under different random seeds is quite
large. To address this critical problem, we first visualize the loss landscape
of vanilla prompt tuning and find that it is precipitous, so a slight change
in the input data can cause a large fluctuation in the loss. This steepness is
an essential factor behind the instability of prompt tuning. Based on this
observation, we introduce perturbation-based regularizers, which can smooth
the loss landscape, into prompt tuning. We propose a new algorithm, called
Prompt Tuning with Perturbation-based regularizer (PTP), which not only
alleviates training instability dramatically but also boosts the performance
of prompt tuning. We design two kinds of perturbation-based regularizers:
random-noise-based and adversarial-based. In particular, our proposed
perturbations can be applied in both the text space and the embedding space.
Extensive experiments show the effectiveness of our proposed methods in
stabilizing training. Our new algorithms improve the state-of-the-art prompt
tuning methods by 1.94% and 2.34% on the SuperGLUE and FewGLUE benchmarks,
respectively.
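As a concrete illustration of the two regularizer families, a minimal sketch is given below: a clean loss plus a random-noise term and a one-step adversarial term, both computed on the input embeddings while only the prompt trains. The toy linear backbone, the perturbation sizes (0.1 and 0.05), and the 0.5 weights are illustrative assumptions, not the paper's reported setup.

```python
# Minimal sketch of perturbation-based regularizers for prompt tuning.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, n_prompt, n_classes = 32, 4, 3

backbone = torch.nn.Linear(d, n_classes)          # stands in for a frozen PLM
for p in backbone.parameters():
    p.requires_grad_(False)
prompt = torch.nn.Parameter(torch.randn(n_prompt, d) * 0.02)  # only trainable part
opt = torch.optim.Adam([prompt], lr=1e-2)

def forward(x_emb):
    # Prepend prompt embeddings, mean-pool, classify.
    h = torch.cat([prompt.expand(x_emb.size(0), -1, -1), x_emb], dim=1)
    return backbone(h.mean(dim=1))

x = torch.randn(8, 10, d)                          # fake input embeddings
y = torch.randint(0, n_classes, (8,))

for step in range(3):
    clean_loss = F.cross_entropy(forward(x), y)

    # (1) Random-noise regularizer: loss under Gaussian-perturbed embeddings.
    rand_loss = F.cross_entropy(forward(x + 0.1 * torch.randn_like(x)), y)

    # (2) Adversarial regularizer: one FGSM-style ascent step on the embeddings.
    delta = torch.zeros_like(x, requires_grad=True)
    grad, = torch.autograd.grad(F.cross_entropy(forward(x + delta), y), delta)
    adv_loss = F.cross_entropy(forward(x + 0.05 * grad.sign()), y)

    loss = clean_loss + 0.5 * rand_loss + 0.5 * adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss={loss.item():.4f}")
```

Only the prompt embeddings receive gradient updates here, mirroring the prompt-tuning setting in which the backbone stays fixed.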
Related papers
- Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
We present RLPrompt, which aims to find optimal prompt tokens leveraging soft Q-learning.
While the results show promise, we have observed that the prompts frequently appear unnatural, which impedes their interpretability.
We address this limitation by using sparse Tsallis entropy regularization, a principled approach to filtering out unlikely tokens from consideration.
arXiv Detail & Related papers (2024-07-20T03:10:19Z)
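For readers unfamiliar with the regularizer named in the entry above, here is a minimal sketch of a Tsallis-entropy (q = 2) term over token logits, the quantity that sparse-RL formulations use to induce sparse policies; the placeholder reward and the 0.1 coefficient are assumptions for illustration.

```python
# Sketch of a Tsallis-entropy regularizer over prompt-token logits.
import torch

def tsallis_entropy(logits: torch.Tensor, q: float = 2.0) -> torch.Tensor:
    """S_q(p) = (1 - sum_i p_i^q) / (q - 1); recovers Shannon entropy as q -> 1."""
    p = torch.softmax(logits, dim=-1)
    return (1.0 - (p ** q).sum(dim=-1)) / (q - 1.0)

logits = torch.randn(4, 50, requires_grad=True)   # 4 prompt slots, 50-token vocab
reward = torch.randn(4)                            # placeholder task reward per slot

# With q=2 this is the Gini-style entropy 1 - sum p_i^2; Tsallis-regularized
# RL objectives are known to yield sparsemax-like policies that assign exactly
# zero probability to unlikely tokens, which is the filtering effect described.
objective = (reward + 0.1 * tsallis_entropy(logits)).mean()
(-objective).backward()                            # gradient-ascent direction
print(tsallis_entropy(logits).detach())
```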
- Embedded Prompt Tuning: Towards Enhanced Calibration of Pretrained Models for Medical Images
We study the effectiveness of fine-tuning methods when adapting foundation models to medical image classification tasks.
We propose the Embedded Prompt Tuning (EPT) method by embedding prompt tokens into the expanded channels.
EPT outperforms several state-of-the-art finetuning methods by a significant margin on few-shot medical image classification tasks.
arXiv Detail & Related papers (2024-07-01T06:35:53Z)
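The EPT summary above gives only the high-level mechanism, so the sketch below is one plausible reading of it: learnable prompt parameters occupy extra channels appended to every token, and a projection folds them back to the original width. The shapes and the projection layer are assumptions, not the paper's verified architecture.

```python
# Loose sketch of embedding a prompt into expanded feature channels.
import torch

d, d_extra, n_tokens, batch = 64, 8, 16, 2
x = torch.randn(batch, n_tokens, d)                # feature tokens from a frozen model

prompt_channels = torch.nn.Parameter(torch.zeros(d_extra))  # learnable prompt
proj_back = torch.nn.Linear(d + d_extra, d)        # fold expanded channels back

expanded = torch.cat(
    [x, prompt_channels.expand(batch, n_tokens, d_extra)], dim=-1
)                                                  # (batch, n_tokens, d + d_extra)
out = proj_back(expanded)                          # (batch, n_tokens, d)
print(out.shape)
```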
- Attention Prompt Tuning: Parameter-efficient Adaptation of Pre-trained Models for Spatiotemporal Modeling
We introduce Attention Prompt Tuning (APT) for video-based applications such as action recognition.
APT involves injecting a set of learnable prompts along with data tokens during fine-tuning while keeping the backbone frozen.
The proposed approach greatly reduces the number of FLOPs and latency while achieving a significant performance boost.
arXiv Detail & Related papers (2024-03-11T17:59:41Z)
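A minimal sketch of the injection mechanism the APT entry describes: learnable prompt tokens are concatenated with data tokens while the backbone stays frozen. The toy two-layer encoder standing in for a video backbone is an assumption.

```python
# Sketch of prompt injection with a frozen transformer backbone.
import torch

d, n_patches, n_prompts, batch = 64, 49, 8, 2
layer = torch.nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
backbone = torch.nn.TransformerEncoder(layer, num_layers=2)
for p in backbone.parameters():
    p.requires_grad_(False)                        # backbone stays frozen

prompts = torch.nn.Parameter(torch.randn(n_prompts, d) * 0.02)  # learnable
tokens = torch.randn(batch, n_patches, d)          # patch/tube tokens from video

x = torch.cat([prompts.expand(batch, -1, -1), tokens], dim=1)
out = backbone(x)                                  # (batch, n_prompts + n_patches, d)
features = out[:, n_prompts:].mean(dim=1)          # pool data tokens for the head
print(features.shape)                              # gradients flow only to `prompts`
```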
- Sparse is Enough in Fine-tuning Pre-trained Large Language Models
We propose a gradient-based sparse fine-tuning algorithm named Sparse Increment Fine-Tuning (SIFT).
We validate its effectiveness on a range of tasks including the GLUE Benchmark and Instruction-tuning.
arXiv Detail & Related papers (2023-12-19T06:06:30Z)
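As a rough illustration of gradient-based sparse fine-tuning in the spirit of the SIFT entry, the sketch below masks each gradient to its largest-magnitude 1% before the optimizer step, so the weight increment stays sparse; the density value and the toy linear model are assumptions, not SIFT's exact procedure.

```python
# Sketch: keep only the top-k fraction of each gradient by magnitude.
import torch

model = torch.nn.Linear(128, 2)                    # stands in for a large LM
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
density = 0.01                                     # fraction of entries to update

x, y = torch.randn(16, 128), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

with torch.no_grad():
    for p in model.parameters():
        if p.grad is None:
            continue
        g = p.grad.abs().flatten()
        k = max(1, int(density * g.numel()))
        threshold = g.topk(k).values[-1]           # magnitude of the k-th largest
        p.grad.mul_((p.grad.abs() >= threshold).float())  # zero the rest
opt.step()                                         # sparse weight increment
```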
- Approximated Prompt Tuning for Vision-Language Pre-trained Models
In vision-language pre-trained models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks.
We propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning.
arXiv Detail & Related papers (2023-06-27T05:43:47Z)
- Prompt Tuning for Generative Multimodal Pretrained Models
We implement prompt tuning on a unified sequence-to-sequence pretrained model that adapts to both understanding and generation tasks.
Experimental results demonstrate that lightweight prompt tuning can achieve performance comparable to finetuning.
In comparison with finetuned models, the prompt-tuned models demonstrate improved robustness against adversarial attacks.
arXiv Detail & Related papers (2022-08-04T08:56:38Z)
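A minimal sketch of soft-prompt tuning on a frozen encoder-decoder, the general setup the entry above describes; the toy nn.Transformer stands in for the unified multimodal pretrained model and is an assumption.

```python
# Sketch: soft prompts prepended to the encoder input of a frozen seq2seq model.
import torch

d, vocab, n_prompts = 64, 100, 10
model = torch.nn.Transformer(d_model=d, nhead=4, num_encoder_layers=2,
                             num_decoder_layers=2, batch_first=True)
embed = torch.nn.Embedding(vocab, d)
for m in (model, embed):
    for p in m.parameters():
        p.requires_grad_(False)                    # all pretrained parts frozen

soft_prompt = torch.nn.Parameter(torch.randn(n_prompts, d) * 0.02)

src = embed(torch.randint(0, vocab, (2, 12)))      # source token embeddings
tgt = embed(torch.randint(0, vocab, (2, 6)))       # decoder input embeddings
src = torch.cat([soft_prompt.expand(2, -1, -1), src], dim=1)

out = model(src, tgt)                              # (2, 6, d); only the prompt trains
print(out.shape)
```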
- On Controller Tuning with Time-Varying Bayesian Optimization
We use time-varying Bayesian optimization (TVBO) to tune controllers online in changing environments, using appropriate prior knowledge about the control objective and its changes.
We propose a novel TVBO strategy using Uncertainty-Injection (UI), which incorporates the assumption of incremental and lasting changes.
Our model outperforms the state-of-the-art method in TVBO, exhibiting reduced regret and fewer unstable parameter configurations.
arXiv Detail & Related papers (2022-07-22T14:54:13Z)
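The uncertainty-injection idea can be illustrated with a tiny Gaussian-process sketch in which older observations receive extra noise variance, widening the posterior where the objective may have drifted; the RBF kernel and the linear noise-growth rate are assumptions, not the paper's exact model.

```python
# Sketch: GP posterior with time-dependent noise ("uncertainty injection").
import torch

def rbf(a, b, ls=0.3):
    return torch.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

x_obs = torch.tensor([0.1, 0.4, 0.7, 0.9])
y_obs = torch.sin(6 * x_obs)
age = torch.tensor([3.0, 2.0, 1.0, 0.0])           # steps since each observation

noise = 1e-4 + 0.05 * age                          # older data -> more uncertainty
K = rbf(x_obs, x_obs) + torch.diag(noise)
K_inv = torch.linalg.inv(K)

x_q = torch.linspace(0, 1, 5)
k_q = rbf(x_q, x_obs)
mean = k_q @ K_inv @ y_obs
var = 1.0 - (k_q @ K_inv * k_q).sum(dim=1)         # prior variance is 1 for RBF

ucb = mean + 2.0 * var.clamp_min(0).sqrt()         # acquisition favors stale regions
print(x_q[ucb.argmax()].item())                    # next controller setting to try
```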
- STT: Soft Template Tuning for Few-Shot Adaptation
We propose a new prompt-tuning framework called Soft Template Tuning (STT).
STT combines manual and auto prompts, and treats downstream classification tasks as a masked language modeling task.
It can even outperform the time- and resource-consuming fine-tuning method on sentiment classification tasks.
arXiv Detail & Related papers (2022-07-18T07:07:22Z)
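A minimal sketch of the soft-template idea from the STT entry: learnable soft tokens precede the input, a manual template supplies a [MASK] slot, and classification reads the masked-LM logits of verbalizer words. The toy encoder, the vocabulary ids, and the "great"/"terrible" verbalizer pair are assumptions for illustration.

```python
# Sketch: manual template + soft tokens, classification as masked LM.
import torch

d, vocab = 64, 1000
GREAT, TERRIBLE, MASK = 11, 12, 0                  # assumed vocabulary ids
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
embed = torch.nn.Embedding(vocab, d)
mlm_head = torch.nn.Linear(d, vocab)
for m in (encoder, embed, mlm_head):
    for p in m.parameters():
        p.requires_grad_(False)                    # pretrained parts frozen

soft = torch.nn.Parameter(torch.randn(4, d) * 0.02)     # auto (soft) prompt

sent = torch.randint(1, vocab, (1, 8))             # input sentence ids
manual = torch.tensor([[5, 6, MASK]])              # manual template "It was [MASK]"
x = torch.cat([soft.expand(1, -1, -1), embed(sent), embed(manual)], dim=1)

h = encoder(x)
logits = mlm_head(h[:, -1])                        # [MASK] is the last position
class_logits = logits[:, [GREAT, TERRIBLE]]        # verbalizer: positive/negative
print(class_logits.softmax(-1))
```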
- Latency Adjustable Transformer Encoder for Language Understanding
This paper proposes an efficient Transformer architecture that adaptively adjusts its inference computational cost to meet a desired inference latency speedup.
The proposed method detects less important hidden sequence elements (word-vectors) and eliminates them in each encoder layer using a proposed Attention Context Contribution (ACC) metric.
The proposed method, both analytically and experimentally, improves the inference latency of BERT_base and GPT-2 by up to 4.8x and 3.72x, respectively, with less than a 0.75% accuracy drop and passable perplexity on average.
arXiv Detail & Related papers (2022-01-10T13:04:39Z)
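The summary does not give the exact ACC formula, so the sketch below uses a plausible stand-in: score each word-vector by the attention mass it receives from the other tokens and eliminate the lowest-scoring tokens before the next encoder layer. The column-sum score and the keep-ratio are assumptions.

```python
# Sketch: attention-based elimination of less important word-vectors.
import torch

batch, n, d, keep = 1, 10, 64, 7
attn = torch.nn.MultiheadAttention(d, num_heads=4, batch_first=True)
x = torch.randn(batch, n, d)                       # word-vectors entering a layer

out, weights = attn(x, x, x, need_weights=True, average_attn_weights=True)
contribution = weights.sum(dim=1)                  # (batch, n): attention received

idx = contribution.topk(keep, dim=1).indices.sort(dim=1).values  # keep order
pruned = torch.gather(out, 1, idx.unsqueeze(-1).expand(-1, -1, d))
print(pruned.shape)                                # (1, 7, 64): 3 tokens eliminated
```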