OpenPrompt: An Open-source Framework for Prompt-learning
- URL: http://arxiv.org/abs/2111.01998v1
- Date: Wed, 3 Nov 2021 03:31:14 GMT
- Title: OpenPrompt: An Open-source Framework for Prompt-learning
- Authors: Ning Ding, Shengding Hu, Weilin Zhao, Yulin Chen, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun
- Abstract summary: We present OpenPrompt, a unified easy-to-use toolkit to conduct prompt-learning over PLMs.
OpenPrompt is a research-friendly framework that is equipped with efficiency, modularity, and extendibility.
- Score: 59.17869696803559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt-learning has become a new paradigm in modern natural language
processing, which directly adapts pre-trained language models (PLMs) to
cloze-style prediction, autoregressive modeling, or sequence-to-sequence
generation, resulting in promising performance on various tasks. However, no
standard implementation framework for prompt-learning has been proposed yet, and
most existing prompt-learning codebases, often unregulated, only provide limited
implementations for specific scenarios. Because prompt-learning involves many
details, such as the templating, initializing, and verbalizing strategies,
practitioners face impediments to quickly adapting the desired prompt-learning
methods to their applications. In this paper, we present OpenPrompt, a unified,
easy-to-use toolkit to conduct prompt-learning over PLMs. OpenPrompt is a
research-friendly framework equipped with efficiency, modularity, and
extendibility, and its combinability allows users to freely combine different
PLMs, task formats, and prompting modules in a unified paradigm. Users can
expediently deploy prompt-learning frameworks and evaluate their generalization
on different NLP tasks without constraints. OpenPrompt is publicly released at
https://github.com/thunlp/OpenPrompt.
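The abstract describes a modular pipeline in which a template, a verbalizer, and a PLM are combined into a single prompt-learning model. The snippet below is a minimal sketch of cloze-style sentiment classification written in the style of the examples in the OpenPrompt repository; the exact class names, template syntax, and call signatures may differ across OpenPrompt versions, so treat it as illustrative rather than definitive.

```python
import torch

from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

# Step 1: define the task as a set of classes and a few raw examples.
classes = ["negative", "positive"]
dataset = [
    InputExample(guid=0, text_a="The film was a complete waste of time."),
    InputExample(guid=1, text_a="An absolutely wonderful, heartfelt movie."),
]

# Step 2: load a PLM together with its tokenizer and tokenizer wrapper class.
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

# Step 3: a manual (hard) template that wraps each input into a cloze question.
template = ManualTemplate(
    text='{"placeholder":"text_a"} It was {"mask"}.',
    tokenizer=tokenizer,
)

# Step 4: a verbalizer mapping label words at the mask position to classes.
verbalizer = ManualVerbalizer(
    classes=classes,
    label_words={"negative": ["bad"], "positive": ["good", "wonderful", "great"]},
    tokenizer=tokenizer,
)

# Step 5: combine PLM, template, and verbalizer into one classification model.
prompt_model = PromptForClassification(
    plm=plm,
    template=template,
    verbalizer=verbalizer,
)

# Step 6: wrap the raw examples into model-ready batches.
data_loader = PromptDataLoader(
    dataset=dataset,
    tokenizer=tokenizer,
    template=template,
    tokenizer_wrapper_class=WrapperClass,
)

# Zero-shot inference: the class whose label words score highest wins.
prompt_model.eval()
with torch.no_grad():
    for batch in data_loader:
        logits = prompt_model(batch)
        for pred in torch.argmax(logits, dim=-1):
            print(classes[pred.item()])
```

Because each component is a separate object, swapping the manual template for a soft one, or BERT for T5 or GPT-2, only changes the corresponding step rather than the whole pipeline, which is the combinability the abstract refers to.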
Related papers
- Optimization of Prompt Learning via Multi-Knowledge Representation for Vision-Language Models [26.964848679914354]
CoKnow is a framework that enhances Prompt Learning for Vision-Language Models with rich contextual knowledge.
We conducted extensive experiments on 11 publicly available datasets, demonstrating that CoKnow outperforms a series of previous methods.
arXiv Detail & Related papers (2024-04-16T07:44:52Z)
- Efficient Prompting Methods for Large Language Models: A Survey [50.171011917404485]
Prompting has become a mainstream paradigm for adapting large language models (LLMs) to specific natural language processing tasks.
However, this approach brings an additional computational burden from model inference and extra human effort to guide and control the behavior of LLMs.
We present the basic concepts of prompting, review the advances for efficient prompting, and highlight future research directions.
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- A Prompt Learning Framework for Source Code Summarization [24.33455799484519]
We propose a novel prompt learning framework for code summarization called PromptCS.
PromptCS trains a prompt agent that can generate continuous prompts to unleash the potential of LLMs in code summarization.
We evaluate PromptCS on the CodeSearchNet dataset involving multiple programming languages.
arXiv Detail & Related papers (2023-12-26T14:37:55Z)
- On Meta-Prompting [18.949285430843695]
We call these approaches meta-prompting, or prompting to obtain prompts.
We propose a theoretical framework based on category theory to generalize and describe them.
arXiv Detail & Related papers (2023-12-11T17:46:44Z)
- Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment [15.180715595425864]
We introduce a novel method to improve the prompt learning of vision-language models by incorporating pre-trained large language models (LLMs).
With DuAl-PT, we propose to learn more context-aware prompts, benefiting from both explicit and implicit context modeling.
Empirically, DuAl-PT achieves superior performance on 11 downstream datasets on few-shot recognition and base-to-new generalization.
arXiv Detail & Related papers (2023-09-08T06:51:15Z)
- Effective Structured Prompting by Meta-Learning and Representative Verbalizer [27.64413828719264]
We propose MetaPrompter for effective structured prompting.
We also propose a novel soft verbalizer (RepVerb), which constructs label embeddings directly from feature embeddings.
Experimental results demonstrate that MetaPrompter performs better than recent state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T12:44:33Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement Learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Bayesian Prompt Learning for Image-Language Model Generalization [64.50204877434878]
We use the regularization ability of Bayesian methods to frame prompt learning as a variational inference problem.
Our approach regularizes the prompt space, reduces overfitting to seen prompts, and improves prompt generalization to unseen prompts.
We demonstrate empirically on 15 benchmarks that Bayesian prompt learning provides an appropriate coverage of the prompt space.
arXiv Detail & Related papers (2022-10-05T17:05:56Z)
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked LMs (e.g., BERT) and left-to-right models (e.g., GPTs).
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods.
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
- IDPG: An Instance-Dependent Prompt Generation Method [58.45110542003139]
Prompt tuning is a new, efficient NLP transfer learning paradigm that adds a task-specific prompt to each input instance during the model training stage.
We propose a conditional prompt generation method to generate prompts for each input instance (a generic sketch of the underlying soft-prompt idea follows below).
arXiv Detail & Related papers (2022-04-09T15:45:27Z)
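As a companion to the IDPG entry above, the sketch below illustrates the generic prompt-tuning idea it builds on: a small set of trainable prompt embeddings is prepended to the input embeddings while the PLM itself stays frozen. This is plain PyTorch plus Hugging Face Transformers, not IDPG's instance-dependent prompt generator, and the class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForMaskedLM, AutoTokenizer

class SoftPromptModel(nn.Module):
    """Prepends trainable soft-prompt embeddings to a frozen masked LM."""

    def __init__(self, model_name="bert-base-cased", n_prompt_tokens=20):
        super().__init__()
        self.plm = AutoModelForMaskedLM.from_pretrained(model_name)
        for p in self.plm.parameters():  # freeze the backbone PLM
            p.requires_grad = False
        hidden = self.plm.config.hidden_size
        # The only trainable parameters: one vector per soft-prompt token.
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        batch = input_ids.size(0)
        tok_emb = self.plm.get_input_embeddings()(input_ids)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(
            batch, prompt.size(1),
            dtype=attention_mask.dtype, device=attention_mask.device,
        )
        attention_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        return self.plm(inputs_embeds=inputs_embeds, attention_mask=attention_mask)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = SoftPromptModel()
enc = tokenizer("The movie was [MASK].", return_tensors="pt")
out = model(enc["input_ids"], enc["attention_mask"])
print(out.logits.shape)  # (1, n_prompt_tokens + sequence length, vocab size)
```

Training such a model only updates soft_prompt, which is what makes prompt tuning parameter-efficient; IDPG differs in that it generates a separate prompt for each input instance instead of sharing one across the task.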