PromptSource: An Integrated Development Environment and Repository for
Natural Language Prompts
- URL: http://arxiv.org/abs/2202.01279v1
- Date: Wed, 2 Feb 2022 20:48:54 GMT
- Title: PromptSource: An Integrated Development Environment and Repository for
Natural Language Prompts
- Authors: Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin
Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault
Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik
Ben-David, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Alan Fries, Maged S.
Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang,
Mike Tian-Jian Jiang, Alexander M. Rush
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PromptSource is a system for creating, sharing, and using natural language
prompts. Prompts are functions that map an example from a dataset to a natural
language input and target output. Using prompts to train and query language
models is an emerging area in NLP that requires new tools that let users
develop and refine these prompts collaboratively. PromptSource addresses the
emergent challenges in this new setting with (1) a templating language for
defining data-linked prompts, (2) an interface that lets users quickly iterate
on prompt development by observing outputs of their prompts on many examples,
and (3) a community-driven set of guidelines for contributing new prompts to a
common pool. Over 2,000 prompts for roughly 170 datasets are already available
in PromptSource. PromptSource is available at
https://github.com/bigscience-workshop/promptsource.
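As a concrete illustration of component (1), the templating language for data-linked prompts, the sketch below shows what a PromptSource-style prompt looks like and how prompts from the shared pool can be applied to dataset examples through the library's Python API. This is a minimal sketch: the template text and the prompt name "classify_question_first" are illustrative assumptions rather than entries guaranteed to be in the catalog.

    # PromptSource prompts are Jinja templates; the "|||" separator splits the
    # rendered string into a natural language input and a target. An
    # illustrative template for the ag_news topic-classification dataset:
    #
    #     What label best describes this news article?
    #     {{ text }} |||
    #     {{ answer_choices[label] }}

    # Applying prompts from the shared pool with the Python API
    # (assumes: pip install promptsource datasets).
    from datasets import load_dataset
    from promptsource.templates import DatasetTemplates

    ag_news_prompts = DatasetTemplates("ag_news")  # all contributed ag_news prompts
    print(ag_news_prompts.all_template_names)      # browse available prompt names

    prompt = ag_news_prompts["classify_question_first"]  # illustrative name
    example = load_dataset("ag_news", split="train")[0]
    input_text, target_text = prompt.apply(example)      # render (input, target)
    print("INPUT:", input_text)
    print("TARGET:", target_text)

Each applied prompt renders the (input, target) pair described in the abstract, which is what makes the stored templates directly usable for training and querying language models.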
Related papers
- Promptriever: Instruction-Trained Retrievers Can Be Prompted Like Language Models (2024-09-17)
  We present Promptriever, the first retrieval model that can be prompted like an LM. Promptriever achieves strong performance on standard retrieval tasks and also follows instructions.
- PromptSet: A Programmer's Prompting Dataset (2024-02-26)
  We present PromptSet, a novel dataset of more than 61,000 unique developer prompts used in open-source Python programs. We analyze this dataset and introduce the notion of a static linter for prompts.
- Language Prompt for Autonomous Driving (2023-09-08)
  We propose NuPrompt, the first object-centric language prompt set for driving scenes in 3D, multi-view, and multi-frame space. It expands the nuScenes dataset with 35,367 language descriptions, each referring to an average of 5.3 object tracks. Based on the object-text pairs in the new benchmark, we formulate a new prompt-based driving task, i.e., using a language prompt to predict the described object's trajectory across views and frames.
- Learning to Transfer Prompts for Text Generation (2022-05-03)
  We propose PTG, a novel prompt-based method for text generation in a transferable setting. PTG first learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks. In extensive experiments, PTG yields results competitive with or better than fine-tuning methods.
- Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt (2022-02-23)
  We propose UniPrompt, a novel model that uses a unified prompt for all languages. The unified prompt is computed by a multilingual PLM to produce a language-independent representation. Our proposed method significantly outperforms strong baselines across different languages.
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation (2022-01-21)
  We propose Context-Tuning, a novel continuous prompting approach for fine-tuning PLMs for natural language generation. First, the prompts are derived from the input text so that they can elicit useful knowledge from PLMs for generation. Second, to further enhance the relevance of the generated text to the input, we use continuous inverse prompting to refine the generation process.
This list is automatically generated from the titles and abstracts of the papers on this site.