Context-Tuning: Learning Contextualized Prompts for Natural Language
Generation
- URL: http://arxiv.org/abs/2201.08670v1
- Date: Fri, 21 Jan 2022 12:35:28 GMT
- Title: Context-Tuning: Learning Contextualized Prompts for Natural Language
Generation
- Authors: Tianyi Tang, Junyi Li, Wayne Xin Zhao
- Abstract summary: We propose a novel continuous prompting approach, called Context-Tuning, to fine-tune PLMs for natural language generation.
Firstly, the prompts are derived based on the input text, so that they can elicit useful knowledge from PLMs for generation.
Secondly, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the process of natural language generation.
- Score: 52.835877179365525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, pretrained language models (PLMs) have made exceptional success in
language generation. To leverage the rich knowledge encoded by PLMs, a simple
yet powerful mechanism is to use prompts, in the form of either discrete tokens
or continuous embeddings. In existing studies, manual prompts are
time-consuming and require domain expertise, while continuous prompts are
typically independent of the inputs. To address these issues, we propose a novel
continuous prompting approach, called Context-Tuning, to fine-tune PLMs for
natural language generation. Firstly, the prompts are derived based on the
input text, so that they can elicit useful knowledge from PLMs for generation.
We refer to such prompts as contextualized prompts. Secondly, to further
enhance the relevance of the generated text to the inputs, we utilize
continuous inverse prompting to refine the process of natural language
generation by modeling an inverse generation process from output to input.
Moreover, we propose a lightweight context-tuning variant that fine-tunes only
0.4% of the parameters while retaining good performance.
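
As a rough sketch of the contextualized-prompt idea, the code below derives continuous prompt embeddings from the input text with a BERT-style encoder and prepends them to a frozen GPT-2 generator. The model choices, the prompt length, and the projection layer are illustrative assumptions, not the authors' implementation, and the inverse-prompting refinement is omitted.

```python
# Minimal sketch of contextualized prompts (assumptions: BERT prompt encoder,
# frozen GPT-2 generator, prompt length 20); not the paper's actual code.
import torch
from transformers import AutoTokenizer, AutoModel, GPT2LMHeadModel

prompt_len = 20

enc_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
gen_tok = AutoTokenizer.from_pretrained("gpt2")

prompt_encoder = AutoModel.from_pretrained("bert-base-uncased")  # derives prompts from the input text
generator = GPT2LMHeadModel.from_pretrained("gpt2")              # pretrained generator, kept frozen

# Lightweight setting (assumption): freeze the PLM and train only the small
# projection layer and the prompt encoder.
for p in generator.parameters():
    p.requires_grad = False
project = torch.nn.Linear(prompt_encoder.config.hidden_size, generator.config.n_embd)

def contextualized_prompt(input_text: str) -> torch.Tensor:
    """Encode the input and map its hidden states to continuous prompt embeddings."""
    enc = enc_tok(input_text, return_tensors="pt", truncation=True,
                  max_length=prompt_len, padding="max_length")
    hidden = prompt_encoder(**enc).last_hidden_state          # (1, prompt_len, hidden_size)
    return project(hidden)                                     # (1, prompt_len, n_embd)

def training_loss(input_text: str, target_text: str) -> torch.Tensor:
    prompts = contextualized_prompt(input_text)
    tgt = gen_tok(target_text, return_tensors="pt")
    tgt_embeds = generator.get_input_embeddings()(tgt["input_ids"])
    inputs_embeds = torch.cat([prompts, tgt_embeds], dim=1)
    # Mask the prompt positions out of the language-modeling loss.
    labels = torch.cat([torch.full((1, prompts.size(1)), -100, dtype=torch.long),
                        tgt["input_ids"]], dim=1)
    return generator(inputs_embeds=inputs_embeds, labels=labels).loss

loss = training_loss("a rainy afternoon in the city",
                     "We stayed inside, made tea, and watched the streets empty out.")
loss.backward()  # gradients reach only the projection layer and the prompt encoder
```

Freezing the generator and training only the projection layer and prompt encoder mirrors the lightweight setting in spirit; the 0.4% figure in the abstract refers to the paper's own parameterization.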
Related papers
- IPO: Interpretable Prompt Optimization for Vision-Language Models [40.83071220530289]
This paper introduces a simple but interpretable prompt optimization approach (IPO).
IPO utilizes large language models (LLMs) to generate textual prompts dynamically.
We incorporate a large multimodal model (LMM) to condition on visual content by generating image descriptions.
arXiv Detail & Related papers (2024-10-20T14:10:22Z)
- Generative Context-aware Fine-tuning of Self-supervised Speech Models [54.389711404209415]
We study the use of context information generated by a generative large language model (LLM).
We propose an approach to distill the generated information during fine-tuning of self-supervised speech models.
We evaluate the proposed approach using the SLUE and Libri-light benchmarks for several downstream tasks: automatic speech recognition, named entity recognition, and sentiment analysis.
arXiv Detail & Related papers (2023-12-15T15:46:02Z)
- MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension [19.12663587559988]
We propose a multi-level prompt tuning (MPrompt) method for machine reading comprehension.
It utilizes prompts at task-specific, domain-specific, and context-specific levels to enhance the comprehension of input semantics.
We conducted extensive experiments on 12 benchmarks of various QA formats and achieved an average improvement of 1.94% over the state-of-the-art methods.
arXiv Detail & Related papers (2023-10-27T14:24:06Z)
- Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP [77.817293104436]
We propose a framework that relies on passing natural language texts in sophisticated pipelines between a language model (LM) and a retrieval model (RM).
We have written novel DSP programs for answering questions in open-domain, multi-hop, and conversational settings.
arXiv Detail & Related papers (2022-12-28T18:52:44Z)
- AdaPrompt: Adaptive Model Training for Prompt-based NLP [77.12071707955889]
We propose AdaPrompt, which adaptively retrieves external data for continual pretraining of PLMs.
Experimental results on five NLP benchmarks show that AdaPrompt can improve over standard PLMs in few-shot settings.
In zero-shot settings, our method outperforms standard prompt-based methods by up to 26.35% relative error reduction.
arXiv Detail & Related papers (2022-02-10T04:04:57Z)
- AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts [46.03503882865222]
AutoPrompt is an automated method to create prompts for a diverse set of tasks based on a gradient-guided search.
We show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or fine-tuning.
arXiv Detail & Related papers (2020-10-29T22:54:00Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z)
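
The progressive-insertion scheme summarized in the POINTER entry above can be pictured with a small, model-free loop. Here propose_token is a hypothetical stand-in for the insertion model's per-gap prediction (None means "no insertion"); it is not the paper's actual model.

```python
# Toy sketch of progressive, parallel token insertion between existing tokens.
from typing import List, Optional

def propose_token(left: str, right: str) -> Optional[str]:
    # Hypothetical stand-in: a trained insertion model would score the whole
    # vocabulary for this gap; here a tiny lookup table plays that role.
    canned = {("we", "dinner"): "cooked", ("cooked", "dinner"): "a", ("a", "dinner"): "nice"}
    return canned.get((left, right))

def pointer_generate(constraints: List[str], max_rounds: int = 5) -> List[str]:
    """Coarse-to-fine generation: each round proposes one token for every gap
    between adjacent tokens, in parallel, until the sequence stops changing."""
    tokens = list(constraints)
    for _ in range(max_rounds):
        insertions = [(i + 1, tok)
                      for i in range(len(tokens) - 1)
                      if (tok := propose_token(tokens[i], tokens[i + 1])) is not None]
        if not insertions:
            break
        for pos, tok in reversed(insertions):  # right-to-left keeps indices valid
            tokens.insert(pos, tok)
    return tokens

print(pointer_generate(["we", "dinner"]))
# ['we', 'cooked', 'a', 'nice', 'dinner'] after three insertion rounds
```

Each round fills every gap in parallel, and earlier rounds tend to place the more salient words, which is the coarse-to-fine behavior the entry describes.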