Novelty Controlled Paraphrase Generation with Retrieval Augmented Conditional Prompt Tuning
- URL: http://arxiv.org/abs/2202.00535v1
- Date: Tue, 1 Feb 2022 16:26:36 GMT
- Title: Novelty Controlled Paraphrase Generation with Retrieval Augmented Conditional Prompt Tuning
- Authors: Jishnu Ray Chowdhury, Yong Zhuang, Shuyi Wang
- Abstract summary: Paraphrase generation is a fundamental and long-standing task in natural language processing.
We propose Retrieval Augmented Prompt Tuning (RAPT) as a parameter-efficient method to adapt large pre-trained language models for paraphrase generation.
We also propose Novelty Conditioned RAPT (NC-RAPT) as a simple model-agnostic method of using specialized prompt tokens for controlled paraphrase generation.
- Score: 8.142947808507367
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Paraphrase generation is a fundamental and long-standing task in natural
language processing. In this paper, we concentrate on two contributions to the
task: (1) we propose Retrieval Augmented Prompt Tuning (RAPT) as a
parameter-efficient method to adapt large pre-trained language models for
paraphrase generation; (2) we propose Novelty Conditioned RAPT (NC-RAPT) as a
simple model-agnostic method of using specialized prompt tokens for controlled
paraphrase generation with varying levels of lexical novelty. By conducting
extensive experiments on four datasets, we demonstrate the effectiveness of the
proposed approaches for retaining the semantic content of the original text
while inducing lexical novelty in the generation.
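As an illustration of the idea behind NC-RAPT described above, the sketch below builds a retrieval-augmented prompt with a discrete novelty-level control token prepended to the source sentence. This is a minimal, hypothetical sketch based only on the abstract: the novelty measure, thresholds, token names, and prompt layout are assumptions rather than the paper's actual implementation, and the trainable soft-prompt component of prompt tuning is omitted.

```python
# Illustrative sketch of novelty-conditioned, retrieval-augmented prompt construction.
# All names and thresholds are assumptions for illustration, not the paper's method.

def lexical_novelty(source: str, paraphrase: str) -> float:
    """Fraction of paraphrase words absent from the source (a simple novelty proxy)."""
    src_words = set(source.lower().split())
    par_words = paraphrase.lower().split()
    if not par_words:
        return 0.0
    return sum(w not in src_words for w in par_words) / len(par_words)

def novelty_token(score: float) -> str:
    """Map a novelty score to a discrete control token (illustrative thresholds)."""
    if score < 0.33:
        return "<NOVELTY_LOW>"
    if score < 0.66:
        return "<NOVELTY_MEDIUM>"
    return "<NOVELTY_HIGH>"

def build_prompt(source: str, retrieved_pairs, target_level: str) -> str:
    """Prepend retrieved paraphrase examples and a novelty-level token to the input."""
    example_block = " ".join(
        f"Input: {s} Paraphrase: {p}" for s, p in retrieved_pairs
    )
    return f"{example_block} {target_level} Input: {source} Paraphrase:"

if __name__ == "__main__":
    retrieved = [("the movie was great", "the film was excellent")]
    print(build_prompt("the weather is nice today", retrieved, "<NOVELTY_HIGH>"))
```

In this reading, retrieved paraphrase pairs act as in-context evidence while the control token requests a target level of lexical novelty; during prompt tuning, only the prompt parameters would be updated, keeping the underlying language model frozen.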
Related papers
- Personalized Text Generation with Contrastive Activation Steering [63.60368120937822]
We propose a training-free framework that disentangles and represents personalized writing style as a vector.
Our framework achieves a significant 8% relative improvement in personalized generation while reducing storage requirements by a factor of 1700 compared with PEFT methods.
arXiv Detail & Related papers (2025-03-07T08:07:15Z)
- Harnessing the Plug-and-Play Controller by Prompting [12.705251690623495]
This paper introduces a novel method for flexible attribute control in text generation using pre-trained language models (PLMs).
The proposed approach aims to enhance the fluency of generated text by guiding the generation process with plug-and-play controllers (PPCs).
arXiv Detail & Related papers (2024-02-06T17:18:25Z)
- PIP: Parse-Instructed Prefix for Syntactically Controlled Paraphrase Generation [61.05254852400895]
Parse-Instructed Prefix (PIP) is a novel adaptation of prefix-tuning to tune large pre-trained language models.
In contrast to traditional fine-tuning methods for this task, PIP is a compute-efficient alternative with 10 times fewer learnable parameters.
arXiv Detail & Related papers (2023-05-26T07:42:38Z)
- GQE-PRF: Generative Query Expansion with Pseudo-Relevance Feedback [8.142861977776256]
We propose a novel approach which effectively integrates text generation models into PRF-based query expansion.
Our approach generates augmented query terms via neural text generation models conditioned on both the initial query and pseudo-relevance feedback.
We evaluate the performance of our approach on information retrieval tasks using two benchmark datasets.
arXiv Detail & Related papers (2021-08-13T01:09:02Z)
- SDA: Improving Text Generation with Self Data Augmentation [88.24594090105899]
We propose to improve the standard maximum likelihood estimation (MLE) paradigm by incorporating a self-imitation-learning phase for automatic data augmentation.
Unlike most existing sentence-level augmentation strategies, our method is more general and could be easily adapted to any MLE-based training procedure.
arXiv Detail & Related papers (2021-01-02T01:15:57Z)
- Improving Text Generation with Student-Forcing Optimal Transport [122.11881937642401]
We propose using optimal transport (OT) to match the sequences generated in training and testing modes.
An extension is also proposed to improve the OT learning, based on the structural and contextual information of the text sequences.
The effectiveness of the proposed method is validated on machine translation, text summarization, and text generation tasks.
arXiv Detail & Related papers (2020-10-12T19:42:25Z)
- Exemplar-Controllable Paraphrasing and Translation using Bitext [57.92051459102902]
We adapt models from prior work so that they can learn solely from bilingual text (bitext).
Our single proposed model can perform four tasks: controlled paraphrase generation in both languages and controlled machine translation in both language directions.
arXiv Detail & Related papers (2020-10-12T17:02:50Z)
- An Effective Contextual Language Modeling Framework for Speech Summarization with Augmented Features [13.97006782398121]
The Bidirectional Encoder Representations from Transformers (BERT) model has achieved record-breaking success on many natural language processing tasks.
We explore incorporating confidence scores into sentence representations to see whether this can help alleviate the negative effects caused by imperfect automatic speech recognition.
We validate the effectiveness of our proposed method on a benchmark dataset.
arXiv Detail & Related papers (2020-06-01T18:27:48Z)
- Improving Adversarial Text Generation by Modeling the Distant Future [155.83051741029732]
We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues.
We propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.
arXiv Detail & Related papers (2020-05-04T05:45:13Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z)
- Syntax-driven Iterative Expansion Language Models for Controllable Text Generation [2.578242050187029]
We propose a new paradigm for introducing a syntactic inductive bias into neural text generation.
Our experiments show that this paradigm is effective at text generation, with quality between that of LSTMs and Transformers, and comparable diversity.
arXiv Detail & Related papers (2020-04-05T14:29:40Z)