Control Prefixes for Text Generation
- URL: http://arxiv.org/abs/2110.08329v1
- Date: Fri, 15 Oct 2021 19:32:17 GMT
- Title: Control Prefixes for Text Generation
- Authors: Jordan Clive, Kris Cao, Marek Rei
- Abstract summary: We propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information in each prompt.
We present state-of-the-art results on several data-to-text datasets, including WebNLG.
- Score: 17.682443394199375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prompt learning methods adapt pre-trained language models to downstream
applications by using a task-specific prompt together with the input. Most of
the current work on prompt learning in text generation relies on a shared
dataset-level prompt for all examples in the dataset. We extend this approach
and propose a dynamic method, Control Prefixes, which allows for the inclusion
of conditional input-dependent information in each prompt. Control Prefixes is
at the intersection of prompt learning and controlled generation, empowering
the model to have finer-grained control during text generation. The method
incorporates attribute-level learnable representations into different layers of
a pre-trained transformer, allowing for the generated text to be guided in a
particular direction. We provide a systematic evaluation of the technique and
apply it to five datasets from the GEM benchmark for natural language
generation (NLG). We present state-of-the-art results on several data-to-text
datasets, including WebNLG.
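To make the mechanism concrete, the following is a minimal PyTorch-style sketch of how a shared task-level prefix and an attribute-level prefix could be assembled into per-layer key/value states for a frozen pre-trained transformer. All hyper-parameters, attribute labels, and names below are illustrative assumptions rather than the authors' released implementation, and the frozen model that would consume the prefixes is omitted.

```python
import torch
import torch.nn as nn

class ControlPrefixes(nn.Module):
    """Sketch of input-dependent prefixes (hypothetical hyper-parameters).

    A shared task-level prefix is concatenated with an attribute-level prefix
    selected per example; both are learnable key/value tensors prepended to
    every attention layer of a frozen pre-trained transformer.
    """

    def __init__(self, num_layers=12, num_heads=12, head_dim=64,
                 task_prefix_len=10, attr_prefix_len=5,
                 attributes=("seen_category", "unseen_category")):
        super().__init__()
        # Dimension of size 2 holds the key and value prefixes.
        task_shape = (num_layers, 2, num_heads, task_prefix_len, head_dim)
        attr_shape = (num_layers, 2, num_heads, attr_prefix_len, head_dim)
        self.task_prefix = nn.Parameter(torch.randn(task_shape) * 0.02)
        self.attr_prefixes = nn.ParameterDict({
            a: nn.Parameter(torch.randn(attr_shape) * 0.02) for a in attributes
        })

    def forward(self, attribute, batch_size):
        """Return per-layer (key, value) prefix pairs for one attribute label."""
        full = torch.cat([self.task_prefix, self.attr_prefixes[attribute]], dim=3)
        # One (key, value) pair per layer, each of shape
        # (batch, heads, prefix_len, head_dim), as in past_key_values-style caches.
        return [
            (full[layer, 0].unsqueeze(0).expand(batch_size, -1, -1, -1),
             full[layer, 1].unsqueeze(0).expand(batch_size, -1, -1, -1))
            for layer in range(full.size(0))
        ]

# Usage: the prefixes would be handed to a frozen encoder-decoder (e.g. T5/BART)
# as extra key/value attention states; only the prefix parameters are trained.
prefixes = ControlPrefixes()(attribute="unseen_category", batch_size=4)
```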
Related papers
- Controllable Text Generation in the Instruction-Tuning Era [3.310278632293704]
We find that prompting-based approaches outperform controllable text generation methods on most datasets and tasks.
We provide an algorithm that uses only a task dataset and a Large Language Model with in-context capabilities to automatically generate a constraint dataset.
arXiv Detail & Related papers (2024-05-02T17:24:30Z) - Plug and Play with Prompts: A Prompt Tuning Approach for Controlling Text Generation [16.49758711633611]
Large Language Models (LLMs) have shown exceptional language generation capabilities in response to text-based prompts.
In this work, we explore the use of Prompt Tuning to achieve controlled language generation (a minimal soft-prompt sketch appears after this related-papers list).
We demonstrate the efficacy of our method towards mitigating harmful, toxic, and biased text generated by language models.
arXiv Detail & Related papers (2024-04-08T01:54:28Z) - Harnessing the Plug-and-Play Controller by Prompting [12.705251690623495]
This paper introduces a novel method for flexible attribute control in text generation using pre-trained language models (PLMs).
The proposed approach aims to enhance the fluency of generated text by guiding the generation process with plug-and-play controllers (PPCs).
arXiv Detail & Related papers (2024-02-06T17:18:25Z) - Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we then decode the sampled vectors into the desired text sequences.
Experiments show that composing those operators within our approach manages to generate or edit high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z) - Learning to Transfer Prompts for Text Generation [97.64625999380425]
We propose a novel prompt-based method (PTG) for text generation in a transferable setting.
First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.
In extensive experiments, PTG yields competitive or better results than fine-tuning methods.
arXiv Detail & Related papers (2022-05-03T14:53:48Z) - A Survey of Pretrained Language Models Based Text Generation [97.64625999380425]
Text Generation aims to produce plausible and readable text in human language from input data.
Deep learning has greatly advanced this field through neural generation models, especially the paradigm of pretrained language models (PLMs).
Grounding text generation on PLMs is seen as a promising direction in both academia and industry.
arXiv Detail & Related papers (2022-01-14T01:44:58Z) - Pre-training Language Model Incorporating Domain-specific Heterogeneous Knowledge into A Unified Representation [49.89831914386982]
We propose a unified pre-trained language model (PLM) for all forms of text, including unstructured text, semi-structured text, and well-structured text.
Our approach outperforms plain-text pre-training while using only 1/4 of the data.
arXiv Detail & Related papers (2021-09-02T16:05:24Z) - Attribute Alignment: Controlling Text Generation from Pre-trained Language Models [46.19190007510232]
We propose a simple and flexible method for controlling text generation by aligning disentangled attribute representations.
In contrast to recent efforts on training a discriminator to perturb the token level distribution for an attribute, we use the same data to learn an alignment function to guide the pre-trained, non-controlled language model to generate texts with the target attribute without changing the original language model parameters.
arXiv Detail & Related papers (2021-03-20T01:51:32Z) - POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z) - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
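For contrast with the input-dependent prefixes sketched above, the Prompt Tuning approach referenced in the "Plug and Play with Prompts" entry relies on a single dataset-level soft prompt. The sketch below, with assumed dimensions and names, prepends one shared learned prompt to every example's input embeddings; it illustrates standard prompt tuning, not a reconstruction of that paper's control method.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Dataset-level soft prompt (standard prompt tuning); dimensions are assumed."""

    def __init__(self, prompt_len=20, embed_dim=768):
        super().__init__()
        # One shared learnable prompt for every example in the dataset.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) from a frozen LM's embedding layer.
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: the concatenated sequence is fed to the frozen LM; only `prompt` is trained.
out = SoftPrompt()(torch.zeros(2, 16, 768))  # -> shape (2, 36, 768)
```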
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.