A Plug-and-Play Method for Controlled Text Generation
- URL: http://arxiv.org/abs/2109.09707v1
- Date: Mon, 20 Sep 2021 17:27:03 GMT
- Title: A Plug-and-Play Method for Controlled Text Generation
- Authors: Damian Pascual, Beni Egressy, Clara Meister, Ryan Cotterell, Roger
Wattenhofer
- Abstract summary: We present a plug-and-play decoding method for controlled language generation that is so simple and intuitive, it can be described in a single sentence.
Despite the simplicity of this approach, we see it works incredibly well in practice.
- Score: 38.283313068622085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large pre-trained language models have repeatedly shown their ability to
produce fluent text. Yet even when starting from a prompt, generation can
continue in many plausible directions. Current decoding methods with the goal
of controlling generation, e.g., to ensure specific words are included, either
require additional models or fine-tuning, or work poorly when the task at hand
is semantically unconstrained, e.g., story generation. In this work, we present
a plug-and-play decoding method for controlled language generation that is so
simple and intuitive, it can be described in a single sentence: given a topic
or keyword, we add a shift to the probability distribution over our vocabulary
towards semantically similar words. We show how annealing this distribution can
be used to impose hard constraints on language generation, something no other
plug-and-play method is currently able to do with SOTA language generators.
Despite the simplicity of this approach, we see it works incredibly well in
practice: decoding from GPT-2 leads to diverse and fluent sentences while
guaranteeing the appearance of given guide words. We perform two user studies,
revealing that (1) our method outperforms competing methods in human
evaluations; and (2) forcing the guide words to appear in the generated text
has no impact on the fluency of the generated text.
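The abstract only describes the decoding rule at a high level. Below is a minimal sketch of that idea, not the authors' implementation: it assumes GPT-2 via the Hugging Face transformers library, greedy decoding, GPT-2's own input embeddings as the source of semantic similarity, and an illustrative linear annealing schedule. The names and values `base_shift` and `anneal_rate` are hypothetical, not taken from the paper.

```python
# Sketch of similarity-shifted decoding with annealing (assumptions noted above).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

guide_word = " ocean"              # leading space matters for GPT-2's BPE vocabulary
guide_id = tokenizer.encode(guide_word)[0]  # use the first sub-token as the guide token

# Semantic-similarity proxy: cosine similarity between the guide token's input
# embedding and every vocabulary embedding. Reusing GPT-2's own embedding matrix
# here is a simplifying assumption for the sketch.
emb = model.transformer.wte.weight.detach()          # (vocab_size, hidden_dim)
sims = torch.nn.functional.cosine_similarity(
    emb, emb[guide_id].unsqueeze(0), dim=-1)          # (vocab_size,)

def generate(prompt, max_steps=30, base_shift=2.0, anneal_rate=0.3):
    ids = tokenizer.encode(prompt, return_tensors="pt")
    satisfied = False  # becomes True once the guide word has been generated
    for step in range(max_steps):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        if not satisfied:
            # Annealed shift: the longer the guide word has not appeared, the
            # harder the distribution is pushed toward it and its neighbours,
            # which is what turns the soft topic bias into a hard constraint.
            logits = logits + (base_shift + anneal_rate * step) * sims
        next_id = torch.argmax(logits).view(1, 1)
        ids = torch.cat([ids, next_id], dim=1)
        if next_id.item() == guide_id:
            satisfied = True  # constraint met; decode normally from here on
    return tokenizer.decode(ids[0])

print(generate("The weather today is"))
```

Because the shift grows with the number of steps, the guide token's probability eventually dominates, which is the mechanism the abstract refers to when it says annealing imposes a hard constraint.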
Related papers
- DECIDER: A Dual-System Rule-Controllable Decoding Framework for Language Generation [57.07295906718989]
Constrained decoding approaches aim to control the meaning or style of text generated by a Pre-trained Language Model (PLM) using specific target words during inference.
We propose a novel decoding framework, DECIDER, which enables us to program rules on how we complete tasks to control a PLM.
arXiv Detail & Related papers (2024-03-04T11:49:08Z)
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually-grounded text perturbation methods like typos and word order shuffling, resonating with human cognitive patterns, and enabling perturbation to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- TextDiffuser-2: Unleashing the Power of Language Models for Text Rendering [118.30923824681642]
TextDiffuser-2 aims to unleash the power of language models for text rendering.
We utilize the language model within the diffusion model to encode the position and texts at the line level.
We conduct extensive experiments and incorporate user studies involving human participants as well as GPT-4V.
arXiv Detail & Related papers (2023-11-28T04:02:40Z)
- Most Language Models can be Poets too: An AI Writing Assistant and Constrained Text Generation Studio [0.5097809301149341]
We find that most language models generate compelling text even under significant constraints.
We present a technique for modifying the output of a language model by compositionally applying filter functions to the language model's vocabulary.
We also present Gadsby, a Hugging Face Spaces web app that demonstrates this technique.
arXiv Detail & Related papers (2023-06-28T05:10:51Z)
- Collocation2Text: Controllable Text Generation from Guide Phrases in Russian [0.0]
Collocation2Text is a plug-and-play method for automatic controllable text generation in Russian.
The method is based on two interacting models: the autoregressive language ruGPT-3 model and the autoencoding language ruRoBERTa model.
Experiments on generating news articles with the proposed method demonstrate its effectiveness at automatically producing fluent text.
arXiv Detail & Related papers (2022-06-18T17:10:08Z)
- Language modeling via stochastic processes [30.796382023812022]
Modern language models can generate high-quality short texts, but often meander or are incoherent when generating longer texts.
Recent work in self-supervised learning suggests that models can learn good latent representations via contrastive learning.
We propose one approach for leveraging contrastive representations, which we call Time Control.
arXiv Detail & Related papers (2022-03-21T22:13:53Z)
- Controllable Natural Language Generation with Contrastive Prefixes [120.12778570283956]
GPT2 generation utilizes a set of small attribute-specific vectors, called prefixes, to steer natural language generation.
We propose a novel supervised method and also an unsupervised method to train the prefixes for single-aspect control.
Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation towards the desired attributes while keeping high linguistic quality.
arXiv Detail & Related papers (2022-02-27T00:31:03Z)
- Directed Beam Search: Plug-and-Play Lexically Constrained Language Generation [6.2211479935811775]
State-of-the-art language models are too large to be trained from scratch in a manageable time.
We propose Directed Beam Search (DBS), a plug-and-play method for lexically constrained language generation.
arXiv Detail & Related papers (2020-12-31T03:05:44Z)
- Facts2Story: Controlling Text Generation by Key Facts [0.0]
We propose a controlled generation task based on expanding a sequence of facts, expressed in natural language, into a longer narrative.
We show that while auto-regressive, unidirectional Language Models such as GPT2 produce better fluency, they struggle to adhere to the requested facts.
We propose a plan-and-cloze model (using fine-tuned XLNet) which produces competitive fluency while adhering to the requested content.
arXiv Detail & Related papers (2020-12-08T10:14:29Z)
- POINTER: Constrained Progressive Text Generation via Insertion-based Generative Pre-training [93.79766670391618]
We present POINTER, a novel insertion-based approach for hard-constrained text generation.
The proposed method operates by progressively inserting new tokens between existing tokens in a parallel manner.
The resulting coarse-to-fine hierarchy makes the generation process intuitive and interpretable.
arXiv Detail & Related papers (2020-05-01T18:11:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences arising from its use.