DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for
Controllable Text Generation
- URL: http://arxiv.org/abs/2210.09551v1
- Date: Tue, 18 Oct 2022 02:59:06 GMT
- Title: DisCup: Discriminator Cooperative Unlikelihood Prompt-tuning for
Controllable Text Generation
- Authors: Hanqing Zhang and Dawei Song
- Abstract summary: We propose a new CTG approach, namely DisCup, which incorporates the attribute knowledge of a discriminator to optimize the control-prompts.
DisCup can achieve new state-of-the-art control performance while maintaining efficient, high-quality text generation, relying on only around 10 virtual tokens.
- Score: 6.844825905212349
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Prompt learning with immensely large Causal Language Models (CLMs) has
shown promise for attribute-controllable text generation (CTG). However,
vanilla prompt tuning tends to imitate training corpus characteristics beyond
the control attributes, resulting in a poor generalization ability. Moreover,
it is less able to capture the relationship between different attributes,
further limiting the control performance. In this paper, we propose a new CTG
approach, namely DisCup, which incorporates the attribute knowledge of a
discriminator to optimize the control-prompts, steering a frozen CLM to produce
attribute-specific texts. Specifically, the frozen CLM, capable of producing a
wide variety of texts, is first used to generate next-token candidates based on
the context, ensuring the diversity of the tokens to be
predicted. Then, we leverage an attribute-discriminator to select
desired/undesired tokens from those candidates, providing inter-attribute
knowledge. Finally, we bridge these two signals via an unlikelihood objective
for prompt-tuning. Extensive experimental results show that DisCup can achieve
a new state-of-the-art control performance while maintaining efficient,
high-quality text generation, relying on only around 10 virtual tokens.
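To make the pipeline concrete, here is a rough, hypothetical sketch of a single DisCup-style training step. It assumes `clm` behaves like a Hugging Face causal LM (accepts `inputs_embeds` and exposes `.logits`), `embed` is its token-embedding layer, and `discriminator` is any callable that scores candidate tokens for the target attribute; the names, shapes, and candidate-selection heuristic are illustrative, not the authors' implementation.

```python
# Sketch of DisCup-style unlikelihood prompt-tuning (assumptions noted above).
import torch
import torch.nn.functional as F

hidden, prompt_len, k = 768, 10, 50

# Trainable control-prompt: roughly 10 virtual tokens, as in the paper.
control_prompt = torch.nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

def discup_step(clm, embed, discriminator, input_ids, target_attr):
    """One hypothetical training step on a batch of contexts (batch, seq)."""
    # 1) The frozen CLM proposes diverse next-token candidates for each context.
    with torch.no_grad():
        base_logits = clm(inputs_embeds=embed(input_ids)).logits[:, -1, :]
        _, cand_ids = base_logits.topk(k, dim=-1)                       # (batch, k)
        # 2) The attribute discriminator scores the candidates, splitting them
        #    into desired / undesired tokens (a simple heuristic split here).
        attr_scores = discriminator(input_ids, cand_ids, target_attr)   # (batch, k)
        desired = attr_scores.argmax(dim=-1)                            # best candidate
        undesired = attr_scores < attr_scores.median(dim=-1, keepdim=True).values

    # 3) Prompt-conditioned distribution: prepend the soft prompt and rescore.
    prompt = control_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    steered = clm(inputs_embeds=torch.cat([prompt, embed(input_ids)], dim=1)).logits[:, -1, :]
    log_p = F.log_softmax(steered, dim=-1)
    cand_log_p = log_p.gather(-1, cand_ids)                             # (batch, k)

    # Likelihood term pulls probability toward the desired candidate ...
    like = -cand_log_p.gather(-1, desired.unsqueeze(-1)).mean()
    # ... and the unlikelihood term, -log(1 - p(token)), pushes it away from
    # the candidates the discriminator rejects.
    p_bad = cand_log_p.exp().clamp(max=1 - 1e-6)
    unlike = -(torch.log1p(-p_bad) * undesired).sum() / undesired.sum().clamp(min=1)
    return like + unlike
```

Only `control_prompt` would receive gradients in such a setup, which is what keeps the approach down to roughly 10 trainable virtual tokens while the CLM itself stays frozen.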
Related papers
- SEP: Self-Enhanced Prompt Tuning for Visual-Language Model [93.94454894142413]
We introduce a novel approach named Self-Enhanced Prompt Tuning (SEP).
SEP explicitly incorporates discriminative prior knowledge to enhance both textual-level and visual-level embeddings.
Comprehensive evaluations across various benchmarks and tasks confirm SEP's efficacy in prompt tuning.
arXiv Detail & Related papers (2024-05-24T13:35:56Z)
- Successor Features for Efficient Multisubject Controlled Text Generation [48.37713738712319]
We introduce SF-GEN, which is grounded in two primary concepts: successor features (SFs) and language model rectification.
SF-GEN seamlessly integrates the two to enable dynamic steering of text generation with no need to alter the LLM's parameters.
To the best of our knowledge, our research represents the first application of successor features in text generation.
arXiv Detail & Related papers (2023-11-03T00:17:08Z)
- Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation [59.37587762543934]
This paper studies the problem of weakly open-vocabulary semantic segmentation (WOVSS).
Existing methods suffer from a granularity inconsistency regarding the usage of group tokens.
We propose the prototypical guidance network (PGSeg) that incorporates multi-modal regularization.
arXiv Detail & Related papers (2023-10-29T13:18:00Z)
- Controllable Data Augmentation for Few-Shot Text Mining with Chain-of-Thought Attribute Manipulation [35.33340453046864]
Chain-of-Thought Attribute Manipulation (CoTAM) is a novel approach that generates new data from existing examples.
We leverage chain-of-thought prompting to directly edit the text in three steps: (1) attribute decomposition, (2) manipulation proposal, and (3) sentence reconstruction.
arXiv Detail & Related papers (2023-07-14T00:10:03Z)
- Scalable Learning of Latent Language Structure With Logical Offline Cycle Consistency [71.42261918225773]
Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text.
As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model.
arXiv Detail & Related papers (2023-05-31T16:47:20Z)
- FAST: Improving Controllability for Text Generation with Feedback Aware Self-Training [25.75982440355576]
Controllable text generation systems often leverage control codes to direct various properties of the output like style and length.
Inspired by recent work on causal inference for NLP, this paper reveals a previously overlooked flaw in these control code-based conditional text generation algorithms.
We propose two simple techniques to reduce such spurious correlations in training sets.
arXiv Detail & Related papers (2022-10-06T19:00:51Z)
- Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaptation, we then decode the sampled vectors into desired text sequences.
Experiments show that composing those operators within our approach manages to generate or edit high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z)
- Tailor: A Prompt-Based Approach to Attribute-Based Controlled Text Generation [47.09041767447308]
Attribute-based Controlled Text Generation (CTG) refers to generating sentences that satisfy desirable attributes.
We propose Tailor, which represents each attribute as a pre-trained continuous vector (i.e., a single-attribute prompt) and guides the generation of a fixed PLM to switch to a pre-specified attribute (a minimal sketch of this prompt-tuning mechanism appears after this list).
Experiments on 11 attribute-specific generation tasks demonstrate the strong performance of Tailor on both single-attribute and multi-attribute CTG, with only 0.08% of the training parameters of GPT-2.
arXiv Detail & Related papers (2022-04-28T09:09:45Z)
- Attribute Alignment: Controlling Text Generation from Pre-trained Language Models [46.19190007510232]
We propose a simple and flexible method for controlling text generation by aligning disentangled attribute representations.
In contrast to recent efforts on training a discriminator to perturb the token level distribution for an attribute, we use the same data to learn an alignment function to guide the pre-trained, non-controlled language model to generate texts with the target attribute without changing the original language model parameters.
arXiv Detail & Related papers (2021-03-20T01:51:32Z)
- Control, Generate, Augment: A Scalable Framework for Multi-Attribute Text Generation [22.70189685469752]
We introduce CGA, a conditional VAE architecture, to control, generate, and augment text.
We show the value of the individual model components in an ablation study.
We show high quality, diversity and attribute control in the generated sentences through a series of automatic and human assessments.
arXiv Detail & Related papers (2020-04-30T17:31:16Z)
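Tailor above and DisCup itself share the same underlying mechanism: a frozen pre-trained causal LM is steered by a small, trainable continuous prompt. The following is a minimal, hypothetical sketch of that mechanism using the Hugging Face `transformers` GPT-2 classes; the prompt length, learning rate, and example sentence are illustrative placeholders, not values taken from either paper.

```python
# Minimal sketch of soft-prompt tuning on a frozen causal LM (assumptions above).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
for p in model.parameters():                 # freeze the PLM: only the prompt is trained
    p.requires_grad_(False)

prompt_len = 10                              # ~10 virtual tokens, as in DisCup
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, model.config.n_embd) * 0.02)
optimizer = torch.optim.AdamW([soft_prompt], lr=5e-4)

def train_step(text: str) -> float:
    """One gradient step on a single attribute-specific training sentence."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"]          # (1, T)
    embeds = model.transformer.wte(ids)                              # (1, T, H)
    inputs = torch.cat([soft_prompt.unsqueeze(0), embeds], dim=1)    # (1, P+T, H)
    # Label prompt positions with -100 so they are ignored by the LM loss.
    labels = torch.cat([torch.full((1, prompt_len), -100), ids], dim=1)
    loss = model(inputs_embeds=inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Example: nudging the prompt toward a positive-sentiment attribute.
print(train_step("What a wonderful, uplifting day this has been!"))
```

At generation time, the same soft prompt would be prepended (again via `inputs_embeds`) before decoding, so the PLM's own weights never change.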
This list is automatically generated from the titles and abstracts of the papers on this site.