LiFi: Lightweight Controlled Text Generation with Fine-Grained Control Codes
- URL: http://arxiv.org/abs/2402.06930v1
- Date: Sat, 10 Feb 2024 11:53:48 GMT
- Title: LiFi: Lightweight Controlled Text Generation with Fine-Grained Control Codes
- Authors: Chufan Shi, Deng Cai, Yujiu Yang
- Abstract summary: We present LIFI, which offers a lightweight approach with fine-grained control for controlled text generation.
We evaluate LIFI on two conventional tasks -- sentiment control and topic control -- and one newly proposed task -- stylistic novel writing.
- Score: 46.74968005604948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the rapidly evolving field of text generation, the demand for more precise
control mechanisms has become increasingly apparent. To address this need, we
present a novel methodology, LIFI, which offers a lightweight approach with
fine-grained control for controlled text generation. Unlike previous studies
that train pre-trained language models to follow discrete, categorical, and
exclusive control codes, LIFI learns controlled text generation under the
guidance of continuous, relative, and nonexclusive control codes. These
fine-grained codes are automatically derived from an attribute classifier,
initially trained with a small amount of labeled data and subsequently employed
to label abundant unlabeled data, thus garnering more extensive supervision
signals. Moreover, to achieve efficient control, we incorporate the
fine-grained control codes with adapters, a parameter- and compute-efficient
way to steer a pre-trained language model. We evaluate LIFI on two conventional
tasks -- sentiment control and topic control -- and one newly proposed task --
stylistic novel writing. Comprehensive experimental results validate the
effectiveness of our proposed methods, demonstrating substantial performance
improvements over existing baselines.
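The abstract describes a two-stage recipe: an attribute classifier trained on a small labeled set assigns continuous, fine-grained control codes to abundant unlabeled text, and those codes then condition a frozen pre-trained language model through lightweight adapters. The following is a minimal, hypothetical PyTorch sketch of that recipe, not the authors' implementation; all module names, dimensions, and variables below are assumptions for illustration only.

```python
# Illustrative sketch (not the LIFI authors' code) of:
# (1) an attribute classifier producing continuous control codes, and
# (2) a bottleneck adapter that conditions frozen LM activations on those codes.
import torch
import torch.nn as nn


class AttributeClassifier(nn.Module):
    """Scores a sentence embedding with a continuous attribute value in [0, 1]."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, sent_emb: torch.Tensor) -> torch.Tensor:
        # Continuous, relative, non-exclusive control code (e.g. "how positive").
        return torch.sigmoid(self.scorer(sent_emb)).squeeze(-1)


class ControlAdapter(nn.Module):
    """Bottleneck adapter whose residual output is modulated by the control code."""

    def __init__(self, hidden_dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_dim)
        # Maps the scalar control code to a per-feature scale for the adapter output.
        self.code_proj = nn.Linear(1, hidden_dim)

    def forward(self, hidden: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_dim); code: (batch,)
        delta = self.up(torch.relu(self.down(hidden)))
        scale = torch.sigmoid(self.code_proj(code.view(-1, 1))).unsqueeze(1)
        # Residual connection keeps the frozen LM's original signal intact.
        return hidden + scale * delta


# Pseudo-labeling step: the classifier (trained on a small labeled corpus)
# assigns continuous codes to unlabeled text, which then supervise the adapter.
hidden_dim = 768
classifier = AttributeClassifier(hidden_dim)
adapter = ControlAdapter(hidden_dim)

unlabeled_sentence_embs = torch.randn(4, hidden_dim)   # stand-in for encoder output
codes = classifier(unlabeled_sentence_embs)            # fine-grained control codes
lm_hidden_states = torch.randn(4, 16, hidden_dim)      # stand-in for frozen LM activations
steered = adapter(lm_hidden_states, codes)             # adapter-conditioned states
print(codes.shape, steered.shape)                      # torch.Size([4]) torch.Size([4, 16, 768])
```

Only the adapter and classifier parameters would be trained under this setup, which matches the abstract's emphasis on parameter- and compute-efficient steering of a frozen pre-trained model.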
Related papers
- Reinforcement Learning with Token-level Feedback for Controllable Text Generation [16.117006822479407]
We propose a novel reinforcement learning algorithm named TOLE which formulates TOken-LEvel rewards for controllable text generation.
Experimental results show that our algorithm can achieve superior performance on both single-attribute and multi-attribute control tasks.
arXiv Detail & Related papers (2024-03-18T08:18:37Z)
- Text2Data: Low-Resource Data Generation with Textual Control [104.38011760992637]
Natural language serves as a common and straightforward control signal for humans to interact seamlessly with machines.
We propose Text2Data, a novel approach that utilizes unlabeled data to understand the underlying data distribution through an unsupervised diffusion model.
It undergoes controllable finetuning via a novel constraint optimization-based learning objective that ensures controllability and effectively counteracts catastrophic forgetting.
arXiv Detail & Related papers (2024-02-08T03:41:39Z)
- Fine-grained Controllable Video Generation via Object Appearance and Context [74.23066823064575]
We propose fine-grained controllable video generation (FACTOR) to achieve detailed control.
FACTOR aims to control objects' appearances and context, including their location and category.
Our method achieves controllability of object appearances without finetuning, which reduces the per-subject optimization efforts for the users.
arXiv Detail & Related papers (2023-12-05T17:47:33Z)
- Successor Features for Efficient Multisubject Controlled Text Generation [48.37713738712319]
We introduce SF-GEN, which is grounded in two primary concepts: successor features (SFs) and language model rectification.
SF-GEN seamlessly integrates the two to enable dynamic steering of text generation with no need to alter the LLM's parameters.
To the best of our knowledge, our research represents the first application of successor features in text generation.
arXiv Detail & Related papers (2023-11-03T00:17:08Z)
- Controllable Text Generation with Residual Memory Transformer [4.9329649616940205]
We propose a non-intrusive, lightweight control plugin that accompanies the generation of a causal language model (CLM) at arbitrary time steps.
The proposed plugin, named the Residual Memory Transformer (RMT), has an encoder-decoder setup that can accept any type of control condition.
Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations.
arXiv Detail & Related papers (2023-09-28T08:13:33Z)
- FAST: Improving Controllability for Text Generation with Feedback Aware Self-Training [25.75982440355576]
Controllable text generation systems often leverage control codes to direct various properties of the output like style and length.
Inspired by recent work on causal inference for NLP, this paper reveals a previously overlooked flaw in these control code-based conditional text generation algorithms.
We propose two simple techniques to reduce such spurious correlations in training sets.
arXiv Detail & Related papers (2022-10-06T19:00:51Z)
- Classifiers are Better Experts for Controllable Text Generation [63.17266060165098]
We show that the proposed method significantly outperforms the recent PPLM, GeDi, and DExperts approaches in perplexity (PPL) and in the sentiment accuracy of generated texts as judged by an external classifier (a minimal sketch of classifier-guided decoding appears after this list).
At the same time, it is also easier to implement and tune, and has significantly fewer restrictions and requirements.
arXiv Detail & Related papers (2022-05-15T12:58:35Z)
- Control Prefixes for Text Generation [17.682443394199375]
We propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information in each prompt.
We present state-of-the-art results on several data-to-text datasets, including WebNLG.
arXiv Detail & Related papers (2021-10-15T19:32:17Z)
- SideControl: Controlled Open-domain Dialogue Generation via Additive Side Networks [10.607177634432214]
We propose a novel approach to control the generation of Transformer-based pre-trained language models: the SideControl framework.
Results show that the SideControl framework has better controllability, higher generation quality and better sample-efficiency than existing gradient-based and weighted-decoding baselines.
arXiv Detail & Related papers (2021-09-05T01:15:26Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
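Several of the entries above (most directly "Classifiers are Better Experts") rely on the general pattern of classifier-guided decoding: an external attribute classifier re-scores candidate next tokens proposed by the base language model. The sketch below illustrates that general pattern only; it is not code from any of the listed papers, and the function names and scores are hypothetical stand-ins.

```python
# Minimal, hypothetical sketch of classifier-guided next-token selection:
# combine the frozen LM's next-token distribution with per-candidate
# attribute scores from an external classifier.
import torch


def guided_next_token(lm_logits: torch.Tensor,
                      attribute_log_probs: torch.Tensor,
                      guidance_weight: float = 1.0) -> int:
    """Pick the next token by mixing base-LM logits with attribute scores.

    lm_logits: (vocab_size,) next-token logits from the frozen language model.
    attribute_log_probs: (vocab_size,) log p(attribute | context + candidate),
        e.g. obtained by running an external classifier on each candidate continuation.
    """
    combined = torch.log_softmax(lm_logits, dim=-1) + guidance_weight * attribute_log_probs
    return int(torch.argmax(combined))


# Toy usage with random stand-ins for both distributions.
vocab_size = 10
lm_logits = torch.randn(vocab_size)
attr_scores = torch.log_softmax(torch.randn(vocab_size), dim=-1)
print(guided_next_token(lm_logits, attr_scores, guidance_weight=2.0))
```

The guidance weight trades fluency (dominated by the base LM) against attribute faithfulness (dominated by the classifier), which is the main tuning knob in this family of methods.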
This list is automatically generated from the titles and abstracts of the papers in this site.