Reinforcement Learning with Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation
- URL: http://arxiv.org/abs/2402.14146v1
- Date: Wed, 21 Feb 2024 22:02:37 GMT
- Title: Reinforcement Learning with Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation
- Authors: Karin de Langis, Ryan Koo, Dongyeop Kang
- Abstract summary: Humans often employ multiple styles simultaneously.
We investigate various formulations of multiple style rewards for a reinforcement learning approach to controlled multi-style generation.
All code and data for RL pipelines with multiple style attributes will be publicly available.
- Score: 17.937198263444046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Style is an integral component of text that expresses a diverse set of
information, including interpersonal dynamics (e.g. formality) and the author's
emotions or attitudes (e.g. disgust). Humans often employ multiple styles
simultaneously. An open question is how large language models can be explicitly
controlled so that they weave together target styles when generating text: for
example, to produce text that is both negative and non-toxic. Previous work
investigates the controlled generation of a single style, or else controlled
generation of a style and other attributes. In this paper, we expand this into
controlling multiple styles simultaneously. Specifically, we investigate
various formulations of multiple style rewards for a reinforcement learning
(RL) approach to controlled multi-style generation. These reward formulations
include calibrated outputs from discriminators and dynamic weighting by
discriminator gradient magnitudes. We find that dynamic weighting generally
outperforms static weighting approaches, and we explore its effectiveness in 2-
and 3-style control, even compared to strong baselines such as a plug-and-play
model. All code and data for RL pipelines with multiple style attributes will
be publicly available.
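The reward design described above (calibrated discriminator outputs, dynamically weighted by discriminator gradient magnitudes) can be illustrated with a small sketch. The Python below is a hypothetical illustration, not the authors' released pipeline: the checkpoint paths are placeholders, raw softmax probabilities stand in for calibrated scores, and weighting each style in proportion to the norm of its discriminator's input-embedding gradient is an assumed reading of "dynamic weighting".

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoints: any sequence-classification style discriminators
# (e.g. a sentiment classifier and a toxicity classifier) could be used here.
STYLE_DISCRIMINATORS = {
    "sentiment": "path/to/sentiment-discriminator",
    "toxicity": "path/to/toxicity-discriminator",
}

tokenizers = {s: AutoTokenizer.from_pretrained(ckpt)
              for s, ckpt in STYLE_DISCRIMINATORS.items()}
models = {s: AutoModelForSequenceClassification.from_pretrained(ckpt).eval()
          for s, ckpt in STYLE_DISCRIMINATORS.items()}


def style_score_and_grad_norm(style: str, text: str, target_label: int):
    """Probability of the target style label, plus the norm of the
    discriminator's gradient w.r.t. the input embeddings of `text`."""
    tok = tokenizers[style](text, return_tensors="pt", truncation=True)
    model = models[style]
    embeds = model.get_input_embeddings()(tok["input_ids"])  # differentiable input
    logits = model(inputs_embeds=embeds, attention_mask=tok["attention_mask"]).logits
    prob = torch.softmax(logits, dim=-1)[0, target_label]
    (grad,) = torch.autograd.grad(prob, embeds)
    return prob.item(), grad.norm().item()


def dynamic_multi_style_reward(text: str, targets: dict) -> float:
    """Scalar reward for one generation: each style's discriminator score,
    weighted in proportion to that discriminator's gradient magnitude.
    (Raw probabilities are used here; the paper additionally calibrates them.)"""
    scores, grad_norms = {}, {}
    for style, label in targets.items():
        scores[style], grad_norms[style] = style_score_and_grad_norm(style, text, label)
    total = sum(grad_norms.values()) + 1e-8
    return sum((grad_norms[s] / total) * scores[s] for s in targets)


# Example target: text that is simultaneously negative (label 0) and non-toxic (label 0).
# reward = dynamic_multi_style_reward(generated_text, {"sentiment": 0, "toxicity": 0})
```

In an RL fine-tuning loop (e.g. PPO), a scalar of this kind would serve as the per-sample reward for each generated continuation; the gradient-based weights shift emphasis toward whichever style the discriminators are currently most sensitive to.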
Related papers
- Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z) - ArtWeaver: Advanced Dynamic Style Integration via Diffusion Model [73.95608242322949]
Stylized Text-to-Image Generation (STIG) aims to generate images from text prompts and style reference images.
We present ArtWeaver, a novel framework that leverages pretrained Stable Diffusion to address challenges such as misinterpreted styles and inconsistent semantics.
arXiv Detail & Related papers (2024-05-24T07:19:40Z) - Personalized Text Generation with Fine-Grained Linguistic Control [9.668216418094316]
We focus on controlling fine-grained attributes spanning multiple linguistic dimensions.
We introduce a novel benchmark to train generative models and evaluate their ability to generate personalized text.
arXiv Detail & Related papers (2024-02-07T14:41:08Z) - Successor Features for Efficient Multisubject Controlled Text Generation [48.37713738712319]
We introduce SF-GEN, which is grounded in two primary concepts: successor features (SFs) and language model rectification.
SF-GEN seamlessly integrates the two to enable dynamic steering of text generation with no need to alter the LLM's parameters.
To the best of our knowledge, our research represents the first application of successor features in text generation.
arXiv Detail & Related papers (2023-11-03T00:17:08Z) - GestureDiffuCLIP: Gesture Diffusion Model with CLIP Latents [3.229105662984031]
GestureDiffuCLIP is a neural network framework for synthesizing realistic, stylized co-speech gestures with flexible style control.
Our system learns a latent diffusion model to generate high-quality gestures and infuses the CLIP representations of style into the generator.
Our system can be extended to allow fine-grained style control of individual body parts.
arXiv Detail & Related papers (2023-03-26T03:35:46Z) - Audience-Centric Natural Language Generation via Style Infusion [5.6732899077715375]
We propose the novel task of style infusion - infusing the stylistic preferences of audiences into pretrained language generation models.
We leverage limited pairwise human judgments to bootstrap a style analysis model and augment our seed set of judgments.
Our infusion approach can generate compelling stylized examples with generic text prompts.
arXiv Detail & Related papers (2023-01-24T19:57:50Z) - Controlling Styles in Neural Machine Translation with Activation Prompt [34.53183905545485]
Controlling styles in neural machine translation (NMT) has attracted wide attention, as it is crucial for enhancing user experience.
This paper presents a new benchmark and approach for controlling styles in NMT.
We propose a method named style activation prompt (StyleAP), which uses prompts drawn from a stylized monolingual corpus and requires no extra fine-tuning.
arXiv Detail & Related papers (2022-12-17T16:05:50Z) - ABINet++: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Spotting [121.11880210592497]
We argue that the limited capacity of language models comes from 1) implicit language modeling; 2) unidirectional feature representation; and 3) a language model with noisy input.
We propose an autonomous, bidirectional and iterative ABINet++ for scene text spotting.
arXiv Detail & Related papers (2022-11-19T03:50:33Z) - Controllable Natural Language Generation with Contrastive Prefixes [120.12778570283956]
GPT2 generation utilizes a set of small attribute-specific vectors, called prefixes, to steer natural language generation.
We propose a novel supervised method and also an unsupervised method to train the prefixes for single-aspect control.
Experimental results on both single-aspect and multi-aspect control show that our methods can guide generation towards the desired attributes while keeping high linguistic quality.
arXiv Detail & Related papers (2022-02-27T00:31:03Z) - Prototype-to-Style: Dialogue Generation with Style-Aware Editing on Retrieval Memory [65.98002918470543]
We introduce a new prototype-to-style framework to tackle the challenge of stylistic dialogue generation.
The framework uses an Information Retrieval (IR) system and extracts a response prototype from the retrieved response.
A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response.
arXiv Detail & Related papers (2020-04-05T14:36:15Z)