Audience-Centric Natural Language Generation via Style Infusion
- URL: http://arxiv.org/abs/2301.10283v1
- Date: Tue, 24 Jan 2023 19:57:50 GMT
- Title: Audience-Centric Natural Language Generation via Style Infusion
- Authors: Samraj Moorjani, Adit Krishnan, Hari Sundaram, Ewa Maslowska, Aravind Sankar
- Abstract summary: We propose the novel task of style infusion - infusing the stylistic preferences of audiences in pretrained language generation models.
We leverage limited pairwise human judgments to bootstrap a style analysis model and augment our seed set of judgments.
Our infusion approach can generate compelling stylized examples with generic text prompts.
- Score: 5.6732899077715375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adopting contextually appropriate, audience-tailored linguistic styles is
critical to the success of user-centric language generation systems (e.g.,
chatbots, computer-aided writing, dialog systems). While existing approaches
demonstrate textual style transfer with large volumes of parallel or
non-parallel data, we argue that grounding style on audience-independent
external factors is innately limiting for two reasons. First, it is difficult
to collect large volumes of audience-specific stylistic data. Second, some
stylistic objectives (e.g., persuasiveness, memorability, empathy) are hard to
define without audience feedback.
In this paper, we propose the novel task of style infusion - infusing the
stylistic preferences of audiences in pretrained language generation models.
Since humans are better at pairwise comparisons than direct scoring - i.e., is
Sample-A more persuasive/polite/empathic than Sample-B - we leverage limited
pairwise human judgments to bootstrap a style analysis model and augment our
seed set of judgments. We then infuse the learned textual style in a GPT-2
based text generator while balancing fluency and style adoption. With
quantitative and qualitative assessments, we show that our infusion approach
can generate compelling stylized examples with generic text prompts. The code
and data are accessible at https://github.com/CrowdDynamicsLab/StyleInfusion.
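The bootstrapping step described above (learning a style scorer from limited pairwise judgments) can be illustrated with a minimal sketch. This is not the released StyleInfusion code: the linear scorer, the toy hand-picked features, and the Bradley-Terry-style update are simplifying assumptions standing in for the paper's learned style analysis model.

```python
import math

def train_pairwise_scorer(pairs, n_features, epochs=200, lr=0.1):
    """Fit a linear style scorer from pairwise human judgments.

    Each pair is (x_winner, x_loser): feature vectors for the text judged
    more stylistic (e.g., more persuasive) and the one judged less so.
    Minimizes the Bradley-Terry loss -log sigmoid(s_winner - s_loser).
    """
    w = [0.0] * n_features
    for _ in range(epochs):
        for x_w, x_l in pairs:
            diff = [a - b for a, b in zip(x_w, x_l)]
            margin = sum(wi * di for wi, di in zip(w, diff))
            # Gradient of -log sigmoid(margin) is -(1 - sigmoid(margin)) * diff,
            # so gradient descent moves w along diff by (1 - sigmoid(margin)).
            g = 1.0 - 1.0 / (1.0 + math.exp(-margin))
            for i, di in enumerate(diff):
                w[i] += lr * g * di
    return w

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy features: [exclamation count, second-person pronouns, hedging words]
pairs = [
    ([2.0, 3.0, 0.0], [0.0, 1.0, 2.0]),  # direct text beat hedged text
    ([1.0, 2.0, 1.0], [0.0, 0.0, 3.0]),
]
w = train_pairwise_scorer(pairs, n_features=3)
```

Once such a scorer exists, it can both rank new candidate texts (augmenting the seed set of judgments) and supply a style signal to be balanced against the generator's fluency loss.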
Related papers
- Dynamic Multi-Reward Weighting for Multi-Style Controllable Generation [15.959784404955402]
Textual style expresses a diverse set of information, including interpersonal dynamics (e.g., formality) and the author's emotions or attitudes (e.g., disgust).
An open question is how language models can be explicitly controlled so that they weave together target styles when generating text.
One approach to such controlled generation is multi-objective reinforcement learning (RL)
We investigate various formulations of multi-style rewards, including calibrated outputs from discriminators and dynamic weighting by discriminator magnitudes.
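One plausible reading of "dynamic weighting by discriminator magnitudes" is to down-weight styles whose discriminators already emit large rewards, so weaker styles keep receiving signal. The sketch below is an illustrative assumption along those lines, not the paper's exact reward formulation.

```python
def combine_style_rewards(rewards, eps=1e-8):
    """Combine per-style discriminator rewards into one scalar RL reward.

    Weights each style inversely to its current reward magnitude, so the
    combined reward is dominated by the currently weakest style.
    `rewards` maps style name -> raw discriminator score in (0, 1].
    """
    # Weight each style inversely to its current magnitude ...
    weights = {s: 1.0 / (abs(r) + eps) for s, r in rewards.items()}
    total = sum(weights.values())
    # ... then normalize the weights and take the weighted sum.
    return sum((weights[s] / total) * rewards[s] for s in rewards)

# A strong formality discriminator does not drown out a weak humor one:
r = combine_style_rewards({"formality": 0.9, "humor": 0.1})
```

With inverse-magnitude weights the result reduces to the harmonic mean of the rewards, which is one simple way to keep a multi-style RL policy from optimizing a single easy style.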
arXiv Detail & Related papers (2024-02-21T22:02:37Z)
- Context Disentangling and Prototype Inheriting for Robust Visual Grounding [56.63007386345772]
Visual grounding (VG) aims to locate a specific target in an image based on a given language query.
We propose a novel framework with context disentangling and prototype inheriting for robust visual grounding to handle both scenes.
Our method outperforms the state-of-the-art methods in both scenarios.
arXiv Detail & Related papers (2023-12-19T09:03:53Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- Conversation Style Transfer using Few-Shot Learning [56.43383396058639]
In this paper, we introduce conversation style transfer as a few-shot learning problem.
We propose a novel in-context learning approach to solve the task with style-free dialogues as a pivot.
We show that conversation style transfer can also benefit downstream tasks.
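The style-free pivot idea can be sketched as a two-step in-context prompt: demonstrations map a source utterance to a neutral pivot and then to the target style. The field labels and demonstration triples below are hypothetical, not the paper's actual template.

```python
def build_pivot_prompt(examples, source_utterance):
    """Assemble a two-step in-context prompt: source -> style-free pivot
    -> target style. `examples` is a list of (source, pivot, target)
    demonstration triples; all labels here are illustrative."""
    lines = []
    for src, pivot, tgt in examples:
        lines.append(f"Original: {src}")
        lines.append(f"Neutral: {pivot}")
        lines.append(f"Rewritten: {tgt}")
        lines.append("")
    # Leave the pivot slot open for the model to complete first.
    lines.append(f"Original: {source_utterance}")
    lines.append("Neutral:")
    return "\n".join(lines)

demos = [("hey, u free 2nite?", "Are you free tonight?",
          "Would you happen to be available this evening?")]
prompt = build_pivot_prompt(demos, "gimme the report asap")
```

Routing through a neutral pivot means the few-shot examples only need to demonstrate the target style, not every source-style/target-style combination.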
arXiv Detail & Related papers (2023-02-16T15:27:00Z)
- Unsupervised Neural Stylistic Text Generation using Transfer learning and Adapters [66.17039929803933]
We propose a novel transfer-learning framework that updates only 0.3% of model parameters to learn style-specific attributes for response generation.
We learn style-specific attributes from the PERSONALITY-CAPTIONS dataset.
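A back-of-the-envelope calculation shows how bottleneck adapters reach such small trainable fractions. The layer counts, the 12·L·d² transformer-size approximation, and the bottleneck width below are illustrative assumptions, not the paper's configuration.

```python
def adapter_param_fraction(n_layers, d_model, bottleneck):
    """Fraction of parameters trained when only bottleneck adapters
    (down-projection d->r, up-projection r->d, plus biases) are updated.

    Base model size is approximated as 12 * n_layers * d_model^2, the
    usual per-block transformer estimate; numbers are illustrative.
    """
    per_adapter = 2 * d_model * bottleneck + d_model + bottleneck
    # Two adapters per block (after attention and after the FFN).
    adapter_params = 2 * n_layers * per_adapter
    base_params = 12 * n_layers * d_model ** 2
    return adapter_params / base_params

# A GPT-2-small-sized model with a width-8 bottleneck lands well under 1%.
frac = adapter_param_fraction(n_layers=12, d_model=768, bottleneck=8)
```

Because the adapter cost grows linearly in `d_model` while the base model grows quadratically, the trainable fraction shrinks further as models get larger.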
arXiv Detail & Related papers (2022-10-07T00:09:22Z)
- Self-supervised Context-aware Style Representation for Expressive Speech Synthesis [23.460258571431414]
We propose a novel framework for learning style representation from plain text in a self-supervised manner.
It leverages an emotion lexicon and uses contrastive learning and deep clustering.
Our method achieves improved results according to subjective evaluations on both in-domain and out-of-domain test sets in audiobook speech.
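The contrastive-learning step can be illustrated with a minimal InfoNCE loss over style embeddings: pull an utterance toward another of the same style, push it from other styles. The cosine similarity, temperature, and toy vectors are assumptions for illustration, not the paper's architecture.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss: -log softmax of the anchor-positive similarity
    against the anchor-negative similarities."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy 3-d "style embeddings": the anchor is close to its same-style positive.
anchor = [1.0, 0.2, 0.0]
same_style = [0.9, 0.3, 0.1]
other = [[0.0, 1.0, 0.0], [0.1, 0.0, 1.0]]
loss = info_nce(anchor, same_style, other)
```

Deep clustering would then group the resulting embeddings into discrete style categories without labels.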
arXiv Detail & Related papers (2022-06-25T05:29:48Z)
- From Theories on Styles to their Transfer in Text: Bridging the Gap with a Hierarchical Survey [10.822011920177408]
Style transfer aims at re-writing existing texts and creating paraphrases that exhibit desired stylistic attributes.
A handful of surveys give a methodological overview of the field, but they do not help researchers focus on specific styles.
We organize styles into a hierarchy, highlight the challenges in defining each of them, and point out gaps in the current research landscape.
arXiv Detail & Related papers (2021-10-29T15:53:06Z)
- A Review of Text Style Transfer using Deep Learning [0.0]
Text style transfer is the task of adapting and/or changing the stylistic manner in which a sentence is written.
We point out the technological advances in deep neural networks that have been the driving force behind current successes in the fields of natural language understanding and generation.
The review is structured around two key stages in the text style transfer process, namely, representation learning and sentence generation in a new style.
arXiv Detail & Related papers (2021-09-30T14:06:36Z)
- Stylistic Retrieval-based Dialogue System with Unparallel Training Data [19.777894827625275]
We propose a flexible framework that adapts a generic retrieval-based dialogue system to mimic the language style of a specified persona without any parallel data.
Our approach automatically generates stylized data by learning the persona's use of jargon, then rewrites generic conversations into stylized ones by incorporating that jargon.
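The jargon-rewriting idea can be sketched as a dictionary substitution over a generic reply. The hand-written pirate lexicon below is an illustrative stand-in for the jargon usage the framework would learn automatically.

```python
import re

def stylize(utterance, jargon_map):
    """Rewrite a generic reply by swapping generic terms for persona
    jargon. `jargon_map` (generic term -> jargon) stands in for a
    learned lexicon; this toy version is hand-written and lowercases
    words before lookup, leaving unknown words untouched."""
    def repl(match):
        word = match.group(0)
        return jargon_map.get(word.lower(), word)
    return re.sub(r"\b\w+\b", repl, utterance)

pirate = {"hello": "ahoy", "friend": "matey", "yes": "aye"}
out = stylize("hello friend, yes it is sunny", pirate)
# out == "ahoy matey, aye it is sunny"
```

A retrieval-based system can apply such a rewriter to its whole response pool offline, so no parallel stylized/generic conversation data is ever needed.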
arXiv Detail & Related papers (2021-09-12T09:56:24Z)
- Sentiment analysis in tweets: an assessment study from classical to modern text representation models [59.107260266206445]
Short texts published on Twitter have attracted significant attention as a rich source of information.
Their inherent characteristics, such as an informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study provides an assessment of existing language models in distinguishing the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z)
- Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.