Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus
- URL: http://arxiv.org/abs/2010.11578v2
- Date: Mon, 12 Apr 2021 09:39:42 GMT
- Title: Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus
- Authors: Navita Goyal, Balaji Vasan Srinivasan, Anandhavelu Natarajan,
Abhilasha Sancheti
- Abstract summary: Style transfer has been widely explored in natural language generation with non-parallel corpora.
A common shortcoming of existing approaches is the prerequisite of joint annotations across all the stylistic dimensions.
We show the ability of our model to control styles across multiple style dimensions while preserving the content of the input text.
- Score: 9.793194158416854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Style transfer has been widely explored in natural language generation with
non-parallel corpora by directly or indirectly extracting a notion of style from
source and target domain corpora. A common shortcoming of existing approaches is
the prerequisite of joint annotations across all the stylistic dimensions under
consideration. The availability of such datasets across combinations of styles
limits the extension of these setups to multiple style dimensions. While
cascading single-dimensional models across multiple styles is a possibility, it
suffers from content loss, especially when the style dimensions are not
completely independent of each other. In our work, we relax this requirement of
jointly annotated data across multiple styles by using independently acquired
data across different style dimensions without any additional annotations. We
initialize an encoder-decoder setup with a transformer-based language model
pre-trained on a generic corpus and enhance its rewriting capability toward
multiple target style dimensions by employing multiple style-aware language
models as discriminators. Through quantitative and qualitative evaluation, we
show the ability of our model to control styles across multiple style
dimensions while preserving the content of the input text. We compare it against
baselines that cascade state-of-the-art uni-dimensional style transfer
models.
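A minimal PyTorch sketch of how such a setup could be wired, assuming one style-aware LM discriminator per disjoint style corpus; module sizes, names, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Sketch: encoder-decoder rewriter scored by several style-aware LM
# discriminators, one per independently collected style corpus.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64

class TinyLM(nn.Module):
    """Stand-in for a language model fine-tuned on one style corpus;
    it scores how well a token sequence fits that style."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def log_prob(self, tokens):                      # tokens: (B, T)
        h, _ = self.rnn(self.emb(tokens[:, :-1]))
        logp = F.log_softmax(self.out(h), dim=-1)
        return logp.gather(-1, tokens[:, 1:].unsqueeze(-1)).squeeze(-1).sum(-1)

# One discriminator per style dimension (e.g. formality, excitement),
# each trained on its own corpus -- no jointly annotated data needed.
discriminators = [TinyLM(), TinyLM()]
rewriter = TinyLM()   # stands in for the pretrained encoder-decoder

def training_loss(src, rewrite):
    # Content preservation: the rewriter should still explain the source.
    recon = -rewriter.log_prob(src).mean()
    # Style feedback: every style LM should find the rewrite likely.
    style = -sum(d.log_prob(rewrite).mean() for d in discriminators)
    # NOTE: with hard sampled tokens the style term carries no gradient to
    # the rewriter; real systems route it via soft distributions or RL.
    return recon + 0.5 * style   # 0.5 is an assumed weighting

src = torch.randint(0, VOCAB, (4, 12))
rewrite = torch.randint(0, VOCAB, (4, 12))   # placeholder for decoded output
print(training_loss(src, rewrite).item())
```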
Related papers
- StyleDistance: Stronger Content-Independent Style Embeddings with Synthetic Parallel Examples [48.44036251656947]
Style representations aim to embed texts with similar writing styles closely and texts with different styles far apart, regardless of content.
We introduce StyleDistance, a novel approach to training stronger content-independent style embeddings.
arXiv Detail & Related papers (2024-10-16T17:25:25Z)
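A hedged sketch of the contrastive objective such content-independent style embeddings rely on: with synthetic parallel examples, a paraphrase sharing content but not style makes a hard negative. The pairing scheme, temperature, and tensor shapes below are assumptions, not StyleDistance's exact recipe.

```python
# Sketch: InfoNCE over style embeddings with content-matched hard negatives.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.07):
    """anchor/positive: (B, D); negatives: (B, N, D)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)
    pos = (a * p).sum(-1, keepdim=True) / tau       # same style, other content
    neg = torch.einsum("bd,bnd->bn", a, n) / tau    # same content, other style
    logits = torch.cat([pos, neg], dim=-1)
    return F.cross_entropy(logits, torch.zeros(len(a), dtype=torch.long))

B, N, D = 8, 4, 128
loss = info_nce(torch.randn(B, D), torch.randn(B, D), torch.randn(B, N, D))
print(loss.item())
```

Using a same-content paraphrase as the negative is what forces the embedding to ignore content and encode style alone.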
- DPStyler: Dynamic PromptStyler for Source-Free Domain Generalization [43.67213274161226]
Source-Free Domain Generalization (SFDG) aims to develop a model that works for unseen target domains without relying on any source domain.
Research in SFDG primarily builds upon the existing knowledge of large-scale vision-language models.
We introduce Dynamic PromptStyler (DPStyler), comprising Style Generation and Style Removal modules.
arXiv Detail & Related papers (2024-03-25T12:31:01Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
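The plug-and-play idea above rests on classifier guidance during denoising: nudge each reverse-diffusion step along the gradient of an external style classifier. A toy sketch of that mechanism follows; `denoiser`, `style_classifier`, and the guidance scale are hypothetical stand-ins, not ParaGuide's actual components.

```python
# Sketch: one guided reverse-diffusion step over a latent x_t.
import torch

def guided_step(x_t, t, denoiser, style_classifier, scale=0.1):
    x_t = x_t.detach().requires_grad_(True)
    score = style_classifier(x_t).sum()          # how "target-style" the latent looks
    grad = torch.autograd.grad(score, x_t)[0]    # direction that raises the style score
    with torch.no_grad():
        return denoiser(x_t, t) + scale * grad   # standard step, nudged toward style

# Toy stand-ins just to show the call pattern:
denoiser = lambda x, t: 0.9 * x
style_classifier = lambda x: -(x ** 2).mean(dim=-1)   # toy: prefers small latents
x = torch.randn(2, 16)
for t in reversed(range(5)):
    x = guided_step(x, t, denoiser, style_classifier)
print(x.norm().item())
```

Because the classifier is external, swapping the target style means swapping one small model, not retraining the diffusion paraphraser.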
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
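A rough sketch of the Reinforcement Learning strategy the summary mentions: sample discrete image tokens non-autoregressively, score the decoded result against the style prompt with CLIP, and reinforce high-reward samples. `clip_reward` is a hypothetical stub for a real CLIP similarity call, and the mean baseline is an assumption.

```python
# Sketch: REINFORCE over a non-autoregressive image-token translator.
import torch

def reinforce_step(token_logits, clip_reward):
    """token_logits: (B, T, V) over vector-quantized image tokens."""
    dist = torch.distributions.Categorical(logits=token_logits)
    tokens = dist.sample()                       # (B, T) discrete image tokens
    reward = clip_reward(tokens)                 # (B,) CLIP(style prompt, image)
    baseline = reward.mean()                     # variance reduction
    logp = dist.log_prob(tokens).sum(-1)         # (B,)
    return -((reward - baseline) * logp).mean()  # REINFORCE loss

logits = torch.randn(4, 16, 512, requires_grad=True)
loss = reinforce_step(logits, lambda toks: torch.rand(toks.size(0)))
loss.backward()
print(logits.grad.norm().item())
```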
- Generating More Pertinent Captions by Leveraging Semantics and Style on Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style by incorporating a style token and keywords extracted by a retrieval component, as sketched below.
arXiv Detail & Related papers (2021-11-24T19:00:05Z)
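A minimal sketch of that conditioning: a style token plus retrieved keywords are prefixed to the caption decoder's input, so style and semantics enter as separate signals. The style vocabulary, module sizes, and decoder here are illustrative stand-ins.

```python
# Sketch: caption decoder conditioned on a style token + retrieved keywords.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64
STYLE_TOKENS = {"web_noisy": 0, "curated": 1}   # assumed style vocabulary

class StyledCaptioner(nn.Module):
    def __init__(self):
        super().__init__()
        self.style_emb = nn.Embedding(len(STYLE_TOKENS), DIM)
        self.tok_emb = nn.Embedding(VOCAB, DIM)
        self.dec = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, style_id, keywords, caption_in):
        # Prefix = [style token] + [retrieved keywords], read before the caption.
        prefix = torch.cat([self.style_emb(style_id).unsqueeze(1),
                            self.tok_emb(keywords)], dim=1)
        x = torch.cat([prefix, self.tok_emb(caption_in)], dim=1)
        h, _ = self.dec(x)
        return self.out(h[:, prefix.size(1):])   # predict caption tokens only

model = StyledCaptioner()
style = torch.tensor([STYLE_TOKENS["curated"]])
logits = model(style, torch.randint(0, VOCAB, (1, 5)), torch.randint(0, VOCAB, (1, 8)))
print(logits.shape)   # (1, 8, 1000)
```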
- StyleMeUp: Towards Style-Agnostic Sketch-Based Image Retrieval [119.03470556503942]
The cross-modal matching problem is typically solved by learning a joint embedding space in which the semantic content shared between the photo and sketch modalities is preserved.
An effective model needs to explicitly account for the diversity of user sketching styles and, crucially, generalize to unseen styles.
Our model can not only disentangle the cross-modal shared semantic content, but can adapt the disentanglement to any unseen user style as well, making the model truly agnostic.
arXiv Detail & Related papers (2021-03-29T15:44:19Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
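One plausible reading of word-level style relevance, sketched below: a learned gate scores how style-bearing each output word is, then blends a style-conditioned head with a content head per word. The gate and blending rule are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch: per-word gate between a content head and a style-conditioned head.
import torch
import torch.nn as nn

DIM, VOCAB = 64, 1000

class RelevanceGate(nn.Module):
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(DIM, 1)
        self.content_head = nn.Linear(DIM, VOCAB)
        self.style_head = nn.Linear(DIM + DIM, VOCAB)

    def forward(self, dec_states, style_vec):
        # r in [0, 1]: how style-relevant each output word is.
        r = torch.sigmoid(self.score(dec_states))              # (B, T, 1)
        styled = torch.cat(
            [dec_states,
             style_vec.unsqueeze(1).expand(-1, dec_states.size(1), -1)], -1)
        # Style head drives style-bearing words; content head drives the rest.
        return r * self.style_head(styled) + (1 - r) * self.content_head(dec_states)

gate = RelevanceGate()
logits = gate(torch.randn(2, 7, DIM), torch.randn(2, DIM))
print(logits.shape)   # (2, 7, 1000)
```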