Learning to Generate Multiple Style Transfer Outputs for an Input
Sentence
- URL: http://arxiv.org/abs/2002.06525v1
- Date: Sun, 16 Feb 2020 07:10:45 GMT
- Title: Learning to Generate Multiple Style Transfer Outputs for an Input
Sentence
- Authors: Kevin Lin, Ming-Yu Liu, Ming-Ting Sun, Jan Kautz
- Abstract summary: We propose a one-to-many text style transfer framework to generate different style transfer results for a given input text.
We decompose the latent representation of the input sentence into a style code that captures the language style variation.
By combining the same content code with a different style code, we generate a different style transfer output.
- Score: 93.32432042381572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text style transfer refers to the task of rephrasing a given text in a
different style. While various methods have been proposed to advance the state
of the art, they often assume the transfer output follows a delta distribution,
and thus their models cannot generate different style transfer results for a
given input text. To address this limitation, we propose a one-to-many text
style transfer framework. In contrast to prior works that learn a one-to-one
mapping that converts an input sentence to one output sentence, our approach
learns a one-to-many mapping that can convert an input sentence to multiple
different output sentences, while preserving the input content. This is
achieved by applying adversarial training with a latent decomposition scheme.
Specifically, we decompose the latent representation of the input sentence into a
style code that captures the language style variation and a content code that
encodes the language style-independent content. We then combine the content
code with the style code for generating a style transfer output. By combining
the same content code with a different style code, we generate a different
style transfer output. Extensive experimental results with comparisons to
several text style transfer approaches on multiple public datasets using a
diverse set of performance metrics validate the effectiveness of the proposed
approach.
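As a toy illustration of the latent decomposition described above, the sketch below splits a sentence embedding into a content code and a style code, then shows that recombining the same content code with a different style code changes only the style-dependent part of the output. All names, dimensions, and the random projections are hypothetical stand-ins for the learned encoders and decoder, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(sent_embedding, d_content=8, d_style=4):
    """Toy decomposition: random projections stand in for the learned
    content and style encoders."""
    W_c = rng.standard_normal((sent_embedding.size, d_content))
    W_s = rng.standard_normal((sent_embedding.size, d_style))
    return sent_embedding @ W_c, sent_embedding @ W_s

def decode(content_code, style_code):
    """Toy decoder: the output depends on both codes, so the same
    content code paired with different style codes yields different
    outputs."""
    return np.concatenate([content_code, style_code])

x = rng.standard_normal(16)                    # stand-in sentence embedding
content, style_a = encode(x)                   # decompose the latent
style_b = rng.standard_normal(style_a.shape)   # a sampled alternative style code

out_a = decode(content, style_a)
out_b = decode(content, style_b)

# Same content code + different style codes -> different outputs,
# with the content-dependent part preserved.
assert np.allclose(out_a[:8], out_b[:8])
assert not np.allclose(out_a[8:], out_b[8:])
```

In the actual framework the decomposition is learned with adversarial training, and sampling different style codes at test time is what yields multiple transfer outputs for one input.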
Related papers
- MSSRNet: Manipulating Sequential Style Representation for Unsupervised Text Style Transfer [82.37710853235535]
The unsupervised text style transfer task aims to rewrite a text into a target style while preserving its main content.
Traditional methods rely on a fixed-size vector to regulate text style, which makes it difficult to accurately convey the style strength for each individual token.
Our proposed method addresses this issue by assigning an individual style vector to each token in a text, allowing for fine-grained control and manipulation of the style strength.
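A shape-level sketch of the contrast this summary draws: a single fixed-size style vector is shared by every token, while a per-token scheme lets style strength vary along the sequence. The array names and values here are hypothetical illustrations, not MSSRNet's implementation.

```python
import numpy as np

T, d_style = 6, 4                      # sequence length, style dimension

# Traditional: one fixed-size style vector shared by every token.
fixed_style = np.ones(d_style)
fixed_per_token = np.tile(fixed_style, (T, 1))   # (T, d_style), identical rows

# Per-token idea: each token gets its own style vector, so style
# strength can differ token by token (here, a simple ramp).
per_token_style = np.linspace(0.0, 1.0, T)[:, None] * np.ones(d_style)

assert fixed_per_token.shape == per_token_style.shape == (T, d_style)
# Rows of the fixed variant are all identical; the per-token variant varies.
assert np.ptp(fixed_per_token, axis=0).max() == 0.0
assert np.ptp(per_token_style, axis=0).max() > 0.0
```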
arXiv Detail & Related papers (2023-06-12T13:12:29Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- StyleFlow: Disentangle Latent Representations via Normalizing Flow for Unsupervised Text Style Transfer [5.439842512864442]
Style transfer aims to alter the style of a sentence while preserving its content.
In this paper, we propose a novel disentanglement-based style transfer model StyleFlow to enhance content preservation.
arXiv Detail & Related papers (2022-12-19T17:59:18Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual diffusion processing architecture to control the balance between the content and style of diffused results.
We propose a content image-based learnable noise on which the reverse denoising process is based, enabling the stylization results to better preserve the structure information of the content image.
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- SLOGAN: Handwriting Style Synthesis for Arbitrary-Length and Out-of-Vocabulary Text [35.83345711291558]
We propose a novel method that can synthesize parameterized and controllable handwriting styles for arbitrary-length and out-of-vocabulary text.
We embed the text content by providing an easily obtainable printed style image, so that the diversity of the content can be flexibly achieved.
Our method can synthesize words that are not included in the training vocabulary and with various new styles.
arXiv Detail & Related papers (2022-02-23T12:13:27Z)
- Multi-Pair Text Style Transfer on Unbalanced Data [3.4773470589069473]
Text-style transfer aims to convert text given in one domain into another by paraphrasing the sentence or substituting keywords without altering the content.
We developed a task adaptive meta-learning framework that can simultaneously perform a multi-pair text-style transfer.
Results show that our method leads to better quantitative performance as well as coherent style variations.
arXiv Detail & Related papers (2021-06-20T03:20:43Z)
- LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer [14.507559615347304]
We propose a coarse-to-fine editor for style transfer that transforms text using Levenshtein edit operations.
Unlike prior single-span edit methods, our method concurrently edits multiple spans in the source text.
Our method outperforms existing generation and editing style transfer methods on sentiment (Yelp, Amazon) and politeness (Polite) transfer.
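A minimal sketch of deriving Levenshtein edit operations between a source and a target token sequence, the kind of edit supervision an editing-based transfer model can train on. This is standard dynamic programming offered as an illustration, not the LEWIS model itself.

```python
def edit_ops(src, tgt):
    """Return a keep/replace/insert/delete script turning src into tgt."""
    n, m = len(src), len(tgt)
    # dp[i][j] = minimal number of edits turning src[:i] into tgt[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if src[i - 1] == tgt[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete src[i-1]
                           dp[i][j - 1] + 1,        # insert tgt[j-1]
                           dp[i - 1][j - 1] + cost) # keep or replace
    # Backtrace to recover the operations.
    ops, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] and src[i - 1] == tgt[j - 1]:
            ops.append(("keep", src[i - 1])); i, j = i - 1, j - 1
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            ops.append(("replace", src[i - 1], tgt[j - 1])); i, j = i - 1, j - 1
        elif j > 0 and dp[i][j] == dp[i][j - 1] + 1:
            ops.append(("insert", tgt[j - 1])); j -= 1
        else:
            ops.append(("delete", src[i - 1])); i -= 1
    return ops[::-1]

# Keeps "the food was" and replaces "bad" with "great".
print(edit_ops("the food was bad".split(), "the food was great".split()))
```

An editor trained on such scripts can change multiple spans of the source at once, rather than a single span per pass.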
arXiv Detail & Related papers (2021-05-18T00:08:30Z)
- TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling [23.60231661500702]
We present a novel approach to the problem of text style transfer.
Our method makes use of readily available unlabeled text by relying on the implicit connection in style between adjacent sentences.
We demonstrate that training on unlabeled Amazon reviews data results in a model that is competitive on sentiment transfer.
arXiv Detail & Related papers (2020-10-08T07:06:38Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
- Contextual Text Style Transfer [73.66285813595616]
Contextual Text Style Transfer aims to translate a sentence into a desired style with its surrounding context taken into account.
We propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context.
Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer.
arXiv Detail & Related papers (2020-04-30T23:01:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.