StyleFlow: Disentangle Latent Representations via Normalizing Flow for
Unsupervised Text Style Transfer
- URL: http://arxiv.org/abs/2212.09670v1
- Date: Mon, 19 Dec 2022 17:59:18 GMT
- Title: StyleFlow: Disentangle Latent Representations via Normalizing Flow for
Unsupervised Text Style Transfer
- Authors: Kangchen Zhu, Zhiliang Tian, Ruifeng Luo, Xiaoguang Mao
- Abstract summary: Style transfer aims to alter the style of a sentence while preserving its content.
In this paper, we propose a novel disentanglement-based style transfer model StyleFlow to enhance content preservation.
- Score: 5.439842512864442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text style transfer aims to alter the style of a sentence while preserving
its content. Due to the lack of parallel corpora, most recent work focuses on
unsupervised methods and often uses cycle construction to train models. While
cycle construction improves the style transfer ability of the model by
rebuilding transferred sentences back into original-style sentences, it also
incurs content loss in unsupervised text style transfer tasks. In this paper,
we propose a novel disentanglement-based style transfer model StyleFlow to
enhance content preservation. Instead of the typical encoder-decoder scheme,
StyleFlow can not only run the forward process to produce the output, but
also invert that process to recover the input from the output. We design
attention-aware coupling layers to disentangle the content representations
and the style representations of a sentence. In addition, we propose a data
augmentation method based on normalizing flows to improve the robustness of
the model. Experimental results demonstrate that our model preserves content
effectively and achieves state-of-the-art performance on most metrics.
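To make the invertibility claim concrete, here is a minimal sketch of a
RealNVP-style affine coupling layer in PyTorch. This shows the generic
mechanism behind normalizing flows, not StyleFlow's attention-aware variant:
the class name, the even split of the latent vector, and the small
scale/shift network are assumptions made for illustration only. Because each
layer transforms one half of the representation conditioned on the other,
the transform can be undone exactly, which is what lets a flow recover its
input from its output without a lossy cycle reconstruction.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # RealNVP-style coupling layer: invertible by construction.
    # Illustrative only; StyleFlow's attention-aware coupling layers and
    # its exact content/style split are not reproduced here.
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        # Small network predicting log-scale and shift from the fixed half.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Keep z1 unchanged; transform z2 conditioned on z1.
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=-1)
        return torch.cat([z1, z2 * torch.exp(log_s) + t], dim=-1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        # Exact inverse: undo the affine map applied to the second half.
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)

# Round trip: inverse(forward(z)) reconstructs z up to float precision.
layer = AffineCoupling(dim=8)
z = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(z)), z, atol=1e-5)

In a disentanglement setting, one half of such a latent could be read as
content and the other as style; the exact inverse pass then rebuilds a
sentence representation after the style half is replaced, avoiding the
round-trip reconstruction error that cycle construction introduces.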
Related papers
- Prefix-Tuning Based Unsupervised Text Style Transfer [29.86587278794342]
Unsupervised text style transfer aims at training a generative model that can alter the style of the input sentence while preserving its content.
In this paper, we employ powerful pre-trained large language models and present a new prefix-tuning-based method for unsupervised text style transfer.
arXiv Detail & Related papers (2023-10-23T06:13:08Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization [66.42741426640633]
DiffStyler is a dual-diffusion architecture that controls the balance between the content and style of the diffused results.
We propose a learnable noise based on the content image, on which the reverse denoising process is conditioned, enabling the stylization results to better preserve the structural information of the content image.
arXiv Detail & Related papers (2022-11-19T12:30:44Z)
- StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing [73.81778485157234]
Compared with single sentences, long texts usually involve more complicated linguistic preferences of the author, such as discourse structures.
We formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style.
We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder.
arXiv Detail & Related papers (2022-08-29T08:47:49Z)
- GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer [119.70961704127157]
Non-parallel text style transfer has attracted increasing research interests in recent years.
Current approaches still lack the ability to preserve the content and even logic of original sentences.
We propose Graph-Transformer based Auto-Encoders (GTAE), which model a sentence as a linguistic graph and perform feature extraction and style transfer at the graph level.
arXiv Detail & Related papers (2021-02-01T11:08:45Z)
- TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling [23.60231661500702]
We present a novel approach to the problem of text style transfer.
Our method makes use of readily-available unlabeled text by relying on the implicit connection in style between adjacent sentences.
We demonstrate that training on unlabeled Amazon reviews data results in a model that is competitive on sentiment transfer.
arXiv Detail & Related papers (2020-10-08T07:06:38Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
- Contextual Text Style Transfer [73.66285813595616]
Contextual Text Style Transfer aims to translate a sentence into a desired style with its surrounding context taken into account.
We propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context; a toy sketch of this two-encoder design appears after this list.
Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer.
arXiv Detail & Related papers (2020-04-30T23:01:12Z)
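As referenced in the CAST entry above, here is a toy PyTorch sketch of a
two-encoder, context-aware design. All names, layer sizes, and the GRU-based
decoder are hypothetical; the actual CAST architecture is not specified in
the summary above. The point is only that the sentence and its surrounding
context are encoded separately and fused, together with a style embedding,
into the decoder's initial state.

import torch
import torch.nn as nn

class ContextAwareTransfer(nn.Module):
    # Hypothetical two-encoder layout: one encoder for the sentence,
    # one for its surrounding context, fused with a style embedding.
    def __init__(self, vocab: int, dim: int = 256, n_styles: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.sent_enc = nn.GRU(dim, dim, batch_first=True)
        self.ctx_enc = nn.GRU(dim, dim, batch_first=True)
        self.style_emb = nn.Embedding(n_styles, dim)
        self.decoder = nn.GRU(dim, 3 * dim, batch_first=True)
        self.proj = nn.Linear(3 * dim, vocab)

    def forward(self, sent_ids, ctx_ids, style_id):
        _, h_sent = self.sent_enc(self.embed(sent_ids))   # (1, B, dim)
        _, h_ctx = self.ctx_enc(self.embed(ctx_ids))      # (1, B, dim)
        s = self.style_emb(style_id).unsqueeze(0)         # (1, B, dim)
        h0 = torch.cat([h_sent, h_ctx, s], dim=-1)        # decoder init
        out, _ = self.decoder(self.embed(sent_ids), h0)   # teacher forcing
        return self.proj(out)                             # token logits

# Usage: per-token logits for a 2-sentence batch, one target style each.
model = ContextAwareTransfer(vocab=1000)
sent = torch.randint(0, 1000, (2, 12))
ctx = torch.randint(0, 1000, (2, 40))
logits = model(sent, ctx, torch.tensor([0, 1]))  # shape (2, 12, 1000)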
This list is automatically generated from the titles and abstracts of the papers on this site.