Contextual Text Style Transfer
- URL: http://arxiv.org/abs/2005.00136v1
- Date: Thu, 30 Apr 2020 23:01:12 GMT
- Title: Contextual Text Style Transfer
- Authors: Yu Cheng, Zhe Gan, Yizhe Zhang, Oussama Elachqar, Dianqi Li, Jingjing Liu
- Abstract summary: Contextual Text Style Transfer aims to translate a sentence into a desired style with its surrounding context taken into account.
We propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders: one for the input sentence and one for its surrounding context.
Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new task, Contextual Text Style Transfer: translating a
sentence into a desired style with its surrounding context taken into account.
This brings two key challenges to existing style transfer approaches: (i) how
to preserve the semantic meaning of the target sentence and its consistency with
the surrounding context during transfer; (ii) how to train a robust model with
limited labeled data accompanied by context. To realize high-quality style
transfer with natural context preservation, we propose a Context-Aware Style
Transfer (CAST) model, which uses two separate encoders: one for the input
sentence and one for its surrounding context. A classifier is further trained to
ensure contextual consistency of the generated sentence. To compensate for the
lack of parallel data, additional self-reconstruction and back-translation losses
are introduced to leverage non-parallel data in a semi-supervised fashion. Two new
benchmarks, Enron-Context and Reddit-Context, are introduced for formality and
offensiveness style transfer. Experimental results on these datasets
demonstrate the effectiveness of the proposed CAST model over state-of-the-art
methods across style accuracy, content preservation, and contextual consistency
metrics.
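
To make the described architecture concrete, here is a minimal PyTorch sketch of the dual-encoder design, the style-conditioned decoder, and the contextual-consistency classifier. The GRU encoders, layer sizes, concatenation bridge, and the self-reconstruction term at the end are illustrative assumptions, not the paper's exact implementation; back-translation would follow the same pattern with a decoding loop.

```python
# Minimal sketch of the CAST design from the abstract, assuming GRU
# encoders and a concatenation bridge; the paper's exact layers differ.
import torch
import torch.nn as nn

class CAST(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, num_styles=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Two separate encoders: one for the target sentence, one for context.
        self.sent_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.ctx_enc = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.style_embed = nn.Embedding(num_styles, hid_dim)
        # Decoder initialized from sentence state, context state, and style.
        self.bridge = nn.Linear(3 * hid_dim, hid_dim)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        # Classifier judging whether a sentence is consistent with its context.
        self.consistency_clf = nn.Linear(2 * hid_dim, 2)

    def encode(self, sent, ctx):
        _, h_sent = self.sent_enc(self.embed(sent))
        _, h_ctx = self.ctx_enc(self.embed(ctx))
        return h_sent.squeeze(0), h_ctx.squeeze(0)

    def forward(self, sent, ctx, style, dec_in):
        h_sent, h_ctx = self.encode(sent, ctx)
        h0 = torch.tanh(self.bridge(
            torch.cat([h_sent, h_ctx, self.style_embed(style)], dim=-1)))
        dec_out, _ = self.decoder(self.embed(dec_in), h0.unsqueeze(0))
        return self.out(dec_out)  # (batch, seq_len, vocab_size) logits

    def consistency_logits(self, sent, ctx):
        h_sent, h_ctx = self.encode(sent, ctx)
        return self.consistency_clf(torch.cat([h_sent, h_ctx], dim=-1))

# Toy usage: the self-reconstruction term decodes the input sentence in its
# own style, which lets non-parallel data contribute to training.
model = CAST(vocab_size=1000)
sent = torch.randint(0, 1000, (4, 12))   # target-sentence token ids
ctx = torch.randint(0, 1000, (4, 30))    # surrounding-context token ids
style = torch.tensor([0, 1, 0, 1])       # style labels of the inputs
logits = model(sent, ctx, style, dec_in=sent)
loss_recon = nn.CrossEntropyLoss()(logits.reshape(-1, 1000), sent.reshape(-1))
```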
Related papers
- Contextualized Diffusion Models for Text-Guided Image and Video Generation [67.69171154637172]
Conditional diffusion models have exhibited superior performance in high-fidelity text-guided visual generation and editing.
We propose a novel and general contextualized diffusion model (ContextDiff) by incorporating the cross-modal context encompassing interactions and alignments between text condition and visual sample.
We generalize our model to both DDPMs and DDIMs with theoretical derivations, and demonstrate its effectiveness on two challenging tasks: text-to-image generation and text-to-video editing.
arXiv Detail & Related papers (2024-02-26T15:01:16Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- StyleFlow: Disentangle Latent Representations via Normalizing Flow for Unsupervised Text Style Transfer [5.439842512864442]
Style transfer aims to alter the style of a sentence while preserving its content.
In this paper, we propose a novel disentanglement-based style transfer model StyleFlow to enhance content preservation.
arXiv Detail & Related papers (2022-12-19T17:59:18Z)
- StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing [73.81778485157234]
Long texts usually involve more complicated author linguistic preferences, such as discourse structures, than single sentences do.
We formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style.
We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder.
arXiv Detail & Related papers (2022-08-29T08:47:49Z)
- Gradient-guided Unsupervised Text Style Transfer via Contrastive Learning [6.799826701166569]
We propose a gradient-guided model based on a contrastive paradigm for text style transfer, to explicitly gather sentences with similar semantics.
Experiments on two datasets show the effectiveness of our proposed approach compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-01-23T12:45:00Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
- Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation [50.01708049531156]
We focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer.
Specifically, the input is a set of structured records and a reference text describing another recordset.
The output is a summary that accurately describes the partial content of the source recordset in the same writing style as the reference.
arXiv Detail & Related papers (2020-02-24T12:52:10Z)