Syntax Matters! Syntax-Controlled in Text Style Transfer
- URL: http://arxiv.org/abs/2108.05869v1
- Date: Thu, 12 Aug 2021 17:35:23 GMT
- Title: Syntax Matters! Syntax-Controlled in Text Style Transfer
- Authors: Zhiqiang Hu, Roy Ka-Wei Lee, Charu C. Aggarwal
- Abstract summary: Existing text style transfer (TST) methods rely on style classifiers to disentangle the text's content and style attributes.
We propose a novel Syntax-Aware Controllable Generation (SACG) model, which includes a syntax-aware style classifier.
We show that our proposed method significantly outperforms the state-of-the-art methods.
- Score: 24.379552683296392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing text style transfer (TST) methods rely on style
classifiers to disentangle the text's content and style attributes.
While the style classifier plays a critical role in existing TST methods, its
effect on those methods has not been investigated. In this paper, we
conduct an empirical study on the limitations of the style classifiers used in
existing TST methods. We demonstrate that the existing style classifiers cannot
learn sentence syntax effectively and ultimately worsen existing TST models'
performance. To address this issue, we propose a novel Syntax-Aware
Controllable Generation (SACG) model, which includes a syntax-aware style
classifier that ensures learned style latent representations effectively
capture the syntax information for TST. Through extensive experiments on two
popular TST tasks, we show that our proposed method significantly outperforms
the state-of-the-art methods. Our case studies also demonstrate SACG's
ability to generate fluent target-style sentences that preserve the original
content.
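The abstract describes the syntax-aware style classifier only at a high level, so the sketch below is a minimal, hypothetical illustration of the general idea rather than the authors' SACG implementation: the classifier consumes a parallel sequence of syntactic labels (POS-tag ids, an assumption here) alongside the word tokens, so its style decision cannot ignore sentence structure. All names and dimensions are illustrative.

```python
# Hypothetical sketch -- NOT the paper's SACG model. It illustrates one way to
# make a style classifier syntax-aware: feed it a syntactic channel (POS tags)
# in parallel with the lexical channel, so the learned representation must
# encode sentence structure as well as word choice.
import torch
import torch.nn as nn

class SyntaxAwareStyleClassifier(nn.Module):
    def __init__(self, vocab_size, num_pos_tags, num_styles, dim=128):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, dim)    # lexical channel
        self.pos_emb = nn.Embedding(num_pos_tags, dim)  # syntactic channel
        self.encoder = nn.GRU(2 * dim, dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * dim, num_styles)

    def forward(self, token_ids, pos_ids):
        # Every timestep carries both content and syntax information.
        x = torch.cat([self.tok_emb(token_ids), self.pos_emb(pos_ids)], dim=-1)
        _, h = self.encoder(x)                  # h: (2, batch, dim)
        sent = torch.cat([h[0], h[1]], dim=-1)  # final forward/backward states
        return self.head(sent)                  # style logits

# Toy usage: a batch of 2 sentences of length 5.
clf = SyntaxAwareStyleClassifier(vocab_size=1000, num_pos_tags=20, num_styles=2)
tokens = torch.randint(0, 1000, (2, 5))
pos_tags = torch.randint(0, 20, (2, 5))
print(clf(tokens, pos_tags).shape)  # torch.Size([2, 2])
```

In practice the POS ids would come from an off-the-shelf tagger; the key design point is that style supervision backpropagates through the syntactic channel, so the learned style representation cannot discard syntax.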
Related papers
- Style-Specific Neurons for Steering LLMs in Text Style Transfer [55.06697862691798]
Text style transfer (TST) aims to modify the style of a text without altering its original meaning.
We present sNeuron-TST, a novel approach for steering large language models using style-specific neurons.
arXiv Detail & Related papers (2024-10-01T11:25:36Z)
- Text Style Transfer: An Introductory Overview [0.1534667887016089]
Text Style Transfer (TST) is a pivotal task in natural language generation to manipulate text style attributes while preserving style-independent content.
The attributes targeted in TST can vary widely, including politeness, authorship, mitigation of offensive language, modification of feelings, and adjustment of text formality.
This paper provides an introductory overview of TST, addressing its challenges, existing approaches, datasets, evaluation measures, subtasks, and applications.
arXiv Detail & Related papers (2024-07-20T09:54:55Z)
- Style Mixture of Experts for Expressive Text-To-Speech Synthesis [7.6732312922460055]
StyleMoE is an approach that addresses the issue of learning averaged style representations in the style encoder.
The proposed method replaces the style encoder in a TTS framework with a Mixture of Experts layer (a minimal sketch appears after this list).
Our experiments, both objective and subjective, demonstrate improved style transfer for diverse and unseen reference speech.
arXiv Detail & Related papers (2024-06-05T22:17:47Z)
- Expressive TTS Driven by Natural Language Prompts Using Few Human Annotations [12.891344121936902]
Expressive text-to-speech (TTS) aims to synthesize speech with human-like tones, moods, or even artistic attributes.
Recent advancements in TTS empower users with the ability to directly control synthesis style through natural language prompts.
We present FreeStyleTTS (FS-TTS), a controllable expressive TTS model with minimal human annotations.
arXiv Detail & Related papers (2023-11-02T14:20:37Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate the language-guided style transfer task as a non-autoregressive token sequence translation.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- On Text Style Transfer via Style Masked Language Models [5.754152248672319]
Text Style Transfer (TST) can be performed through approaches such as latent space disentanglement, cycle-consistency losses, and prototype editing.
We present a prototype-editing approach with two key phases: a) masking of source-style-associated tokens, and b) reconstruction of this source-style-masked sentence conditioned on the target style (a second sketch appears after this list).
We empirically show that this non-generative approach suits the "content preserving" criterion of a task like TST well, even against a complex baseline like Discourse.
arXiv Detail & Related papers (2022-10-12T16:44:06Z)
- VAE based Text Style Transfer with Pivot Words Enhancement Learning [5.717913255287939]
We propose a VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method.
We introduce pivot words learning, which is applied to learn decisive words for a specific style.
The proposed VT-STOWER can be scaled to different TST scenarios with a novel and flexible style strength control mechanism.
arXiv Detail & Related papers (2021-12-06T16:41:26Z)
- Fine-grained style control in Transformer-based Text-to-speech Synthesis [78.92428622630861]
We present a novel architecture to realize fine-grained style control in Transformer-based text-to-speech synthesis (TransformerTTS).
We model the speaking style by extracting a time sequence of local style tokens (LST) from the reference speech.
Experiments show that with fine-grained style control, our system performs better in terms of naturalness, intelligibility, and style transferability.
arXiv Detail & Related papers (2021-10-12T19:50:02Z)
- Transductive Learning for Unsupervised Text Style Transfer [60.65782243927698]
Unsupervised style transfer models are mainly based on an inductive learning approach.
We propose a novel transductive learning approach based on a retrieval-based context-aware style representation.
arXiv Detail & Related papers (2021-09-16T08:57:20Z)
- StylePTB: A Compositional Benchmark for Fine-grained Controllable Text Style Transfer [90.6768813620898]
Style transfer aims to controllably generate text with targeted stylistic changes while keeping the core meaning of the source sentence constant.
We introduce a large-scale benchmark, StylePTB, with paired sentences undergoing 21 fine-grained stylistic changes spanning atomic lexical, syntactic, semantic, and thematic transfers of text.
We find that existing methods on StylePTB struggle to model fine-grained changes and have an even more difficult time composing multiple styles.
arXiv Detail & Related papers (2021-04-12T04:25:09Z)
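Two entries above describe their mechanisms concretely enough to sketch. First, the Mixture-of-Experts idea from the StyleMoE entry: a hypothetical, minimal MoE layer standing in for a single style encoder, where a gating network softly weighs several small expert encoders per reference. Dimensions, names, and the pooled-feature input are assumptions, not the StyleMoE implementation.

```python
# Hypothetical sketch -- not StyleMoE itself. A gating network produces soft
# routing weights over several expert encoders, so no single encoder has to
# learn an averaged representation of every style in the training data.
import torch
import torch.nn as nn

class StyleMoE(nn.Module):
    def __init__(self, in_dim=80, style_dim=128, num_experts=4):
        super().__init__()
        # Each expert is a small MLP mapping pooled reference features
        # (e.g., averaged mel frames) to a style vector.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, style_dim), nn.Tanh(),
                          nn.Linear(style_dim, style_dim))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(in_dim, num_experts)  # soft routing weights

    def forward(self, ref):  # ref: (batch, in_dim)
        weights = torch.softmax(self.gate(ref), dim=-1)            # (B, E)
        outs = torch.stack([e(ref) for e in self.experts], dim=1)  # (B, E, D)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)           # (B, D)

style_vec = StyleMoE()(torch.randn(3, 80))
print(style_vec.shape)  # torch.Size([3, 128])
```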
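Second, the two-phase prototype-editing idea from the style-masked language model entry. The toy below hard-codes a style lexicon and filler words purely for illustration; in the paper's setting, phase (b) would be a language model conditioned on the target style rather than a lookup.

```python
# Toy illustration of style-masked prototype editing, not the paper's system.
# Phase (a): mask tokens associated with the source style.
# Phase (b): fill the masks with target-style tokens, leaving content intact.
STYLE_LEXICON = {
    "negative": {"terrible", "bland", "rude"},
    "positive": {"wonderful", "friendly"},
}
# Stand-in for a target-style masked LM: fixed fillers, used left to right.
FILLERS = {"positive": ["wonderful", "friendly"]}

def mask_style_tokens(sentence, source_style):
    return ["[MASK]" if w in STYLE_LEXICON[source_style] else w
            for w in sentence.split()]

def reconstruct(masked_tokens, target_style):
    fills = iter(FILLERS[target_style])
    return " ".join(next(fills, "[MASK]") if w == "[MASK]" else w
                    for w in masked_tokens)

src = "the food was terrible and the staff were rude"
masked = mask_style_tokens(src, "negative")
print(" ".join(masked))                 # the food was [MASK] and the staff were [MASK]
print(reconstruct(masked, "positive"))  # the food was wonderful and the staff were friendly
```

Because content words are never rewritten, the approach is content-preserving by construction, which is the property the entry highlights.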
This list is automatically generated from the titles and abstracts of the papers on this site.