TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling
- URL: http://arxiv.org/abs/2010.03802v3
- Date: Wed, 23 Jun 2021 06:16:15 GMT
- Title: TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling
- Authors: Parker Riley, Noah Constant, Mandy Guo, Girish Kumar, David Uthus,
Zarana Parekh
- Abstract summary: We present a novel approach to the problem of text style transfer.
Our method makes use of readily-available unlabeled text by relying on the implicit connection in style between adjacent sentences.
We demonstrate that training on unlabeled Amazon reviews data results in a model that is competitive on sentiment transfer.
- Score: 23.60231661500702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel approach to the problem of text style transfer. Unlike
previous approaches requiring style-labeled training data, our method makes use
of readily-available unlabeled text by relying on the implicit connection in
style between adjacent sentences, and uses labeled data only at inference time.
We adapt T5 (Raffel et al., 2020), a strong pretrained text-to-text model, to
extract a style vector from text and use it to condition the decoder to perform
style transfer. As our label-free training results in a style vector space
encoding many facets of style, we recast transfers as "targeted restyling"
vector operations that adjust specific attributes of the input while preserving
others. We demonstrate that training on unlabeled Amazon reviews data results
in a model that is competitive on sentiment transfer, even compared to models
trained fully on labeled data. Furthermore, applying our novel method to a
diverse corpus of unlabeled web text results in a single model capable of
transferring along multiple dimensions of style (dialect, emotiveness,
formality, politeness, sentiment) despite no additional training and using only
a handful of exemplars at inference time.
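A minimal sketch may make the "targeted restyling" arithmetic described above concrete: a style delta is estimated from a handful of source- and target-style exemplars and added, scaled, to the input's extracted style vector. The helper extract_style, the vector size, and the scaling factor lam are illustrative assumptions for this sketch, not the authors' released API.

    import numpy as np

    def extract_style(text: str) -> np.ndarray:
        # Hypothetical stand-in for the adapted T5 style extractor from the
        # paper; hashing the text into a fixed-size vector just lets the
        # sketch run end to end.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(128)

    def targeted_restyle(input_text, src_exemplars, tgt_exemplars, lam=1.0):
        # Shift the input's style vector along the source -> target direction.
        # The delta between mean exemplar vectors captures the one attribute
        # being changed (e.g. negative -> positive sentiment); other facets
        # of style encoded in the input's vector are left untouched.
        src_mean = np.mean([extract_style(t) for t in src_exemplars], axis=0)
        tgt_mean = np.mean([extract_style(t) for t in tgt_exemplars], axis=0)
        return extract_style(input_text) + lam * (tgt_mean - src_mean)

    # A handful of exemplars per style suffices at inference time.
    negative = ["Terrible product, broke after one day.",
                "Would not recommend this to anyone."]
    positive = ["Absolutely love it, works perfectly.",
                "Great value, exceeded my expectations."]
    style_vec = targeted_restyle(
        "The battery life is disappointing.", negative, positive, lam=1.3)
    # style_vec would then condition the decoder to rewrite the input.

Because the transfer direction is a vector difference rather than a learned label, the same trained model can be pointed at any attribute (dialect, formality, politeness, ...) simply by swapping the exemplar sets, and lam tunes the transfer strength.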
Related papers
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- MSSRNet: Manipulating Sequential Style Representation for Unsupervised Text Style Transfer [82.37710853235535]
The unsupervised text style transfer task aims to rewrite a text in a target style while preserving its main content.
Traditional methods rely on a fixed-size vector to regulate text style, which makes it difficult to accurately convey the style strength of each individual token.
Our proposed method addresses this issue by assigning an individual style vector to each token in a text, allowing for fine-grained control and manipulation of the style strength (a minimal sketch of this contrast follows the list below).
arXiv Detail & Related papers (2023-06-12T13:12:29Z)
- StylerDALLE: Language-Guided Style Transfer Using a Vector-Quantized Tokenizer of a Large-Scale Generative Model [64.26721402514957]
We propose StylerDALLE, a style transfer method that uses natural language to describe abstract art styles.
Specifically, we formulate language-guided style transfer as a non-autoregressive token-sequence translation task.
To incorporate style information, we propose a Reinforcement Learning strategy with CLIP-based language supervision.
arXiv Detail & Related papers (2023-03-16T12:44:44Z)
- Conversation Style Transfer using Few-Shot Learning [56.43383396058639]
In this paper, we introduce conversation style transfer as a few-shot learning problem.
We propose a novel in-context learning approach that solves the task by using style-free dialogues as a pivot.
We show that conversation style transfer can also benefit downstream tasks.
arXiv Detail & Related papers (2023-02-16T15:27:00Z)
- StyleFlow: Disentangle Latent Representations via Normalizing Flow for Unsupervised Text Style Transfer [5.439842512864442]
Style transfer aims to alter the style of a sentence while preserving its content.
In this paper, we propose StyleFlow, a novel disentanglement-based style transfer model designed to enhance content preservation.
arXiv Detail & Related papers (2022-12-19T17:59:18Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
- Contextual Text Style Transfer [73.66285813595616]
Contextual Text Style Transfer aims to translate a sentence into a desired style with its surrounding context taken into account.
We propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders: one for the input sentence and one for its surrounding context.
Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer.
arXiv Detail & Related papers (2020-04-30T23:01:12Z)
- ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning [14.271083093944753]
Text style transfer aims to paraphrase a sentence in one style into another while preserving content.
Due to the lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content.
In this work, we develop a meta-learning framework to transfer between any kind of text styles.
arXiv Detail & Related papers (2020-04-24T13:36:38Z)
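The fixed-vector versus per-token contrast drawn in the MSSRNet entry above can be illustrated with a small sketch. Nothing below reflects MSSRNet's actual architecture; the tensor shapes, the additive injection, and the per-token scaling are assumptions made only to show why token-level style vectors allow finer control than a single sequence-level one.

    import numpy as np

    seq_len, d_model, d_style = 6, 16, 8
    token_reprs = np.random.randn(seq_len, d_model)  # one row per token

    # Fixed-size control: a single style vector broadcast over the sequence,
    # so every token receives exactly the same style signal.
    fixed_control = np.tile(np.random.randn(d_style), (seq_len, 1))

    # Per-token control: each token gets its own style vector, so the style
    # strength can differ token by token.
    per_token_control = np.random.randn(seq_len, d_style)
    per_token_control[2] *= 0.0   # e.g. leave a named entity untouched
    per_token_control[4] *= 2.0   # push an opinion word harder

    # One illustrative way to inject the signal: project it into the model
    # dimension and add it to each token representation.
    proj = np.random.randn(d_style, d_model)
    styled_fixed = token_reprs + fixed_control @ proj        # uniform style
    styled_per_token = token_reprs + per_token_control @ proj  # per-token style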
This list is automatically generated from the titles and abstracts of the papers on this site.