LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer
- URL: http://arxiv.org/abs/2105.08206v1
- Date: Tue, 18 May 2021 00:08:30 GMT
- Title: LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer
- Authors: Machel Reid and Victor Zhong
- Abstract summary: We propose a coarse-to-fine editor for style transfer that transforms text using Levenshtein edit operations.
Unlike prior single-span edit methods, our method concurrently edits multiple spans in the source text.
Our method outperforms existing generation and editing style transfer methods on sentiment (Yelp, Amazon) and politeness (Polite) transfer.
- Score: 14.507559615347304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many types of text style transfer can be achieved with only small, precise
edits (e.g. sentiment transfer from "I had a terrible time..." to "I had a great
time..."). We propose a coarse-to-fine editor for style transfer that transforms
text using Levenshtein edit operations (e.g. insert, replace, delete). Unlike
prior single-span edit methods, our method concurrently edits multiple spans in
the source text. To train without parallel style text pairs (e.g. pairs of +/-
sentiment statements), we propose an unsupervised data synthesis procedure. We
first convert text to style-agnostic templates using style classifier attention
(e.g. "I had a SLOT time..."), then fill in slots in these templates using
fine-tuned pretrained language models. Our method outperforms existing
generation and editing style transfer methods on sentiment (Yelp, Amazon) and
politeness (Polite) transfer. In particular, multi-span editing achieves higher
performance and more diverse output than single-span editing. Moreover,
compared to previous methods on unsupervised data synthesis, our method results
in higher quality parallel style pairs and improves model performance.
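As an illustration of the synthesis procedure described in the abstract, the sketch below masks the tokens a style classifier attends to most and fills the resulting slots with a pretrained masked language model. It is not the authors' code: the checkpoints and the attention-pooling heuristic are stand-in assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Stand-in checkpoints; the paper fine-tunes its own classifier and LMs.
clf_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(clf_name)
classifier = AutoModelForSequenceClassification.from_pretrained(
    clf_name, output_attentions=True
)
filler = pipeline("fill-mask", model="roberta-base")

def to_template(text, n_slots=1):
    """Blank out the token(s) the style classifier attends to most."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = classifier(**enc)
    # Heuristic salience: average the last layer's heads, read the [CLS] row.
    salience = out.attentions[-1].mean(dim=1)[0, 0]
    top = salience[1:-1].topk(n_slots).indices + 1  # skip [CLS] and [SEP]
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    for i in top:
        tokens[i] = filler.tokenizer.mask_token
    return tokenizer.convert_tokens_to_string(tokens[1:-1])

template = to_template("I had a terrible time at the restaurant.")
print(template)                         # e.g. "i had a <mask> time at the restaurant."
print(filler(template)[0]["sequence"])  # the masked LM fills the slot
```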
Related papers
- TinyStyler: Efficient Few-Shot Text Style Transfer with Authorship Embeddings [51.30454130214374]
We introduce TinyStyler, a lightweight but effective approach to perform efficient, few-shot text style transfer.
We evaluate TinyStyler's ability to perform text attribute style transfer with automatic and human evaluations.
Our model has been made publicly available at https://huggingface.co/tinystyler/tinystyler.
arXiv Detail & Related papers (2024-06-21T18:41:22Z)
- Unsupervised Text Style Transfer via LLMs and Attention Masking with Multi-way Interactions [18.64326057581588]
Unsupervised Text Style Transfer (UTST) has emerged as a critical task within the domain of Natural Language Processing (NLP).
We propose four ways of interaction, including a pipeline framework with tuned orders; knowledge distillation from Large Language Models (LLMs) to an attention masking model; and in-context learning with constructed parallel examples.
We empirically show that these multi-way interactions can improve over the baselines in certain aspects of style strength, content preservation, and text fluency.
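As a hedged sketch of the in-context-learning interaction (the prompt wording and demonstration pairs below are invented for illustration, not taken from the paper), constructed parallel examples can be packed into a few-shot prompt for an LLM:

```python
def build_icl_prompt(pairs, source, target_style="polite"):
    """Pack constructed parallel pairs into a few-shot rewrite prompt."""
    lines = [f"Rewrite each sentence to be {target_style} while keeping its meaning."]
    for src, tgt in pairs:
        lines.append(f"Input: {src}\nOutput: {tgt}")
    lines.append(f"Input: {source}\nOutput:")
    return "\n\n".join(lines)

demos = [  # invented demonstrations, not from the paper
    ("Send me the report now.", "Could you please send me the report?"),
    ("You're wrong about this.", "I see it a bit differently; could we revisit this?"),
]
print(build_icl_prompt(demos, "Stop emailing me."))
```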
arXiv Detail & Related papers (2024-02-21T09:28:02Z)
- Copy Is All You Need [66.00852205068327]
We formulate text generation as progressively copying text segments from an existing text collection.
Our approach achieves better generation quality according to both automatic and human evaluations.
Our approach attains additional performance gains by simply scaling up to larger text collections.
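A toy sketch of the copy-as-generation idea (not the paper's model, which scores segments with learned phrase representations rather than exact word matches):

```python
from collections import defaultdict

corpus = [
    "the food was great and the staff were friendly",
    "the staff were friendly but the wait was long",
]

# Index each word to the corpus segments that start with it (up to 4 tokens).
index = defaultdict(list)
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        index[w].append(words[i:i + 4])

def generate(prefix, steps=3):
    out = prefix.split()
    for _ in range(steps):
        candidates = index.get(out[-1], [])
        if not candidates:
            break
        seg = max(candidates, key=len)  # toy scoring: prefer the longest segment
        out.extend(seg[1:])             # copy everything after the matching word
    return " ".join(out)

print(generate("the food"))
# -> "the food was great and the staff were friendly but the"
```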
arXiv Detail & Related papers (2023-07-13T05:03:26Z)
- SimpleStyle: An Adaptable Style Transfer Approach [6.993665837027786]
We present SimpleStyle, a minimalist yet effective approach for style-transfer composed of two simple ingredients: controlled denoising and output filtering.
We apply SimpleStyle to transfer a wide range of text attributes appearing in real-world textual data from social networks.
We show that teaching a student model to generate the output of SimpleStyle can result in a system that performs style transfer of equivalent quality with only a single greedy-decoded sample.
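The output-filtering ingredient can be sketched in isolation; the controlled-denoising step is omitted, and the classifier checkpoint and threshold below are illustrative assumptions rather than the paper's setup:

```python
from transformers import pipeline

# Illustrative off-the-shelf style classifier for the filtering step.
style_clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def filter_candidates(candidates, target_label="POSITIVE", threshold=0.9):
    """Keep rewrites the classifier confidently assigns to the target style."""
    kept = []
    for text in candidates:
        pred = style_clf(text)[0]
        if pred["label"] == target_label and pred["score"] >= threshold:
            kept.append(text)
    return kept

candidates = ["I had a great time.", "I had a terrible time.", "It was fine."]
print(filter_candidates(candidates))  # keeps only confidently positive rewrites
```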
arXiv Detail & Related papers (2022-12-20T18:12:49Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate text revision tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves performance competitive with, and on text simplification even better than, state-of-the-art supervised methods.
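The control flow of iterative in-place revision can be caricatured as a local search over edits. This is only the loop's shape, not the paper's method, which optimizes pretrained-model representations on the fly; the toy scorer and deletion-only edit space are stand-ins:

```python
def score(words):
    # Toy objective penalizing immediate word repetition; a real system
    # would score fluency/simplicity with a pretrained model instead.
    return -sum(a == b for a, b in zip(words, words[1:]))

def revise(sentence, max_iters=10):
    words = sentence.split()
    for _ in range(max_iters):
        best, best_score = None, score(words)
        for i in range(len(words)):  # deletion-only edit space, for brevity
            cand = words[:i] + words[i + 1:]
            if cand and score(cand) > best_score:
                best, best_score = cand, score(cand)
        if best is None:  # converged: no single edit improves the score
            break
        words = best
    return " ".join(words)

print(revise("the movie was very very extremely good"))
# -> "the movie was very extremely good"
```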
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Fine-grained style control in Transformer-based Text-to-speech Synthesis [78.92428622630861]
We present a novel architecture to realize fine-grained style control in Transformer-based text-to-speech synthesis (TransformerTTS).
We model the speaking style by extracting a time sequence of local style tokens (LST) from the reference speech.
Experiments show that with fine-grained style control, our system performs better in terms of naturalness, intelligibility, and style transferability.
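In shape terms only, extracting a time sequence of local style tokens from reference mel frames might look like the strided convolution below; the dimensions and module choice are assumptions, not the paper's extractor:

```python
import torch
import torch.nn as nn

mels = torch.randn(1, 80, 400)  # (batch, mel bins, frames) of reference speech
lst_extractor = nn.Conv1d(80, 128, kernel_size=8, stride=4)  # assumed extractor
local_style_tokens = lst_extractor(mels).transpose(1, 2)  # (1, 99, 128)
print(local_style_tokens.shape)  # one style token per ~4 reference frames
```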
arXiv Detail & Related papers (2021-10-12T19:50:02Z)
- Text Editing by Command [82.50904226312451]
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.
We show that our Interactive Editor, a transformer-based model trained on a dataset collected for this setting, outperforms baselines and obtains positive results in both automatic and human evaluations.
arXiv Detail & Related papers (2020-10-24T08:00:30Z)
- Learning to Generate Multiple Style Transfer Outputs for an Input Sentence [93.32432042381572]
We propose a one-to-many text style transfer framework to generate different style transfer results for a given input text.
We decompose the latent representation of the input sentence into a style code that captures the language style variation and a content code that encodes the style-independent content.
By combining the same content code with a different style code, we generate a different style transfer output.
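A shape-level sketch of the decomposition, with placeholder dimensions and randomly initialized linear modules; the paper learns real encoders and a decoder with style-transfer objectives:

```python
import torch
import torch.nn as nn

EMB, CONTENT, STYLE = 64, 48, 16  # made-up dimensions

class Decomposer(nn.Module):
    def __init__(self):
        super().__init__()
        self.content_enc = nn.Linear(EMB, CONTENT)  # style-independent content code
        self.style_enc = nn.Linear(EMB, STYLE)      # style variation code
        self.decoder = nn.Linear(CONTENT + STYLE, EMB)

    def transfer(self, sent_repr, style_code):
        content = self.content_enc(sent_repr)
        # Same content code + different style codes -> different outputs.
        return self.decoder(torch.cat([content, style_code], dim=-1))

model = Decomposer()
sent = torch.randn(1, EMB)  # placeholder sentence representation
for _ in range(3):          # three reference styles give three transfers
    ref = torch.randn(1, EMB)          # a reference sentence's representation
    style_code = model.style_enc(ref)  # extract its style code
    print(model.transfer(sent, style_code).shape)  # torch.Size([1, 64])
```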
arXiv Detail & Related papers (2020-02-16T07:10:45Z)