Multi-Pair Text Style Transfer on Unbalanced Data
- URL: http://arxiv.org/abs/2106.10608v1
- Date: Sun, 20 Jun 2021 03:20:43 GMT
- Title: Multi-Pair Text Style Transfer on Unbalanced Data
- Authors: Xing Han, Jessica Lundin
- Abstract summary: Text-style transfer aims to convert text given in one domain into another by paraphrasing the sentence or substituting the keywords without altering the content.
We developed a task adaptive meta-learning framework that can simultaneously perform a multi-pair text-style transfer.
Results show that our method leads to better quantitative performance as well as coherent style variations.
- Score: 3.4773470589069473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-style transfer aims to convert text given in one domain into another by
paraphrasing the sentence or substituting the keywords without altering the
content. By necessity, state-of-the-art methods have evolved to accommodate
nonparallel training data, as it is frequently the case that there are multiple data
sources of unequal size, with a mixture of labeled and unlabeled sentences.
Moreover, the inherent style defined within each source might be distinct. A
generic bidirectional (e.g., formal $\Leftrightarrow$ informal) style transfer
regardless of different groups may not generalize well to different
applications. In this work, we developed a task-adaptive meta-learning
framework that can simultaneously perform multi-pair text-style transfer
using a single model. The proposed method can adaptively balance differences
in meta-knowledge across multiple tasks. Results show that our method leads to
better quantitative performance as well as coherent style variations. Common
challenges of unbalanced data and mismatched domains are handled well by this
method.
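The task-adaptive balancing described above can be sketched as a first-order, MAML-style loop over unbalanced tasks. Everything below is an illustrative assumption, not the paper's actual method: the linear toy model, the size-proportional task weights, and the learning rates are stand-ins for a neural sequence model and a learned balancing scheme.

```python
# Toy sketch: meta-learn one shared initialization across unbalanced
# "style pair" tasks, weighting each task's meta-gradient by its data
# size (a crude stand-in for adaptive balancing of meta-knowledge).
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Mean squared error and its gradient for a linear model X @ w."""
    err = X @ w - y
    return float(err @ err) / len(y), 2.0 * (X.T @ err) / len(y)

def meta_train(tasks, dim, inner_lr=0.05, outer_lr=0.02, steps=300):
    w = np.zeros(dim)                        # shared meta-initialization
    sizes = np.array([len(y) for _, y in tasks], dtype=float)
    weights = sizes / sizes.sum()            # unbalanced tasks -> unequal weights
    for _ in range(steps):
        meta_grad = np.zeros(dim)
        for wt, (X, y) in zip(weights, tasks):
            _, g = loss_and_grad(w, X, y)
            w_task = w - inner_lr * g        # inner (task-specific) adaptation
            _, g_query = loss_and_grad(w_task, X, y)
            meta_grad += wt * g_query        # weighted first-order meta-gradient
        w -= outer_lr * meta_grad            # outer (meta) update
    return w

# Two toy "style pairs" sharing one underlying mapping, 100 vs 10 examples.
true_w = np.array([1.0, -2.0])

def make_task(n):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w

tasks = [make_task(100), make_task(10)]
w_init = meta_train(tasks, dim=2)
```

Because the toy tasks share one underlying mapping, the learned initialization converges toward it; in the real setting each pair would carry its own support/query split and the model would be a neural text generator.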
Related papers
- Unsupervised Text Style Transfer via LLMs and Attention Masking with
Multi-way Interactions [18.64326057581588]
Unsupervised Text Style Transfer (UTST) has emerged as a critical task within the domain of Natural Language Processing (NLP).
We propose four ways of interaction: a pipeline framework with tuned orders; knowledge distillation from Large Language Models (LLMs) to an attention-masking model; and in-context learning with constructed parallel examples.
We empirically show that these multi-way interactions can improve on the baselines in certain aspects of style strength, content preservation, and text fluency.
arXiv Detail & Related papers (2024-02-21T09:28:02Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate these tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive, and in some cases better, performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Generic resources are what you need: Style transfer tasks without task-specific parallel training data [4.181049191386633]
Style transfer aims to rewrite a source text in a different target style while preserving its content.
We propose a novel approach to this task that leverages generic resources.
We adopt a multi-step procedure which builds on a generic pre-trained sequence-to-sequence model.
arXiv Detail & Related papers (2021-09-09T20:15:02Z)
- Fake it Till You Make it: Self-Supervised Semantic Shifts for Monolingual Word Embedding Tasks [58.87961226278285]
We propose a self-supervised approach to model lexical semantic change.
We show that our method can be used for the detection of semantic change with any alignment method.
We illustrate the utility of our techniques using experimental results on three different datasets.
arXiv Detail & Related papers (2021-01-30T18:59:43Z)
- Conditioned Text Generation with Transfer for Closed-Domain Dialogue Systems [65.48663492703557]
We show how to optimally train and control the generation of intent-specific sentences using a conditional variational autoencoder.
We introduce a new protocol called query transfer that allows leveraging a large unlabelled dataset.
arXiv Detail & Related papers (2020-11-03T14:06:10Z)
- Multi-Style Transfer with Discriminative Feedback on Disjoint Corpus [9.793194158416854]
Style transfer has been widely explored in natural language generation with non-parallel corpus.
A common shortcoming of existing approaches is the prerequisite of joint annotations across all the stylistic dimensions.
We show the ability of our model to control styles across multiple style dimensions while preserving content of the input text.
arXiv Detail & Related papers (2020-10-22T10:16:29Z)
- Contextual Text Style Transfer [73.66285813595616]
Contextual Text Style Transfer aims to translate a sentence into a desired style with its surrounding context taken into account.
We propose a Context-Aware Style Transfer (CAST) model, which uses two separate encoders for each input sentence and its surrounding context.
Two new benchmarks, Enron-Context and Reddit-Context, are introduced for formality and offensiveness style transfer.
arXiv Detail & Related papers (2020-04-30T23:01:12Z)
- ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning [14.271083093944753]
Text style transfer aims to paraphrase a sentence in one style into another while preserving content.
Due to the lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content.
In this work, we develop a meta-learning framework to transfer between any kind of text styles.
arXiv Detail & Related papers (2020-04-24T13:36:38Z)
- Learning to Generate Multiple Style Transfer Outputs for an Input Sentence [93.32432042381572]
We propose a one-to-many text style transfer framework to generate different style transfer results for a given input text.
We decompose the latent representation of the input sentence to a style code that captures the language style variation.
By combining the same content code with a different style code, we generate a different style transfer output.
arXiv Detail & Related papers (2020-02-16T07:10:45Z)
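The content-code/style-code decomposition in the last entry can be illustrated with a toy latent split. The hash-based "encoder" and the fixed half-and-half split below are purely hypothetical stand-ins for the learned neural encoders and decoder in the actual paper.

```python
# Toy illustration of one-to-many style transfer via code decomposition:
# split a sentence latent into a content code and a style code, then
# recombine one content code with several style codes to get several
# candidate latents (each of which a decoder would render as text).
import hashlib
import numpy as np

DIM = 8  # first half = content code, second half = style code (toy split)

def encode(sentence):
    """Deterministic toy embedding; a real system uses a trained encoder."""
    seed = int.from_bytes(hashlib.sha256(sentence.encode()).digest()[:4], "big")
    z = np.random.default_rng(seed).normal(size=DIM)
    return z[: DIM // 2], z[DIM // 2 :]  # (content_code, style_code)

def combine(content_code, style_code):
    """Latent that a (hypothetical) decoder would turn back into text."""
    return np.concatenate([content_code, style_code])

content, _ = encode("We regret the delay in shipping your order.")
_, style_a = encode("sorry it's late!!")           # borrow another style code
_, style_b = encode("Apologies for the holdup.")

# One content code x two style codes -> two distinct transfer latents.
z_a = combine(content, style_a)
z_b = combine(content, style_b)
```

The two latents share the same content half but differ in the style half, which is exactly the mechanism that lets a single input sentence yield multiple style-transfer outputs.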
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.