Conversation Style Transfer using Few-Shot Learning
- URL: http://arxiv.org/abs/2302.08362v2
- Date: Thu, 21 Sep 2023 23:14:02 GMT
- Title: Conversation Style Transfer using Few-Shot Learning
- Authors: Shamik Roy, Raphael Shu, Nikolaos Pappas, Elman Mansimov, Yi Zhang,
Saab Mansour and Dan Roth
- Abstract summary: In this paper, we introduce conversation style transfer as a few-shot learning problem.
We propose a novel in-context learning approach to solve the task with style-free dialogues as a pivot.
We show that conversation style transfer can also benefit downstream tasks.
- Score: 56.43383396058639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional text style transfer approaches focus on sentence-level style
transfer without considering contextual information, and the style is described
with attributes (e.g., formality). When applying style transfer in
conversations such as task-oriented dialogues, existing approaches suffer from
these limitations as context can play an important role and the style
attributes are often difficult to define in conversations. In this paper, we
introduce conversation style transfer as a few-shot learning problem, where the
model learns to perform style transfer by observing only a few example
dialogues in the target style. We propose a novel in-context learning approach
to solve the task with style-free dialogues as a pivot. Human evaluation shows
that by incorporating multi-turn context, the model is able to match the target
style while having better appropriateness and semantic correctness compared to
utterance/sentence-level style transfer. Additionally, we show that
conversation style transfer can also benefit downstream tasks. For example, in
multi-domain intent classification tasks, the F1 scores improve after
transferring the style of training data to match the style of the test data.
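To make the pivot idea concrete, below is a minimal sketch of the few-shot, in-context transfer described in the abstract: the source conversation is first rewritten into a style-free pivot and then into the target style, with each step conditioned on a handful of demonstration dialogues. The `complete` callable, the prompt wording, and the helper names are illustrative assumptions, not the authors' exact prompts or implementation.

```python
# Minimal sketch of pivot-based, few-shot conversation style transfer.
# Assumptions (not taken from the paper): `complete` wraps some LLM text
# completion endpoint; prompt wording and helper names are illustrative.

from typing import Callable, List, Tuple

Dialogue = List[str]                   # one utterance per turn
DemoPair = Tuple[Dialogue, Dialogue]   # (input dialogue, rewritten dialogue)

def build_prompt(instruction: str, demos: List[DemoPair], query: Dialogue) -> str:
    """Assemble an in-context learning prompt from a few demonstration dialogues."""
    parts = [instruction]
    for src, tgt in demos:
        parts.append("Input dialogue:\n" + "\n".join(src))
        parts.append("Rewritten dialogue:\n" + "\n".join(tgt))
    parts.append("Input dialogue:\n" + "\n".join(query))
    parts.append("Rewritten dialogue:")
    return "\n\n".join(parts)

def transfer_style(source: Dialogue,
                   to_pivot_demos: List[DemoPair],   # (styled, style-free) examples
                   to_target_demos: List[DemoPair],  # (style-free, target-style) examples
                   complete: Callable[[str], str]) -> str:
    """Source style -> style-free pivot -> target style, each step via few-shot prompting."""
    # Step 1: strip the source style, keeping only the content (the pivot).
    pivot_prompt = build_prompt(
        "Rewrite the dialogue in a neutral, style-free way, preserving its meaning.",
        to_pivot_demos, source)
    style_free = complete(pivot_prompt).strip().split("\n")

    # Step 2: render the style-free pivot in the target style shown by the examples.
    target_prompt = build_prompt(
        "Rewrite the dialogue in the style of the examples, preserving its meaning.",
        to_target_demos, style_free)
    return complete(target_prompt)
```

In the downstream setting mentioned in the abstract, the same routine could be applied to the training dialogues of an intent classifier so that their style matches the test data before training.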
Related papers
- SETTP: Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning [22.04285529067442]
Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning (SETTP) is proposed.
SETTP learns source style-level prompts containing fundamental style characteristics from high-resource style transfer.
Experiments show SETTP requires only 1/20th of the data volume to achieve performance comparable to state-of-the-art methods.
arXiv Detail & Related papers (2024-07-22T11:34:48Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- Don't lose the message while paraphrasing: A study on content preserving style transfer [61.38460184163704]
Content preservation is critical for real-world applications of style transfer.
Using formality transfer as the example domain, we conduct a precise comparative study of several state-of-the-art style transfer techniques.
arXiv Detail & Related papers (2023-08-17T15:41:08Z)
- TranSTYLer: Multimodal Behavioral Style Transfer for Facial and Body Gestures Generation [2.7317088388886384]
This paper addresses the challenge of transferring the behavior expressivity style of a virtual agent to another one.
We propose a multimodal transformer based model that synthesizes the multimodal behaviors of a source speaker with the style of a target speaker.
arXiv Detail & Related papers (2023-08-08T15:42:35Z)
- StoryTrans: Non-Parallel Story Author-Style Transfer with Discourse Representations and Content Enhancing [73.81778485157234]
Long texts usually involve more complicated author linguistic preferences such as discourse structures than sentences.
We formulate the task of non-parallel story author-style transfer, which requires transferring an input story into a specified author style.
We use an additional training objective to disentangle stylistic features from the learned discourse representation to prevent the model from degenerating to an auto-encoder.
arXiv Detail & Related papers (2022-08-29T08:47:49Z)
- Spoken Style Learning with Multi-modal Hierarchical Context Encoding for Conversational Text-to-Speech Synthesis [59.27994987902646]
Research on learning spoken styles from historical conversations is still in its infancy.
Existing approaches consider only the transcripts of historical conversations, neglecting the spoken styles in the historical speech.
We propose a spoken style learning approach with multi-modal hierarchical context encoding.
arXiv Detail & Related papers (2021-06-11T08:33:52Z)
- Exploring Contextual Word-level Style Relevance for Unsupervised Style Transfer [60.07283363509065]
Unsupervised style transfer aims to change the style of an input sentence while preserving its original content.
We propose a novel attentional sequence-to-sequence model that exploits the relevance of each output word to the target style.
Experimental results show that our proposed model achieves state-of-the-art performance in terms of both transfer accuracy and content preservation.
arXiv Detail & Related papers (2020-05-05T10:24:28Z)
- ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning [14.271083093944753]
Text style transfer aims to paraphrase a sentence in one style into another while preserving content.
Due to the lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content.
In this work, we develop a meta-learning framework to transfer between any kind of text style.
arXiv Detail & Related papers (2020-04-24T13:36:38Z)