SETTP: Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning
- URL: http://arxiv.org/abs/2407.15556v1
- Date: Mon, 22 Jul 2024 11:34:48 GMT
- Title: SETTP: Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning
- Authors: Chunzhen Jin, Yongfeng Huang, Yaqi Wang, Peng Cao, Osmar Zaiane
- Abstract summary: Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning (SETTP) is proposed for text style transfer in low-resource scenarios.
SETTP learns source style-level prompts containing fundamental style characteristics from high-resource style transfer.
Experiments show SETTP requires only 1/20th of the data volume to achieve performance comparable to state-of-the-art methods.
- Score: 22.04285529067442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text style transfer, an important research direction in natural language processing, aims to adapt the text to various preferences but often faces challenges with limited resources. In this work, we introduce a novel method termed Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning (SETTP) for effective style transfer in low-resource scenarios. First, SETTP learns source style-level prompts containing fundamental style characteristics from high-resource style transfer. During training, the source style-level prompts are transferred through an attention module to derive a target style-level prompt for beneficial knowledge provision in low-resource style transfer. Additionally, we propose instance-level prompts obtained by clustering the target resources based on the semantic content to reduce semantic bias. We also propose an automated evaluation approach of style similarity based on alignment with human evaluations using ChatGPT-4. Our experiments across three resourceful styles show that SETTP requires only 1/20th of the data volume to achieve performance comparable to state-of-the-art methods. In tasks involving scarce data like writing style and role style, SETTP outperforms previous methods by 16.24%.
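To make the dual-level design concrete, here is a minimal sketch of how the two prompt levels could be realized, assuming a standard PyTorch/scikit-learn stack; the module name `PromptTransferAttention`, the shapes, and the choice of KMeans are illustrative assumptions, not details of the authors' implementation.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class PromptTransferAttention(nn.Module):
    """Illustrative sketch (not the authors' code): derive a target
    style-level prompt by attending over learned source style-level prompts."""

    def __init__(self, num_source_styles: int, prompt_len: int, dim: int):
        super().__init__()
        # One learned prompt per high-resource source style.
        self.source_prompts = nn.Parameter(
            torch.randn(num_source_styles, prompt_len, dim))
        # Learnable query standing in for the low-resource target style.
        self.target_query = nn.Parameter(torch.randn(1, prompt_len, dim))
        # dim must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self) -> torch.Tensor:
        # Treat every token of every source prompt as a key/value.
        bank = self.source_prompts.reshape(1, -1, self.source_prompts.size(-1))
        target_prompt, _ = self.attn(self.target_query, bank, bank)
        return target_prompt  # (1, prompt_len, dim), prepended to model inputs

def instance_level_prompts(embeddings: torch.Tensor, k: int) -> torch.Tensor:
    """Cluster target-resource sentence embeddings by semantic content and
    use the centroids as instance-level prompts -- one plausible reading of
    'clustering the target resources', not the paper's exact procedure."""
    # embeddings: (n_sentences, dim), assumed to be a CPU float tensor.
    centers = KMeans(n_clusters=k, n_init=10).fit(embeddings.numpy()).cluster_centers_
    return torch.from_numpy(centers).float()
```

At inference, the derived target style-level prompt and the instance-level prompt matching the input's cluster would both condition the backbone model; how SETTP combines the two levels is not specified by the abstract, so treat that wiring as a guess.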
Related papers
- Authorship Style Transfer with Policy Optimization [26.34892894935038]
Authorship style transfer aims to rewrite a given text into a specified target style while preserving the original meaning in the source.
Existing approaches rely on the availability of a large number of target style exemplars for model training.
arXiv Detail & Related papers (2024-03-12T19:34:54Z)
- STEER: Unified Style Transfer with Expert Reinforcement [71.3995732115262]
STEER (Unified Style Transfer with Expert Reinforcement) is a unified framework developed to overcome the challenge of limited parallel data for style transfer.
We show STEER is robust, maintaining its style transfer capabilities on out-of-domain data, and surpassing nearly all baselines across various styles.
arXiv Detail & Related papers (2023-11-13T09:02:30Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- MSSRNet: Manipulating Sequential Style Representation for Unsupervised Text Style Transfer [82.37710853235535]
The unsupervised text style transfer task aims to rewrite a text into a target style while preserving its main content.
Traditional methods rely on a fixed-size vector to regulate text style, which makes it difficult to accurately convey the style strength for each individual token.
Our proposed method addresses this issue by assigning an individual style vector to each token in a text, allowing fine-grained control and manipulation of the style strength (see the sketch below).
arXiv Detail & Related papers (2023-06-12T13:12:29Z)
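A rough illustration of the per-token style vectors described in the entry above; the module and its names are hypothetical, not from the MSSRNet release.

```python
import torch
import torch.nn as nn

class TokenStyleAssigner(nn.Module):
    """Hypothetical sketch: predict an individual style vector per token
    instead of one fixed-size vector for the whole sentence."""

    def __init__(self, hidden_dim: int, style_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, style_dim)

    def forward(self, token_states: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from any sentence encoder.
        style_per_token = self.proj(token_states)  # (batch, seq_len, style_dim)
        # Scaling each token's vector independently is what enables
        # fine-grained control of style strength.
        return strength * style_per_token
```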
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature (sketched below).
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
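The input-dependent temperature can be pictured as a small head that predicts a per-sample temperature for an InfoNCE-style loss; everything below (the names, the softplus floor, the cosine similarities) is an assumption-laden illustration, not the UCAST implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTempContrast(nn.Module):
    """Sketch: contrastive loss whose temperature depends on the input."""

    def __init__(self, dim: int):
        super().__init__()
        self.temp_head = nn.Linear(dim, 1)  # predicts a per-sample temperature

    def forward(self, anchor: torch.Tensor, positive: torch.Tensor,
                negatives: torch.Tensor) -> torch.Tensor:
        # anchor, positive: (batch, dim); negatives: (batch, n_neg, dim)
        tau = F.softplus(self.temp_head(anchor)) + 1e-4          # (batch, 1)
        pos = F.cosine_similarity(anchor, positive, dim=-1)      # (batch,)
        neg = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1)
        logits = torch.cat([pos.unsqueeze(1), neg], dim=1) / tau
        # The positive pair always sits at index 0 of the logits.
        labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
        return F.cross_entropy(logits, labels)
```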
- Conversation Style Transfer using Few-Shot Learning [56.43383396058639]
In this paper, we introduce conversation style transfer as a few-shot learning problem.
We propose a novel in-context learning approach that solves the task with style-free dialogues as a pivot (see the sketch below).
We show that conversation style transfer can also benefit downstream tasks.
arXiv Detail & Related papers (2023-02-16T15:27:00Z)
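The pivot-based in-context approach might be prompted roughly as follows; the two-step prompt wording and the triple format are invented placeholders, not the paper's prompts.

```python
def build_pivot_prompt(few_shot_triples, source_utterance):
    """Sketch: few-shot prompt that first maps an utterance to a style-free
    pivot, then renders the pivot in the target conversation style."""
    lines = ["Rewrite each utterance into a neutral, style-free form, "
             "then into the target conversation style.\n"]
    for styled, neutral, target in few_shot_triples:
        lines.append(f"Utterance: {styled}")
        lines.append(f"Style-free: {neutral}")
        lines.append(f"Target style: {target}\n")
    lines.append(f"Utterance: {source_utterance}")
    lines.append("Style-free:")  # the LM completes the pivot, then the target style
    return "\n".join(lines)
```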
- Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models [27.454582992694974]
We propose a method for arbitrary textual style transfer (TST).
Our method, Prompt-and-Rerank, is based on a mathematical formulation of the TST task (see the sketch below).
Empirically, our method enables small pre-trained language models to perform on par with state-of-the-art large-scale models.
arXiv Detail & Related papers (2022-05-23T17:57:15Z)
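As the name suggests, the recipe can be read as: sample several candidate rewrites from a small LM, then rerank them. Factoring the score into style, meaning preservation, and fluency follows the usual TST decomposition, and the helper callables below are assumed interfaces rather than the paper's exact formulation.

```python
def prompt_and_rerank(lm_generate, score_style, score_meaning, score_fluency,
                      source_text, target_style, n_candidates=8):
    """Sketch: propose candidates via prompting, rerank by a product of
    style, meaning-preservation, and fluency scores (all assumed helpers)."""
    prompt = f'Rewrite the text in {target_style} style.\nText: "{source_text}"\nRewrite: "'
    candidates = [lm_generate(prompt) for _ in range(n_candidates)]
    return max(candidates,
               key=lambda c: (score_style(c, target_style)
                              * score_meaning(source_text, c)
                              * score_fluency(c)))
```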
- VAE based Text Style Transfer with Pivot Words Enhancement Learning [5.717913255287939]
We propose a VAE based Text Style Transfer with pivOt Words Enhancement leaRning (VT-STOWER) method.
We introduce pivot words learning, which identifies the decisive words for a specific style (see the sketch below).
The proposed VT-STOWER can be scaled to different TST scenarios with a novel and flexible style strength control mechanism.
arXiv Detail & Related papers (2021-12-06T16:41:26Z)
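One crude stand-in for learning style-decisive pivot words is to rank words by pointwise mutual information with the style label, as below; this PMI heuristic is only an illustration, not VT-STOWER's actual pivot words learning.

```python
import math
from collections import Counter

def pivot_words(corpus, labels, target_style, top_k=20):
    """Sketch: rank words by PMI with the target style as a rough proxy
    for learned pivot (style-decisive) words."""
    word_counts, joint_counts = Counter(), Counter()
    for text, y in zip(corpus, labels):
        for w in set(text.lower().split()):
            word_counts[w] += 1
            if y == target_style:
                joint_counts[w] += 1
    n = len(corpus)
    p_style = sum(1 for y in labels if y == target_style) / n

    def pmi(w):
        p_joint, p_word = joint_counts[w] / n, word_counts[w] / n
        return math.log(p_joint / (p_word * p_style)) if p_joint else float("-inf")

    return sorted(word_counts, key=pmi, reverse=True)[:top_k]
```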
- Transductive Learning for Unsupervised Text Style Transfer [60.65782243927698]
Unsupervised style transfer models are mainly based on an inductive learning approach.
We propose a novel transductive learning approach built on a retrieval-based, context-aware style representation (see the sketch below).
arXiv Detail & Related papers (2021-09-16T08:57:20Z)
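A retrieval-based style representation could look like the following at inference time: retrieve the input's nearest neighbors from the target-style corpus and pool their embeddings; the embedding source and the softmax pooling are assumptions, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def retrieval_style_representation(query_emb: torch.Tensor,
                                   corpus_embs: torch.Tensor,
                                   k: int = 5) -> torch.Tensor:
    """Sketch: pool the k nearest target-style sentence embeddings into a
    context-aware style representation for this particular input."""
    # query_emb: (dim,); corpus_embs: (n_sentences, dim)
    sims = F.cosine_similarity(query_emb.unsqueeze(0), corpus_embs, dim=-1)
    top = sims.topk(k).indices
    weights = F.softmax(sims[top], dim=0).unsqueeze(-1)  # (k, 1)
    return (weights * corpus_embs[top]).sum(dim=0)       # (dim,)
```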
This list is automatically generated from the titles and abstracts of the papers on this site.