Authorship Style Transfer with Policy Optimization
- URL: http://arxiv.org/abs/2403.08043v2
- Date: Sun, 28 Jul 2024 04:29:43 GMT
- Title: Authorship Style Transfer with Policy Optimization
- Authors: Shuai Liu, Shantanu Agarwal, Jonathan May
- Abstract summary: Authorship style transfer aims to rewrite a given text in a specified target style while preserving the original meaning of the source.
Existing approaches rely on the availability of a large number of target style exemplars for model training.
- Score: 26.34892894935038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Authorship style transfer aims to rewrite a given text in a specified target style while preserving the original meaning of the source. Existing approaches rely on the availability of a large number of target style exemplars for model training. However, these approaches overlook cases where only a limited number of target style examples are available. The development of parameter-efficient transfer learning techniques and policy optimization (PO) approaches suggests that lightweight PO is a feasible approach to low-resource style transfer. In this work, we propose a simple two-stage tune-and-optimize technique for low-resource textual style transfer. We apply our technique to authorship transfer as well as a larger-data native language style task, and in both cases find that it outperforms state-of-the-art baseline models.
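A minimal sketch of the two-stage recipe follows, assuming a reward that combines target-style confidence with meaning preservation; all names (`Policy`, `style_score`, `sim_score`) are illustrative stand-ins, not the paper's implementation.

```python
"""Sketch of a two-stage tune-and-optimize loop for low-resource style
transfer. Every name here is an illustrative stand-in."""
import random

def style_score(text: str) -> float:
    # Stand-in for a target-style classifier's confidence in `text`.
    return random.random()

def sim_score(text: str, source: str) -> float:
    # Stand-in for a meaning-preservation (semantic similarity) score.
    return random.random()

class Policy:
    """A base LM plus a small set of tunable adapter weights."""

    def supervised_tune(self, exemplars: list[str]) -> None:
        # Stage 1: parameter-efficient fine-tuning (e.g., adapters or
        # LoRA) on the few available target-style exemplars.
        pass

    def sample(self, source: str, n: int) -> list[str]:
        # Stand-in for sampling n candidate rewrites of `source`.
        return [f"{source} (rewrite {i})" for i in range(n)]

    def po_step(self, source: str, candidates: list[str],
                rewards: list[float]) -> None:
        # Stand-in for one policy-optimization update (e.g., PPO)
        # that touches only the lightweight adapter weights.
        pass

policy = Policy()
policy.supervised_tune(["a few target-style exemplars"])      # Stage 1

for source in ["an example source sentence"]:                 # Stage 2
    candidates = policy.sample(source, n=4)
    rewards = [style_score(c) + sim_score(c, source) for c in candidates]
    policy.po_step(source, candidates, rewards)
```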
Related papers
- SETTP: Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning [22.04285529067442]
SETTP (Style Extraction and Tunable Inference via Dual-level Transferable Prompt Learning) learns source style-level prompts that capture fundamental style characteristics from high-resource style transfer data.
Experiments show SETTP requires only 1/20th of the data volume to achieve performance comparable to state-of-the-art methods.
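A rough sketch of the style-level prompt idea: a small bank of learned soft prompts, one per source style, prepended to the token embeddings. Module names and dimensions are assumptions, not SETTP's actual code.

```python
import torch
import torch.nn as nn

class StylePromptBank(nn.Module):
    """A learned soft prompt per style, prepended to token embeddings."""

    def __init__(self, n_styles: int, prompt_len: int, d_model: int):
        super().__init__()
        self.prompts = nn.Parameter(
            torch.randn(n_styles, prompt_len, d_model) * 0.02
        )

    def forward(self, token_embeds: torch.Tensor, style_id: int) -> torch.Tensor:
        # Prepend the chosen style's prompt to every sequence in the batch.
        batch = token_embeds.size(0)
        prompt = self.prompts[style_id].unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

bank = StylePromptBank(n_styles=8, prompt_len=16, d_model=768)
x = torch.randn(2, 32, 768)      # a batch of token embeddings
out = bank(x, style_id=3)        # shape: (2, 16 + 32, 768)
```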
arXiv Detail & Related papers (2024-07-22T11:34:48Z)
- TinyStyler: Efficient Few-Shot Text Style Transfer with Authorship Embeddings [51.30454130214374]
We introduce TinyStyler, a lightweight but effective approach to efficient, few-shot text style transfer.
We evaluate TinyStyler's ability to perform text attribute style transfer with automatic and human evaluations.
Our model has been made publicly available at https://huggingface.co/tinystyler/tinystyler.
arXiv Detail & Related papers (2024-06-21T18:41:22Z)
- Diffusion-based Human Motion Style Transfer with Semantic Guidance [23.600154466988073]
We propose a novel framework for few-shot style transfer learning based on a diffusion model.
In the first stage, we pre-train a diffusion-based text-to-motion model as a generative prior.
In the second stage, based on the single style example, we fine-tune the pre-trained diffusion model in a few-shot manner to make it capable of style transfer.
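A hedged sketch of that second stage, assuming few-shot fine-tuning means updating only a small parameter subset of the pre-trained denoiser on the single style example; the toy shapes and the last-layer-only choice are illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

# Stand-in for the pre-trained noise-prediction network (the prior).
denoiser = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

for p in denoiser.parameters():
    p.requires_grad = False
for p in denoiser[-1].parameters():   # tune only the last layer here
    p.requires_grad = True

opt = torch.optim.Adam(
    [p for p in denoiser.parameters() if p.requires_grad], lr=1e-4
)
style_example = torch.randn(1, 64)    # stand-in for the style motion clip

for step in range(100):
    noise = torch.randn_like(style_example)
    # A real model would scale by the noise schedule at a sampled step t.
    noisy = style_example + noise
    loss = ((denoiser(noisy) - noise) ** 2).mean()   # predict the noise
    opt.zero_grad()
    loss.backward()
    opt.step()
```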
arXiv Detail & Related papers (2024-03-20T05:52:11Z)
- STEER: Unified Style Transfer with Expert Reinforcement [71.3995732115262]
STEER (Unified Style Transfer with Expert Reinforcement) is a unified framework developed to overcome the challenge of limited parallel data for style transfer.
We show STEER is robust, maintaining its style transfer capabilities on out-of-domain data, and surpassing nearly all baselines across various styles.
arXiv Detail & Related papers (2023-11-13T09:02:30Z)
- ParaGuide: Guided Diffusion Paraphrasers for Plug-and-Play Textual Style Transfer [57.6482608202409]
Textual style transfer is the task of transforming stylistic properties of text while preserving meaning.
We introduce a novel diffusion-based framework for general-purpose style transfer that can be flexibly adapted to arbitrary target styles.
We validate the method on the Enron Email Corpus, with both human and automatic evaluations, and find that it outperforms strong baselines on formality, sentiment, and even authorship style transfer.
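One plausible reading of "guided" here is classifier-style guidance: at each denoising step, the intermediate state is nudged along the gradient of an external style scorer. A toy sketch under that assumption (the scorer and scale are placeholders, not ParaGuide's models):

```python
import torch

def style_logit(x: torch.Tensor) -> torch.Tensor:
    # Stand-in for a style classifier's logit on the latent x.
    return x.sum()

def guided_step(x: torch.Tensor, denoise, scale: float = 0.1) -> torch.Tensor:
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(style_logit(x), x)[0]
    # Denoise as usual, then steer toward the target style.
    return denoise(x).detach() + scale * grad

x = torch.randn(1, 16)
x = guided_step(x, denoise=lambda t: 0.9 * t)   # toy denoiser
```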
arXiv Detail & Related papers (2023-08-29T17:36:02Z)
- A Unified Arbitrary Style Transfer Framework via Adaptive Contrastive Learning [84.8813842101747]
Unified Contrastive Arbitrary Style Transfer (UCAST) is a novel style representation learning and transfer framework.
We present an adaptive contrastive learning scheme for style transfer by introducing an input-dependent temperature.
Our framework consists of three key components, i.e., a parallel contrastive learning scheme for style representation and style transfer, a domain enhancement module for effective learning of style distribution, and a generative network for style transfer.
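A minimal sketch of an input-dependent temperature in a contrastive loss, where each sample's predicted temperature scales its own similarity logits; the positive-pair scheme and shapes are assumptions, not UCAST's code.

```python
import torch
import torch.nn.functional as F

def adaptive_contrastive_loss(feats: torch.Tensor,
                              temps: torch.Tensor) -> torch.Tensor:
    feats = F.normalize(feats, dim=1)                 # (N, D) style features
    logits = feats @ feats.t() / temps.unsqueeze(1)   # per-sample temperature
    logits.fill_diagonal_(float("-inf"))              # exclude self-pairs
    # Toy positive assignment: sample 2i pairs with 2i+1 (augmented view).
    targets = torch.arange(feats.size(0)) ^ 1
    return F.cross_entropy(logits, targets)

feats = torch.randn(8, 32)                 # two augmented views, interleaved
temps = F.softplus(torch.randn(8)) + 0.05  # input-dependent, kept positive
loss = adaptive_contrastive_loss(feats, temps)
```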
arXiv Detail & Related papers (2023-03-09T04:35:00Z)
- Conversation Style Transfer using Few-Shot Learning [56.43383396058639]
In this paper, we introduce conversation style transfer as a few-shot learning problem.
We propose a novel in-context learning approach to solve the task with style-free dialogues as a pivot.
We show that conversation style transfer can also benefit downstream tasks.
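A minimal sketch of the style-free pivot, assuming a generic chat LLM: first prompt the model to strip style, then prompt it again with a few target-style dialogues as in-context examples. The `llm` helper is a placeholder, not the paper's system.

```python
def llm(prompt: str) -> str:
    # Stand-in for any chat-completion call.
    return "..."

def transfer(utterance: str, target_examples: list[str]) -> str:
    # Step 1: rewrite the source utterance as a style-free pivot.
    neutral = llm(f"Rewrite in a neutral, style-free way:\n{utterance}")
    # Step 2: render the pivot in the target style, using the few
    # available target-style dialogues as in-context examples.
    shots = "\n".join(target_examples)
    return llm(
        "Here are dialogues in the target style:\n"
        f"{shots}\n"
        f"Rewrite this in the same style:\n{neutral}"
    )

print(transfer("hey!! what's up", ["Good afternoon. How may I help you?"]))
```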
arXiv Detail & Related papers (2023-02-16T15:27:00Z)
- Semi-supervised Formality Style Transfer using Language Model Discriminator and Mutual Information Maximization [52.867459839641526]
Formality style transfer is the task of converting informal sentences to grammatically correct formal sentences.
We propose a semi-supervised formality style transfer model that utilizes a language model-based discriminator to maximize the likelihood of the output sentence being formal.
Experiments showed that our model outperformed previous state-of-the-art baselines significantly in terms of both automated metrics and human judgement.
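A toy sketch of the discriminator term described above: the generator is penalized when the discriminator's probability that its output is formal is low. The classifier head is a stand-in for the paper's language-model-based discriminator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a language-model-based formality discriminator.
discriminator = nn.Sequential(nn.Linear(64, 2))   # [informal, formal]

def formality_loss(output_repr: torch.Tensor) -> torch.Tensor:
    logits = discriminator(output_repr)
    log_p_formal = F.log_softmax(logits, dim=-1)[..., 1]
    return -log_p_formal.mean()   # maximize likelihood of "formal"

y = torch.randn(4, 64)   # stand-in for generator output representations
loss = formality_loss(y) # added to the supervised and MI objectives
```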
arXiv Detail & Related papers (2020-10-10T21:05:56Z)
- ST$^2$: Small-data Text Style Transfer via Multi-task Meta-Learning [14.271083093944753]
Text style transfer aims to paraphrase a sentence in one style into another while preserving content.
Due to a lack of parallel training data, state-of-the-art methods are unsupervised and rely on large datasets that share content.
In this work, we develop a meta-learning framework that transfers between arbitrary text styles.
arXiv Detail & Related papers (2020-04-24T13:36:38Z)
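A hedged sketch of a first-order, MAML-style meta-learning loop over style-transfer tasks, assuming each task contributes a small support/query split; the toy regression model stands in for the actual transfer model.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 16)                       # shared initialization
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def task_loss(m: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return ((m(x) - y) ** 2).mean()

for _ in range(10):                             # meta-training iterations
    x_s, y_s = torch.randn(8, 16), torch.randn(8, 16)   # support set
    x_q, y_q = torch.randn(8, 16), torch.randn(8, 16)   # query set

    # Inner loop: one adaptation step on a per-task copy of the model.
    fast = nn.Linear(16, 16)
    fast.load_state_dict(model.state_dict())
    inner_opt = torch.optim.SGD(fast.parameters(), lr=1e-2)
    inner_opt.zero_grad()
    task_loss(fast, x_s, y_s).backward()
    inner_opt.step()

    # Outer loop (first-order): apply the adapted model's query-set
    # gradients to the shared initialization.
    fast.zero_grad()
    task_loss(fast, x_q, y_q).backward()
    meta_opt.zero_grad()
    for p, fp in zip(model.parameters(), fast.parameters()):
        p.grad = fp.grad.clone()
    meta_opt.step()
```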