Learning to Rewrite Prompts for Personalized Text Generation
- URL: http://arxiv.org/abs/2310.00152v2
- Date: Thu, 8 Feb 2024 18:23:33 GMT
- Title: Learning to Rewrite Prompts for Personalized Text Generation
- Authors: Cheng Li, Mingyang Zhang, Qiaozhu Mei, Weize Kong, Michael Bendersky
- Abstract summary: We propose a novel method to automatically revise prompts for personalized text generation.
The proposed method takes the initial prompts generated by a state-of-the-art, multistage framework for personalized generation and rewrites a few critical components.
In-depth analysis of the rewritten prompts shows that they are not only human readable, but also able to guide manual revision of prompts.
- Score: 27.50476377270294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facilitated by large language models (LLMs), personalized text generation has
become a rapidly growing research direction. Most existing studies focus on
designing specialized models for a particular domain, or they require
fine-tuning the LLMs to generate personalized text. We consider a typical
scenario in which the large language model, which generates personalized
output, is frozen and can only be accessed through APIs. Under this constraint,
all one can do is improve the input text (i.e., text prompts) sent to the
LLM, a procedure that is usually done manually. In this paper, we propose a
novel method to automatically revise prompts for personalized text generation.
The proposed method takes the initial prompts generated by a state-of-the-art,
multistage framework for personalized generation and rewrites a few critical
components that summarize and synthesize the personal context. The prompt
rewriter employs a training paradigm that chains together supervised learning
(SL) and reinforcement learning (RL), where SL reduces the search space of RL
and RL facilitates end-to-end training of the rewriter. Using datasets from
three representative domains, we demonstrate that the rewritten prompts
outperform both the original prompts and the prompts optimized via supervised
learning or reinforcement learning alone. In-depth analysis of the rewritten
prompts shows that they are not only human readable, but also able to guide
manual revision of prompts when there are limited resources to employ
reinforcement learning to train the prompt rewriter, or when it is costly to
deploy an automatic prompt rewriter for inference.
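A minimal sketch of the SL-then-RL training chain described above. The `frozen_llm`, `reward`, and dictionary-based rewriter below are toy stand-ins invented for illustration; the paper's actual rewriter is a trained sequence model, and its reward is presumably derived from the quality of the personalized output.

```python
# Toy sketch: SL initializes the rewriter, RL then refines it end to end
# against a reward computed on the frozen LLM's output. All components
# here are illustrative stand-ins, not the paper's models.

def frozen_llm(prompt: str) -> str:
    # Stand-in for the API-only LLM: echoes the prompt's last five words.
    return " ".join(prompt.split()[-5:])

def reward(output: str, target: str) -> float:
    # Toy reward: token overlap between the LLM output and the target text.
    out, tgt = set(output.split()), set(target.split())
    return len(out & tgt) / max(len(tgt), 1)

def sl_step(rewriter: dict, prompt: str, good_rewrite: str) -> None:
    # Stage 1 (SL): imitate a known-good rewrite to narrow the search space.
    rewriter[prompt] = good_rewrite

def rl_step(rewriter: dict, prompt: str, target: str, candidates: list) -> None:
    # Stage 2 (RL): keep the candidate rewrite whose LLM output scores best.
    scored = [(reward(frozen_llm(c), target), c) for c in candidates]
    rewriter[prompt] = max(scored)[1]

rewriter = {}
prompt = "write a short review of this cafe"
target = "cozy cafe great coffee"
sl_step(rewriter, prompt, prompt + " in the user's usual cozy tone")
rl_step(rewriter, prompt, target,
        [rewriter[prompt], prompt + " mention cozy cafe great coffee"])
print(rewriter[prompt])
```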
Related papers
- Selective Prompting Tuning for Personalized Conversations with LLMs [31.28284591597932]
We propose Selective Prompt Tuning (SPT), which softly prompts large language models (LLMs) for personalized conversations in a selective way.
SPT significantly enhances response diversity by up to 90%, along with improvements in other critical performance indicators.
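As a rough sketch of what "softly prompting in a selective way" could look like, the snippet below keeps a trainable bank of soft prompts and a linear selector that weights them per input; both components and all sizes are illustrative assumptions, not SPT's actual architecture.

```python
import torch

n_prompts, prompt_len, d_model = 4, 8, 32
# Trainable bank of soft prompts and a selector that scores them per input.
prompt_bank = torch.nn.Parameter(torch.randn(n_prompts, prompt_len, d_model))
selector = torch.nn.Linear(d_model, n_prompts)

def prepend_selected_prompt(input_embeds: torch.Tensor) -> torch.Tensor:
    # input_embeds: (seq_len, d_model) token embeddings of one dialogue turn.
    scores = selector(input_embeds.mean(dim=0))            # (n_prompts,)
    weights = torch.softmax(scores, dim=-1)                # soft selection
    soft_prompt = (weights[:, None, None] * prompt_bank).sum(dim=0)
    return torch.cat([soft_prompt, input_embeds], dim=0)

x = torch.randn(10, d_model)
print(prepend_selected_prompt(x).shape)  # torch.Size([18, 32])
```

The softmax keeps selection differentiable, so the bank and selector can be trained end to end while the LLM stays frozen.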
arXiv Detail & Related papers (2024-06-26T09:03:52Z)
- Learning to Prompt with Text Only Supervision for Vision-Language Models [107.282881515667]
One branch of methods adapts CLIP by learning prompts using visual information.
An alternative approach resorts to training-free methods by generating class descriptions from large language models.
We propose to combine the strengths of both streams by learning prompts using only text data.
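One plausible shape of prompt learning with text-only supervision, sketched with random stand-in embeddings rather than CLIP's encoders: a learnable prompt offset pulls class-name embeddings toward richer LLM-generated description embeddings. The offset parameterization and the cosine objective are assumptions for illustration.

```python
import torch

d = 64
name_emb = torch.randn(5, d)   # stand-in embeddings of class names
desc_emb = torch.randn(5, d)   # stand-in embeddings of LLM class descriptions
prompt = torch.zeros(d, requires_grad=True)  # learnable prompt offset
opt = torch.optim.Adam([prompt], lr=0.1)

for _ in range(100):
    pred = name_emb + prompt   # "prompted" class representation
    loss = 1 - torch.nn.functional.cosine_similarity(pred, desc_emb, dim=-1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(float(loss))
```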
arXiv Detail & Related papers (2024-01-04T18:59:49Z)
- PerPLM: Personalized Fine-tuning of Pretrained Language Models via Writer-specific Intermediate Learning and Prompts [16.59511985633798]
Pretrained language models (PLMs) are powerful tools for capturing context.
PLMs are typically pretrained and fine-tuned for universal use across different writers.
This study aims to improve the accuracy of text understanding tasks by personalizing the fine-tuning of PLMs for specific writers.
arXiv Detail & Related papers (2023-09-14T14:03:48Z)
- Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
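A toy rendering of the directional-stimulus idea: a small policy selects a hint that is spliced into the black-box LLM's prompt for each input. The `HINT_VOCAB`, the overlap-based `policy`, and the prompt template are invented stand-ins for the paper's tunable policy model.

```python
HINT_VOCAB = ["keywords: price, service", "keywords: plot, acting",
              "keywords: battery, screen"]

def policy(x: str) -> str:
    # Stand-in for a small tunable model: score each candidate hint by
    # how many of its keywords appear in the input, keep the best one.
    scores = [sum(w.strip(",:") in x for w in hint.split()) for hint in HINT_VOCAB]
    return HINT_VOCAB[max(range(len(HINT_VOCAB)), key=scores.__getitem__)]

def build_prompt(x: str) -> str:
    # Splice the per-instance stimulus between instruction and input.
    return f"Summarize the review.\n{policy(x)}\nReview: {x}\nSummary:"

print(build_prompt("the battery died fast but the screen is sharp"))
```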
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
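A rough sketch of test-time prompt editing framed as discrete search over edit actions (swap the instruction, swap an in-context example). The hand-written `score` function stands in for the value learned by TEMPERA's RL policy.

```python
INSTRUCTIONS = ["Classify the sentiment:", "Is this review positive or negative?"]
EXAMPLES = ["great -> positive", "awful -> negative", "fine -> positive"]

def edits(prompt):
    # Enumerate one-step edits of (instruction, in-context examples).
    instr, examples = prompt
    for i in INSTRUCTIONS:                      # swap the instruction
        yield (i, examples)
    for e in EXAMPLES:                          # swap in a new example
        if e not in examples:
            yield (instr, examples[1:] + [e])

def score(prompt):
    # Toy stand-in for the RL policy's learned value of a prompt.
    instr, examples = prompt
    return -len(instr) + 10 * len(set(examples))

prompt = (INSTRUCTIONS[0], EXAMPLES[:2])
for _ in range(3):                              # a few greedy edit steps
    prompt = max(edits(prompt), key=score)
print(prompt)
```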
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning [113.58691755215663]
We develop RetroPrompt to help a model strike a balance between generalization and memorization.
In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances.
Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings.
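A minimal sketch of the open-book idea: embed the training instances, retrieve nearest neighbors for a query, and splice the hits into the prompt as demonstrations. The bag-of-words embedding and the three-example store are illustrative assumptions, not RetroPrompt's actual retriever.

```python
from collections import Counter
import math

# Toy knowledge-store built from training instances.
STORE = [("the movie was great", "positive"),
         ("terrible plot and acting", "negative"),
         ("i loved every minute", "positive")]

def embed(text):
    # Bag-of-words stand-in for a learned encoder.
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    # Nearest neighbors from the store by embedding similarity.
    q = embed(query)
    return sorted(STORE, key=lambda ex: -cosine(q, embed(ex[0])))[:k]

query = "great acting, i loved it"
demos = "\n".join(f"{x} -> {y}" for x, y in retrieve(query))
print(f"{demos}\n{query} ->")
```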
arXiv Detail & Related papers (2022-05-29T16:07:30Z)
- Learning to Transfer Prompts for Text Generation [97.64625999380425]
We propose a novel prompt-based method (PTG) for text generation in a transferable setting.
First, PTG learns a set of source prompts for various source generation tasks and then transfers these prompts as target prompts to perform target generation tasks.
In extensive experiments, PTG yields competitive or better results than fine-tuning methods.
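A hedged sketch of one way such transfer could work: the target prompt is parameterized as a learned mixture over frozen source-task prompts, so target-task training only fits the mixture weights. The parameterization and the toy regression objective are assumptions, not PTG's exact design.

```python
import torch

n_src, prompt_len, d = 3, 8, 32
source_prompts = torch.randn(n_src, prompt_len, d)  # frozen source prompts
mix = torch.zeros(n_src, requires_grad=True)        # learned on target task
opt = torch.optim.Adam([mix], lr=0.1)

def target_prompt():
    # Target prompt = softmax-weighted mixture of source prompts.
    w = torch.softmax(mix, dim=0)
    return (w[:, None, None] * source_prompts).sum(dim=0)

# Toy target: a noisy copy of the second source prompt.
target = source_prompts[1] + 0.1 * torch.randn(prompt_len, d)
for _ in range(100):
    loss = ((target_prompt() - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(torch.softmax(mix, dim=0))  # weight concentrates on the closest source
```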
arXiv Detail & Related papers (2022-05-03T14:53:48Z)
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation [52.835877179365525]
We propose a novel continuous prompting approach, called Context-Tuning, to fine-tune PLMs for natural language generation.
Firstly, the prompts are derived from the input text, so that they can elicit useful knowledge from PLMs for generation.
Secondly, to further enhance the relevance of the generated text to the inputs, we utilize continuous inverse prompting to refine the process of natural language generation.
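A small sketch of the first step, contextualized prompts: instead of a fixed soft prompt, an assumed linear generator maps a summary of the input embeddings to the prompt vectors (the inverse-prompting refinement is omitted here).

```python
import torch

prompt_len, d = 4, 32
prompt_gen = torch.nn.Linear(d, prompt_len * d)  # input summary -> prompt

def contextual_prompt(input_embeds: torch.Tensor) -> torch.Tensor:
    ctx = input_embeds.mean(dim=0)                  # summarize the input
    prompt = prompt_gen(ctx).view(prompt_len, d)    # derive prompt vectors
    return torch.cat([prompt, input_embeds], dim=0)

x = torch.randn(12, d)
print(contextual_prompt(x).shape)  # torch.Size([16, 32])
```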
arXiv Detail & Related papers (2022-01-21T12:35:28Z)
- Prompt-Learning for Fine-Grained Entity Typing [40.983849729537795]
We investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot and zero-shot scenarios.
We propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types.
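A toy sketch of prompt-learning for entity typing with a cloze template and a verbalizer mapping types to label words; `plm_mask_scores` is a hard-coded stand-in for a real masked language model, and the verbalizer entries are invented.

```python
# Verbalizer: each entity type maps to label words the PLM might predict.
VERBALIZER = {"person": ["person", "artist"], "location": ["city", "country"]}

def cloze(sentence, mention):
    # Wrap the mention in a cloze-style template for the PLM.
    return f"{sentence} In this sentence, {mention} is a [MASK]."

def plm_mask_scores(prompt):
    # Stand-in for a masked LM's label-word scores at the [MASK] slot.
    return {"city": 0.5, "person": 0.2, "country": 0.15, "artist": 0.1}

def predict_type(sentence, mention):
    # Aggregate label-word scores per type and pick the best type.
    scores = plm_mask_scores(cloze(sentence, mention))
    return max(VERBALIZER,
               key=lambda t: sum(scores.get(w, 0.0) for w in VERBALIZER[t]))

print(predict_type("He moved to Paris.", "Paris"))  # location
```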
arXiv Detail & Related papers (2021-08-24T09:39:35Z)
- Controllable Generation from Pre-trained Language Models via Inverse Prompting [47.23315683944257]
We propose an innovative method, inverse prompting, to better control text generation.
Inverse prompting uses generated text to inversely predict the prompt during beam search.
Our results show that our proposed method substantially outperforms the baselines.
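A minimal sketch of the rescoring step: candidate beams are ranked by how well the original prompt can be recovered from them, with word overlap standing in for the inverse language model's likelihood.

```python
def inverse_score(candidate: str, prompt: str) -> float:
    # Stand-in for log p(prompt | candidate) under an inverse LM.
    cand, pr = set(candidate.split()), set(prompt.split())
    return len(cand & pr) / max(len(pr), 1)

prompt = "a poem about the autumn moon"
beams = ["the moon rises over autumn fields",
         "stocks fell sharply on tuesday",
         "autumn winds carry the pale moon"]
# Keep the beam from which the prompt is most recoverable.
print(max(beams, key=lambda b: inverse_score(b, prompt)))
```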
arXiv Detail & Related papers (2021-03-19T08:36:52Z)