P3: Prompts Promote Prompting
- URL: http://arxiv.org/abs/2507.15675v1
- Date: Mon, 21 Jul 2025 14:37:46 GMT
- Title: P3: Prompts Promote Prompting
- Authors: Xinyu Zhang, Yuanquan Hu, Fangchao Liu, Zhicheng Dou
- Abstract summary: Large language model (LLM) applications often employ multi-component prompts, comprising both system and user prompts. In this work, we introduce P3, a novel self-improvement framework that concurrently optimizes both system and user prompts. Extensive experiments on general tasks demonstrate that P3 achieves superior performance in the realm of automatic prompt optimization.
- Score: 26.16464064171255
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Current large language model (LLM) applications often employ multi-component prompts, comprising both system and user prompts, to guide model behaviors. While recent advancements have demonstrated the efficacy of automatically optimizing either the system or user prompt to boost performance, such unilateral approaches often yield suboptimal outcomes due to the interdependent nature of these components. In this work, we introduce P3, a novel self-improvement framework that concurrently optimizes both system and user prompts through an iterative process. The offline optimized prompts are further leveraged to promote online prompting by performing query-dependent prompt optimization. Extensive experiments on general tasks (e.g., Arena-hard and Alpaca-eval) and reasoning tasks (e.g., GSM8K and GPQA) demonstrate that P3 achieves superior performance in the realm of automatic prompt optimization. Our results highlight the effectiveness of a holistic optimization strategy in enhancing LLM performance across diverse domains.
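As a rough illustration of the loop the abstract describes, the sketch below alternates between improving the system prompt and the user template offline, then performs a query-dependent rewrite online. This is a minimal sketch only: the `llm` and `score` callables and the meta-prompts are hypothetical stand-ins, not the authors' implementation.

```python
# Illustrative sketch of a P3-style joint prompt-optimization loop.
# All names (llm, score) and meta-prompts are hypothetical assumptions;
# the paper's actual procedure may differ substantially.

def optimize_prompts(llm, score, dev_set, system_prompt, user_template, rounds=5):
    """Alternately rewrite the system prompt and the user template,
    keeping a change only when it improves the dev-set score."""
    best = score(llm, system_prompt, user_template, dev_set)
    for _ in range(rounds):
        # Ask the LLM to improve the system prompt given the current
        # user template (the two components are interdependent).
        cand_sys = llm(f"Improve this system prompt so it works well with "
                       f"the user template below.\nSystem: {system_prompt}\n"
                       f"Template: {user_template}")
        if (s := score(llm, cand_sys, user_template, dev_set)) > best:
            system_prompt, best = cand_sys, s
        # Then improve the user template given the (possibly updated)
        # system prompt.
        cand_user = llm(f"Improve this user template so it works well with "
                        f"the system prompt below.\nSystem: {system_prompt}\n"
                        f"Template: {user_template}")
        if (s := score(llm, system_prompt, cand_user, dev_set)) > best:
            user_template, best = cand_user, s
    return system_prompt, user_template

def answer(llm, system_prompt, user_template, query):
    """Online step: a query-dependent rewrite of the optimized template."""
    tailored = llm(f"Adapt this template to the query.\n"
                   f"Template: {user_template}\nQuery: {query}")
    return llm(f"{system_prompt}\n{tailored}")
```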
Related papers
- Grammar-Guided Evolutionary Search for Discrete Prompt Optimisation [63.97051732013936]
We propose an evolutionary search approach to automated discrete prompt optimisation consisting of two phases. In the first phase, grammar-guided genetic programming is invoked to synthesise prompt-creating programmes. In the second phase, local search is applied to explore the neighbourhoods of best-performing programmes.
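A toy rendering of the two phases, where a "prompt-creating programme" is simply a sequence of string-transforming operations; the grammar, operators, and `fitness` signature below are illustrative placeholders, not the paper's actual grammar.

```python
import random

# Toy grammar: each "programme" is a sequence of prompt-building operations.
# These operators and the fitness function are illustrative placeholders.
OPS = {
    "add_role":   lambda p: "You are an expert assistant. " + p,
    "add_cot":    lambda p: p + " Think step by step.",
    "add_format": lambda p: p + " Answer concisely.",
}

def run_programme(programme, task_description):
    prompt = task_description
    for op in programme:
        prompt = OPS[op](prompt)
    return prompt

def mutate(programme):
    child = list(programme)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

def evolve(fitness, pop_size=20, generations=10, prog_len=3):
    # Phase 1: evolve programmes (random init + truncation selection here).
    population = [[random.choice(list(OPS)) for _ in range(prog_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(p) for p in survivors]
    # Phase 2: local search in the neighbourhood of the best programme.
    best = max(population, key=fitness)
    for neighbour in (mutate(best) for _ in range(20)):
        if fitness(neighbour) > fitness(best):
            best = neighbour
    return best
```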
arXiv Detail & Related papers (2025-07-14T14:34:15Z)
- ORPP: Self-Optimizing Role-playing Prompts to Enhance Language Model Capabilities [64.24517317344959]
High-quality prompts are crucial for eliciting outstanding performance from large language models on complex tasks. We propose ORPP, a framework that enhances model performance by optimizing and generating role-playing prompts. We show that ORPP not only matches but in most cases surpasses existing mainstream prompt optimization methods in terms of performance.
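In the simplest possible reading, role-playing prompt optimization can be sketched as generate-and-select; the `llm` and `score` callables below are hypothetical, and ORPP's actual procedure is more elaborate.

```python
# Hypothetical sketch of optimizing a role-playing prompt: generate several
# candidate personas with the LLM, keep the one scoring best on held-out data.
def best_role_prompt(llm, score, task, n_candidates=8):
    candidates = [
        llm(f"Write a role-playing system prompt (a persona) that would help "
            f"a language model excel at this task: {task}")
        for _ in range(n_candidates)
    ]
    return max(candidates, key=score)
```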
arXiv Detail & Related papers (2025-06-03T05:51:35Z)
- System Prompt Optimization with Meta-Learning [60.04718679054704]
We introduce the novel problem of bilevel system prompt optimization, whose objective is to design system prompts that are robust to diverse user prompts. We propose a meta-learning framework, which meta-learns the system prompt by optimizing it over various user prompts across multiple datasets. We conduct experiments on 14 unseen datasets spanning 5 different domains, on which we show that our approach produces system prompts that generalize effectively to diverse user prompts.
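A schematic of the bilevel structure, with the outer level editing the system prompt and the inner level measuring robustness over user prompts sampled from several datasets; all names are placeholders, not the paper's code.

```python
import random

# Schematic bilevel loop (names hypothetical): the outer level edits the
# system prompt; the inner level measures robustness across user prompts
# sampled from multiple datasets.
def meta_optimize(llm, score, datasets, system_prompt, steps=10, batch=16):
    def robustness(sys_prompt):
        # Inner level: average score over user prompts from random datasets.
        sample = [random.choice(random.choice(datasets)) for _ in range(batch)]
        return sum(score(llm, sys_prompt, user) for user in sample) / batch

    best, best_val = system_prompt, robustness(system_prompt)
    for _ in range(steps):
        # Outer level: the LLM proposes a revision aimed at generality.
        cand = llm(f"Revise this system prompt to work well for many "
                   f"different user requests:\n{best}")
        if (v := robustness(cand)) > best_val:
            best, best_val = cand, v
    return best
```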
arXiv Detail & Related papers (2025-05-14T16:46:15Z)
- Bandit-Based Prompt Design Strategy Selection Improves Prompt Optimizers [1.5845117761091052]
We introduce OPTS, a method that implements explicit selection mechanisms for prompt design strategies. We propose three mechanisms, including a Thompson sampling-based approach, and integrate them into EvoPrompt. Our results show that the selection of prompt design strategies improves the performance of EvoPrompt.
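A Thompson sampling selector over design strategies is compact enough to sketch directly; the Beta-Bernoulli model and the strategy names below are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import random

# Minimal Thompson-sampling selector over prompt design strategies
# (Beta-Bernoulli bandit). Strategy names are illustrative only.
class StrategySelector:
    def __init__(self, strategies):
        # One (successes, failures) Beta posterior per strategy.
        self.posteriors = {s: [1, 1] for s in strategies}

    def choose(self):
        # Sample from each posterior; pick the strategy with the highest draw.
        return max(self.posteriors,
                   key=lambda s: random.betavariate(*self.posteriors[s]))

    def update(self, strategy, improved):
        # Reward 1 if the mutated prompt improved the score, else 0.
        self.posteriors[strategy][0 if improved else 1] += 1

selector = StrategySelector(["paraphrase", "add_examples", "shorten"])
# Inside an EvoPrompt-style loop one would call:
#   s = selector.choose(); apply strategy s; selector.update(s, improved)
```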
arXiv Detail & Related papers (2025-03-03T04:24:04Z)
- TAPO: Task-Referenced Adaptation for Prompt Optimization [18.533289140594146]
We introduce TAPO, a multitask-aware prompt optimization framework composed of three key modules. First, a task-aware metric selection module is proposed to enhance task-specific prompt generation capabilities. Second, we present a multi-metrics evaluation module to jointly evaluate prompts from multiple perspectives. Third, an evolution-based optimization framework is introduced for automatic prompt refinement, which improves adaptability across various tasks.
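One plausible composition of the three modules, with hypothetical `select_metrics` and `mutate` helpers standing in for the paper's actual components.

```python
# Schematic composition of TAPO-style modules (all names hypothetical):
# pick task-relevant metrics, score prompts on all of them jointly, and
# refine the population evolutionarily.
def evaluate(prompt, task, metrics):
    # Multi-metrics module: aggregate several perspectives into one score.
    return sum(m(prompt, task) for m in metrics) / len(metrics)

def tapo_step(population, task, select_metrics, mutate):
    metrics = select_metrics(task)              # task-aware metric selection
    ranked = sorted(population,
                    key=lambda p: evaluate(p, task, metrics), reverse=True)
    elite = ranked[: len(ranked) // 2]
    return elite + [mutate(p) for p in elite]   # evolution-based refinement
```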
arXiv Detail & Related papers (2025-01-12T02:43:59Z)
- SPRIG: Improving Large Language Model Performance by System Prompt Optimization [45.96513122345295]
Large Language Models (LLMs) have shown impressive capabilities in many scenarios, but their performance depends on the choice of prompt.
We propose SPRIG, an edit-based genetic algorithm that iteratively constructs prompts from prespecified components to maximize the model's performance in general scenarios.
We evaluate the performance of system prompts on a collection of 47 different types of tasks to ensure generalizability.
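An edit-based genetic loop over prespecified components might look roughly like this; the component pool and `fitness` function are invented for illustration.

```python
import random

# Toy edit-based search in the spirit of SPRIG: a system prompt is a list of
# prespecified components, and edits add, remove, or swap components.
# The component pool and fitness function are illustrative placeholders.
COMPONENTS = ["Be concise.", "Think step by step.", "Cite your sources.",
              "You are a helpful assistant."]

def edit(prompt_parts):
    parts = list(prompt_parts)
    op = random.choice(["add", "remove", "swap"])
    if op == "add" or not parts:
        parts.insert(random.randint(0, len(parts)), random.choice(COMPONENTS))
    elif op == "remove":
        parts.pop(random.randrange(len(parts)))
    else:
        parts[random.randrange(len(parts))] = random.choice(COMPONENTS)
    return parts

def sprig_like(fitness, generations=20, pop_size=10):
    population = [[random.choice(COMPONENTS)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [edit(p) for p in elite]
    return max(population, key=fitness)
```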
arXiv Detail & Related papers (2024-10-18T18:51:44Z)
- AMPO: Automatic Multi-Branched Prompt Optimization [43.586044739174646]
We present AMPO, an automatic prompt optimization method that can iteratively develop a multi-branched prompt using failure cases as feedback.
In experiments across five tasks, AMPO consistently achieves the best results.
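The failure-driven branching idea can be sketched as follows, with hypothetical `llm` and `evaluate` callables; AMPO's real feedback loop is more structured.

```python
# Schematic AMPO-style loop (hypothetical names): collect failure cases and
# ask the LLM to add a branch to the prompt that handles them.
def grow_prompt(llm, evaluate, prompt, dev_set, rounds=5):
    for _ in range(rounds):
        failures = [x for x in dev_set if not evaluate(llm, prompt, x)]
        if not failures:
            break
        prompt = llm(
            "Extend the prompt below with an extra conditional branch "
            "(an 'if the input looks like ... then ...' rule) so that it "
            f"also handles these failing examples:\n{failures[:3]}\n\n"
            f"Prompt:\n{prompt}"
        )
    return prompt
```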
arXiv Detail & Related papers (2024-10-11T10:34:28Z)
- QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to input queries. We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks. Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
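The offline-data side of this recipe can be sketched as reward filtering over existing benchmark logs; the record fields and the `finetune` routine below are assumptions, and QPO's multi-loop RL objective is richer than this.

```python
# Conceptual sketch of the offline data side of QPO-style training: mine
# (query, prompt, score) triples from existing benchmark logs and keep the
# high-reward pairs as fine-tuning targets for a small prompt-writer model.
# `finetune` is a placeholder for any standard supervised fine-tuning routine.
def build_training_set(benchmark_logs, threshold=0.8):
    return [
        {"input": rec["query"], "target": rec["prompt"]}
        for rec in benchmark_logs
        if rec["score"] >= threshold   # reward filtering (one-step offline RL)
    ]

def train_prompt_writer(small_lm, benchmark_logs, finetune):
    data = build_training_set(benchmark_logs)
    return finetune(small_lm, data)    # the tuned model maps query -> prompt
```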
arXiv Detail & Related papers (2024-08-20T03:06:48Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
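Stripped of the Gaussian process surrogate, the underlying zeroth-order local search looks like the sketch below; `score` and `mutate` are hypothetical, and the NTK-derived GP that makes ZOPO query-efficient is deliberately omitted for brevity.

```python
# Highly simplified zeroth-order local search over prompts. ZOPO's actual
# method uses a Gaussian process derived from the Neural Tangent Kernel as a
# surrogate; here every neighbour is scored directly, purely for illustration.
def local_search(score, mutate, prompt, iters=30, neighbours=4):
    best, best_val = prompt, score(prompt)
    for _ in range(iters):
        cands = [mutate(best) for _ in range(neighbours)]
        cand = max(cands, key=score)          # pick the best local move
        if (v := score(cand)) > best_val:     # hill-climb toward a local optimum
            best, best_val = cand, v
    return best
```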
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.