TAPO: Task-Referenced Adaptation for Prompt Optimization
- URL: http://arxiv.org/abs/2501.06689v3
- Date: Wed, 26 Feb 2025 16:36:55 GMT
- Title: TAPO: Task-Referenced Adaptation for Prompt Optimization
- Authors: Wenxin Luo, Weirui Wang, Xiaopeng Li, Weibo Zhou, Pengyue Jia, Xiangyu Zhao
- Abstract summary: We introduce TAPO, a multitask-aware prompt optimization framework composed of three key modules. First, a task-aware metric selection module is proposed to enhance task-specific prompt generation capabilities. Second, we present a multi-metrics evaluation module to jointly evaluate prompts from multiple perspectives. Third, an evolution-based optimization framework is introduced for automatic prompt refinement, which improves adaptability across various tasks.
- Score: 18.533289140594146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt engineering can significantly improve the performance of large language models (LLMs), with automated prompt optimization (APO) gaining significant attention due to the time-consuming and laborious nature of manual prompt design. However, much of the existing work in APO overlooks task-specific characteristics, resulting in prompts that lack domain specificity and are not well-suited for task-specific optimization. In this paper, we introduce TAPO, a multitask-aware prompt optimization framework composed of three key modules. First, a task-aware metric selection module is proposed to enhance task-specific prompt generation capabilities. Second, we present a multi-metrics evaluation module to jointly evaluate prompts from multiple perspectives. Third, an evolution-based optimization framework is introduced for automatic prompt refinement, which improves adaptability across various tasks. Extensive experiments on six datasets demonstrate the effectiveness of our approach, and our code is publicly available.
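The three modules described in the abstract can be pictured as a single evolutionary loop: pick metrics for the task, score candidate prompts on all of them jointly, and iteratively mutate and select. The sketch below is a minimal illustration of that loop; the metric pools, scoring stub, and mutation hints are hypothetical placeholders, not TAPO's actual components.

```python
import random

# Hypothetical task-aware metric pools (illustrative names, not TAPO's metrics).
METRIC_POOLS = {
    "qa":        ["exact_match", "f1"],
    "reasoning": ["accuracy", "step_validity"],
}

def simulated_score(prompt: str, metric: str) -> float:
    # Deterministic stand-in for running an LLM on a dev set:
    # longer, more specific prompts score higher, capped at 1.0.
    return min(1.0, len(prompt) / 100)

def evaluate(prompt: str, metrics: list[str]) -> float:
    # Multi-metric evaluation: average the prompt's score across all metrics.
    return sum(simulated_score(prompt, m) for m in metrics) / len(metrics)

def mutate(prompt: str, rng: random.Random) -> str:
    # Stand-in for LLM-based prompt rewriting: append a refinement hint.
    hints = ["Think step by step.", "Answer concisely.", "Cite evidence."]
    return prompt + " " + rng.choice(hints)

def optimize(seed_prompt: str, task: str,
             generations: int = 5, pop_size: int = 4, seed: int = 0) -> str:
    rng = random.Random(seed)
    metrics = METRIC_POOLS[task]          # module 1: task-aware metric selection
    population = [seed_prompt]
    for _ in range(generations):
        # module 3: evolution step — mutate survivors, keep the top scorers
        children = [mutate(p, rng) for p in population for _ in range(2)]
        population = sorted(population + children,
                            key=lambda p: evaluate(p, metrics),  # module 2
                            reverse=True)[:pop_size]
    return population[0]

best = optimize("Answer the question.", task="qa")
```

In a real system the scoring stub would be replaced by held-out evaluation of LLM outputs, and mutation by an LLM rewriting the prompt, but the selection loop keeps the same shape.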
Related papers
- LatentPrompt: Optimizing Prompts in Latent Space [20.80689930065897]
We present LatentPrompt, a model-agnostic framework for prompt optimization. Our method embeds seed prompts in a continuous latent space and systematically explores this space to identify prompts that maximize task-specific performance. In a proof-of-concept study on the Financial PhraseBank sentiment classification benchmark, LatentPrompt increased classification accuracy by approximately 3 percent after a single optimization cycle.
arXiv Detail & Related papers (2025-08-04T14:17:29Z) - MOPrompt: Multi-objective Semantic Evolution for Prompt Optimization [0.0699049312989311]
MOPrompt is a novel framework designed to optimize prompts for both accuracy and context size (measured in tokens) simultaneously. We evaluate MOPrompt on a sentiment analysis task in Portuguese, using Gemma-2B and Sabiazinho-3 as evaluation models.
arXiv Detail & Related papers (2025-08-03T01:50:43Z) - Grammar-Guided Evolutionary Search for Discrete Prompt Optimisation [63.97051732013936]
We propose an evolutionary search approach to automated discrete prompt optimisation consisting of two phases. In the first phase, grammar-guided genetic programming is invoked to synthesise prompt-creating programmes. In the second phase, local search is applied to explore the neighbourhoods of best-performing programmes.
arXiv Detail & Related papers (2025-07-14T14:34:15Z) - ORPP: Self-Optimizing Role-playing Prompts to Enhance Language Model Capabilities [64.24517317344959]
High-quality prompts are crucial for eliciting outstanding performance from large language models on complex tasks. We propose ORPP, a framework that enhances model performance by optimizing and generating role-playing prompts. We show that ORPP not only matches but in most cases surpasses existing mainstream prompt optimization methods in terms of performance.
arXiv Detail & Related papers (2025-06-03T05:51:35Z) - System Prompt Optimization with Meta-Learning [60.04718679054704]
We introduce the novel problem of bilevel system prompt optimization, whose objective is to design system prompts that are robust to diverse user prompts. We propose a meta-learning framework, which meta-learns the system prompt by optimizing it over various user prompts across multiple datasets. We conduct experiments on 14 unseen datasets spanning 5 different domains, on which we show that our approach produces system prompts that generalize effectively to diverse user prompts.
arXiv Detail & Related papers (2025-05-14T16:46:15Z) - MARS: A Multi-Agent Framework Incorporating Socratic Guidance for Automated Prompt Optimization [30.748085697067154]
We propose a Multi-Agent framework incorporating Socratic guidance (MARS).
MARS comprises seven agents, each with distinct functionalities, which autonomously use the Planner to devise an optimization path.
We conduct extensive experiments on various datasets to validate the effectiveness of our method.
arXiv Detail & Related papers (2025-03-21T06:19:55Z) - iPrOp: Interactive Prompt Optimization for Large Language Models with a Human in the Loop [10.210078164737245]
This paper introduces iPrOp, a novel Interactive Prompt Optimization system.
With human intervention in the optimization loop, iPrOp offers users the flexibility to assess evolving prompts.
arXiv Detail & Related papers (2024-12-17T08:09:15Z) - SPRIG: Improving Large Language Model Performance by System Prompt Optimization [45.96513122345295]
Large Language Models (LLMs) have shown impressive capabilities in many scenarios, but their performance depends on the choice of prompt.
We propose SPRIG, an edit-based genetic algorithm that iteratively constructs prompts from prespecified components to maximize the model's performance in general scenarios.
We evaluate the performance of system prompts on a collection of 47 different types of tasks to ensure generalizability.
arXiv Detail & Related papers (2024-10-18T18:51:44Z) - AMPO: Automatic Multi-Branched Prompt Optimization [43.586044739174646]
We present AMPO, an automatic prompt optimization method that can iteratively develop a multi-branched prompt using failure cases as feedback.
In experiments across five tasks, AMPO consistently achieves the best results.
arXiv Detail & Related papers (2024-10-11T10:34:28Z) - CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation [18.39379838806384]
We propose a novel critique-suggestion-guided automatic Prompt Optimization (CriSPO) approach.
CriSPO introduces a critique-suggestion module as its core component.
This module spontaneously discovers aspects, and compares generated reference texts across these aspects, providing actionable suggestions for prompt modification.
To further improve CriSPO with multi-metric optimization, we introduce an Automatic Suffix Tuning (AST) extension to enhance the performance of task prompts across multiple metrics.
arXiv Detail & Related papers (2024-10-03T17:57:01Z) - M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning [90.75075886543404]
Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains.
In this work, we introduce a novel Multimodal Prompt Tuning (M$2$PT) approach for efficient instruction tuning of MLLMs.
arXiv Detail & Related papers (2024-09-24T01:40:24Z) - QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries.
We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks.
Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2024-08-20T03:06:48Z) - MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization [73.7779735046424]
We show that different prompts should be adapted to different Large Language Models (LLM) to enhance their capabilities across various downstream tasks in NLP.
We then propose a model-adaptive prompt optimization (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks.
arXiv Detail & Related papers (2024-07-04T18:39:59Z) - PromptWizard: Task-Aware Prompt Optimization Framework [2.618253052454435]
Large language models (LLMs) have transformed AI across diverse domains.
Manual prompt engineering is both labor-intensive and domain-specific.
We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization.
arXiv Detail & Related papers (2024-05-28T17:08:31Z) - Efficient Prompt Optimization Through the Lens of Best Arm Identification [50.56113809171805]
This work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint.
It is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB).
arXiv Detail & Related papers (2024-02-15T05:31:13Z) - Multitask Vision-Language Prompt Tuning [103.5967011236282]
We propose multitask vision-language prompt tuning (MV)
MV incorporates cross-task knowledge into prompt tuning for vision-language models.
Results in 20 vision tasks demonstrate that the proposed approach outperforms all single-task baseline prompt tuning methods.
arXiv Detail & Related papers (2022-11-21T18:41:44Z) - Prompt Tuning with Soft Context Sharing for Vision-Language Models [42.61889428498378]
We propose SoftCPT, a novel method to tune pre-trained vision-language models on multiple target few-shot tasks jointly.
We show that SoftCPT significantly outperforms single-task prompt tuning methods.
arXiv Detail & Related papers (2022-08-29T10:19:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.