TAPO: Task-Referenced Adaptation for Prompt Optimization
- URL: http://arxiv.org/abs/2501.06689v2
- Date: Sun, 19 Jan 2025 10:51:27 GMT
- Title: TAPO: Task-Referenced Adaptation for Prompt Optimization
- Authors: Wenxin Luo, Weirui Wang, Xiaopeng Li, Weibo Zhou, Pengyue Jia, Xiangyu Zhao
- Abstract summary: We introduce TAPO, a multitask-aware prompt optimization framework composed of three key modules.
First, a task-aware metric selection module is proposed to enhance task-specific prompt generation capabilities.
Second, we present a multi-metrics evaluation module to jointly evaluate prompts from multiple perspectives.
Third, an evolution-based optimization framework is introduced for automatic prompt refinement, which improves adaptability across various tasks.
- Score: 18.533289140594146
- Abstract: Prompt engineering can significantly improve the performance of large language models (LLMs), and automated prompt optimization (APO) has gained attention due to the time-consuming and laborious nature of manual prompt design. However, much of the existing work in APO overlooks task-specific characteristics, resulting in prompts that lack domain specificity and are not well-suited for task-specific optimization. In this paper, we introduce TAPO, a multitask-aware prompt optimization framework composed of three key modules. First, a task-aware metric selection module is proposed to enhance task-specific prompt generation capabilities. Second, we present a multi-metrics evaluation module to jointly evaluate prompts from multiple perspectives. Third, an evolution-based optimization framework is introduced for automatic prompt refinement, which improves adaptability across various tasks. Extensive experiments on six datasets demonstrate the effectiveness of our approach, and our code is publicly available.
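As a rough illustration only, the three modules suggest a loop in which metrics are chosen per task, candidate prompts are scored against all chosen metrics, and the best candidates are mutated each generation. The minimal sketch below assumes hypothetical helpers (`select_metrics`, `score_fn`, `mutate_fn`) and an equal-weight metric combination; none of it is the authors' released code.

```python
"""Illustrative sketch of an evolution-style prompt-optimization loop with
task-aware, multi-metric scoring, in the spirit of TAPO's three modules.
All names and the weighting scheme are assumptions for illustration."""
import random

def select_metrics(task_type):
    # Module 1 (task-aware metric selection): pick metrics suited to the
    # task; the mapping below is a hypothetical example.
    table = {
        "qa": ["exact_match", "faithfulness"],
        "summarization": ["rouge_l", "coverage"],
        "reasoning": ["accuracy", "step_validity"],
    }
    return table.get(task_type, ["accuracy"])

def evaluate(prompt, dev_set, metrics, score_fn):
    # Module 2 (multi-metric evaluation): average each metric over the
    # dev set, then combine with equal weights (an assumed scheme).
    per_metric = [
        sum(score_fn(prompt, ex, m) for ex in dev_set) / len(dev_set)
        for m in metrics
    ]
    return sum(per_metric) / len(per_metric)

def optimize(seed_prompts, dev_set, task_type, score_fn, mutate_fn,
             generations=10, population=8, survivors=4):
    # Module 3 (evolution-based refinement): keep the best prompts and
    # mutate them (e.g. by asking an LLM to rewrite) each generation.
    metrics = select_metrics(task_type)
    pool = list(seed_prompts)
    for _ in range(generations):
        scored = sorted(pool,
                        key=lambda p: evaluate(p, dev_set, metrics, score_fn),
                        reverse=True)
        elite = scored[:survivors]
        children = [mutate_fn(random.choice(elite))
                    for _ in range(population - survivors)]
        pool = elite + children
    return max(pool, key=lambda p: evaluate(p, dev_set, metrics, score_fn))
```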
Related papers
- iPrOp: Interactive Prompt Optimization for Large Language Models with a Human in the Loop [10.210078164737245]
This paper introduces iPrOp, a novel interactive prompt optimization system.
With human intervention in the optimization loop, iPrOp offers users the flexibility to assess evolving prompts.
arXiv Detail & Related papers (2024-12-17T08:09:15Z)
- SPRIG: Improving Large Language Model Performance by System Prompt Optimization [45.96513122345295]
Large Language Models (LLMs) have shown impressive capabilities in many scenarios, but their performance depends on the choice of prompt.
We propose SPRIG, an edit-based genetic algorithm that iteratively constructs prompts from prespecified components to maximize the model's performance in general scenarios (a minimal sketch of this search follows this entry).
We evaluate the performance of system prompts on a collection of 47 different types of tasks to ensure generalizability.
arXiv Detail & Related papers (2024-10-18T18:51:44Z)
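The sketch below shows what an edit-based genetic search over a pool of prespecified prompt components could look like, per the SPRIG summary above. The component pool, edit operators, and the `fitness` callable are illustrative assumptions, not the paper's actual configuration.

```python
"""Hedged sketch of an edit-based genetic search over prompt components.
All components and operators here are invented for illustration."""
import random

COMPONENTS = [  # hypothetical prespecified components
    "You are a careful assistant.",
    "Think step by step.",
    "Answer concisely.",
    "Cite evidence for claims.",
]

def random_edit(prompt_parts):
    # Apply one of three edit operators: insert, delete, or swap.
    parts = list(prompt_parts)
    op = random.choice(["insert", "delete", "swap"])
    if op == "insert" or not parts:
        parts.insert(random.randrange(len(parts) + 1), random.choice(COMPONENTS))
    elif op == "delete":
        parts.pop(random.randrange(len(parts)))
    else:
        i, j = random.sample(range(len(parts)), 2) if len(parts) > 1 else (0, 0)
        parts[i], parts[j] = parts[j], parts[i]
    return parts

def genetic_search(fitness, generations=20, population=12, survivors=4):
    # `fitness` maps a list of components to a score (e.g. dev-set accuracy).
    pool = [[random.choice(COMPONENTS)] for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        elite = pool[:survivors]
        pool = elite + [random_edit(random.choice(elite))
                        for _ in range(population - survivors)]
    return " ".join(max(pool, key=fitness))
```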
- AMPO: Automatic Multi-Branched Prompt Optimization [43.586044739174646]
We present AMPO, an automatic prompt optimization method that can iteratively develop a multi-branched prompt using failure cases as feedback.
In experiments across five tasks, AMPO consistently achieves the best results.
arXiv Detail & Related papers (2024-10-11T10:34:28Z)
- CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation [18.39379838806384]
We propose CriSPO, a novel critique-suggestion-guided automatic prompt optimization approach.
CriSPO introduces a critique-suggestion module as its core component.
This module spontaneously discovers aspects and compares generated texts with reference texts across these aspects, providing actionable suggestions for prompt modification.
To further improve CriSPO with multi-metric optimization, we introduce an Automatic Suffix Tuning (AST) extension that enhances the performance of task prompts across multiple metrics (a sketch of the critique-suggestion loop follows this entry).
arXiv Detail & Related papers (2024-10-03T17:57:01Z)
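Below is a minimal sketch of the critique-suggestion loop the CriSPO summary describes. The prompt templates are invented for illustration, `llm` stands for any text-in/text-out completion callable, `generate` runs the task prompt on one input, and the AST extension is not sketched here.

```python
"""Illustrative critique-suggestion loop in the spirit of the CriSPO
summary. Templates and callables are assumptions, not the paper's own."""

CRITIQUE_TEMPLATE = (
    "Compare the generated text with the reference.\n"
    "Generated: {generated}\nReference: {reference}\n"
    "Name the aspects where they differ, then give one concrete "
    "suggestion for revising the task prompt."
)
REVISE_TEMPLATE = (
    "Current task prompt:\n{prompt}\n\n"
    "Critiques and suggestions from recent examples:\n{feedback}\n\n"
    "Write an improved task prompt that addresses these suggestions."
)

def crispo_optimize(llm, generate, seed_prompt, examples, steps=5):
    prompt = seed_prompt
    for _ in range(steps):
        # 1) Run the current prompt; 2) critique each output against its
        # reference; 3) rewrite the prompt from the collected suggestions.
        feedback = [
            llm(CRITIQUE_TEMPLATE.format(
                generated=generate(prompt, ex["input"]),
                reference=ex["reference"]))
            for ex in examples
        ]
        prompt = llm(REVISE_TEMPLATE.format(prompt=prompt,
                                            feedback="\n".join(feedback)))
    return prompt
```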
- M$^2$PT: Multimodal Prompt Tuning for Zero-shot Instruction Learning [90.75075886543404]
Multimodal Large Language Models (MLLMs) demonstrate remarkable performance across a wide range of domains.
In this work, we introduce a novel Multimodal Prompt Tuning (M$^2$PT) approach for efficient instruction tuning of MLLMs.
arXiv Detail & Related papers (2024-09-24T01:40:24Z)
- QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries.
We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks.
Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2024-08-20T03:06:48Z)
- MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization [73.7779735046424]
We show that different prompts should be adapted to different Large Language Models (LLMs) to enhance their capabilities across various downstream tasks in NLP.
We then propose a model-adaptive prompt optimization (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks.
arXiv Detail & Related papers (2024-07-04T18:39:59Z)
- Efficient Prompt Optimization Through the Lens of Best Arm Identification [50.56113809171805]
This work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint.
It is built on a novel connection between prompt optimization and fixed-budget best-arm identification (BAI-FB) in multi-armed bandits (MABs); a sketch of this framing follows this entry.
arXiv Detail & Related papers (2024-02-15T05:31:13Z)
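In the bandit framing above, each candidate prompt is an arm and one evaluation on a sampled example is a pull. The sketch below uses the standard sequential-halving algorithm for fixed-budget best-arm identification; TRIPLE's actual algorithms may differ, and `pull` is an assumed evaluation callable returning a per-example score.

```python
"""Sketch of prompt selection as fixed-budget best-arm identification
via standard sequential halving. `pull(prompt)` is assumed to score the
prompt on one sampled example; this is not TRIPLE's released code."""
import math

def sequential_halving(prompts, pull, budget):
    # Split the budget evenly across halving rounds, then across the
    # surviving prompts; keep the better half each round.
    rounds = max(1, math.ceil(math.log2(len(prompts))))
    survivors = list(prompts)
    means = {p: 0.0 for p in survivors}
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        pulls = max(1, budget // (len(survivors) * rounds))
        for p in survivors:
            means[p] = sum(pull(p) for _ in range(pulls)) / pulls
        survivors.sort(key=lambda p: means[p], reverse=True)
        survivors = survivors[:math.ceil(len(survivors) / 2)]
    return survivors[0]
```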
- Multitask Vision-Language Prompt Tuning [103.5967011236282]
We propose multitask vision-language prompt tuning (MVLPT), which incorporates cross-task knowledge into prompt tuning for vision-language models.
Results on 20 vision tasks show that the proposed approach outperforms all single-task baseline prompt tuning methods.
arXiv Detail & Related papers (2022-11-21T18:41:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.