Automatic Prompt Optimization via Heuristic Search: A Survey
- URL: http://arxiv.org/abs/2502.18746v1
- Date: Wed, 26 Feb 2025 01:42:08 GMT
- Title: Automatic Prompt Optimization via Heuristic Search: A Survey
- Authors: Wendi Cui, Jiaxin Zhang, Zhuohang Li, Hao Sun, Damien Lopez, Kamalika Das, Bradley A. Malin, Sricharan Kumar
- Abstract summary: Large Language Models have led to remarkable achievements across a variety of Natural Language Processing tasks. While manual methods can be effective, they typically rely on intuition and do not automatically refine prompts over time. In contrast, automatic prompt optimization employing heuristic-based search algorithms can systematically explore and improve prompts with minimal human oversight.
- Score: 13.332569343755075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Large Language Models have led to remarkable achievements across a variety of Natural Language Processing tasks, making prompt engineering increasingly central to guiding model outputs. While manual methods can be effective, they typically rely on intuition and do not automatically refine prompts over time. In contrast, automatic prompt optimization employing heuristic-based search algorithms can systematically explore and improve prompts with minimal human oversight. This survey proposes a comprehensive taxonomy of these methods, categorizing them by where optimization occurs, what is optimized, what criteria drive the optimization, which operators generate new prompts, and which iterative search algorithms are applied. We further highlight specialized datasets and tools that support and accelerate automated prompt refinement. We conclude by discussing key open challenges pointing toward future opportunities for more robust and versatile LLM applications.
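To make this taxonomy concrete, here is a minimal illustrative sketch of the generic loop these methods share: expand a pool of candidate prompts with an operator, score them against a criterion, and keep the best under an iterative search algorithm. The `mutate` and `score` functions are hypothetical stand-ins, not any specific paper's method.

```python
import random

# Illustrative only: `mutate` stands in for an LLM-based edit operator and
# `score` for evaluation on a labeled dev set; both are hypothetical.

def mutate(prompt: str) -> str:
    # Placeholder operator; a real system would ask an LLM to rewrite the prompt.
    return prompt + random.choice([" Be concise.", " Think step by step.", " Cite evidence."])

def score(prompt: str) -> float:
    # Placeholder criterion; a real system would measure dev-set accuracy.
    return random.random()

def heuristic_prompt_search(seed: str, beam: int = 4, rounds: int = 5) -> str:
    pool = [seed]
    for _ in range(rounds):                                              # iterative search algorithm
        candidates = pool + [mutate(p) for p in pool for _ in range(2)]  # apply operators
        pool = sorted(set(candidates), key=score, reverse=True)[:beam]   # select by criterion
    return pool[0]

print(heuristic_prompt_search("Answer the question:"))
```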
Related papers
- MARS: A Multi-Agent Framework Incorporating Socratic Guidance for Automated Prompt Optimization [30.748085697067154]
We propose a Multi-Agent framework incorporating Socratic guidance (MARS).
MARS comprises seven agents with distinct functionalities; a Planner agent autonomously devises the optimization path.
We conduct extensive experiments on various datasets to validate the effectiveness of our method.
arXiv Detail & Related papers (2025-03-21T06:19:55Z)
- A Systematic Survey of Automatic Prompt Optimization Techniques [21.95159233568761]
We present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO and a 5-part unifying framework, and then rigorously categorize all relevant works based on their salient features.
arXiv Detail & Related papers (2025-02-24T07:29:13Z)
- A Sequential Optimal Learning Approach to Automated Prompt Engineering in Large Language Models [14.483240353801074]
This paper proposes an optimal learning framework for automated prompt engineering. It is designed to sequentially identify effective prompt features while efficiently allocating a limited evaluation budget (see the budget-allocation sketch below). Our framework provides a solution for deploying automated prompt engineering in a wider range of applications.
arXiv Detail & Related papers (2025-01-07T03:51:10Z)
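The paper's actual policy is a Bayesian optimal-learning rule for choosing which prompt to evaluate next; as a loose illustration of budget-constrained sequential evaluation, the sketch below swaps in successive halving. The `evaluate` stub stands in for a noisy per-example scorer, and nothing here is the paper's algorithm.

```python
import random

def evaluate(prompt: str) -> float:
    # Hypothetical noisy single-shot evaluation of a prompt on one example.
    return random.gauss(len(prompt) % 7, 1.0)

def successive_halving(prompts, budget: int) -> str:
    """Spend a fixed evaluation budget, halving the candidate set each round."""
    scores = {p: [] for p in prompts}
    alive = list(prompts)
    while len(alive) > 1 and budget > 0:
        per = max(1, budget // (2 * len(alive)))  # evaluations per survivor this round
        for p in alive:
            for _ in range(per):
                scores[p].append(evaluate(p))
                budget -= 1
        alive.sort(key=lambda p: sum(scores[p]) / len(scores[p]), reverse=True)
        alive = alive[: max(1, len(alive) // 2)]  # keep the better half
    return alive[0]

print(successive_halving(["Prompt A", "Prompt B2", "Prompt C33", "Prompt D444"], budget=24))
```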
- QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries (a simplified sketch follows this entry).
We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks.
Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2024-08-20T03:06:48Z)
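QPO proper fine-tunes a small language model with multi-loop offline RL; the toy sketch below only illustrates the data flow the summary describes: mine high-reward (query, prompt) pairs from offline benchmarking logs, refit a prompt generator, and rescore its outputs offline. All names (`DEMOS`, `finetune`, `offline_score`) and the lookup-table "model" are hypothetical simplifications.

```python
# Offline logs from past benchmarking runs: (query, prompt, observed score).
DEMOS = [
    ("2+2?", "Think step by step.", 0.9),
    ("2+2?", "Answer in one word.", 0.6),
    ("Capital of France?", "Answer in one word.", 0.95),
]

def finetune(pairs):
    # Stub for fine-tuning a small prompt-generator LM; here, just a lookup table.
    return {q: p for q, p in pairs}

def generate(model, query):
    return model.get(query, "Think step by step.")  # fallback prompt

def offline_score(query, prompt):
    # Score a (query, prompt) pair from the logs instead of querying a live LLM.
    return max((s for q, p, s in DEMOS if q == query and p == prompt), default=0.5)

model, data = {}, list(DEMOS)
for _ in range(3):  # "multi-loop": refit on high-reward pairs, regenerate, rescore
    model = finetune((q, p) for q, p, s in data if s >= 0.8)
    data += [(q, generate(model, q), offline_score(q, generate(model, q)))
             for q, _, _ in DEMOS]
print(generate(model, "2+2?"))
```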
- APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking [39.649879274238856]
We introduce a novel automatic prompt engineering algorithm named APEER.
APEER iteratively generates refined prompts through feedback and preference optimization (see the sketch after this entry).
Experiments demonstrate the substantial performance improvement of APEER over existing state-of-the-art (SoTA) manual prompts.
arXiv Detail & Related papers (2024-06-20T16:11:45Z)
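A hedged sketch of the critique-then-rewrite loop the entry describes, with APEER's preference-optimization step reduced to a simple accept-if-better rule. The `llm` and `dev_score` functions are stubs for a real LLM client and a reranking-quality metric such as nDCG.

```python
import random

def llm(instruction: str) -> str:
    # Stub for a real LLM API call.
    return instruction.splitlines()[-1][:120]

def dev_score(prompt: str) -> float:
    # Stub: a real evaluation would measure reranking quality (e.g., nDCG) on a dev set.
    return random.random()

def refine_with_feedback(prompt: str, rounds: int = 3) -> str:
    best, best_s = prompt, dev_score(prompt)
    for _ in range(rounds):
        feedback = llm(f"Critique this passage-reranking prompt:\n{best}")
        candidate = llm(f"Rewrite the prompt to address the critique.\n"
                        f"Critique: {feedback}\nPrompt: {best}")
        s = dev_score(candidate)
        if s > best_s:  # simplified stand-in for the preference-optimization step
            best, best_s = candidate, s
    return best

print(refine_with_feedback("Rank the passages by relevance to the query."))
```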
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically find new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses (one way to write such a blend is sketched below).
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
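One way to write such an adaptive blend, following the log-ratio-modulated form reported for DiscoPOP: a sigmoid gate on the scaled preference margin rho decides, per example, how much logistic versus exponential loss applies. Treat the gating details and the default tau as approximations rather than the authoritative definition.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def blended_preference_loss(rho: float, tau: float = 0.05) -> float:
    """rho: beta-scaled (chosen minus rejected) policy/reference log-ratio margin.
    A sigmoid gate on rho/tau mixes the two losses per example."""
    logistic = -math.log(sigmoid(rho))  # DPO-style logistic loss
    exponential = math.exp(-rho)        # exponential loss
    gate = sigmoid(rho / tau)           # adaptive mixing weight
    return gate * logistic + (1.0 - gate) * exponential

print(blended_preference_loss(0.3))
```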
- PromptWizard: Task-Aware Prompt Optimization Framework [2.618253052454435]
Large language models (LLMs) have transformed AI across diverse domains.
Manual prompt engineering is both labor-intensive and domain-specific.
We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization.
arXiv Detail & Related papers (2024-05-28T17:08:31Z)
- Efficient Prompting Methods for Large Language Models: A Survey [50.82812214830023]
Efficient Prompting Methods have attracted a wide range of attention. We discuss Automatic Prompt Engineering for different prompt components and Prompt Compression in continuous and discrete spaces.
arXiv Detail & Related papers (2024-04-01T12:19:08Z)
- Intent-based Prompt Calibration: Enhancing prompt optimization with synthetic boundary cases [2.6159111710501506]
We introduce a new method for automatic prompt engineering, using a calibration process that iteratively refines the prompt to match the user's intent (a simplified loop is sketched after this entry).
We demonstrate the effectiveness of our method with respect to strong proprietary models on real-world tasks such as moderation and generation.
arXiv Detail & Related papers (2024-02-05T15:28:43Z)
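A minimal sketch of the calibration loop, assuming an `llm` stub in place of a real model client: each round synthesizes boundary cases, finds the ones the current prompt mishandles, and revises the prompt toward the stated intent. The prompt wordings and three-step decomposition are illustrative, not the paper's exact pipeline.

```python
def llm(instruction: str) -> str:
    # Stub for a real LLM call; replace with an API client.
    return instruction.splitlines()[-1][:120]

def calibrate_prompt(prompt: str, intent: str, rounds: int = 3) -> str:
    for _ in range(rounds):
        # 1) Synthesize challenging boundary cases for the current prompt.
        cases = llm(f"Generate edge-case inputs for this task: {intent}\nPrompt: {prompt}")
        # 2) Identify which synthetic cases the prompt mishandles.
        failures = llm(f"List the cases this prompt gets wrong.\nPrompt: {prompt}\nCases: {cases}")
        # 3) Revise the prompt toward the user's intent using those failures.
        prompt = llm(f"Rewrite the prompt to fix the failures while preserving the intent "
                     f"'{intent}'.\nFailures: {failures}\nPrompt: {prompt}")
    return prompt

print(calibrate_prompt("Flag policy-violating comments.", "content moderation"))
```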
- Are Large Language Models Good Prompt Optimizers? [65.48910201816223]
We conduct a study to uncover the actual mechanism of LLM-based Prompt Optimization.
Our findings reveal that LLM optimizers struggle to identify the true causes of errors during reflection, tending to be biased by their own prior knowledge.
We introduce a new "Automatic Behavior Optimization" paradigm, which directly optimize the target model's behavior in a more controllable manner.
arXiv Detail & Related papers (2024-02-03T09:48:54Z)
- Automatic Engineering of Long Prompts [79.66066613717703]
Large language models (LLMs) have demonstrated remarkable capabilities in solving complex open-domain tasks.
This paper investigates the performance of greedy algorithms and genetic algorithms for automatic long prompt engineering (a toy genetic-search sketch follows this entry).
Our results show that the proposed automatic long prompt engineering algorithm achieves an average of 9.2% accuracy gain on eight tasks in Big Bench Hard.
arXiv Detail & Related papers (2023-11-16T07:42:46Z)
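A toy illustration of the genetic-algorithm variant (the paper also studies greedy search): a long prompt is treated as a list of sentences, parents are crossed over at a sentence boundary, and single sentences are mutated. The `rephrase` and `fitness` stubs stand in for LLM-based paraphrasing and held-out task accuracy.

```python
import random

SENTENCES = ["Read the input carefully.", "List the key facts.", "Answer briefly."]  # toy long prompt

def rephrase(sentence: str) -> str:
    # Stub mutation operator; a real system would ask an LLM to paraphrase.
    return sentence.replace("briefly", "concisely") if "briefly" in sentence else sentence + " Be precise."

def fitness(prompt_sents) -> float:
    # Stub fitness; real systems score accuracy on held-out task examples.
    return random.random()

def genetic_search(seed_sents, pop_size: int = 6, gens: int = 5):
    pop = [list(seed_sents) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]         # crossover at a sentence boundary
            i = random.randrange(len(child))
            child[i] = rephrase(child[i])     # sentence-level mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(" ".join(genetic_search(SENTENCES)))
```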
- Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL [62.824464372594576]
We aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-09-13T01:12:52Z)
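A minimal sketch of the offline idea behind Prompt-OIRL: fit a query-conditioned reward model from logged (query, prompt, outcome) data, then pick a prompt per query without a live LLM call. The toy "reward model" below is a conditional average; the paper learns one with offline inverse RL.

```python
# Offline logs: (query_type, prompt_id, observed success) rows; all names are illustrative.
LOGS = [("arithmetic", "cot", 1), ("arithmetic", "direct", 0),
        ("arithmetic", "cot", 1), ("lookup", "direct", 1), ("lookup", "cot", 0)]

def fit_reward_model(logs):
    totals, counts = {}, {}
    for q, p, r in logs:
        totals[(q, p)] = totals.get((q, p), 0) + r
        counts[(q, p)] = counts.get((q, p), 0) + 1
    # Toy reward model: average observed success for each (query, prompt) pair.
    return lambda q, p: totals.get((q, p), 0) / max(counts.get((q, p), 1), 1)

reward = fit_reward_model(LOGS)

def pick_prompt(query_type, prompts=("cot", "direct")):
    # Query-dependent selection: no LLM call needed at decision time.
    return max(prompts, key=lambda p: reward(query_type, p))

print(pick_prompt("arithmetic"))  # -> "cot" on this toy data
```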
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection (a simplified sketch follows this entry).
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
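A heavily simplified stand-in for the search recipe this entry describes: sample small update-rule expression trees, canonicalize them to reject equivalent forms, and keep the best-scoring rule. The real framework runs Monte Carlo tree search over a super-tree with trained-model evaluation; the leaf set, canonicalization, and scorer below are all hypothetical.

```python
import random

OPS = ["g", "m", "g*g", "sign(g)"]      # toy leaves: gradient, momentum, etc.
COMBINE = ["{}+{}", "{}*{}", "{}-{}"]   # toy binary operators

def sample_update(depth: int = 2) -> str:
    """Randomly grow a small expression tree for an optimizer update rule."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(OPS)
    return random.choice(COMBINE).format(sample_update(depth - 1), sample_update(depth - 1))

def canonical(expr: str) -> str:
    # Crude equivalent-form detection: sort commutative '+' operands (illustrative only).
    return "+".join(sorted(expr.split("+")))

def score(expr: str) -> float:
    # Stub: the real search trains a small model with this update rule on the target task.
    return random.random()

seen, best, best_s = set(), None, -1.0
while len(seen) < 20:
    expr = sample_update()
    key = canonical(expr)
    if key in seen:
        continue  # rejection sampling of duplicate / equivalent forms
    seen.add(key)
    s = score(expr)
    if s > best_s:
        best, best_s = expr, s
print(best)
```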