A Systematic Survey of Automatic Prompt Optimization Techniques
- URL: http://arxiv.org/abs/2502.16923v1
- Date: Mon, 24 Feb 2025 07:29:13 GMT
- Title: A Systematic Survey of Automatic Prompt Optimization Techniques
- Authors: Kiran Ramnath, Kang Zhou, Sheng Guan, Soumya Smruti Mishra, Xuan Qi, Zhengyuan Shen, Shuai Wang, Sangmin Woo, Sullam Jeoung, Yawei Wang, Haozhu Wang, Han Ding, Yuzhe Lu, Zhichao Xu, Yun Zhou, Balasubramaniam Srinivasan, Qiaojing Yan, Yueyan Chen, Haibo Ding, Panpan Xu, Lin Lee Cheong
- Abstract summary: We present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO, a 5-part unifying framework, and then proceed to rigorously categorize all relevant works based on their salient features therein.
- Score: 21.95159233568761
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Since the advent of large language models (LLMs), prompt engineering has been a crucial step for eliciting desired responses for various Natural Language Processing (NLP) tasks. However, prompt engineering remains an impediment for end users due to rapid advances in models, tasks, and associated best practices. To mitigate this, Automatic Prompt Optimization (APO) techniques have recently emerged that automatically refine prompts to improve the performance of LLMs on downstream tasks. In this paper, we present a comprehensive survey summarizing the current progress and remaining challenges in this field. We provide a formal definition of APO, a 5-part unifying framework, and then proceed to rigorously categorize all relevant works based on their salient features therein. We hope to spur further research guided by our framework.
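For orientation, most APO methods in the survey's scope share an iterative propose-score-select loop: generate candidate prompts, evaluate them on a small development set, and keep the best. The Python sketch below is a minimal illustration of that loop, not the paper's 5-part framework; `call_llm`, `score`, and `mutate` are hypothetical placeholders to be swapped for a real LLM client and task-specific components.

```python
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client; replace with a real API call."""
    return "stub answer"

def score(prompt: str, dev_set: list[tuple[str, str]]) -> float:
    """Fraction of dev examples the prompt answers correctly (exact match)."""
    hits = sum(call_llm(f"{prompt}\n\nInput: {x}") == y for x, y in dev_set)
    return hits / len(dev_set)

def mutate(prompt: str) -> str:
    """Toy proposal step; real APO systems often use an LLM to rewrite prompts."""
    edits = [" Think step by step.", " Answer concisely.", " Explain your reasoning."]
    return prompt + random.choice(edits)

def optimize(seed_prompt: str, dev_set: list[tuple[str, str]],
             iterations: int = 10, width: int = 4) -> str:
    """Propose-score-select loop: keep the best-scoring candidate each round."""
    best, best_score = seed_prompt, score(seed_prompt, dev_set)
    for _ in range(iterations):
        for candidate in (mutate(best) for _ in range(width)):
            s = score(candidate, dev_set)
            if s > best_score:
                best, best_score = candidate, s
    return best

# Toy usage: optimize("Answer the question.", [("2+2?", "4")], iterations=3)
```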
Related papers
- Automatic Prompt Optimization via Heuristic Search: A Survey [13.332569343755075]
Large Language Models have led to remarkable achievements across a variety of Natural Language Processing tasks.
While manual methods can be effective, they typically rely on intuition and do not automatically refine prompts over time.
Automatic prompt optimization employing heuristic-based search algorithms can systematically explore and improve prompts with minimal human oversight.
arXiv Detail & Related papers (2025-02-26T01:42:08Z) - Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - A Survey of Automatic Prompt Engineering: An Optimization Perspective [18.933465526053453]
This paper presents the first comprehensive survey on automated prompt engineering through a unified optimization-theoretic lens. We formalize prompt optimization as a problem over discrete, continuous, and hybrid prompt spaces (a schematic version of this formulation appears after this list). We highlight underexplored frontiers in constrained optimization and agent-oriented prompt design.
arXiv Detail & Related papers (2025-02-17T08:48:07Z) - PromptWizard: Task-Aware Prompt Optimization Framework [2.618253052454435]
Large language models (LLMs) have transformed AI across diverse domains.
Manual prompt engineering is both labor-intensive and domain-specific.
We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization.
arXiv Detail & Related papers (2024-05-28T17:08:31Z) - Efficient Prompting Methods for Large Language Models: A Survey [50.82812214830023]
Efficient Prompting Methods have attracted a wide range of attention. We discuss Automatic Prompt Engineering for different prompt components and Prompt Compression in continuous and discrete spaces.
arXiv Detail & Related papers (2024-04-01T12:19:08Z) - A Systematic Survey of Prompt Engineering in Large Language Models:
Techniques and Applications [11.568575664316143]
This paper provides a structured overview of recent advancements in prompt engineering, categorized by application area.
We provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized.
This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering.
arXiv Detail & Related papers (2024-02-05T19:49:13Z) - TaskBench: Benchmarking Large Language Models for Task Automation [82.2932794189585]
We introduce TaskBench, a framework to evaluate the capability of large language models (LLMs) in task automation.
Specifically, task decomposition, tool selection, and parameter prediction are assessed.
Our approach combines automated construction with rigorous human verification, ensuring high consistency with human evaluation.
arXiv Detail & Related papers (2023-11-30T18:02:44Z) - Automatic Engineering of Long Prompts [79.66066613717703]
Large language models (LLMs) have demonstrated remarkable capabilities in solving complex open-domain tasks.
This paper investigates the performance of greedy algorithms and genetic algorithms for automatic long prompt engineering (a toy genetic-algorithm sketch appears after this list).
Our results show that the proposed automatic long prompt engineering algorithm achieves an average accuracy gain of 9.2% on eight tasks in Big-Bench Hard.
arXiv Detail & Related papers (2023-11-16T07:42:46Z) - Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review [1.6006550105523192]
The review explores the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs).
It examines both foundational and advanced methodologies of prompt engineering, including techniques such as self-consistency, chain-of-thought, and generated knowledge.
The review also highlights the essential role of prompt engineering in advancing AI capabilities, providing a structured framework for future research and application.
arXiv Detail & Related papers (2023-10-23T09:15:18Z) - Automatically Correcting Large Language Models: Surveying the landscape
of diverse self-correction strategies [104.32199881187607]
Large language models (LLMs) have demonstrated remarkable performance across a wide array of NLP tasks.
A promising approach to rectify flaws in LLM outputs is self-correction, where the LLM itself is prompted or guided to fix problems in its own output.
This paper presents a comprehensive review of this emerging class of techniques.
arXiv Detail & Related papers (2023-08-06T18:38:52Z) - Prompts Matter: Insights and Strategies for Prompt Engineering in
Automated Software Traceability [45.235173351109374]
Large Language Models (LLMs) have the potential to revolutionize automated traceability.
This paper explores the process of prompt engineering to extract link predictions from an LLM.
arXiv Detail & Related papers (2023-08-01T01:56:22Z) - OverPrompt: Enhancing ChatGPT through Efficient In-Context Learning [49.38867353135258]
We propose OverPrompt, leveraging the in-context learning capability of LLMs to handle multiple task inputs.
Our experiments show that OverPrompt can achieve cost-efficient zero-shot classification without significantly degrading task performance.
arXiv Detail & Related papers (2023-05-24T10:08:04Z) - AANG: Automating Auxiliary Learning [110.36191309793135]
We present an approach for automatically generating a suite of auxiliary objectives.
We achieve this by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure.
This leads us to a principled and efficient algorithm for searching the space of generated objectives to find those most useful to a specified end-task.
arXiv Detail & Related papers (2022-05-27T16:32:28Z)