A Toolbox for Improving Evolutionary Prompt Search
- URL: http://arxiv.org/abs/2511.05120v1
- Date: Fri, 07 Nov 2025 10:04:41 GMT
- Title: A Toolbox for Improving Evolutionary Prompt Search
- Authors: Daniel Grießhaber, Maximilian Kimmich, Johannes Maucher, Ngoc Thang Vu
- Abstract summary: Evolutionary prompt optimization has demonstrated effectiveness in refining prompts for LLMs. However, existing approaches lack robust operators and efficient evaluation mechanisms. We propose several key improvements to evolutionary prompt optimization that can partially generalize to prompt optimization more broadly.
- Score: 19.376387158049067
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Evolutionary prompt optimization has demonstrated effectiveness in refining prompts for LLMs. However, existing approaches lack robust operators and efficient evaluation mechanisms. In this work, we propose several key improvements to evolutionary prompt optimization that can partially generalize to prompt optimization more broadly: 1) decomposing evolution into distinct steps to enhance the evolution and its control, 2) introducing an LLM-based judge to verify the evolutions, 3) integrating human feedback to refine the evolutionary operator, and 4) developing more efficient evaluation strategies that maintain performance while reducing computational overhead. Our approach improves both optimization quality and efficiency. We release our code, enabling prompt optimization on new tasks and facilitating further research in this area.
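The authors release code, but it is not reproduced here; the sketch below is only a hypothetical illustration of how improvements 1, 2, and 4 could fit together in a single loop. `llm` is assumed to be a text-in/text-out callable and `score_fn(prompt, example)` a task metric; none of these names come from the paper, and improvement 3 (human feedback on the evolutionary operator) is omitted for brevity.

```python
import random

def mutate(llm, prompt):
    # Improvement 1: decompose evolution into distinct, controllable steps;
    # here a critique step followed by a rewrite step (hypothetical scheme).
    critique = llm(f"List the weaknesses of this prompt:\n{prompt}")
    return llm(f"Rewrite the prompt to fix these weaknesses.\n"
               f"Weaknesses:\n{critique}\nPrompt:\n{prompt}")

def judge_accepts(llm, parent, child):
    # Improvement 2: an LLM-based judge verifies each evolution before it
    # enters the population.
    verdict = llm("Is the second prompt a faithful, improved variant of the "
                  f"first? Answer yes or no.\nFirst:\n{parent}\nSecond:\n{child}")
    return verdict.strip().lower().startswith("yes")

def evaluate_subset(prompt, dev_set, score_fn, k=32):
    # Improvement 4: score on a small random subset of the dev set instead of
    # the full set, trading a little noise for much cheaper evaluation.
    sample = random.sample(dev_set, min(k, len(dev_set)))
    return sum(score_fn(prompt, ex) for ex in sample) / len(sample)

def evolve(llm, seed_prompts, dev_set, score_fn, generations=10):
    pool = list(seed_prompts)
    for _ in range(generations):
        # Keep the better half of the pool as parents.
        parents = sorted(pool, key=lambda p: evaluate_subset(p, dev_set, score_fn),
                         reverse=True)[: max(1, len(pool) // 2)]
        children = []
        for parent in parents:
            child = mutate(llm, parent)
            if judge_accepts(llm, parent, child):  # reject degenerate evolutions
                children.append(child)
        pool = parents + children
    return max(pool, key=lambda p: evaluate_subset(p, dev_set, score_fn))
```

In this sketch, the judge and the subsampled evaluation are the efficiency levers: rejected children are never scored, and surviving prompts are evaluated on only k examples per generation.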
Related papers
- EvoX: Meta-Evolution for Automated Discovery [115.89434419482797]
EvoX is an adaptive evolution method that optimizes its own evolution process. It continuously updates how prior solutions are selected and varied based on progress. It outperforms existing AI-driven evolutionary methods, including AlphaEvolve, OpenEvolve, GEPA, and ShinkaEvolve, on the majority of tasks.
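EvoX's concrete mechanism is not given in this summary; purely as an illustration of an evolution process that adapts how solutions are varied based on progress, a toy self-adaptive hill climber might look like this (all names are hypothetical):

```python
import random

def meta_evolve(fitness, x0, steps=200):
    # Toy meta-evolution: the mutation strength sigma is itself updated from
    # search progress, so the variation operator changes as the run unfolds.
    x, sigma = x0, 1.0
    best = fitness(x)
    for _ in range(steps):
        candidate = x + random.gauss(0.0, sigma)
        score = fitness(candidate)
        if score > best:
            x, best = candidate, score
            sigma *= 1.2   # progress: widen the search
        else:
            sigma *= 0.95  # stagnation: narrow the search
    return x, best

# Example: maximize a simple concave function starting from x = 5.0.
print(meta_evolve(lambda x: -(x - 2.0) ** 2, 5.0))
```

Here the multiplicative step-size update plays the meta-level role: the variation operator itself changes as the search succeeds or stalls.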
arXiv Detail & Related papers (2026-02-26T18:54:41Z)
- Learning from Prompt itself: the Hierarchical Attribution Prompt Optimization [13.8868879878572]
A structured optimization approach requires automated or semi-automated procedures to develop improved prompts. Current prompt optimization methods often induce prompt drift, where new prompts fix prior failures but impair performance on previously successful tasks. This study proposes the Hierarchical Prompt Optimization framework, which introduces three innovations: (1) a dynamic attribution mechanism targeting error patterns in training data and prompting history, (2) semantic-unit optimization for editing functional prompt segments, and (3) multimodal-friendly progression supporting both end-to-end LLM and LLM-MLLM.
arXiv Detail & Related papers (2026-01-06T03:34:17Z)
- Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts [2.7320188728052064]
This paper proposes a novel framework for the collaborative evolution of both swarm intelligence algorithms and guiding prompts. The framework was rigorously evaluated on a range of NP problems, where it demonstrated superior performance. Our work establishes a new paradigm for swarm intelligence optimization algorithms, underscoring the indispensable role of prompt evolution.
arXiv Detail & Related papers (2025-12-10T00:37:16Z)
- LLM4EO: Large Language Model for Evolutionary Optimization in Flexible Job Shop Scheduling [4.782301990330074]
This work leverages Large Language Models (LLMs) to perceive evolutionary dynamics and enable operator-level meta-evolution. The proposed framework, LLM4EO, comprises three components: knowledge-transfer-based operator design, evolution perception and analysis, and adaptive operator evolution.
arXiv Detail & Related papers (2025-11-20T15:56:09Z)
- Make Optimization Once and for All with Fine-grained Guidance [78.14885351827232]
Learning to Optimize (L2O) enhances optimization efficiency with integrated neural networks. Existing L2O paradigms achieve strong outcomes, e.g., by refitting or by generating unseen solutions iteratively or directly. Our analysis explores a general framework for learning optimization, called Diff-L2O, focusing on augmenting solutions from a wider view.
arXiv Detail & Related papers (2025-03-14T14:48:12Z)
- Improving Retrospective Language Agents via Joint Policy Gradient Optimization [57.35348425288859]
RetroAct is a framework that jointly optimizes both task-planning and self-reflective evolution capabilities in language agents. We develop a two-stage joint optimization process that integrates imitation learning and reinforcement learning. We conduct extensive experiments across various testing environments, demonstrating that RetroAct achieves substantial improvements in task performance and decision-making processes.
arXiv Detail & Related papers (2025-03-03T12:54:54Z)
- Can Large Language Models Be Trusted as Evolutionary Optimizers for Network-Structured Combinatorial Problems? [8.431866560904753]
Large Language Models (LLMs) have shown strong capabilities in language understanding and reasoning across diverse domains. In this work, we propose a systematic framework to evaluate the capability of LLMs to engage with problem structures. We adopt the commonly used evolutionary (EVO) paradigm and propose a comprehensive evaluation framework that rigorously assesses the output fidelity of LLM-based operators.
arXiv Detail & Related papers (2025-01-25T05:19:19Z)
- REVOLVE: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization [42.570114760974946]
We introduce REVOLVE, an optimization method that tracks how "R"esponses "EVOLVE" across iterations in large language models (LLMs). Experimental results demonstrate that REVOLVE outperforms competitive baselines, achieving a 7.8% improvement in prompt optimization, a 20.72% gain in solution refinement, and a 29.17% increase in code optimization.
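REVOLVE's implementation is not shown in this summary; a minimal sketch of the core idea, letting the optimizer see the whole response trajectory rather than only the latest response, could look as follows (`llm` is again an assumed text-in/text-out callable, not the authors' API):

```python
def optimize_with_history(llm, task, initial_prompt, iterations=5):
    """Revise a prompt while showing the optimizer the full trajectory of
    responses, so it can react to how outputs evolve across iterations."""
    prompt, history = initial_prompt, []
    for _ in range(iterations):
        response = llm(f"{prompt}\n\nTask: {task}")
        history.append(response)
        # Present the whole trajectory, not just the latest response.
        trajectory = "\n---\n".join(
            f"Iteration {i}: {r}" for i, r in enumerate(history))
        prompt = llm(
            "Here is how the responses have evolved so far:\n"
            f"{trajectory}\n\n"
            "Rewrite the prompt below so that the next response improves on "
            f"this trajectory.\nPrompt:\n{prompt}")
    return prompt
```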
arXiv Detail & Related papers (2024-12-04T07:44:35Z)
- A Problem-Oriented Perspective and Anchor Verification for Code Optimization [43.28045750932116]
Large language models (LLMs) have shown remarkable capabilities in solving various programming tasks. This paper investigates the ability of LLMs to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both optimization performance and query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of LLM-based prompt optimizers. We identify two pivotal factors in model parameter learning: update direction and update method. We develop a capable gradient-inspired prompt optimizer called GPO.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
- Studying Evolutionary Solution Adaption Using a Flexibility Benchmark Based on a Metal Cutting Process [39.05320053926048]
We consider optimizing for different production requirements from the viewpoint of a bio-inspired framework for system flexibility. We study the flexibility of NSGA-II, which we extend by two variants: 1) varying goals, which optimizes solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, which accommodates different possibilities that can be activated or deactivated.
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- DECN: Evolution Inspired Deep Convolution Network for Black-box Optimization [9.878660285945728]
This paper introduces the concept of Automated EAs: an Automated EA exploits structure in the problem of interest to automatically generate update rules. We design a deep evolutionary convolution network (DECN) to realize the move from hand-designed EAs to automated EAs without manual intervention.
arXiv Detail & Related papers (2023-04-19T12:14:01Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.