ReflectivePrompt: Reflective evolution in autoprompting algorithms
- URL: http://arxiv.org/abs/2508.18870v1
- Date: Tue, 26 Aug 2025 09:46:20 GMT
- Title: ReflectivePrompt: Reflective evolution in autoprompting algorithms
- Authors: Viktor N. Zhuravlev, Artur R. Khairullin, Ernest A. Dyagin, Alena N. Sitkina, Nikita I. Kulin
- Abstract summary: ReflectivePrompt is a novel autoprompting method based on evolutionary algorithms. It employs a reflective evolution approach for a more precise and comprehensive search for optimal prompts. It was tested on 33 datasets for classification and text generation tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autoprompting is the process of automatically selecting optimized prompts for language models. It has been gaining popularity with the rapid advancement of prompt engineering, driven by extensive research in the field of large language models (LLMs). This paper presents ReflectivePrompt, a novel autoprompting method based on evolutionary algorithms that employs a reflective evolution approach for a more precise and comprehensive search for optimal prompts. ReflectivePrompt applies short-term and long-term reflection operations before crossover and elitist mutation to enhance the quality of the modifications they introduce. This allows the method to accumulate knowledge obtained throughout the evolution process and to update it at each epoch based on the current population. ReflectivePrompt was tested on 33 datasets for classification and text generation tasks using open-access large language models: t-lite-instruct-0.1 and gemma3-27b-it. The method demonstrates, on average, a significant improvement in metrics relative to current state-of-the-art approaches (e.g., 28% on BBH compared to EvoPrompt), thereby establishing itself as one of the most effective solutions in evolutionary algorithm-based autoprompting.
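The loop described in the abstract (short-term reflection on a pair of parents before crossover, long-term notes accumulated and updated each epoch, elitist mutation and replacement) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: every function name is a hypothetical stand-in, and the LLM-backed operations are stubbed with trivial string manipulations so the control flow runs on its own.

```python
import random

def fitness(prompt):
    # Stand-in for the real task metric (e.g., accuracy on a dev set):
    # here, simply the number of distinct words in the prompt.
    return len(set(prompt.split()))

def reflect_short_term(parent_a, parent_b):
    # Short-term reflection: contrast two parents and note what the
    # better one does differently (stubbed as a comparison string;
    # in the paper this would be an LLM-generated reflection).
    better, _ = sorted([parent_a, parent_b], key=fitness, reverse=True)
    return f"prefer traits of: {better!r}"

def llm_crossover(parent_a, parent_b, hint):
    # Stub for an LLM-guided crossover conditioned on the reflection hint.
    words = parent_a.split() + parent_b.split()
    random.shuffle(words)
    return " ".join(words[: max(len(parent_a.split()), 1)])

def llm_mutate(prompt, long_term_notes):
    # Stub for elitist mutation conditioned on the accumulated notes.
    words = prompt.split()
    if words:
        words[random.randrange(len(words))] = random.choice(words)
    return " ".join(words)

def evolve(population, epochs=3):
    long_term_notes = []  # knowledge accumulated over the whole run
    for _ in range(epochs):
        a, b = random.sample(population, 2)
        hint = reflect_short_term(a, b)   # reflection BEFORE crossover
        long_term_notes.append(hint)      # long-term memory, updated per epoch
        child = llm_crossover(a, b, hint)
        child = llm_mutate(child, long_term_notes)
        # Elitist replacement: the child only enters the population
        # if it beats the current worst member.
        worst = min(population, key=fitness)
        if fitness(child) > fitness(worst):
            population[population.index(worst)] = child
    return max(population, key=fitness)

seeds = ["Classify the sentiment of the text",
         "Label the following review as positive or negative"]
best = evolve(list(seeds))
print(best)
```

The key design point relative to plain EvoPrompt-style evolution is that both variation operators are conditioned on reflections, so the search is steered by accumulated knowledge rather than blind recombination.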
Related papers
- ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models [39.71820341519503]
We propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO conducts a voting mechanism and introduces shared generation strategies. ELPO also presents more efficient algorithms for the prompt generation and search process.
arXiv Detail & Related papers (2025-11-20T07:27:26Z) - Rethinking On-policy Optimization for Query Augmentation [49.87723664806526]
We present the first systematic comparison of prompting-based and RL-based query augmentation across diverse benchmarks. We introduce a novel hybrid method, On-policy Pseudo-document Query Expansion (OPQE), which learns to generate a pseudo-document that maximizes retrieval performance.
arXiv Detail & Related papers (2025-10-20T04:16:28Z) - Automatic Prompt Optimization with Prompt Distillation [0.0]
DistillPrompt is a novel autoprompting method based on large language models. It employs a multi-stage integration of task-specific information into prompts using training data.
arXiv Detail & Related papers (2025-08-26T12:46:58Z) - GreenTEA: Gradient Descent with Topic-modeling and Evolutionary Auto-prompting [2.085792950847639]
GreenTEA is an agentic workflow for automatic prompt optimization. It balances candidate exploration and knowledge exploitation, iteratively refining prompts based on feedback from error samples.
arXiv Detail & Related papers (2025-08-12T06:48:30Z) - Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective [65.12150411762273]
We show that pruning random demonstrations into seemingly incoherent "gibberish" can remarkably improve performance across diverse tasks. We propose a self-discover prompt optimization framework, PromptQuine, that automatically searches for the pruning strategy by itself using only low-data regimes.
arXiv Detail & Related papers (2025-06-22T07:53:07Z) - GAAPO: Genetic Algorithmic Applied to Prompt Optimization [0.0]
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, with their performance heavily dependent on the quality of input prompts. While prompt engineering has proven effective, it typically relies on manual adjustments, making it time-consuming and potentially suboptimal. This paper introduces Genetic Algorithm Applied to Prompt Optimization, a novel hybrid optimization framework that leverages genetic principles to evolve prompts through successive generations.
arXiv Detail & Related papers (2025-04-09T11:19:42Z) - A Survey of Automatic Prompt Optimization with Instruction-focused Heuristic-based Search Algorithm [13.332569343755075]
Large Language Models have led to remarkable achievements across a variety of Natural Language Processing tasks. While manual methods can be effective, they typically rely on intuition and do not automatically refine prompts over time. In contrast, automatic prompt optimization employing heuristic-based search algorithms can systematically explore and improve prompts with minimal human oversight.
arXiv Detail & Related papers (2025-02-26T01:42:08Z) - In-context Demonstration Matters: On Prompt Optimization for Pseudo-Supervision Refinement [71.60563181678323]
Large language models (LLMs) have achieved great success across diverse tasks, and fine-tuning is sometimes needed to further enhance generation quality. To handle these challenges, a direct solution is to generate "high-confidence" data from unsupervised downstream tasks. We propose a novel approach, the pseudo-supervised demonstrations aligned prompt optimization (PAPO) algorithm, which jointly refines both the prompt and the overall pseudo-supervision.
arXiv Detail & Related papers (2024-10-04T03:39:28Z) - APEER: Automatic Prompt Engineering Enhances Large Language Model Reranking [39.649879274238856]
We introduce a novel automatic prompt engineering algorithm named APEER. APEER iteratively generates refined prompts through feedback and preference optimization. We find that the prompts generated by APEER exhibit better transferability across diverse tasks and LLMs.
arXiv Detail & Related papers (2024-06-20T16:11:45Z) - EvoPrompt: Connecting LLMs with Evolutionary Algorithms Yields Powerful Prompt Optimizers [67.64162164254809]
EvoPrompt is a framework for discrete prompt optimization. It borrows the idea of evolutionary algorithms (EAs), as they exhibit good performance and fast convergence. It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z) - TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA)
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves a 5.33x average improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z) - MetaPrompting: Learning to Learn Better Prompts [52.914694884515534]
We propose a new soft prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning algorithm.
Extensive experiments show MetaPrompting brings significant improvement on four different datasets.
arXiv Detail & Related papers (2022-09-23T09:01:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.