ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
- URL: http://arxiv.org/abs/2511.16122v1
- Date: Thu, 20 Nov 2025 07:27:26 GMT
- Title: ELPO: Ensemble Learning Based Prompt Optimization for Large Language Models
- Authors: Qing Zhang, Bing Xu, Xudong Zhang, Yifan Shi, Yang Li, Chen Zhang, Yik Chung Wu, Ngai Wong, Yijie Chen, Hong Dai, Xiansen Chen, Mian Zhang
- Abstract summary: We propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO employs a voting mechanism and introduces shared generation strategies. ELPO also presents more efficient algorithms for the prompt generation and search process.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The remarkable performance of Large Language Models (LLMs) depends heavily on carefully crafted prompts. However, manual prompt engineering is a laborious process, creating a core bottleneck for the practical application of LLMs. This has led to the emergence of a new research area known as Automatic Prompt Optimization (APO), which has developed rapidly in recent years. Existing APO methods, such as those based on evolutionary algorithms or trial-and-error approaches, achieve efficient and accurate prompt optimization to some extent. However, these approaches rely on a single model or algorithm for the generation strategy and optimization process, which limits their performance on complex tasks. To address this, we propose a novel framework called Ensemble Learning based Prompt Optimization (ELPO) to achieve more accurate and robust results. Motivated by the idea of ensemble learning, ELPO employs a voting mechanism and introduces shared generation strategies along with different search methods for finding superior prompts. Moreover, ELPO presents more efficient algorithms for the prompt generation and search process. Experimental results demonstrate that ELPO outperforms state-of-the-art prompt optimization methods across different tasks, e.g., improving the F1 score by 7.6 on the ArSarcasm dataset.
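The abstract does not spell out ELPO's algorithm, but its core idea — several generation strategies each propose candidate prompts, several search methods each pick a winner, and a vote decides the final prompt — can be sketched minimally. All names and signatures below are hypothetical illustrations, not the paper's implementation:

```python
from collections import Counter

def score_prompt(prompt, examples, llm):
    """Fraction of validation examples the LLM answers correctly with this prompt."""
    correct = sum(llm(prompt, x) == y for x, y in examples)
    return correct / len(examples)

def ensemble_prompt_search(generators, searchers, examples, llm):
    """Each (generator, searcher) pair nominates its best prompt; the
    ensemble then votes, and the most-nominated prompt wins."""
    votes = Counter()
    for gen in generators:
        candidates = gen()  # one shared generation strategy producing candidates
        for search in searchers:
            best = search(candidates, lambda p: score_prompt(p, examples, llm))
            votes[best] += 1  # one vote per search method
    return votes.most_common(1)[0][0]
```

In a real setting, `generators` would wrap LLM-based rewriting strategies and `searchers` would be distinct search procedures (e.g., greedy vs. beam-style), so the vote aggregates complementary optimizers rather than a single one.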
Related papers
- Beyond Algorithm Evolution: An LLM-Driven Framework for the Co-Evolution of Swarm Intelligence Optimization Algorithms and Prompts [2.7320188728052064]
This paper proposes a novel framework for the collaborative evolution of both swarm intelligence algorithms and their guiding prompts. The framework was rigorously evaluated on a range of NP problems, where it demonstrated superior performance. This work establishes a new paradigm for swarm intelligence optimization algorithms, underscoring the indispensable role of prompt evolution.
arXiv Detail & Related papers (2025-12-10T00:37:16Z) - Grammar-Guided Evolutionary Search for Discrete Prompt Optimisation [63.97051732013936]
We propose an evolutionary search approach to automated discrete prompt optimisation consisting of two phases. In the first phase, grammar-guided genetic programming is invoked to synthesise prompt-creating programmes. In the second phase, local search is applied to explore the neighbourhoods of the best-performing programmes.
arXiv Detail & Related papers (2025-07-14T14:34:15Z) - Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective [65.12150411762273]
We show that pruning random demonstrations into seemingly incoherent "gibberish" can remarkably improve performance across diverse tasks. We propose a self-discovering prompt optimization framework, PromptQuine, that automatically searches for the pruning strategy by itself in low-data regimes.
arXiv Detail & Related papers (2025-06-22T07:53:07Z) - GAAPO: Genetic Algorithmic Applied to Prompt Optimization [0.0]
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, with their performance heavily dependent on the quality of input prompts. While prompt engineering has proven effective, it typically relies on manual adjustments, making it time-consuming and potentially suboptimal. This paper introduces GAAPO (Genetic Algorithm Applied to Prompt Optimization), a novel hybrid optimization framework that leverages genetic principles to evolve prompts through successive generations.
arXiv Detail & Related papers (2025-04-09T11:19:42Z) - A Survey of Automatic Prompt Optimization with Instruction-focused Heuristic-based Search Algorithm [13.332569343755075]
Large Language Models have led to remarkable achievements across a variety of Natural Language Processing tasks. While manual methods can be effective, they typically rely on intuition and do not automatically refine prompts over time. Automatic prompt optimization employing heuristic-based search algorithms can systematically explore and improve prompts with minimal human oversight.
arXiv Detail & Related papers (2025-02-26T01:42:08Z) - QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries. We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks. Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2024-08-20T03:06:48Z) - Teach Better or Show Smarter? On Instructions and Exemplars in Automatic Prompt Optimization [15.967049403803749]
This paper comprehensively compares the performance of representative IO and EO techniques on a diverse set of challenging tasks.
We find that intelligently reusing model-generated input-output pairs consistently improves performance on top of IO methods.
We also observe a synergy between EO and IO, with optimal combinations surpassing the individual contributions.
arXiv Detail & Related papers (2024-06-22T02:07:10Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of prompt optimizers based on large language models (LLMs). We identify two pivotal factors in model parameter learning: update direction and update method. We develop a capable gradient-inspired prompt optimizer, GPO.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - SEE: Strategic Exploration and Exploitation for Cohesive In-Context Prompt Optimization [8.975505323004427]
We propose a novel cohesive in-context prompt optimization framework for Large Language Models (LLMs). We introduce SEE, a scalable and efficient prompt optimization framework that adopts metaheuristic optimization principles and strategically balances exploration and exploitation. SEE significantly outperforms state-of-the-art baseline methods by a large margin, achieving an average performance gain of 13.94 while reducing computational costs by 58.67.
arXiv Detail & Related papers (2024-02-17T17:47:10Z) - EvoPrompt: Connecting LLMs with Evolutionary Algorithms Yields Powerful Prompt Optimizers [67.64162164254809]
EvoPrompt is a framework for discrete prompt optimization. It borrows the idea of evolutionary algorithms (EAs), as they exhibit good performance and fast convergence. It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z)
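Several of the papers above (EvoPrompt, GAAPO) share the same evolutionary backbone: score a population of prompts, keep the fittest, and refill the population with LLM-driven rewrites. A minimal, hypothetical sketch of that loop — with `mutate` standing in for an LLM rewrite call, and none of the names taken from the papers:

```python
import random

def evolve_prompts(population, fitness, mutate, generations=10, seed=0):
    """Generic evolutionary loop in the spirit of EvoPrompt: rank the
    population by fitness, keep the better half as survivors, and refill
    the population with mutated copies of random survivors."""
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: max(1, len(population) // 2)]
        children = [mutate(rng.choice(survivors))
                    for _ in range(len(population) - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)
```

In practice, `fitness` would evaluate a prompt's task accuracy on a validation set and `mutate` would ask an LLM to paraphrase or crossover prompts; the methods surveyed differ mainly in how those two operators are designed.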