RiOT: Efficient Prompt Refinement with Residual Optimization Tree
- URL: http://arxiv.org/abs/2506.16389v1
- Date: Thu, 19 Jun 2025 15:19:56 GMT
- Title: RiOT: Efficient Prompt Refinement with Residual Optimization Tree
- Authors: Chenyi Zhou, Zhengyan Shi, Yuan Yao, Lei Liang, Huajun Chen, Qiang Zhang
- Abstract summary: Residual Optimization Tree (RiOT) is a novel framework for automatic prompt optimization. RiOT iteratively refines prompts through text gradients, generating multiple semantically diverse candidates at each step, and selects the best prompt using perplexity. A tree structure efficiently manages the optimization process, ensuring scalability and flexibility.
- Score: 32.685797785747056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in large language models (LLMs) have highlighted their potential across a variety of tasks, but their performance still heavily relies on the design of effective prompts. Existing methods for automatic prompt optimization face two challenges: (1) lack of diversity, which limits the exploration of valuable and innovative directions, and (2) semantic drift, where optimizations for one task can degrade performance on others. To address these issues, we propose Residual Optimization Tree (RiOT), a novel framework for automatic prompt optimization. RiOT iteratively refines prompts through text gradients, generating multiple semantically diverse candidates at each step, and selects the best prompt using perplexity. Additionally, RiOT incorporates a text residual connection to mitigate semantic drift by selectively retaining beneficial content across optimization iterations. A tree structure efficiently manages the optimization process, ensuring scalability and flexibility. Extensive experiments across five benchmarks, covering commonsense, mathematical, logical, temporal, and semantic reasoning, demonstrate that RiOT outperforms both previous prompt optimization methods and manual prompting.
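The abstract describes a concrete iterative loop: critique the current prompt (a text gradient), branch into several diverse rewrites, keep the lowest-perplexity candidate, and carry useful content forward through a text residual connection. The Python sketch below is an illustrative reconstruction of that control flow only, not the authors' implementation; every name in it (critique, rewrite, perplexity, merge_residual, depth) is a hypothetical placeholder.

```python
from typing import Callable, List

def riot_style_refine(
    prompt: str,
    critique: Callable[[str], str],             # LLM call: natural-language feedback on a prompt (the "text gradient")
    rewrite: Callable[[str, str], List[str]],   # LLM call: several semantically diverse rewrites given the feedback
    perplexity: Callable[[str], float],         # scoring LM's perplexity of a candidate prompt
    merge_residual: Callable[[str, str], str],  # carries useful parent content into the child (the "text residual")
    depth: int = 5,
) -> List[str]:
    """Sketch of a RiOT-style loop; returns the chain of prompts kept at each depth."""
    path = [prompt]
    for _ in range(depth):
        feedback = critique(path[-1])                # 1. textual feedback on the current best prompt
        candidates = rewrite(path[-1], feedback)     # 2. branch into diverse candidates (one tree level)
        best = min(candidates, key=perplexity)       # 3. perplexity-based selection of the branch to keep
        path.append(merge_residual(path[-1], best))  # 4. residual connection across iterations
    return path
```

Passing the LLM and scoring calls in as callables keeps the sketch model-agnostic; the paper's actual candidate generation, tree bookkeeping, and residual merging are more involved than shown here.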
Related papers
- Promptomatix: An Automatic Prompt Optimization Framework for Large Language Models [72.4723784999432]
Large Language Models (LLMs) perform best with well-crafted prompts, yet prompt engineering remains manual, inconsistent, and inaccessible to non-experts. Promptomatix transforms natural language task descriptions into high-quality prompts without requiring manual tuning or domain expertise. The system analyzes user intent, generates synthetic training data, selects prompting strategies, and refines prompts using cost-aware objectives.
arXiv Detail & Related papers (2025-07-17T18:18:20Z)
- Evolving Prompts In-Context: An Open-ended, Self-replicating Perspective [65.12150411762273]
We show that pruning random demonstrations into seemingly incoherent "gibberish" can remarkably improve performance across diverse tasks. We propose a self-discover prompt optimization framework, PromptQuine, that automatically searches for the pruning strategy by itself using only low-data regimes.
arXiv Detail & Related papers (2025-06-22T07:53:07Z)
- Tournament of Prompts: Evolving LLM Instructions Through Structured Debates and Elo Ratings [0.9437165725355702]
We introduce DEEVO, a novel framework that guides prompt evolution through debate-driven evaluation and Elo-based selection. Using Elo ratings as a fitness proxy, DEEVO simultaneously drives improvement and preserves valuable diversity in the prompt population.
arXiv Detail & Related papers (2025-05-30T19:33:41Z)
- Beyond Degradation Redundancy: Contrastive Prompt Learning for All-in-One Image Restoration [109.38288333994407]
Contrastive Prompt Learning (CPL) is a novel framework that fundamentally enhances prompt-task alignment. Our framework establishes new state-of-the-art performance while maintaining parameter efficiency, offering a principled solution for unified image restoration.
arXiv Detail & Related papers (2025-04-14T08:24:57Z)
- Efficient and Accurate Prompt Optimization: the Benefit of Memory in Exemplar-Guided Reflection [19.020514286500006]
We propose an Exemplar-Guided Reflection with Memory mechanism to realize more efficient and accurate prompt optimization. Specifically, we design an exemplar-guided reflection mechanism where the feedback generation is additionally guided by the generated exemplars. Empirical evaluations show our method surpasses previous state-of-the-art methods with fewer optimization steps.
arXiv Detail & Related papers (2024-11-12T00:07:29Z)
- SCULPT: Systematic Tuning of Long Prompts [17.00433893207345]
We propose a framework that treats prompt optimization as a hierarchical tree refinement problem. SCULPT represents prompts as tree structures, enabling targeted modifications while preserving contextual integrity. It produces more stable and interpretable prompt modifications, ensuring better generalization across tasks.
arXiv Detail & Related papers (2024-10-28T07:10:10Z)
- StraGo: Harnessing Strategic Guidance for Prompt Optimization [35.96577924228001]
StraGo is a novel approach designed to mitigate prompt drifting by leveraging insights from both successful and failed cases.
It employs a how-to-do methodology, integrating in-context learning to formulate specific, actionable strategies.
Experiments conducted across a range of tasks, including reasoning, natural language understanding, domain-specific knowledge, and industrial applications, demonstrate StraGo's superior performance.
arXiv Detail & Related papers (2024-10-11T07:55:42Z)
- In-context Demonstration Matters: On Prompt Optimization for Pseudo-Supervision Refinement [71.60563181678323]
Large language models (LLMs) have achieved great success across diverse tasks, and fine-tuning is sometimes needed to further enhance generation quality. To handle these challenges, a direct solution is to generate "high-confidence" data from unsupervised downstream tasks. We propose a novel approach, the pseudo-supervised demonstrations aligned prompt optimization (PAPO) algorithm, which jointly refines both the prompt and the overall pseudo-supervision.
arXiv Detail & Related papers (2024-10-04T03:39:28Z)
- CriSPO: Multi-Aspect Critique-Suggestion-guided Automatic Prompt Optimization for Text Generation [18.39379838806384]
We propose a novel critique-suggestion-guided automatic Prompt Optimization (CriSPO) approach. CriSPO introduces a critique-suggestion module as its core component. This module spontaneously discovers aspects, and compares generated and reference texts across these aspects, providing actionable suggestions for prompt modification. To further improve CriSPO with multi-metric optimization, we introduce an Automatic Suffix Tuning (AST) extension to enhance the performance of task prompts across multiple metrics.
arXiv Detail & Related papers (2024-10-03T17:57:01Z)
- QPO: Query-dependent Prompt Optimization via Multi-Loop Offline Reinforcement Learning [58.767866109043055]
We introduce Query-dependent Prompt Optimization (QPO), which iteratively fine-tunes a small pretrained language model to generate optimal prompts tailored to the input queries. We derive insights from offline prompting demonstration data, which already exists in large quantities as a by-product of benchmarking diverse prompts on open-sourced tasks. Experiments on various LLM scales and diverse NLP and math tasks demonstrate the efficacy and cost-efficiency of our method in both zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2024-08-20T03:06:48Z)
- Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from a prompt that contains previously generated solutions with their values; a minimal sketch of this loop appears after this list.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
arXiv Detail & Related papers (2023-09-07T00:07:15Z)
- RLPrompt: Optimizing Discrete Text Prompts With Reinforcement Learning [84.75064077323098]
This paper proposes RLPrompt, an efficient discrete prompt optimization approach with reinforcement learning (RL).
RLPrompt is flexibly applicable to different types of LMs, such as masked LMs (e.g., BERT) and left-to-right models (e.g., GPTs).
Experiments on few-shot classification and unsupervised text style transfer show superior performance over a wide range of existing finetuning or prompting methods.
arXiv Detail & Related papers (2022-05-25T07:50:31Z)
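The OPRO entry above describes a loop in which the optimizer LLM is shown previously generated prompts together with their scores and asked to propose a better one. As a rough illustration of that meta-prompt loop only (not the authors' code), the following sketch assumes two hypothetical callables, ask_llm for any chat-completion API and score for a dev-set evaluator:

```python
from typing import Callable, List, Tuple

def opro_style_loop(
    ask_llm: Callable[[str], str],     # hypothetical wrapper around any chat-completion API
    score: Callable[[str], float],     # hypothetical dev-set accuracy of a candidate prompt
    seed_prompts: List[str],
    steps: int = 10,
    top_k: int = 8,
) -> List[Tuple[float, str]]:
    """Sketch of an OPRO-style loop: the meta-prompt lists past prompts with their scores."""
    history = sorted(((score(p), p) for p in seed_prompts), reverse=True)
    for _ in range(steps):
        shown = "\n".join(f"score {s:.2f}: {p}" for s, p in history[:top_k])
        meta_prompt = (
            "Below are instructions with their accuracies. "
            "Write a new instruction that is different from all of them and scores higher.\n"
            + shown
        )
        candidate = ask_llm(meta_prompt)               # optimizer LLM proposes a new prompt
        history.append((score(candidate), candidate))
        history.sort(reverse=True)                     # keep the best-scoring prompts at the front
    return history
```

Showing only the top-scoring prompts in the meta-prompt mirrors the idea of conditioning the optimizer on the best solutions found so far; the real method's meta-prompt format and selection details differ.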
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.