Reward-Agnostic Prompt Optimization for Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2506.16853v2
- Date: Mon, 29 Sep 2025 13:36:01 GMT
- Title: Reward-Agnostic Prompt Optimization for Text-to-Image Diffusion Models
- Authors: Semin Kim, Yeonwoo Cha, Jaehoon Yoo, Seunghoon Hong
- Abstract summary: We introduce RATTPO, a flexible test-time optimization method applicable across various reward scenarios without modification.
RATTPO searches for optimized prompts by querying large language models (LLMs) without requiring reward-specific task descriptions.
Empirical results demonstrate the versatility of RATTPO, effectively enhancing user prompts across diverse reward setups.
- Score: 20.292872255460534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate a general approach for improving user prompts in text-to-image (T2I) diffusion models by finding prompts that maximize a reward function specified at test time. Although diverse reward models are used for evaluating image generation, existing automated prompt engineering methods typically target specific reward configurations. Consequently, these specialized designs exhibit suboptimal performance when applied to new prompt engineering scenarios involving different reward models. To address this limitation, we introduce RATTPO (Reward-Agnostic Test-Time Prompt Optimization), a flexible test-time optimization method applicable across various reward scenarios without modification. RATTPO iteratively searches for optimized prompts by querying large language models (LLMs) without requiring reward-specific task descriptions. Instead, it uses the optimization trajectory and a novel reward-aware feedback signal (termed a "hint") as context. Empirical results demonstrate the versatility of RATTPO, effectively enhancing user prompts across diverse reward setups that assess various generation aspects, such as aesthetics, general human preference, or spatial relationships between objects. RATTPO surpasses other test-time search baselines in search efficiency, running 4.8 times faster on average than a naive reward-agnostic test-time search baseline. Furthermore, with a sufficient inference budget, it can achieve performance comparable to learning-based baselines that require reward-specific fine-tuning. The code is available at https://github.com/seminkim/RATTPO.
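The abstract sketches an iterative, LLM-in-the-loop search. Below is a minimal Python sketch of how such a reward-agnostic loop could be organized, based only on the abstract: the helper callables (`query_llm`, `generate_image`, `reward_fn`) and the way the "hint" is built from the scored trajectory are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a reward-agnostic test-time prompt search loop,
# reconstructed from the abstract alone. All helpers (query_llm,
# generate_image, reward_fn) are hypothetical callables.

def rattpo_style_search(user_prompt, reward_fn, query_llm, generate_image,
                        n_iters=10, n_candidates=4):
    """Iteratively rewrite a prompt with an LLM, guided only by scalar rewards."""
    trajectory = [(user_prompt, reward_fn(generate_image(user_prompt)))]
    best_prompt, best_score = trajectory[0]

    for _ in range(n_iters):
        # "Hint": reward-aware feedback passed as LLM context. Here it is
        # simply the best-scoring attempts so far; the paper constructs
        # its hint signal differently.
        hint = "\n".join(
            f"prompt: {p!r} -> score: {s:.3f}"
            for p, s in sorted(trajectory, key=lambda t: -t[1])[:5]
        )
        for _ in range(n_candidates):
            candidate = query_llm(
                f"Improve this text-to-image prompt: {user_prompt!r}\n"
                f"Past attempts and scores:\n{hint}\n"
                "Reply with one improved prompt."
            )
            score = reward_fn(generate_image(candidate))
            trajectory.append((candidate, score))
            if score > best_score:
                best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

Note that the loop consumes only scalar scores, so swapping in a different reward model (aesthetics, human preference, spatial relations) requires no change to the search itself.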
Related papers
- GFlowPO: Generative Flow Network as a Language Model Prompt Optimizer [51.31263673158136]
GFlowPO casts prompt search as a posterior inference problem over latent prompts regularized by a meta-prompted reference-LM prior.
GFlowPO consistently outperforms recent discrete prompt optimization baselines.
arXiv Detail & Related papers (2026-02-03T10:30:03Z)
- GenPilot: A Multi-Agent System for Test-Time Prompt Optimization in Image Generation [13.197958581564256]
We propose a test-time prompt optimization strategy that operates directly on the input text.
Our approach is model-agnostic, interpretable, and well-suited for handling long and complex prompts.
arXiv Detail & Related papers (2025-10-08T16:51:52Z)
- Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models? [62.579951798437115]
This work investigates iterative approximate evaluation for arbitrary prompts.
It introduces Model Predictive Prompt Selection (MoPPS), a Bayesian risk-predictive framework.
MoPPS reliably predicts prompt difficulty and accelerates training with significantly reduced rollouts.
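The summary leaves the risk model unspecified; one generic way to realize Bayesian difficulty prediction is a Beta-Bernoulli posterior over each prompt's solve rate with Thompson-style selection. The sketch below is that generic construction, not MoPPS itself.

```python
import random

class SolveRatePosterior:
    """Beta-Bernoulli posterior over a prompt's solve rate; a generic
    stand-in for Bayesian difficulty prediction, not MoPPS's actual model."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta  # uniform prior

    def update(self, successes, failures):
        # Conjugate update from observed rollout outcomes.
        self.alpha += successes
        self.beta += failures

    def sample(self):
        # Thompson-style posterior draw of the solve probability.
        return random.betavariate(self.alpha, self.beta)

def pick_training_prompts(posteriors, k, target=0.5):
    """Select the k prompts whose sampled solve rate is closest to `target`;
    intermediate difficulty tends to be most informative, so full rollouts
    are spent only where they matter."""
    draws = {p: post.sample() for p, post in posteriors.items()}
    return sorted(draws, key=lambda p: abs(draws[p] - target))[:k]
```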
arXiv Detail & Related papers (2025-07-07T03:20:52Z)
- ConceptMix++: Leveling the Playing Field in Text-to-Image Benchmarking via Iterative Prompt Optimization [20.935028961216325]
ConceptMix++ is a framework that disentangles prompt phrasing from visual generation capabilities.
We show that optimized prompts significantly improve compositional generation performance.
These findings demonstrate that rigid benchmarking approaches may significantly underrepresent true model capabilities.
arXiv Detail & Related papers (2025-07-04T03:27:04Z)
- Fake it till You Make it: Reward Modeling as Discriminative Prediction [49.31309674007382]
GAN-RM is an efficient reward modeling framework that eliminates manual preference annotation and explicit quality dimension engineering.
Our method trains the reward model through discrimination between a small set of representative, unpaired target samples and model-generated outputs.
Experiments demonstrate GAN-RM's effectiveness across multiple key applications.
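A minimal sketch of the discriminative idea: score a sample by how readily a classifier mistakes it for one of a few unpaired target samples. The embedding inputs and the logistic-regression head are illustrative placeholders, not GAN-RM's actual architecture.

```python
# Reward modeling as discriminative prediction, under assumed inputs:
# precomputed embeddings for target and generated samples.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_discriminative_reward(target_feats, generated_feats):
    """target_feats, generated_feats: (n, d) arrays of sample embeddings."""
    X = np.vstack([target_feats, generated_feats])
    y = np.concatenate([np.ones(len(target_feats)),
                        np.zeros(len(generated_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # Reward: probability that a sample passes for a target sample.
    return lambda feats: clf.predict_proba(feats)[:, 1]
```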
arXiv Detail & Related papers (2025-06-16T17:59:40Z)
- ORPP: Self-Optimizing Role-playing Prompts to Enhance Language Model Capabilities [64.24517317344959]
High-quality prompts are crucial for eliciting outstanding performance from large language models on complex tasks.
We propose ORPP, a framework that enhances model performance by optimizing and generating role-playing prompts.
We show that ORPP not only matches but in most cases surpasses existing mainstream prompt optimization methods in terms of performance.
arXiv Detail & Related papers (2025-06-03T05:51:35Z)
- PMPO: Probabilistic Metric Prompt Optimization for Small and Large Language Models [1.6816171955882597]
PMPO locates low-quality prompt segments via a masking-based analysis and iteratively rewrites them to propose improved variants.
It selects among variants by minimizing loss in a single forward pass, eliminating output sampling and human- or judge-based scoring for selection.
Across model sizes and datasets, PMPO outperforms prior prompt optimization methods: it achieves the highest average accuracy on BBH, performs strongly on GSM8K and AQUA-RAT, and raises AlpacaEval 2.0 win rates by over 19 points.
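A rough sketch of the two ingredients named above: masking-based segment scoring and loss-only variant selection. `lm_loss(prompt, targets)` is a hypothetical helper returning a language model's loss on held-out targets, and the leave-one-out masking scheme is an assumption.

```python
# Both steps need only forward-pass losses: no output sampling, no judge.

def weakest_segment(segments, lm_loss, targets):
    """Drop each segment in turn; the one whose removal raises the loss
    the least (or even lowers it) is the lowest-quality segment."""
    base = lm_loss(" ".join(segments), targets)
    deltas = [lm_loss(" ".join(s for j, s in enumerate(segments) if j != i),
                      targets) - base
              for i in range(len(segments))]
    return min(range(len(segments)), key=deltas.__getitem__)

def select_variant(variants, lm_loss, targets):
    """Keep the rewrite with the lowest loss -- one forward pass per variant."""
    return min(variants, key=lambda v: lm_loss(v, targets))
```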
arXiv Detail & Related papers (2025-05-22T06:59:10Z)
- Sample, Don't Search: Rethinking Test-Time Alignment for Language Models [55.2480439325792]
We introduce QAlign, a new test-time alignment approach.
As we scale test-time compute, QAlign converges to sampling from the optimal aligned distribution for each individual prompt.
By adopting recent advances in Markov chain Monte Carlo for text generation, our method enables better-aligned outputs without modifying the underlying model or even requiring logit access.
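One standard way to obtain such guarantees without logit access is independence Metropolis-Hastings with the base model as the proposal: targeting p(y|x)·exp(r(y)/β), the proposal density cancels and the acceptance probability reduces to exp((r(y′) − r(y))/β). The sketch below shows that generic construction; QAlign's actual transition kernel may differ.

```python
import math
import random

def mh_test_time_alignment(sample_from_model, reward, beta=1.0, n_steps=100):
    """Independence Metropolis-Hastings targeting p(y|x) * exp(r(y)/beta).
    Proposals come from the base model itself, so acceptance needs only
    the ability to sample and to score -- no logits."""
    y = sample_from_model()
    r_y = reward(y)
    for _ in range(n_steps):
        y_new = sample_from_model()
        r_new = reward(y_new)
        # Accept with probability min(1, exp((r(y') - r(y)) / beta)).
        if random.random() < math.exp(min(0.0, (r_new - r_y) / beta)):
            y, r_y = y_new, r_new
    return y
```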
arXiv Detail & Related papers (2025-04-04T00:41:40Z)
- Indirect Query Bayesian Optimization with Integrated Feedback [17.66813850517961]
We develop a new class of Bayesian optimization problems where integrated feedback is given via a conditional expectation of the unknown function $f$ to be optimized.
The goal is to find the global optimum of $f$ by adaptively querying and observing in the space transformed by the conditional distribution.
This is motivated by real-world applications where one cannot access direct feedback due to privacy, hardware, or computational constraints.
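In the abstract's notation, the observable objective is an integrated version of $f$; one plausible formalization (the paper's exact setup may differ) is:

```latex
% Integrated feedback: the learner queries z and observes only the
% conditional expectation of f under p(x | z), yet the target is the
% global optimum of f itself.
g(z) = \mathbb{E}_{x \sim p(\,\cdot \mid z)}\bigl[f(x)\bigr]
     = \int f(x)\, p(x \mid z)\, \mathrm{d}x,
\qquad x^{\star} = \arg\max_{x} f(x).
```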
arXiv Detail & Related papers (2024-12-18T07:20:33Z)
- HERO: Hint-Based Efficient and Reliable Query Optimizer [0.0]
We propose a novel model for learned query optimization which provides query hints leading to better execution plans.
The model addresses the three key challenges in learned hint-based query optimization: reliable hint recommendation, efficient hint exploration, and fast inference.
Our model is interpretable and easy to debug, which is particularly important for deployment in production.
arXiv Detail & Related papers (2024-12-03T10:58:34Z)
- In-context Demonstration Matters: On Prompt Optimization for Pseudo-Supervision Refinement [71.60563181678323]
Large language models (LLMs) have achieved great success across diverse tasks, and fine-tuning is sometimes needed to further enhance generation quality.
To handle these challenges, a direct solution is to generate "high-confidence" data from unsupervised downstream tasks.
We propose a novel approach, the pseudo-supervised demonstrations aligned prompt optimization (PAPO) algorithm, which jointly refines both the prompt and the overall pseudo-supervision.
arXiv Detail & Related papers (2024-10-04T03:39:28Z)
- Large Language Models Prompting With Episodic Memory [53.8690170372303]
We propose PrOmpting with Episodic Memory (POEM), a novel prompt optimization technique that is simple, efficient, and demonstrates strong generalization capabilities.
In the testing phase, we optimize the sequence of examples for each test query by selecting the sequence that yields the highest total rewards from the top-k most similar training examples in the episodic memory.
Our results show that POEM outperforms recent techniques like TEMPERA and RLPrompt by over 5.3% in various text classification tasks.
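Read literally, the test-time step is a retrieve-then-argmax over the episodic memory. Here is a minimal sketch under an assumed data layout (records of training query, exemplar sequence, and recorded reward) and an assumed `similarity` function, neither of which is specified by the summary.

```python
def poem_style_select(test_query, memory, similarity, k=3):
    """memory: list of (train_query, exemplar_sequence, reward) records
    built during training. Retrieve the k records whose training query is
    most similar to the test query, then reuse the exemplar sequence with
    the highest recorded reward as the in-context demonstration order."""
    top_k = sorted(memory, key=lambda rec: -similarity(test_query, rec[0]))[:k]
    return max(top_k, key=lambda rec: rec[2])[1]
```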
arXiv Detail & Related papers (2024-08-14T11:19:28Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of these exemplars in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement Learning for Discrete Prompt Optimization [45.410121761165634]
RL-based techniques can be employed to search for prompts that, when fed into a target language model, maximize a set of user-specified reward functions.
Current techniques focus on maximizing the average of reward functions, which does not necessarily lead to prompts that achieve balance across rewards.
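A toy illustration of the averaging pitfall the summary points at: two prompts can tie on mean reward while differing sharply in balance, and a balance-seeking scalarization such as the minimum separates them. The scores below are made up.

```python
# Toy numbers (made up) showing how mean-reward optimization hides imbalance.

def mean_reward(scores):
    return sum(scores) / len(scores)

def min_reward(scores):
    # A balance-seeking scalarization: a prompt is only as good as its
    # worst reward.
    return min(scores)

prompt_a = [0.95, 0.10]  # excels on reward 1, fails reward 2
prompt_b = [0.55, 0.50]  # balanced across both

print(mean_reward(prompt_a), mean_reward(prompt_b))  # 0.525 vs 0.525 -- tie
print(min_reward(prompt_a), min_reward(prompt_b))    # 0.10 vs 0.50 -- b wins
```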
arXiv Detail & Related papers (2024-02-18T21:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.