DP$^2$O-SR: Direct Perceptual Preference Optimization for Real-World Image Super-Resolution
- URL: http://arxiv.org/abs/2510.18851v1
- Date: Tue, 21 Oct 2025 17:43:23 GMT
- Title: DP$^2$O-SR: Direct Perceptual Preference Optimization for Real-World Image Super-Resolution
- Authors: Rongyuan Wu, Lingchen Sun, Zhengqiang Zhang, Shihao Wang, Tianhe Wu, Qiaosi Yi, Shuai Li, Lei Zhang
- Abstract summary: We introduce a framework that aligns generative models with perceptual preferences without requiring costly human annotations.
We show that DP$^2$O-SR significantly improves perceptual quality and generalizes well to real-world benchmarks.
- Score: 31.6824458800392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benefiting from pre-trained text-to-image (T2I) diffusion models, real-world image super-resolution (Real-ISR) methods can synthesize rich and realistic details. However, due to the inherent stochasticity of T2I models, different noise inputs often lead to outputs with varying perceptual quality. Although this randomness is sometimes seen as a limitation, it also introduces a wider perceptual quality range, which can be exploited to improve Real-ISR performance. To this end, we introduce Direct Perceptual Preference Optimization for Real-ISR (DP$^2$O-SR), a framework that aligns generative models with perceptual preferences without requiring costly human annotations. We construct a hybrid reward signal by combining full-reference and no-reference image quality assessment (IQA) models trained on large-scale human preference datasets. This reward encourages both structural fidelity and natural appearance. To better utilize perceptual diversity, we move beyond the standard best-vs-worst selection and construct multiple preference pairs from outputs of the same model. Our analysis reveals that the optimal selection ratio depends on model capacity: smaller models benefit from broader coverage, while larger models respond better to stronger contrast in supervision. Furthermore, we propose hierarchical preference optimization, which adaptively weights training pairs based on intra-group reward gaps and inter-group diversity, enabling more efficient and stable learning. Extensive experiments across both diffusion- and flow-based T2I backbones demonstrate that DP$^2$O-SR significantly improves perceptual quality and generalizes well to real-world benchmarks.
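The abstract names three concrete mechanisms: a hybrid reward mixing full-reference and no-reference IQA scores, multi-pair selection within each group of stochastic outputs, and hierarchical weighting by intra-group reward gap and inter-group diversity. Below is a minimal Python sketch of how these pieces could fit together. All function names, the mixing weight, the selection ratio, and the spread-based group weighting are illustrative assumptions; the abstract does not give the paper's exact formulas.

```python
# Minimal sketch of the data-construction ideas described in the abstract.
# All names, weights, and formulas here are illustrative assumptions, not
# the paper's exact formulation.
import itertools
import random
import statistics
from dataclasses import dataclass


def full_reference_iqa(sr_img, hr_img) -> float:
    """Stand-in for a full-reference IQA model (structural fidelity)."""
    return random.random()  # replace with a real FR-IQA model


def no_reference_iqa(sr_img) -> float:
    """Stand-in for a no-reference IQA model (natural appearance)."""
    return random.random()  # replace with a real NR-IQA model


def hybrid_reward(sr_img, hr_img, alpha: float = 0.5) -> float:
    """Hybrid reward as a convex combination of the two IQA scores.
    (The paper combines IQA models trained on human-preference data;
    this particular mixing scheme is an assumption.)"""
    return alpha * full_reference_iqa(sr_img, hr_img) + \
        (1.0 - alpha) * no_reference_iqa(sr_img)


@dataclass
class PreferencePair:
    chosen: object    # higher-reward SR output
    rejected: object  # lower-reward SR output
    weight: float     # intra-group reward gap


def build_pairs(outputs, hr_img, select_ratio: float = 0.5):
    """Rank one group of stochastic SR outputs for the same LR input by
    hybrid reward, then pair the top-k against the bottom-k instead of
    only best-vs-worst. A larger select_ratio gives broader coverage
    (which the paper finds suits smaller models); a smaller one gives
    stronger contrast (which suits larger models)."""
    scored = sorted(((hybrid_reward(o, hr_img), o) for o in outputs),
                    key=lambda t: t[0], reverse=True)
    k = max(1, min(len(scored) // 2, int(len(scored) * select_ratio / 2)))
    top, bottom = scored[:k], scored[-k:]
    pairs = [PreferencePair(chosen=w, rejected=l, weight=rw - rl)
             for (rw, w), (rl, l) in itertools.product(top, bottom)]
    return pairs, [r for r, _ in scored]


def group_weights(groups_rewards):
    """Inter-group weighting: groups whose samples span a wider reward
    range (more perceptual diversity) contribute more to training."""
    spreads = [statistics.pstdev(r) if len(r) > 1 else 0.0
               for r in groups_rewards]
    total = sum(spreads) or 1.0
    return [s / total for s in spreads]


if __name__ == "__main__":
    # Toy usage: 4 LR inputs, 8 stochastic SR samples each (placeholders).
    all_pairs, all_rewards = [], []
    for _ in range(4):
        outputs = [object() for _ in range(8)]
        pairs, rewards = build_pairs(outputs, hr_img=object())
        all_pairs.append(pairs)
        all_rewards.append(rewards)
    # Final per-pair weight = intra-group gap x inter-group diversity,
    # which would then scale each pair's preference-optimization loss term.
    for g_w, pairs in zip(group_weights(all_rewards), all_pairs):
        for p in pairs:
            p.weight *= g_w
```

In the actual method each weighted pair would feed a DPO-style objective on the diffusion or flow backbone; the gap and spread functions above are simple placeholders for the paper's intra-group and inter-group terms.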
Related papers
- Bidirectional Reward-Guided Diffusion for Real-World Image Super-Resolution [79.35296000454694]
Diffusion-based super-resolution can synthesize rich details, but models trained on synthetic paired data often fail on real-world LR images.
We propose Bird-SR, a reward-guided diffusion framework that formulates super-resolution as trajectory-level preference optimization.
Experiments on real-world SR benchmarks demonstrate that Bird-SR consistently outperforms state-of-the-art methods in perceptual quality.
arXiv Detail & Related papers (2026-02-05T19:21:45Z)
- Unified Personalized Reward Model for Vision Generation [27.496220369122494]
We propose UnifiedReward-Flex, a unified personalized reward model for vision generation.
We first distill structured, high-quality reasoning traces from advanced closed-source VLMs to bootstrap SFT.
We then perform direct preference optimization (DPO) on carefully curated preference pairs to further strengthen reasoning fidelity and discriminative alignment.
arXiv Detail & Related papers (2026-02-02T17:44:21Z)
- APLOT: Robust Reward Modeling via Adaptive Preference Learning with Optimal Transport [37.21695864040979]
The reward model (RM) plays a crucial role in aligning Large Language Models (LLMs) with human preferences through Reinforcement Learning.
This paper introduces an effective enhancement to Bradley-Terry (BT)-based RMs through an adaptive margin mechanism (a generic form of the margin-augmented BT loss is sketched after this list).
arXiv Detail & Related papers (2025-10-13T03:13:28Z)
- Divergence Minimization Preference Optimization for Diffusion Model Alignment [66.31417479052774]
Divergence Minimization Preference Optimization (DMPO) is a principled method for aligning diffusion models by minimizing reverse KL divergence.
DMPO can consistently outperform or match existing techniques across different base models and test sets.
arXiv Detail & Related papers (2025-07-10T07:57:30Z)
- Unified Reward Model for Multimodal Understanding and Generation [32.22714522329413]
This paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment.
We first develop UnifiedReward on our constructed large-scale human preference dataset, including both image and video generation/understanding tasks.
arXiv Detail & Related papers (2025-03-07T08:36:05Z)
- Dual Caption Preference Optimization for Diffusion Models [53.218293277964165]
We introduce Dual Caption Preference Optimization (DCPO) to improve text-to-image diffusion models.
DCPO assigns two distinct captions to each preference pair, which reinforces the learning signal.
Experiments show that DCPO significantly improves image quality and relevance to prompts.
arXiv Detail & Related papers (2025-02-09T20:34:43Z)
- Calibrated Multi-Preference Optimization for Aligning Diffusion Models [90.15024547673785]
Calibrated Preference Optimization (CaPO) is a novel method to align text-to-image (T2I) diffusion models.
CaPO incorporates the general preference from multiple reward models without human annotated data.
Experimental results show that CaPO consistently outperforms prior methods.
arXiv Detail & Related papers (2025-02-04T18:59:23Z)
- Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences.
With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way.
Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z)
- Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models [20.70550870149442]
We introduce Annealed Importance Guidance (AIG), an inference-time regularization inspired by Annealed Importance Sampling.
Our experiments demonstrate the benefits of AIG for Stable Diffusion models, striking the optimal balance between reward optimization and image diversity.
arXiv Detail & Related papers (2024-09-09T16:27:26Z)
- PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models [13.313186665410486]
Reward finetuning has emerged as a promising approach to aligning foundation models with downstream objectives.
Existing reward finetuning methods are limited by their instability in large-scale prompt datasets.
We propose Proximal Reward Difference Prediction (PRDP) to enable stable black-box reward finetuning for diffusion models.
arXiv Detail & Related papers (2024-02-13T18:58:16Z)
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
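As shared background for several entries above (APLOT's adaptive margin, and the BT-based reward modeling the RLHF paper builds on), here is the generic margin-augmented Bradley-Terry reward loss. This is the textbook form, not APLOT's specific construction; its optimal-transport-based adaptive margin $m(x)$ is the paper's contribution and is not detailed in the summary above.

```latex
\mathcal{L}_{\mathrm{RM}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \Big[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l) - m(x)\big)\Big]
```

Here $r_\theta$ is the reward model, $(y_w, y_l)$ the chosen/rejected pair, $\sigma$ the sigmoid, and $m(x)$ a margin: a fixed constant in the vanilla formulation, made input-adaptive in APLOT.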