Multi-dimensional Preference Alignment by Conditioning Reward Itself
- URL: http://arxiv.org/abs/2512.10237v1
- Date: Thu, 11 Dec 2025 02:44:31 GMT
- Title: Multi-dimensional Preference Alignment by Conditioning Reward Itself
- Authors: Jiho Jang, Jinyoung Kim, Kyungjune Baek, Nojun Kwak
- Abstract summary: Multi Reward Conditional DPO (MCDPO) resolves reward conflicts by introducing a disentangled Bradley-Terry objective. Experiments on Stable Diffusion 1.5 and SDXL demonstrate that MCDPO achieves superior performance on benchmarks.
- Score: 32.33870784484853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement Learning from Human Feedback has emerged as a standard for aligning diffusion models. However, we identify a fundamental limitation in the standard DPO formulation: it relies on the Bradley-Terry model to aggregate diverse evaluation axes, such as aesthetic quality and semantic alignment, into a single scalar reward. This aggregation creates a reward conflict in which the model is forced to unlearn desirable features along a specific dimension if they appear in a globally non-preferred sample. To address this issue, we propose Multi Reward Conditional DPO (MCDPO), which resolves reward conflicts by introducing a disentangled Bradley-Terry objective. MCDPO explicitly injects a preference outcome vector as a condition during training, which allows the model to learn the correct optimization direction for each reward axis independently within a single network. We further introduce dimensional reward dropout to ensure balanced optimization across dimensions. Extensive experiments on Stable Diffusion 1.5 and SDXL demonstrate that MCDPO achieves superior performance on benchmarks. Notably, our conditional framework enables dynamic, multi-axis control at inference time, using Classifier Free Guidance to amplify specific reward dimensions without additional training or external reward models.
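The abstract names three training-time ingredients (a disentangled Bradley-Terry objective over per-dimension preference outcomes, conditioning on the preference outcome vector, and dimensional reward dropout) plus CFG-based control at inference. The paper's actual implementation is not given here, so the following PyTorch sketch is only an illustration of how such a loss could be wired up: the function name, tensor shapes, dropout rate, and the way the implicit log-ratios are obtained (for diffusion models this would typically follow a Diffusion-DPO-style denoising-error difference, with the outcome vector fed to the network as an extra condition) are all assumptions.

```python
import torch
import torch.nn.functional as F

def mcdpo_loss(logratio_a, logratio_b, outcomes, beta=0.1, dim_dropout=0.3):
    """Sketch of a disentangled multi-reward Bradley-Terry objective.

    logratio_a, logratio_b : (batch,) implicit log(pi_theta / pi_ref) terms for the two
        samples of each pair, produced by a network conditioned on the preference
        outcome vector (the conditioning itself is not shown here).
    outcomes : (batch, K) float tensor in {+1., -1.}; +1 means sample A wins on
        reward dimension k, -1 means sample B wins.
    """
    margin = beta * (logratio_a - logratio_b)            # (batch,)
    per_dim_logits = outcomes * margin.unsqueeze(-1)     # (batch, K) signed per-axis margins

    # Dimensional reward dropout (illustrative): randomly mask axes so that no
    # single reward dimension dominates the update.
    keep = (torch.rand_like(per_dim_logits) > dim_dropout).float()

    per_dim_nll = -F.logsigmoid(per_dim_logits)          # Bradley-Terry NLL per axis
    return (per_dim_nll * keep).sum() / keep.sum().clamp(min=1.0)
```

For the inference-time control mentioned at the end of the abstract, one plausible reading is ordinary multi-condition Classifier Free Guidance over the outcome vector; the helper below is a hypothetical illustration of that reading, not the authors' code.

```python
def amplify_reward_axes(eps_uncond, eps_cond_per_axis, weights):
    """Hypothetical multi-axis CFG: push the denoising prediction toward the
    outcome-conditioned branch of each selected reward dimension k by weight w_k."""
    eps = eps_uncond.clone()
    for k, w in weights.items():
        eps = eps + w * (eps_cond_per_axis[k] - eps_uncond)
    return eps
```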
Related papers
- Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning [27.33241821967005]
We propose a novel framework that mitigates Preference Mode Collapse (PMC). D$^2$-Align achieves superior alignment with human preference.
arXiv Detail & Related papers (2025-12-30T11:17:52Z)
- Probing Preference Representations: A Multi-Dimensional Evaluation and Analysis Method for Reward Models [63.00458229517523]
This work addresses the evaluation challenge of reward models by probing preference representations. We construct a Multi-dimensional Reward Model Benchmark (MRMBench), a collection of six probing tasks for different preference dimensions. We introduce an analysis method, inference-time probing, which identifies the dimensions used during reward prediction and enhances its interpretability.
arXiv Detail & Related papers (2025-11-16T05:29:29Z)
- G$^2$RPO: Granular GRPO for Precise Reward in Flow Models [74.21206048155669]
We propose a novel Granular-GRPO (G$^2$RPO) framework that achieves precise and comprehensive reward assessments of sampling directions. We introduce a Multi-Granularity Advantage Integration module that aggregates advantages computed at multiple diffusion scales. Our G$^2$RPO significantly outperforms existing flow-based GRPO baselines.
arXiv Detail & Related papers (2025-10-02T12:57:12Z)
- A Principled Loss Function for Direct Language Model Alignment [0.0]
We propose a novel loss function derived directly from the RLHF optimality condition. Our proposed loss targets a specific finite value for the logits, which is dictated by the underlying reward, rather than its difference. This inherent stability prevents reward hacking and leads to more effective alignment.
arXiv Detail & Related papers (2025-08-10T01:56:58Z)
- Fake it till You Make it: Reward Modeling as Discriminative Prediction [49.31309674007382]
GAN-RM is an efficient reward modeling framework that eliminates manual preference annotation and explicit quality dimension engineering. Our method trains the reward model through discrimination between a small set of representative, unpaired target samples. Experiments demonstrate our GAN-RM's effectiveness across multiple key applications.
arXiv Detail & Related papers (2025-06-16T17:59:40Z)
- AMoPO: Adaptive Multi-objective Preference Optimization without Reward Models and Reference Models [18.249363312256722]
AMoPO is a novel framework that achieves dynamic balance across preference dimensions. We introduce the multi-objective optimization paradigm to use the dimension-aware generation metrics as implicit rewards. Empirical results demonstrate that AMoPO outperforms state-of-the-art baselines by 28.5%.
arXiv Detail & Related papers (2025-06-08T14:31:06Z)
- Bridging Jensen Gap for Max-Min Group Fairness Optimization in Recommendation [63.66719748453878]
Group max-min fairness (MMF) is commonly used in fairness-aware recommender systems (RS) as an optimization objective. We present an efficient and effective algorithm named FairDual, which utilizes a dual optimization technique to minimize the Jensen gap. Our theoretical analysis demonstrates that FairDual can achieve a sub-linear convergence rate to the globally optimal solution.
arXiv Detail & Related papers (2025-02-13T13:33:45Z)
- Calibrated Multi-Preference Optimization for Aligning Diffusion Models [90.15024547673785]
Calibrated Preference Optimization (CaPO) is a novel method to align text-to-image (T2I) diffusion models. CaPO incorporates the general preference from multiple reward models without human annotated data. Experimental results show that CaPO consistently outperforms prior methods.
arXiv Detail & Related papers (2025-02-04T18:59:23Z)
- Test-time Alignment of Diffusion Models without Reward Over-optimization [8.981605934618349]
Diffusion models excel in generative tasks, but aligning them with specific objectives remains challenging. We propose a training-free, test-time method based on Sequential Monte Carlo (SMC) to sample from the reward-aligned target distribution. We demonstrate its effectiveness in single-reward optimization, multi-objective scenarios, and online black-box optimization. (A minimal reward-weighted resampling sketch appears after this list.)
arXiv Detail & Related papers (2025-01-10T09:10:30Z)
- Robust Preference Optimization through Reward Model Distillation [68.65844394615702]
Direct Preference Optimization (DPO) is a popular offline alignment method that trains a policy directly on preference data. We analyze this phenomenon and use distillation to get a better proxy for the true preference distribution over generation pairs. Our results show that distilling from such a family of reward models leads to improved robustness to distribution shift in preference annotations. (A hedged soft-label distillation sketch appears after this list.)
arXiv Detail & Related papers (2024-05-29T17:39:48Z)
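For the test-time alignment entry above ("Test-time Alignment of Diffusion Models without Reward Over-optimization"), the summary only says that Sequential Monte Carlo is used to sample from a reward-aligned distribution. A generic reward-tilted resampling step looks roughly like the sketch below; the weighting scheme, temperature, and the source of intermediate reward estimates are assumptions, not that paper's exact procedure.

```python
import torch

def smc_resample(particles, rewards, temperature=1.0):
    """One generic reward-tilted SMC resampling step: duplicate high-reward
    particles and drop low-reward ones while keeping the population size fixed.

    particles : (N, ...) tensor of partially denoised samples
    rewards   : (N,) tensor of intermediate reward estimates for those samples
    """
    weights = torch.softmax(rewards / temperature, dim=0)   # normalized importance weights
    idx = torch.multinomial(weights, num_samples=particles.shape[0], replacement=True)
    return particles[idx]
```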
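Similarly, for "Robust Preference Optimization through Reward Model Distillation", the summary describes replacing hard preference labels with a distilled proxy of the true preference distribution. One simple way such distillation could look is to train the policy's implicit preference logit against the soft Bradley-Terry probability implied by a reward model (or an ensemble of them); the sketch below illustrates that general idea and is not that paper's exact loss.

```python
import torch
import torch.nn.functional as F

def distilled_preference_loss(policy_margin, rm_margin, beta=0.1):
    """Sketch of reward-model distillation for preference optimization.

    policy_margin : (batch,) difference of implicit log-ratios,
        [log pi(y_w)/pi_ref(y_w)] - [log pi(y_l)/pi_ref(y_l)]
    rm_margin     : (batch,) reward-model score difference r(y_w) - r(y_l)
    """
    soft_label = torch.sigmoid(rm_margin)   # soft Bradley-Terry preference from the reward model
    logits = beta * policy_margin           # policy's implicit preference logit
    return F.binary_cross_entropy_with_logits(logits, soft_label)
```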