VRM: Teaching Reward Models to Understand Authentic Human Preferences
- URL: http://arxiv.org/abs/2603.04974v1
- Date: Thu, 05 Mar 2026 09:12:39 GMT
- Title: VRM: Teaching Reward Models to Understand Authentic Human Preferences
- Authors: Biao Liu, Ning Xu, Junming Yang, Hao Xu, Xin Geng
- Abstract summary: Variational Reward Modeling is a novel framework that explicitly models the evaluation process of human preference judgments. We show that VRM significantly outperforms existing methods in capturing authentic human preferences.
- Score: 39.939650821889764
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Large Language Models (LLMs) have achieved remarkable success across diverse natural language tasks, yet the reward models employed to align LLMs often suffer from reward hacking: existing approaches predominantly map prompt-response pairs directly to scalar scores, which may inadvertently capture spurious correlations rather than authentic human preferences. In contrast, human evaluation follows a more sophisticated process: it first weighs the relative importance of multiple high-dimensional objectives according to the prompt context, then judges response quality through low-dimensional semantic features such as logical coherence and contextual appropriateness. Motivated by this observation, we propose VRM, i.e., Variational Reward Modeling, a novel framework that explicitly models the evaluation process of human preference judgments by incorporating both high-dimensional objective weights and low-dimensional semantic features as latent variables, which are inferred through variational inference techniques. Additionally, we provide a theoretical analysis showing that VRM achieves a tighter generalization error bound than the traditional reward model. Extensive experiments on benchmark datasets demonstrate that VRM significantly outperforms existing methods in capturing authentic human preferences.
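To make the abstract's two-latent-variable design concrete, here is a minimal, hypothetical sketch of how a VRM-style reward model might look, assuming Gaussian variational posteriors, a Bradley-Terry preference likelihood, and pooled prompt/response embeddings from a frozen LLM encoder. All module names, dimensions, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical VRM-style sketch (not the paper's code). Objective weights w
# are inferred from the prompt; low-dim semantic features z are inferred from
# the prompt-response pair; both use Gaussian posteriors with standard-normal
# priors, and the preference likelihood is Bradley-Terry.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalRewardModel(nn.Module):
    def __init__(self, hidden_dim=768, num_objectives=8, sem_dim=16):
        super().__init__()
        # q(w | prompt): posterior over high-dimensional objective weights
        self.w_head = nn.Linear(hidden_dim, 2 * num_objectives)
        # q(z | prompt, response): posterior over low-dim semantic features
        self.z_head = nn.Linear(2 * hidden_dim, 2 * sem_dim)
        # maps semantic features to per-objective quality scores
        self.score_head = nn.Linear(sem_dim, num_objectives)

    @staticmethod
    def _sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return z, kl

    def forward(self, h_prompt, h_response):
        # objective weights depend only on the prompt context
        w, kl_w = self._sample(self.w_head(h_prompt))
        # semantic features depend on the prompt-response pair
        z, kl_z = self._sample(self.z_head(torch.cat([h_prompt, h_response], -1)))
        # reward: prompt-dependent weights applied to per-objective scores
        reward = (torch.softmax(w, -1) * self.score_head(z)).sum(-1)
        return reward, kl_w + kl_z

def vrm_loss(model, h_prompt, h_chosen, h_rejected, beta=0.01):
    """Bradley-Terry preference likelihood plus KL regularization (ELBO-style)."""
    r_c, kl_c = model(h_prompt, h_chosen)
    r_r, kl_r = model(h_prompt, h_rejected)
    return -F.logsigmoid(r_c - r_r).mean() + beta * (kl_c + kl_r).mean()
```

A training step under these assumptions would pool hidden states for the prompt, chosen, and rejected responses from a frozen encoder and minimize `vrm_loss` over preference pairs.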
Related papers
- LVLM-Aided Alignment of Task-Specific Vision Models [49.96265491629163]
Small task-specific vision models are crucial in high-stakes domains. We introduce a novel and efficient method for aligning small task-specific vision models with human domain knowledge. Our method demonstrates substantial improvement in aligning model behavior with human specifications.
arXiv Detail & Related papers (2025-12-26T11:11:25Z)
- RoleRMBench & RoleRM: Towards Reward Modeling for Profile-Based Role Play in Dialogue Systems [85.16327248973387]
We develop RoleRM, a reward model trained with Continuous Implicit Preferences (CIP). We show RoleRM surpasses strong open- and closed-source reward models by over 24% on average. Our findings highlight the importance of continuous preference representation and annotation consistency, establishing a foundation for subjective alignment in human-centered dialogue systems.
arXiv Detail & Related papers (2025-12-11T12:04:46Z)
- Uncertainty Quantification for Large Language Model Reward Learning under Heterogeneous Human Feedback [8.538830579425147]
We study estimation and statistical inference for the reward models used in aligning large language models (LLMs). A key component of LLM alignment is reinforcement learning from human feedback.
arXiv Detail & Related papers (2025-12-02T20:22:25Z)
- Probing Preference Representations: A Multi-Dimensional Evaluation and Analysis Method for Reward Models [63.00458229517523]
This work addresses the evaluation challenge of reward models by probing preference representations. We construct a Multi-dimensional Reward Model Benchmark (MRMBench), a collection of six probing tasks for different preference dimensions. We introduce an analysis method, inference-time probing, which identifies the dimensions used during reward prediction and enhances interpretability.
arXiv Detail & Related papers (2025-11-16T05:29:29Z)
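As a rough illustration of what probing preference representations can look like (an assumption in the spirit of the entry above, not MRMBench's actual protocol), one can fit a linear probe on a reward model's pooled hidden states to test whether a given preference dimension is linearly decodable.

```python
# Illustrative linear probe (names and protocol are assumptions): fit a
# classifier on reward-model hidden states for one preference dimension.
from sklearn.linear_model import LogisticRegression

def probe_dimension(train_feats, train_labels, test_feats, test_labels):
    # train_feats: (N, D) pooled hidden states extracted from the reward model
    # train_labels: (N,) binary labels for the probed dimension (e.g. safety)
    probe = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
    return probe.score(test_feats, test_labels)  # linear decodability accuracy
```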
- Reward Model Interpretability via Optimal and Pessimal Tokens [4.951383975460995]
Reward modeling has emerged as a crucial component in aligning large language models with human values. We present a novel approach to reward model interpretability through exhaustive analysis of their responses across their entire vocabulary space. We find that these models can encode concerning biases toward certain identity groups, which may emerge as unintended consequences of harmlessness training.
arXiv Detail & Related papers (2025-06-08T23:56:58Z)
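The exhaustive vocabulary-space analysis described in this entry can be sketched as a brute-force sweep: score every single-token response under the reward model and rank tokens by reward. The `reward_model.score(prompt, response)` interface below is a hypothetical stand-in, not a specific library API.

```python
# Brute-force sweep over the vocabulary to find highest- and lowest-reward
# single-token responses; `reward_model` and `tokenizer` are assumed stand-ins.
import torch

@torch.no_grad()
def optimal_and_pessimal_tokens(reward_model, tokenizer, prompt, k=10):
    scores = torch.tensor([
        reward_model.score(prompt, tokenizer.decode([tid]))  # scalar reward
        for tid in range(tokenizer.vocab_size)
    ])
    best = scores.topk(k).indices.tolist()                   # "optimal" tokens
    worst = scores.topk(k, largest=False).indices.tolist()   # "pessimal" tokens
    return ([tokenizer.decode([t]) for t in best],
            [tokenizer.decode([t]) for t in worst])
```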
- Preference Learning for AI Alignment: a Causal Perspective [55.2480439325792]
We frame this problem in a causal paradigm, providing the rich toolbox of causality to identify persistent challenges. Drawing on the causal-inference literature, we identify key assumptions necessary for reliable generalisation. We illustrate failure modes of naive reward models and demonstrate how causally-inspired approaches can improve model robustness.
arXiv Detail & Related papers (2025-06-06T10:45:42Z)
- Rethinking Reward Model Evaluation Through the Lens of Reward Overoptimization [15.729285736811383]
Reward models play a crucial role in reinforcement learning from human feedback. Existing benchmarks for reward models show a weak correlation with the performance of optimized policies.
arXiv Detail & Related papers (2025-05-19T06:43:08Z)
- Two Minds Better Than One: Collaborative Reward Modeling for LLM Alignment [35.80989342492335]
Noisy preferences in human feedback can lead to reward misgeneralization. This paper aims to identify how noisy preferences differ from human-aligned preferences in reward modeling. We propose an online Collaborative Reward Modeling framework to achieve robust preference learning.
arXiv Detail & Related papers (2025-05-15T10:58:20Z)
- Disentangling Length Bias In Preference Learning Via Response-Conditioned Modeling [87.17041933863041]
Reinforcement Learning from Human Feedback (RLHF) has achieved considerable success in aligning large language models (LLMs). We introduce a Response-conditioned Bradley-Terry (Rc-BT) model that enhances the model's capability in length-bias mitigation and length-instruction following. We also propose the Rc-RM and Rc-DPO algorithms, which leverage the Rc-BT model for reward modeling and direct policy optimization, respectively.
arXiv Detail & Related papers (2025-02-02T14:50:25Z)
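For reference, the Bradley-Terry objective that the Rc-BT entry above (and most reward models in this list) builds on is a pairwise logistic loss on reward differences. The response-conditioning shown here, folding an explicit length requirement into the prompt, is only an illustrative reading of the abstract, not the paper's exact formulation.

```python
# Pairwise Bradley-Terry loss: P(chosen > rejected) = sigmoid(r_c - r_r).
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen, r_rejected):
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def length_conditioned_prompt(prompt, max_words):
    # Hypothetical response-conditioning in the spirit of Rc-BT: make the
    # length requirement explicit so the model judges quality under the
    # constraint instead of rewarding sheer length.
    return f"{prompt}\n(Answer in at most {max_words} words.)"
```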