ResponseRank: Data-Efficient Reward Modeling through Preference Strength Learning
- URL: http://arxiv.org/abs/2512.25023v1
- Date: Wed, 31 Dec 2025 18:21:52 GMT
- Title: ResponseRank: Data-Efficient Reward Modeling through Preference Strength Learning
- Authors: Timo Kaufmann, Yannick Metz, Daniel Keim, Eyke Hüllermeier
- Abstract summary: We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in proxy signals to rank responses to pairwise comparisons by their inferred preference strength. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks; and (3) the Pearson Distance Correlation (PDC), a novel metric that isolates cardinal utility learning from ordinal accuracy.
- Score: 26.19338354679139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Binary choices, as often used for reinforcement learning from human feedback (RLHF), convey only the direction of a preference. A person may choose apples over oranges and bananas over grapes, but which preference is stronger? Strength is crucial for decision-making under uncertainty and generalization of preference models, but hard to measure reliably. Metadata such as response times and inter-annotator agreement can serve as proxies for strength, but are often noisy and confounded. We propose ResponseRank to address the challenge of learning from noisy strength signals. Our method uses relative differences in proxy signals to rank responses to pairwise comparisons by their inferred preference strength. To control for systemic variation, we compare signals only locally within carefully constructed strata. This enables robust learning of utility differences consistent with strength-derived rankings while making minimal assumptions about the strength signal. Our contributions are threefold: (1) ResponseRank, a novel method that robustly learns preference strength by leveraging locally valid relative strength signals; (2) empirical evidence of improved sample efficiency and robustness across diverse tasks: synthetic preference learning (with simulated response times), language modeling (with annotator agreement), and RL control tasks (with simulated episode returns); and (3) the Pearson Distance Correlation (PDC), a novel metric that isolates cardinal utility learning from ordinal accuracy.
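The abstract stops short of pseudocode, so the following is a minimal sketch of its two named ideas under explicit assumptions: a within-stratum hinge ranking loss that orders utility gaps by a proxy strength signal, and one plausible reading of the PDC metric as a Pearson correlation over pairwise utility distances. All names (`strength_ranking_loss`, `proxy_strength`, `strata`) and the margin value are illustrative, not the authors' implementation.

```python
import torch

def strength_ranking_loss(utility_gaps, proxy_strength, strata, margin=0.1):
    """Within each stratum, push the utility gap of the comparison with the
    stronger proxy signal above the weaker one's gap by at least `margin`
    (a pairwise hinge ranking loss over comparisons, not over responses)."""
    losses = []
    for s in strata.unique():
        idx = (strata == s).nonzero(as_tuple=True)[0]  # comparisons in this stratum
        for i in idx:
            for j in idx:
                if proxy_strength[i] > proxy_strength[j]:
                    # comparison i is inferred stronger, so its gap should be larger
                    losses.append(torch.relu(margin - (utility_gaps[i] - utility_gaps[j])))
    return torch.stack(losses).mean() if losses else torch.tensor(0.0)

def pearson_distance_correlation(u_pred, u_true):
    """One plausible reading of PDC: Pearson correlation between predicted
    and ground-truth pairwise utility distances, which is invariant to the
    shift and scale of the learned utilities (cardinal, not just ordinal, fit)."""
    dp = (u_pred[:, None] - u_pred[None, :]).abs().flatten()
    dt = (u_true[:, None] - u_true[None, :]).abs().flatten()
    dp, dt = dp - dp.mean(), dt - dt.mean()
    return (dp * dt).sum() / (dp.norm() * dt.norm() + 1e-8)
```

In training, such a ranking term would presumably be added to a standard direction loss (e.g., Bradley-Terry), so the model learns both which response wins and roughly by how much.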
Related papers
- Robust Preference Alignment via Directional Neighborhood Consensus [13.313830197011983]
We introduce Robust Preference Selection (RPS), a post-hoc, training-free method that leverages directional neighborhood consensus. RPS samples multiple responses from a local neighborhood of related preferences to create a superior candidate pool. Our work presents a practical, theoretically grounded solution for enhancing the reliability of preference-aligned models.
arXiv Detail & Related papers (2025-10-23T12:39:20Z)
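A rough sketch of the consensus idea in the RPS summary above, with `generate` (a preference-conditioned generator) and `score` (a preference-consistency scorer) as hypothetical stand-ins; the paper's actual selection rule may differ.

```python
# Hypothetical helpers: generate(prompt, pref) returns a response conditioned
# on a preference direction; score(resp, pref) rates preference consistency.
def robust_preference_selection(prompt, target_pref, neighbor_prefs, generate, score, k=4):
    pool = []
    for pref in [target_pref, *neighbor_prefs]:   # local neighborhood of related preferences
        pool.extend(generate(prompt, pref) for _ in range(k))
    # post-hoc and training-free: pick the candidate that best fits the target preference
    return max(pool, key=lambda resp: score(resp, target_pref))
```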
- Reference-Free Rating of LLM Responses via Latent Information [53.463883683503106]
We study the common practice of asking a judge model to assign Likert-scale scores to free-text responses. We then propose and evaluate Latent Judges, which derive scalar ratings from internal model signals. Across a broad suite of pairwise and single-rating benchmarks, latent methods match or surpass standard prompting.
arXiv Detail & Related papers (2025-09-29T12:15:52Z)
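The summary leaves "internal model signals" abstract; one common way to get a scalar from a judge's logits is a probability-weighted expectation over the Likert rating tokens, sketched below. Whether this matches the paper's Latent Judges is an assumption.

```python
import math

def expected_likert_score(rating_logprobs):
    """rating_logprobs: log-probabilities the judge assigns to the rating
    tokens "1".."5" at the scoring position. Returns the probability-weighted
    mean rating instead of the argmax token."""
    probs = {tok: math.exp(lp) for tok, lp in rating_logprobs.items()}
    z = sum(probs.values())
    return sum(int(tok) * p / z for tok, p in probs.items())

# usage: expected_likert_score({"1": -4.2, "2": -2.0, "3": -0.9, "4": -0.7, "5": -2.5})
```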
- The Delta Learning Hypothesis: Preference Tuning on Weak Data can Yield Strong Gains [50.66245575710432]
We show that paired preference data consisting of individually weak data points can enable gains beyond the strength of each individual data point. Models can learn surprisingly well from paired data that might typically be considered weak.
arXiv Detail & Related papers (2025-07-08T17:14:44Z)
- Enhancing Preference-based Linear Bandits via Human Response Time [25.92686846689662]
Interactive preference learning systems infer human preferences by presenting queries as pairs of options and collecting binary choices. We propose a method that combines choices and response times to estimate human utility functions. We incorporate this estimator into preference-based linear bandits for fixed-budget best-arm identification.
arXiv Detail & Related papers (2024-09-09T17:02:47Z)
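As a one-line illustration of combining choices and response times: under a drift-diffusion model of decision-making, the ratio of the average signed choice to the average response time is proportional to the utility gap. The paper's estimator and its bandit integration are more involved; this sketch only captures the intuition.

```python
def utility_gap_estimate(choices, times):
    """choices: +1 if option A was picked, -1 for B; times: seconds per query.
    Ratio-of-means estimator: fast, consistent choices imply a large utility
    gap, while slow and mixed choices imply a small one."""
    assert len(choices) == len(times) > 0
    return sum(choices) / sum(times)
```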
- Aligning Large Language Models from Self-Reference AI Feedback with one General Principle [61.105703857868775]
We propose a self-reference-based AI feedback framework that enables a 13B Llama2-Chat to provide high-quality feedback.
Specifically, we allow the AI to first respond to the user's instructions, then generate criticism of other answers based on its own response as a reference.
Finally, we determine which answer better fits human preferences according to the criticism.
arXiv Detail & Related papers (2024-06-17T03:51:46Z)
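The three steps in that summary map directly onto a prompting pipeline. The sketch below uses a hypothetical `llm` text-completion callable and illustrative prompts, not the paper's own.

```python
def self_reference_feedback(llm, instruction, answer_a, answer_b):
    # Step 1: the AI first answers the instruction itself.
    own = llm(f"Answer the following instruction:\n{instruction}")
    # Step 2: it criticizes the candidate answers using its own as a reference.
    critique = llm(
        f"Reference answer:\n{own}\n\nCriticize these answers to "
        f"'{instruction}':\nA: {answer_a}\nB: {answer_b}"
    )
    # Step 3: the criticism determines which answer better fits preferences.
    return llm(f"Given this criticism:\n{critique}\nWhich answer is better, A or B?")
```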
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
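A score-level stand-in for the contrastive idea in that summary (the paper may contrast representations rather than scalar rewards): a margin term is added to the usual Bradley-Terry ranking loss so chosen and rejected responses are actively pushed apart. The margin and weight are illustrative.

```python
import torch.nn.functional as F

def reward_model_loss(r_chosen, r_rejected, margin=1.0, beta=0.5):
    """r_chosen / r_rejected: reward-model scores for the paired responses."""
    bt = -F.logsigmoid(r_chosen - r_rejected).mean()               # ranking term
    contrastive = F.relu(margin - (r_chosen - r_rejected)).mean()  # separation term
    return bt + beta * contrastive
```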
- A Minimaximalist Approach to Reinforcement Learning from Human Feedback [49.45285664482369]
We present Self-Play Preference Optimization (SPO), an algorithm for reinforcement learning from human feedback.
Our approach is minimalist in that it requires neither training a reward model nor unstable adversarial training.
We demonstrate that on a suite of continuous control tasks, we are able to learn significantly more efficiently than reward-model based approaches.
arXiv Detail & Related papers (2024-01-08T17:55:02Z)
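The reward-model-free claim in that summary works because pairwise preferences among a policy's own samples already yield a scalar signal: score each trajectory by its win-rate against the others. A minimal sketch, with `prefers(a, b)` as a hypothetical preference oracle:

```python
def self_play_rewards(trajectories, prefers):
    """Win-rate of each trajectory against the policy's other samples;
    usable directly as an RL reward, with no learned reward model."""
    n = len(trajectories)
    return [
        sum(prefers(ti, tj) for j, tj in enumerate(trajectories) if j != i) / (n - 1)
        for i, ti in enumerate(trajectories)
    ]
```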
- Deep Reinforcement Learning from Hierarchical Preference Design [99.46415116087259]
This paper shows that, by exploiting certain structures, one can ease the reward design process.
We propose a hierarchical reward modeling framework, HERON, for two scenarios: (I) the feedback signals naturally form a hierarchy; (II) the reward is sparse, but less important surrogate feedback is available to help policy learning.
arXiv Detail & Related papers (2023-09-06T00:44:29Z)
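For scenario (I) above, a hierarchy of feedback signals suggests a lexicographic comparison: decide on the most important signal first and fall through to weaker signals on near-ties. The sketch below illustrates that idea, not HERON's actual decision tree; the signal ordering and tolerance are assumptions.

```python
def hierarchical_prefer(feats_a, feats_b, tol=0.05):
    """feats_*: per-trajectory feedback signals, most important first.
    Returns True/False once a level is decisive, or None on a full tie."""
    for fa, fb in zip(feats_a, feats_b):
        if abs(fa - fb) > tol * max(abs(fa), abs(fb), 1.0):
            return fa > fb   # decided at this level of the hierarchy
    return None
```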
- Policy Evaluation and Seeking for Multi-Agent Reinforcement Learning via Best Response [15.149039407681945]
We adopt strict best response dynamics to model selfish behaviors at a meta-level for multi-agent reinforcement learning.
Our approach is more compatible with single-agent reinforcement learning than alpha-rank, which relies on weakly better responses.
arXiv Detail & Related papers (2020-06-17T01:17:52Z)
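"Strict best response dynamics" means a player switches only to a strictly better reply to the current strategy profile (in contrast to alpha-rank's weakly better responses). A toy sketch over a finite meta-game, with `payoff` as a hypothetical evaluator:

```python
def strict_best_response_dynamics(payoff, n_strategies, profile, max_iters=100):
    """payoff(player, profile) -> float; profile: one strategy index per player.
    Iterate until no player has a strictly better unilateral deviation."""
    for _ in range(max_iters):
        changed = False
        for p in range(len(profile)):
            best = payoff(p, profile)
            for s in range(n_strategies):
                trial = profile[:p] + [s] + profile[p + 1:]
                if payoff(p, trial) > best:
                    profile, best, changed = trial, payoff(p, trial), True
        if not changed:
            break   # no strictly better response: a pure equilibrium of the meta-game
    return profile
```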