RewardBench 2: Advancing Reward Model Evaluation
- URL: http://arxiv.org/abs/2506.01937v1
- Date: Mon, 02 Jun 2025 17:54:04 GMT
- Title: RewardBench 2: Advancing Reward Model Evaluation
- Authors: Saumya Malik, Valentina Pyatkin, Sander Land, Jacob Morrison, Noah A. Smith, Hannaneh Hajishirzi, Nathan Lambert
- Abstract summary: Reward models are used throughout the post-training of language models to capture nuanced signals from preference data. The community has begun establishing best practices for evaluating reward models. This paper introduces RewardBench 2, a new multi-skill reward modeling benchmark.
- Score: 71.65938693914153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reward models are used throughout the post-training of language models to capture nuanced signals from preference data and provide a training target for optimization across instruction following, reasoning, safety, and more domains. The community has begun establishing best practices for evaluating reward models, from the development of benchmarks that test capabilities in specific skill areas to others that test agreement with human preferences. At the same time, progress in evaluation has not been mirrored by the effectiveness of reward models in downstream tasks -- simpler direct alignment algorithms are reported to work better in many cases. This paper introduces RewardBench 2, a new multi-skill reward modeling benchmark designed to bring new, challenging data for accuracy-based reward model evaluation -- models score about 20 points on average lower on RewardBench 2 compared to the first RewardBench -- while being highly correlated with downstream performance. Compared to most other benchmarks, RewardBench 2 sources new human prompts instead of existing prompts from downstream evaluations, facilitating more rigorous evaluation practices. In this paper, we describe our benchmark construction process and report how existing models perform on it, while quantifying how performance on the benchmark correlates with downstream use of the models in both inference-time scaling algorithms, like best-of-N sampling, and RLHF training algorithms like proximal policy optimization.
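To make the phrase "accuracy-based reward model evaluation" concrete, here is a minimal sketch of scoring a reward model on prompt/chosen/rejected preference data: the model gets credit whenever it assigns the chosen response a higher scalar reward than the rejected one. The example data and the `toy_scorer` heuristic are illustrative placeholders of this sketch, not RewardBench 2's actual data, per-prompt format, or models.

```python
# Sketch: accuracy of a reward model on prompt / chosen / rejected trios.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class PreferenceTrio:
    prompt: str
    chosen: str
    rejected: str


def accuracy(trios: List[PreferenceTrio],
             score: Callable[[str, str], float]) -> float:
    """Fraction of trios where the scorer prefers the chosen response."""
    correct = sum(
        1 for t in trios if score(t.prompt, t.chosen) > score(t.prompt, t.rejected)
    )
    return correct / len(trios)


def toy_scorer(prompt: str, response: str) -> float:
    # Placeholder heuristic (prefers longer responses); a real reward model
    # would return a learned scalar for the (prompt, response) pair.
    return float(len(response))


if __name__ == "__main__":
    data = [
        PreferenceTrio("What is 2+2?", "2 + 2 = 4.", "5"),
        PreferenceTrio("Name a prime number.", "7 is a prime number.", "9 is prime."),
    ]
    print(f"chosen-vs-rejected accuracy: {accuracy(data, toy_scorer):.2f}")
```

In practice the scorer would wrap a trained reward model; the benchmark's exact prompt and completion format follows the paper, not this sketch.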
Related papers
- Activation Reward Models for Few-Shot Model Alignment [77.37511364793515]
We introduce Activation Reward Models (Activation RMs).
Activation RMs leverage activation steering to construct well-aligned reward signals using minimal supervision and no additional model finetuning.
We demonstrate the effectiveness of Activation RMs in mitigating reward hacking behaviors, highlighting their utility for safety-critical applications.
arXiv Detail & Related papers (2025-07-02T05:10:29Z)
- OIBench: Benchmarking Strong Reasoning Models with Olympiad in Informatics [13.049841309304922]
This paper introduces OIBench, a high-quality, private, and challenging olympiad-level informatics dataset comprising 250 carefully curated original problems.
We detail the construction methodology of the benchmark, ensuring a comprehensive assessment across various programming paradigms and complexities.
We propose Time/Space Completion Curves for finer-grained efficiency analysis and enable direct human-model comparisons.
arXiv Detail & Related papers (2025-06-12T08:33:38Z)
- VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models [55.39064621869925]
OpenAI o1 and DeepSeek-R1 have achieved remarkable performance in the domain of reasoning.
A key component of their training is the incorporation of verifiable rewards within reinforcement learning.
Existing reward benchmarks do not evaluate reference-based reward systems.
arXiv Detail & Related papers (2025-05-21T17:54:43Z)
- A Unified Pairwise Framework for RLHF: Bridging Generative Reward Modeling and Policy Optimization [18.892740849961456]
Reinforcement Learning from Human Feedback (RLHF) has emerged as an important paradigm for aligning large language models with human preferences during post-training.
This paper introduces Pairwise-RL, an RLHF framework that combines generative reward modeling with a pairwise proximal policy optimization algorithm.
arXiv Detail & Related papers (2025-04-07T11:34:48Z)
- reWordBench: Benchmarking and Improving the Robustness of Reward Models with Transformed Inputs [64.29893431743608]
We show that state-of-the-art reward models suffer from substantial performance degradation even with minor input transformations.
We propose to explicitly train them to assign similar scores to paraphrases, and find that this approach also improves robustness to other distinct kinds of transformations.
arXiv Detail & Related papers (2025-03-14T17:59:41Z)
- CARMO: Dynamic Criteria Generation for Context-Aware Reward Modelling [27.86204841898399]
Reward modeling in large language models is susceptible to reward hacking.
We propose Context-Aware Reward Modeling (CARMO) to mitigate this problem.
We establish new state-of-the-art performance in zero-shot settings for generative models, achieving a 2.1% improvement on RewardBench.
arXiv Detail & Related papers (2024-10-28T21:18:49Z)
- Evaluating Robustness of Reward Models for Mathematical Reasoning [14.97819343313859]
We introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH.
We demonstrate that the scores on RewardMATH strongly correlate with the results of the optimized policy and effectively estimate reward overoptimization.
arXiv Detail & Related papers (2024-10-02T16:39:58Z)
- Self-Taught Evaluators [77.92610887220594]
We present an approach that aims to improve evaluators without human annotations, using synthetic training data only.
Our Self-Taught Evaluator can improve a strong LLM from 75.4 to 88.3 on RewardBench.
arXiv Detail & Related papers (2024-08-05T17:57:02Z)
- RewardBench: Evaluating Reward Models for Language Modeling [100.28366840977966]
We present RewardBench, a benchmark dataset and code-base for evaluation of reward models.
The dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods.
arXiv Detail & Related papers (2024-03-20T17:49:54Z)
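The RewardBench 2 abstract above also quantifies how benchmark accuracy correlates with downstream use of reward models in inference-time scaling, best-of-N sampling in particular. As a rough illustration of that procedure only, the sketch below draws N candidates from a generator and returns the one the reward model scores highest; `toy_generate` and `toy_reward` are assumptions of this sketch, standing in for a policy LLM and a trained reward model.

```python
# Sketch: best-of-N sampling with a reward model.
import random
from typing import Callable, List


def best_of_n(prompt: str,
              generate: Callable[[str], str],
              reward: Callable[[str, str], float],
              n: int = 8) -> str:
    # Draw n candidate completions, score each with the reward model,
    # and return the highest-scoring candidate.
    candidates: List[str] = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: reward(prompt, c))


def toy_generate(prompt: str) -> str:
    # Placeholder sampler standing in for a policy LLM.
    return random.choice(["4", "2 + 2 = 4.", "I think it's 5.", "four"])


def toy_reward(prompt: str, completion: str) -> float:
    # Placeholder reward standing in for a trained reward model:
    # prefers completions that show an explicit equation.
    return (5.0 if "=" in completion else 0.0) + 0.1 * len(completion)


if __name__ == "__main__":
    print(best_of_n("What is 2+2?", toy_generate, toy_reward, n=8))
```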