The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret
- URL: http://arxiv.org/abs/2406.15753v2
- Date: Tue, 04 Mar 2025 15:17:17 GMT
- Title: The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret
- Authors: Lukas Fluri, Leon Lang, Alessandro Abate, Patrick Forré, David Krueger, Joar Skalse
- Abstract summary: We show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error there exist realistic data distributions that allow error-regret mismatch to occur. We then show that similar problems persist even when using policy regularization techniques.
- Score: 64.04721528586747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In reinforcement learning, specifying reward functions that capture the intended task can be very challenging. Reward learning aims to address this issue by learning the reward function. However, a learned reward model may have a low error on the data distribution, and yet subsequently produce a policy with large regret. We say that such a reward model has an error-regret mismatch. The main source of an error-regret mismatch is the distributional shift that commonly occurs during policy optimization. In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error, there exist realistic data distributions that allow for error-regret mismatch to occur. We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF. We hope our results stimulate the theoretical and empirical study of improved methods to learn reward models, and better ways to measure their quality reliably.
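As a concrete illustration of the error-regret mismatch described in the abstract, the toy bandit below is a minimal sketch invented for this summary (the action names, probabilities, and reward values are not from the paper): a reward model that is near-perfect on the data distribution still induces a maximal-regret policy once optimized, because optimization shifts probability mass onto the poorly covered action.

```python
# Minimal invented example: a two-action bandit where the learned reward
# model has near-zero expected error on the data distribution D, yet the
# policy that optimizes it incurs maximal regret (error-regret mismatch).

true_reward    = {"a_safe": 1.0, "a_bad": 0.0}   # ground-truth reward
learned_reward = {"a_safe": 1.0, "a_bad": 2.0}   # wrong only off-distribution

# D puts almost all of its mass on the well-covered action, so the expected
# test error of the learned reward model is tiny.
D = {"a_safe": 0.999, "a_bad": 0.001}
expected_test_error = sum(D[a] * abs(true_reward[a] - learned_reward[a]) for a in D)

# Optimizing the learned reward selects the action it overrates ...
learned_optimal = max(learned_reward, key=learned_reward.get)   # -> "a_bad"
true_optimal    = max(true_reward, key=true_reward.get)         # -> "a_safe"

# ... which is the worst action under the true reward.
regret = true_reward[true_optimal] - true_reward[learned_optimal]

print(f"expected test error: {expected_test_error:.3f}")   # 0.002
print(f"regret of the optimized policy: {regret:.1f}")     # 1.0
```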
Related papers
- Probabilistic Uncertain Reward Model: A Natural Generalization of Bradley-Terry Reward Model [27.40414952747553]
We propose a Probabilistic Uncertain Reward Model (PURM) to address reward hacking.
We show that PURM effectively models the rewards and uncertainties, and significantly delays the onset of reward hacking.
arXiv Detail & Related papers (2025-03-28T14:39:52Z)
- What Makes a Reward Model a Good Teacher? An Optimization Perspective [61.38643642719093]
We prove that regardless of how accurate a reward model is, if it induces low reward variance, the RLHF objective suffers from a flat landscape.
We additionally show that a reward model that works well for one language model can induce low reward variance, and thus a flat objective landscape, for another.
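A rough numerical intuition for this claim, using a toy softmax policy of my own rather than anything from the cited paper: the policy gradient of the expected proxy reward scales with the reward's spread under the policy, so a perfectly accurate but low-variance reward model yields an almost flat objective.

```python
# Illustrative toy (not the cited paper's analysis): for pi = softmax(logits),
# the gradient of E_pi[r] w.r.t. the logits is pi * (r - E_pi[r]).  If the
# reward model compresses rewards (low variance under pi), this gradient is
# tiny even when the ranking of actions is perfectly accurate.

import numpy as np

def grad_expected_reward(logits, rewards):
    pi = np.exp(logits - logits.max())
    pi /= pi.sum()
    baseline = pi @ rewards
    return pi * (rewards - baseline)          # d/dlogits of E_pi[reward]

logits = np.zeros(4)                                      # uniform initial policy
accurate_spread   = np.array([0.0, 1.0, 2.0, 3.0])        # same ranking, high variance
accurate_squashed = np.array([1.49, 1.50, 1.51, 1.52])    # same ranking, low variance

print(np.linalg.norm(grad_expected_reward(logits, accurate_spread)))    # ~0.56
print(np.linalg.norm(grad_expected_reward(logits, accurate_squashed)))  # ~0.0056
```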
arXiv Detail & Related papers (2025-03-19T17:54:41Z)
- Towards Reliable Alignment: Uncertainty-aware RLHF [14.20181662644689]
We show that the fluctuation of reward models can be detrimental to the alignment problem.
We show that the resulting policies are more risk-averse, in the sense that they are more cautious of uncertain rewards.
We use an ensemble of reward models to align a language model with our methodology and observe that our empirical findings match our theoretical predictions.
arXiv Detail & Related papers (2024-10-31T08:26:51Z)
- Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification [1.0582505915332336]
We show that when the reward function has light-tailed error, optimal policies under less restrictive KL penalties achieve arbitrarily high utility.
If the error is heavy-tailed, some policies obtain arbitrarily high reward despite achieving no more utility than the base model.
The pervasiveness of heavy-tailed distributions in many real-world applications indicates that future sources of RL reward could have heavy-tailed error.
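The following toy simulation is my own, not the paper's construction; best-of-n selection is used only as a crude stand-in for lightly KL-constrained optimization. With light-tailed reward error the proxy-best sample still has high true utility, while with heavy-tailed error the selection is dominated by a single error outlier and true utility stays near the base level.

```python
# Toy intuition for light- vs heavy-tailed reward error under weak
# regularization, with best-of-n selection standing in for optimization.

import numpy as np

rng = np.random.default_rng(0)
n, trials = 10_000, 20

def best_of_n_utility(error_sampler):
    utilities = []
    for _ in range(trials):
        u = rng.normal(size=n)                 # true utility of each sample
        proxy = u + error_sampler(n)           # proxy reward = utility + error
        utilities.append(u[np.argmax(proxy)])  # pick the proxy-best sample
    return np.mean(utilities)

light = lambda k: rng.normal(size=k)           # light-tailed error
heavy = lambda k: rng.pareto(1.5, size=k)      # heavy-tailed error

print("light-tailed:", best_of_n_utility(light))   # well above 0 (roughly 3 here)
print("heavy-tailed:", best_of_n_utility(heavy))   # near 0, i.e. base-model level
```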
arXiv Detail & Related papers (2024-07-19T17:57:59Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution.
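A heavily simplified sketch of the target-noise idea; the actual Symmetric Q-learning noise distribution and training loop are more involved, and the Gaussian noise and parameter values here are placeholder assumptions.

```python
# Sketch: add zero-mean synthetic noise to the TD regression targets so that
# the target error seen by the value function is closer to Gaussian.

import numpy as np

def noisy_td_targets(rewards, next_q, dones, noise_scale, gamma=0.99, rng=None):
    rng = rng or np.random.default_rng()
    td_target = rewards + gamma * (1.0 - dones) * next_q        # standard TD target
    noise = rng.normal(0.0, noise_scale, size=td_target.shape)  # zero-mean noise
    return td_target + noise                                    # noisy regression target

# usage with a dummy batch
rng = np.random.default_rng(0)
batch = dict(rewards=rng.normal(size=32), next_q=rng.normal(size=32), dones=np.zeros(32))
targets = noisy_td_targets(**batch, noise_scale=0.5, rng=rng)
```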
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new tasks.
However, this is often impractical due to memory constraints or data privacy issues.
As an alternative, data-free replay methods synthesize replay samples by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- DOMINO: Domain-aware Loss for Deep Learning Calibration [49.485186880996125]
This paper proposes a novel domain-aware loss function to calibrate deep learning models.
The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain.
arXiv Detail & Related papers (2023-02-10T09:47:46Z)
- CLARE: Conservative Model-Based Reward Learning for Offline Inverse Reinforcement Learning [26.05184273238923]
This work aims to tackle a major challenge in offline Inverse Reinforcement Learning (IRL).
We devise a principled algorithm (namely CLARE) that solves offline IRL efficiently via integrating "conservatism" into a learned reward function.
Our theoretical analysis provides an upper bound on the return gap between the learned policy and the expert policy.
arXiv Detail & Related papers (2023-02-09T17:16:29Z)
- Invariance in Policy Optimisation and Partial Identifiability in Reward Learning [67.4640841144101]
We characterise the partial identifiability of the reward function given popular reward learning data sources.
We also analyse the impact of this partial identifiability for several downstream tasks, such as policy optimisation.
arXiv Detail & Related papers (2022-03-14T20:19:15Z)
- Jitter: Random Jittering Loss Function [2.716362160018477]
One novel regularization method called flooding makes the training loss fluctuate around the flooding level.
We propose a novel method called Jitter to improve it.
Jitter is a domain-, task-, and model-independent regularization method that continues to train the model effectively after the training error has dropped to zero.
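For reference, a small sketch of the flooding objective that Jitter builds on, with a randomly drawn flood level as one plausible reading of the "random jittering" idea; the parameter values and the exact jitter distribution are assumptions, not taken from the paper.

```python
# Flooding (Ishida et al., 2020) replaces the loss L with |L - b| + b, so
# gradient descent ascends once the loss dips below the flood level b.
# Jittering is sketched here as drawing b randomly around a base level.

import random

def flooding_loss(loss, flood_level):
    return abs(loss - flood_level) + flood_level

def jittered_flooding_loss(loss, base_level=0.05, jitter=0.02):
    b = base_level + random.uniform(0.0, jitter)   # illustrative jitter choice
    return flooding_loss(loss, b)

print(flooding_loss(0.30, 0.05))         # above the flood level: unchanged (0.30)
print(flooding_loss(0.02, 0.05))         # below it: pushed back up (0.08)
print(jittered_flooding_loss(0.02))      # varies with the sampled flood level
```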
arXiv Detail & Related papers (2021-06-25T16:39:40Z)
- DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction [96.90215318875859]
We show that bootstrapping-based Q-learning algorithms do not necessarily benefit from corrective feedback.
We propose a new algorithm, DisCor, which computes an approximation to the optimal training distribution and uses it to re-weight the transitions used for training.
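Schematically, the re-weighting can be pictured as below; this is a simplified sketch with made-up error values and temperature, not the paper's full algorithm, which also specifies how the target error is estimated.

```python
# Down-weight transitions whose bootstrap targets are estimated to carry
# large accumulated error, so the value regression focuses on reliable targets.

import numpy as np

def discor_style_weights(estimated_target_error, gamma=0.99, temperature=10.0):
    w = np.exp(-gamma * estimated_target_error / temperature)  # smaller error => larger weight
    return w / w.sum()                                         # normalized sampling weights

errors = np.array([0.1, 5.0, 20.0, 0.5])     # estimated error of each target (made up)
print(discor_style_weights(errors))          # most weight on the reliable targets
```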
arXiv Detail & Related papers (2020-03-16T16:18:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.