The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret
- URL: http://arxiv.org/abs/2406.15753v1
- Date: Sat, 22 Jun 2024 06:43:51 GMT
- Title: The Perils of Optimizing Learned Reward Functions: Low Training Error Does Not Guarantee Low Regret
- Authors: Lukas Fluri, Leon Lang, Alessandro Abate, Patrick Forré, David Krueger, Joar Skalse
- Abstract summary: In reinforcement learning, specifying reward functions that capture the intended task can be very challenging.
In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error, there exist realistic data distributions that allow for error-regret mismatch to occur.
We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF.
- Score: 64.04721528586747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In reinforcement learning, specifying reward functions that capture the intended task can be very challenging. Reward learning aims to address this issue by learning the reward function. However, a learned reward model may have a low error on the training distribution, and yet subsequently produce a policy with large regret. We say that such a reward model has an error-regret mismatch. The main source of an error-regret mismatch is the distributional shift that commonly occurs during policy optimization. In this paper, we mathematically show that a sufficiently low expected test error of the reward model guarantees low worst-case regret, but that for any fixed expected test error, there exist realistic data distributions that allow for error-regret mismatch to occur. We then show that similar problems persist even when using policy regularization techniques, commonly employed in methods such as RLHF. Our theoretical results highlight the importance of developing new ways to measure the quality of learned reward models.
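For concreteness, the following is a minimal sketch, in our own notation rather than the paper's, of the two quantities the abstract compares: the expected test error of a learned reward model and the regret of a policy obtained by optimizing it.

    % Minimal sketch (our notation, not necessarily the paper's) of the two
    % quantities compared in the abstract.
    % Expected test error of the learned reward \hat{R} w.r.t. the true reward R,
    % measured on the data distribution D used for evaluation:
    \mathrm{err}_D(\hat{R}) = \mathbb{E}_{(s,a)\sim D}\left[\,\bigl|\hat{R}(s,a) - R(s,a)\bigr|\,\right]
    % Regret of a policy \hat{\pi} that optimizes \hat{R}, measured under the
    % true reward via the return functional J_R:
    \mathrm{Reg}(\hat{\pi}) = J_R(\pi^{*}) - J_R(\hat{\pi}), \qquad \hat{\pi} \in \arg\max_{\pi} J_{\hat{R}}(\pi)
    % Error-regret mismatch: \mathrm{err}_D(\hat{R}) is small, yet
    % \mathrm{Reg}(\hat{\pi}) is large, typically because \hat{\pi} concentrates on
    % state-action pairs with little probability mass under D (distribution shift).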
Related papers
- Catastrophic Goodhart: regularizing RLHF with KL divergence does not mitigate heavy-tailed reward misspecification [1.0582505915332336]
We show that when the reward function has light-tailed error, optimal policies under less restrictive KL penalties achieve arbitrarily high utility.
If error is heavy-tailed, some policies obtain arbitrarily high reward despite achieving no more utility than the base model.
The pervasiveness of heavy-tailed distributions in many real-world applications indicates that future sources of RL reward could have heavy-tailed error.
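As a toy, best-of-n illustration of the light- vs. heavy-tailed claim above (our construction, not the paper's setup), the snippet below selects the candidate with the highest proxy reward under each error distribution and reports its true utility:

    # Toy best-of-n illustration (ours, not the paper's setup): proxy reward =
    # true utility + error; optimizing the proxy means picking the candidate with
    # the highest proxy reward, and we then inspect its true utility.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                          # number of candidates, i.e. "optimization pressure"
    utility = rng.normal(0.0, 1.0, n)    # true utility of each candidate

    for name, error in [
        ("light-tailed (Gaussian) error", rng.normal(0.0, 1.0, n)),
        ("heavy-tailed (Cauchy) error", rng.standard_cauchy(n)),
    ]:
        proxy = utility + error          # learned/proxy reward with estimation error
        best = np.argmax(proxy)          # candidate chosen by optimizing the proxy
        print(f"{name}: proxy reward {proxy[best]:9.1f}, true utility {utility[best]:5.2f}")

    # Typical outcome: under Gaussian error the selected candidate still has clearly
    # above-average utility; under Cauchy error it has an enormous proxy reward but
    # roughly average utility, i.e. the error term rather than genuine utility was optimized.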
arXiv Detail & Related papers (2024-07-19T17:57:59Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values so that the error distribution becomes Gaussian.
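A rough sketch of the mechanism described above (not the authors' code; how the noise distribution is chosen is the paper's contribution, and the fixed Gaussian here is only a placeholder):

    # Rough sketch: add zero-mean synthetic noise to the TD targets so that the
    # regression error seen by the value network is closer to Gaussian.
    import torch

    def noisy_td_targets(rewards, next_q, dones, gamma=0.99, noise_scale=0.1):
        """Standard TD(0) targets plus placeholder zero-mean synthetic noise."""
        targets = rewards + gamma * (1.0 - dones) * next_q   # usual bootstrapped target
        noise = noise_scale * torch.randn_like(targets)      # placeholder; the paper fits this
        return targets + noise

    # Example with dummy tensors (q_pred would normally come from the Q-network):
    targets = noisy_td_targets(torch.zeros(32), torch.ones(32), torch.zeros(32))
    loss = torch.nn.functional.mse_loss(torch.zeros(32), targets)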
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods propose to replay the data of experienced tasks when learning new tasks.
However, storing the data of previous tasks is often infeasible in practice due to memory constraints or data privacy issues.
As a replacement, data-free replay methods have been proposed that invert samples from the classification model.
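A condensed sketch of what "inverting samples from the classification model" can look like (our simplification, not the paper's method): optimize a noise input so the frozen old model assigns it to a target class, then replay the result as a pseudo-exemplar.

    # Condensed model-inversion sketch (simplified; real data-free replay methods add
    # image priors, BN-statistic losses, etc. on top of this bare objective).
    import torch

    def invert_class(old_model, target_class, shape=(1, 3, 32, 32), steps=200, lr=0.1):
        old_model.eval()                                  # frozen classifier from the previous task
        x = torch.randn(shape, requires_grad=True)        # start from noise
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = -old_model(x)[:, target_class].mean()  # push x toward the target class
            loss.backward()
            opt.step()
        return x.detach()                                 # pseudo-exemplar to replay later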
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- DOMINO: Domain-aware Loss for Deep Learning Calibration [49.485186880996125]
This paper proposes a novel domain-aware loss function to calibrate deep learning models.
The proposed loss function applies a class-wise penalty based on the similarity between classes within a given target domain.
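A schematic sketch of a class-wise, similarity-weighted penalty on top of cross-entropy (our reading of the summary, not the paper's implementation; `similarity` is a hypothetical [num_classes, num_classes] matrix supplied from domain knowledge):

    # Schematic: penalize probability mass placed on classes dissimilar to the true
    # class more heavily than mass placed on similar classes.
    import torch
    import torch.nn.functional as F

    def domain_aware_loss(logits, labels, similarity, alpha=1.0):
        ce = F.cross_entropy(logits, labels)
        probs = F.softmax(logits, dim=-1)
        penalty_weights = 1.0 - similarity[labels]        # [batch, num_classes], hypothetical
        penalty = (probs * penalty_weights).sum(dim=-1).mean()
        return ce + alpha * penalty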
arXiv Detail & Related papers (2023-02-10T09:47:46Z)
- Jitter: Random Jittering Loss Function [2.716362160018477]
A recently proposed regularization method called flooding makes the training loss fluctuate around a fixed flooding level.
We propose a novel method called Jitter to improve it.
Jitter is a domain-, task-, and model-independent regularization method that continues to train the model effectively even after the training error has dropped to zero.
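For reference, flooding replaces the raw loss L with |L - b| + b for a fixed level b; the sketch below also shows one hedged guess at what randomizing ("jittering") that level could look like, which is not necessarily the paper's exact formulation.

    import torch

    def flooded_loss(loss, b=0.05):
        # Flooding: keep the training loss hovering around the level b.
        return (loss - b).abs() + b

    def jittered_flood_loss(loss, b=0.05, sigma=0.02):
        # Hypothetical jittered variant: draw the level from a distribution each step;
        # the paper defines its own noise scheme.
        b_t = b + sigma * torch.randn(())
        return (loss - b_t).abs() + b_t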
arXiv Detail & Related papers (2021-06-25T16:39:40Z)
- DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction [96.90215318875859]
We show that bootstrapping-based Q-learning algorithms do not necessarily benefit from corrective feedback.
We propose a new algorithm, DisCor, which computes an approximation to the optimal data distribution and uses it to re-weight the transitions used for training.
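A simplified sketch of the re-weighting step (the exponential weight below follows the general shape reported for DisCor, but details such as how the error estimate `delta_next` is trained are omitted):

    # Down-weight transitions whose bootstrap targets are likely erroneous, using an
    # auxiliary estimate of accumulated target error at the next state-action pair.
    import torch

    def discor_style_weights(delta_next, gamma=0.99, temperature=1.0):
        w = torch.exp(-gamma * delta_next / temperature)
        return w / w.sum()                                # normalize over the batch

    def weighted_td_loss(q_pred, td_target, weights):
        return (weights * (q_pred - td_target) ** 2).sum()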
arXiv Detail & Related papers (2020-03-16T16:18:52Z)