Statistical Rejection Sampling Improves Preference Optimization
- URL: http://arxiv.org/abs/2309.06657v2
- Date: Tue, 23 Jan 2024 23:16:11 GMT
- Title: Statistical Rejection Sampling Improves Preference Optimization
- Authors: Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh,
Peter J. Liu, Jialu Liu
- Abstract summary: We introduce a novel approach to source preference data from the target optimal policy using rejection sampling.
We also propose a unified framework that enhances the loss functions used in both Sequence Likelihood (SLiC) and Direct Preference Optimization (DPO) from a preference modeling standpoint.
- Score: 42.57245965632205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Improving the alignment of language models with human preferences remains an
active research challenge. Previous approaches have primarily utilized
Reinforcement Learning from Human Feedback (RLHF) via online RL methods such as
Proximal Policy Optimization (PPO). Recently, offline methods such as Sequence
Likelihood Calibration (SLiC) and Direct Preference Optimization (DPO) have
emerged as attractive alternatives, offering improvements in stability and
scalability while maintaining competitive performance. SLiC refines its loss
function using sequence pairs sampled from a supervised fine-tuned (SFT)
policy, while DPO directly optimizes language models based on preference data,
foregoing the need for a separate reward model. However, the maximum likelihood
estimator (MLE) of the target optimal policy requires labeled preference pairs
sampled from that policy. DPO's lack of a reward model constrains its ability
to sample preference pairs from the optimal policy, and SLiC is restricted to
sampling preference pairs only from the SFT policy. To address these
limitations, we introduce a novel approach called Statistical Rejection
Sampling Optimization (RSO) that aims to source preference data from the target
optimal policy using rejection sampling, enabling a more accurate estimation of
the optimal policy. We also propose a unified framework that enhances the loss
functions used in both SLiC and DPO from a preference modeling standpoint.
Through extensive experiments across three diverse tasks, we demonstrate that
RSO consistently outperforms both SLiC and DPO on evaluations from both Large
Language Model (LLM) and human raters.
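
To make the two ingredients above concrete, the sketch below illustrates (i) statistical rejection sampling against an SFT proposal to approximate draws from the target optimal policy pi*(y|x) ∝ pi_sft(y|x) exp(r(x,y)/beta), and (ii) the sigmoid (DPO-style) and hinge (SLiC-style) pairwise losses that the unified framework refers to. This is a minimal sketch under stated assumptions: the helpers `sft_sample` and `reward`, and the particular `beta`/`gamma` values, are illustrative placeholders, not the paper's released implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

# (i) Statistical rejection sampling: accept SFT candidates so that the accepted
# set approximates draws from pi*(y|x) ∝ pi_sft(y|x) * exp(r(x, y) / beta).
# `sft_sample` and `reward` are hypothetical callables supplied by the caller.
def rso_rejection_sample(prompt, sft_sample, reward, n_candidates=64, beta=0.5):
    candidates = sft_sample(prompt, n_candidates)      # proposals from the SFT policy
    rewards = np.array([reward(prompt, y) for y in candidates])
    # Envelope constant exp(r_max / beta) bounds the density ratio, so each
    # candidate is accepted with probability exp((r_i - r_max) / beta).
    accept_prob = np.exp((rewards - rewards.max()) / beta)
    keep = np.random.uniform(size=len(candidates)) < accept_prob
    return [y for y, k in zip(candidates, keep) if k]

# (ii) Pairwise losses on the margin gamma * (log-ratio of the preferred response
# minus log-ratio of the dispreferred one), where each log-ratio is
# log pi_theta(y|x) - log pi_sft(y|x). Inputs are torch tensors of equal shape.
def sigmoid_loss(logratio_w, logratio_l, gamma=0.05):
    # Logistic loss on the scaled margin (DPO-style).
    return -F.logsigmoid(gamma * (logratio_w - logratio_l)).mean()

def hinge_loss(logratio_w, logratio_l, gamma=0.05):
    # Hinge loss on the same margin (SLiC-style).
    return torch.clamp(1.0 - gamma * (logratio_w - logratio_l), min=0.0).mean()
```

Accepted responses would then be paired and labeled by the reward-ranking model before fitting with either loss; the paper's argument is that pairs sourced this way enable a more accurate estimate of the optimal policy than pairs drawn only from the SFT policy.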
Related papers
- Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss which attenuates the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z)
- SePPO: Semi-Policy Preference Optimization for Diffusion Alignment [67.8738082040299]
We propose a preference optimization method that aligns diffusion models (DMs) with preferences without relying on reward models or paired human-annotated data.
We validate SePPO across both text-to-image and text-to-video benchmarks.
arXiv Detail & Related papers (2024-10-07T17:56:53Z)
- TSO: Self-Training with Scaled Preference Optimization [14.3799656174528]
We propose TSO, a framework for preference optimization that conducts self-training preference learning without training an additional reward model.
TSO enhances the diversity of responses by constructing a model matrix and incorporating human preference responses.
Experimental results demonstrate that TSO outperforms existing mainstream methods on various alignment evaluation benchmarks.
arXiv Detail & Related papers (2024-08-31T05:37:01Z)
- Bridging and Modeling Correlations in Pairwise Data for Direct Preference Optimization [75.1240295759264]
We propose an effective framework for Bridging and Modeling Correlations in pairwise data, named BMC.
We increase the consistency and informativeness of the pairwise preference signals through targeted modifications.
We identify that DPO alone is insufficient to model these correlations and capture nuanced variations.
arXiv Detail & Related papers (2024-08-14T11:29:47Z)
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences.
To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model.
Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z)
- RS-DPO: A Hybrid Rejection Sampling and Direct Preference Optimization Method for Alignment of Large Language Models [7.676477609461592]
Reinforcement learning from human feedback (RLHF) has been extensively employed to align large language models with user intent.
DPO relies on contrastive responses generated by human annotators and an alternative LLM, rather than by the policy model.
In this paper, we address both challenges by systematically combining rejection sampling (RS) and DPO.
Our proposed method effectively fine-tunes LLMs in resource-limited environments, leading to improved alignment with user intent.
arXiv Detail & Related papers (2024-02-15T16:00:58Z)
- Towards Efficient Exact Optimization of Language Model Alignment [93.39181634597877]
Direct preference optimization (DPO) was proposed to directly optimize the policy from preference data.
We show that DPO, derived from the optimal solution of the alignment problem, in practice leads to a compromised mean-seeking approximation of that solution.
We propose efficient exact optimization (EXO) of the alignment objective.
arXiv Detail & Related papers (2024-02-01T18:51:54Z)
- Preference as Reward, Maximum Preference Optimization with Importance Sampling [3.7040071165219595]
We propose a simple and intuitive off-policy preference optimization algorithm from an importance sampling view, which we call Maximum Preference Optimization (MPO).
MPO achieves the best of both worlds by combining the objectives of RLHF and IPO while being an off-policy algorithm.
arXiv Detail & Related papers (2023-12-27T06:34:54Z)