Back to Basics: Revisiting REINFORCE Style Optimization for Learning
from Human Feedback in LLMs
- URL: http://arxiv.org/abs/2402.14740v2
- Date: Mon, 26 Feb 2024 18:26:25 GMT
- Title: Back to Basics: Revisiting REINFORCE Style Optimization for Learning
from Human Feedback in LLMs
- Authors: Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, Sara Hooker
- Abstract summary: AI alignment in the shape of Reinforcement Learning from Human Feedback is increasingly treated as a crucial ingredient for high performance large language models.
Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF.
We show that many components of PPO are unnecessary in an RLHF context and that simpler REINFORCE-style optimization variants outperform both PPO and newly proposed "RL-free" methods such as DPO and RAFT.
- Score: 29.505270680223003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI alignment in the shape of Reinforcement Learning from Human Feedback
(RLHF) is increasingly treated as a crucial ingredient for high performance
large language models. Proximal Policy Optimization (PPO) has been positioned
by recent literature as the canonical method for the RL part of RLHF. However,
it involves both high computational cost and sensitive hyperparameter tuning.
We posit that most of the motivational principles that led to the development
of PPO are less of a practical concern in RLHF and advocate for a less
computationally expensive method that preserves and even increases performance.
We revisit the formulation of alignment from human preferences in the context
of RL. Keeping simplicity as a guiding principle, we show that many components
of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style
optimization variants outperform both PPO and newly proposed "RL-free" methods
such as DPO and RAFT. Our work suggests that careful adaptation to LLM
alignment characteristics enables benefiting from online RL optimization at low
cost.
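
As an illustration of what a REINFORCE-style update looks like in this setting, the sketch below implements a sequence-level policy-gradient loss with a leave-one-out baseline, in the spirit of the RLOO variant discussed in the paper. It is a minimal sketch, not the authors' implementation: the function name, tensor shapes, and the assumption that rewards already include any KL penalty to the reference policy are illustrative choices.

import torch

def rloo_loss(seq_logprobs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a leave-one-out baseline (illustrative sketch).

    seq_logprobs: (n_prompts, k) summed log-probabilities of k sampled
                  completions per prompt under the current policy.
    rewards:      (n_prompts, k) scalar sequence-level rewards, assumed to
                  already include any KL penalty to the reference policy.
    """
    k = rewards.shape[1]
    # Leave-one-out baseline: mean reward of the other k - 1 samples for the same prompt.
    baseline = (rewards.sum(dim=1, keepdim=True) - rewards) / (k - 1)
    advantage = (rewards - baseline).detach()  # no gradient through the baseline
    # Policy gradient: maximize E[advantage * log pi(y | x)], so minimize the negative.
    return -(advantage * seq_logprobs).mean()

# Toy usage: 2 prompts, k = 4 sampled completions each.
logp = torch.randn(2, 4, requires_grad=True)
r = torch.randn(2, 4)
rloo_loss(logp, r).backward()

Relative to PPO, a loss of this form has no clipped importance ratio, no learned value network, and no generalized advantage estimation, which is the kind of simplification the abstract argues is affordable in the RLHF setting.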
Related papers
- Accelerated Preference Optimization for Large Language Model Alignment [60.22606527763201]
Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal tool for aligning large language models (LLMs) with human preferences.
Direct Preference Optimization (DPO) formulates RLHF as a policy optimization problem without explicitly estimating the reward function.
We propose a general Accelerated Preference Optimization (APO) framework, which unifies many existing preference optimization algorithms.
arXiv Detail & Related papers (2024-10-08T18:51:01Z)
- Inverse-Q*: Token Level Reinforcement Learning for Aligning Large Language Models Without Preference Data [25.844968873581244]
Inverse-Q* is an innovative framework that transcends traditional RL methods by optimizing token-level reinforcement learning.
Our results suggest that Inverse-Q* offers a practical and robust alternative to conventional RLHF approaches.
arXiv Detail & Related papers (2024-08-27T08:43:32Z)
- Minor DPO reject penalty to increase training robustness [8.971332948872185]
Learning from human preferences is a paradigm used in the fine-tuning step of large-scale language models (LLMs) to better align pretrained LLMs with human preferences for downstream tasks.
Recently, Direct Preference Optimization (DPO) has been proposed to solve the alignment problem with a simplified RL-free method.
In this article, we analyze the working mechanism of $\beta$ in DPO, highlight the syntactic differences between the RL algorithm and DPO, and examine the potential shortcomings introduced by the DPO simplification.
arXiv Detail & Related papers (2024-08-19T09:29:31Z)
- SAIL: Self-Improving Efficient Online Alignment of Large Language Models [56.59644677997827]
Reinforcement Learning from Human Feedback is a key method for aligning large language models with human preferences.
Recent literature has focused on designing online RLHF methods but still lacks a unified conceptual formulation.
Our approach significantly improves alignment performance on open-sourced datasets with minimal computational overhead.
arXiv Detail & Related papers (2024-06-21T18:05:35Z)
- Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint [56.74058752955209]
This paper studies the alignment process of generative models with Reinforcement Learning from Human Feedback (RLHF).
We first identify that the primary challenge of existing popular methods like offline PPO and offline DPO is a lack of strategic exploration of the environment.
We propose efficient algorithms with finite-sample theoretical guarantees.
arXiv Detail & Related papers (2023-12-18T18:58:42Z)
- Contrastive Preference Learning: Learning from Human Feedback without RL [71.77024922527642]
We introduce Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions.
CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs.
arXiv Detail & Related papers (2023-10-20T16:37:56Z)
- Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment [37.52249093928251]
This paper proposes a new framework, reinforcement learning with relative feedback, and a novel trajectory-wise policy gradient algorithm, Pairwise Proximal Policy Optimization (P3O).
We show theoretically that P3O is invariant to equivalent rewards and avoids the complexity of PPO.
Empirical evaluations demonstrate that P3O outperforms PPO in the KL-Reward trade-off and can align with human preferences as well as or better than prior methods.
arXiv Detail & Related papers (2023-09-30T01:23:22Z)
- Secrets of RLHF in Large Language Models Part I: PPO [81.01936993929127]
Large language models (LLMs) have formulated a blueprint for the advancement of artificial general intelligence.
Reinforcement learning with human feedback (RLHF) emerges as the pivotal technological paradigm underpinning this pursuit.
In this report, we dissect the framework of RLHF, re-evaluate the inner workings of PPO, and explore how the parts comprising PPO algorithms impact policy agent training.
arXiv Detail & Related papers (2023-07-11T01:55:24Z)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model [119.65409513119963]
We introduce a new parameterization of the reward model in RLHF that enables extraction of the corresponding optimal policy in closed form.
The resulting algorithm, which we call Direct Preference Optimization (DPO), is stable, performant, and computationally lightweight.
Our experiments show that DPO can fine-tune LMs to align with human preferences as well as or better than existing methods.
arXiv Detail & Related papers (2023-05-29T17:57:46Z)
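
For reference, the entry above relies on the standard DPO objective: a pairwise logistic loss on log-probability ratios against a frozen reference policy, where the coefficient beta (the quantity analyzed in the "Minor DPO reject penalty" entry) scales the implicit reward. The snippet below is a minimal sketch under those assumptions; the names are illustrative and not taken from any particular codebase.

import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta: float = 0.1):
    """Standard DPO loss (illustrative sketch).

    All inputs are (batch,) summed log-probabilities of whole completions;
    the reference log-probabilities come from a frozen copy of the model.
    """
    chosen_logratio = policy_logp_chosen - ref_logp_chosen
    rejected_logratio = policy_logp_rejected - ref_logp_rejected
    # Maximize the margin between the chosen and rejected implicit rewards
    # beta * log(pi/pi_ref); larger beta penalizes drifting from the reference.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()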
This list is automatically generated from the titles and abstracts of the papers on this site.