Is Reinforcement Learning (Not) for Natural Language Processing?:
Benchmarks, Baselines, and Building Blocks for Natural Language Policy
Optimization
- URL: http://arxiv.org/abs/2210.01241v1
- Date: Mon, 3 Oct 2022 21:38:29 GMT
- Title: Is Reinforcement Learning (Not) for Natural Language Processing?:
Benchmarks, Baselines, and Building Blocks for Natural Language Policy
Optimization
- Authors: Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack
Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, Yejin Choi
- Abstract summary: We introduce an open-source modular library, RL4LMs, for optimizing language generators with reinforcement learning.
Next, we present the GRUE benchmark, a set of 6 language generation tasks which are supervised not by target strings, but by reward functions.
Finally, we introduce an easy-to-use, performant RL algorithm, NLPO, that learns to effectively reduce the action space in language generation.
- Score: 73.74371798168642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We tackle the problem of aligning pre-trained large language models (LMs)
with human preferences. If we view text generation as a sequential
decision-making problem, reinforcement learning (RL) appears to be a natural
conceptual framework. However, using RL for LM-based generation faces empirical
challenges, including training instability due to the combinatorial action
space, as well as a lack of open-source libraries and benchmarks customized for
LM alignment. Thus, a question arises in the research community: is RL a
practical paradigm for NLP?
To help answer this, we first introduce an open-source modular library,
RL4LMs (Reinforcement Learning for Language Models), for optimizing language
generators with RL. The library consists of on-policy RL algorithms that can be
used to train any encoder or encoder-decoder LM in the HuggingFace library
(Wolf et al. 2020) with an arbitrary reward function. Next, we present the GRUE
(General Reinforced-language Understanding Evaluation) benchmark, a set of 6
language generation tasks which are supervised not by target strings, but by
reward functions that capture automated measures of human preference. GRUE is
the first leaderboard-style evaluation of RL algorithms for NLP tasks. Finally,
we introduce an easy-to-use, performant RL algorithm, NLPO (Natural Language
Policy Optimization) that learns to effectively reduce the combinatorial
action space in language generation. We show 1) that RL techniques are
generally better than supervised methods at aligning LMs to human preferences;
and 2) that NLPO exhibits greater stability and performance than previous
policy gradient methods (e.g., PPO (Schulman et al. 2017)), based on both
automatic and human evaluation.
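To make the setting concrete, the following is a minimal sketch of what the abstract describes: optimizing a HuggingFace encoder-decoder LM against an arbitrary scalar reward with a REINFORCE-style policy-gradient update. This is not the RL4LMs API; the model choice, `reward_fn`, and the toy reward are illustrative assumptions.

```python
# Minimal sketch (NOT the RL4LMs API): fine-tune a HuggingFace encoder-decoder
# LM against an arbitrary scalar reward using a REINFORCE-style update.
# `reward_fn`, the toy reward, and the choice of t5-small are assumptions made
# for illustration only.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)


def reward_fn(text: str) -> float:
    """Hypothetical task reward, e.g. a learned preference or sentiment score."""
    return float(len(text.split()) <= 20)  # toy reward: prefer short outputs


prompt = ("summarize: Reinforcement learning lets us optimize text generators "
          "directly for reward functions rather than target strings.")
inputs = tokenizer(prompt, return_tensors="pt")

# 1) Sample an output from the current policy (the LM). Restricting sampling to
#    the top-p nucleus is a crude stand-in for shrinking the combinatorial
#    action space; NLPO instead learns such a restriction.
with torch.no_grad():
    gen_ids = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=30)
text = tokenizer.decode(gen_ids[0], skip_special_tokens=True)

# 2) Score the sampled text with the (arbitrary) reward function.
reward = reward_fn(text)

# 3) REINFORCE update: scale the log-likelihood of the sampled tokens by the reward.
labels = gen_ids[:, 1:]  # drop the decoder-start token that generate() prepends
logits = model(input_ids=inputs["input_ids"], labels=labels).logits
log_probs = torch.log_softmax(logits, dim=-1)
token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
loss = -reward * token_log_probs.sum()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Per the abstract, what makes this recipe practical is the machinery this sketch omits: PPO-style constrained updates and NLPO's learned reduction of the action space, which address the training instability noted above.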
Related papers
- Natural Language Reinforcement Learning [23.310602238815285]
Reinforcement Learning (RL) mathematically formulates decision-making with the Markov Decision Process (MDP).
This paper explores a new possibility, Natural Language Reinforcement Learning (NLRL), by extending the traditional MDP to a natural language-based representation space.
arXiv Detail & Related papers (2024-11-21T15:57:02Z)
- Q-SFT: Q-Learning for Language Models via Supervised Fine-Tuning [62.984693936073974]
Value-based reinforcement learning can learn effective policies for a wide range of multi-turn problems.
Current value-based RL methods have proven particularly challenging to scale to the setting of large language models.
We propose a novel offline RL algorithm that addresses these drawbacks, casting Q-learning as a modified supervised fine-tuning problem.
arXiv Detail & Related papers (2024-11-07T21:36:52Z)
- Multi-turn Reinforcement Learning from Preference Human Feedback [41.327438095745315]
Reinforcement Learning from Human Feedback (RLHF) has become the standard approach for aligning Large Language Models with human preferences.
Existing methods work by emulating the preferences at the single decision (turn) level.
We develop novel methods for Reinforcement Learning from preference feedback between two full multi-turn conversations.
arXiv Detail & Related papers (2024-05-23T14:53:54Z)
- How Can LLM Guide RL? A Value-Based Approach [68.55316627400683]
Reinforcement learning (RL) has become the de facto standard practice for sequential decision-making problems by improving future acting policies with feedback.
Recent developments in large language models (LLMs) have showcased impressive capabilities in language understanding and generation, yet they fall short in exploration and self-improvement capabilities.
We develop an algorithm named LINVIT that incorporates LLM guidance as a regularization factor in value-based RL, leading to significant reductions in the amount of data needed for learning.
arXiv Detail & Related papers (2024-02-25T20:07:13Z)
- Contrastive Preference Learning: Learning from Human Feedback without RL [71.77024922527642]
We introduce Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions.
CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs.
arXiv Detail & Related papers (2023-10-20T16:37:56Z)
- Reinforced Self-Training (ReST) for Language Modeling [56.75447441157628]
Reinforcement learning from human feedback (RLHF) can improve the quality of large language models' (LLM) outputs by aligning them with human preferences.
We propose a simple algorithm for aligning LLMs with human preferences inspired by growing batch reinforcement learning (RL), which we call Reinforced Self-Training (ReST)
Our results show that ReST can substantially improve translation quality, as measured by automated metrics and human evaluation on machine translation benchmarks, in a compute- and sample-efficient manner.
arXiv Detail & Related papers (2023-08-17T14:12:48Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as rewards.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)