The Wisdom of Hindsight Makes Language Models Better Instruction Followers
- URL: http://arxiv.org/abs/2302.05206v1
- Date: Fri, 10 Feb 2023 12:16:38 GMT
- Title: The Wisdom of Hindsight Makes Language Models Better Instruction Followers
- Authors: Tianjun Zhang, Fangchen Liu, Justin Wong, Pieter Abbeel, Joseph E. Gonzalez
- Abstract summary: Reinforcement learning has seen wide success in finetuning large language models to better align with instructions via human feedback.
In this paper, we consider an alternative approach: converting feedback into instructions by relabeling the original ones and training the model for better alignment in a supervised manner.
We propose Hindsight Instruction Relabeling (HIR), a novel algorithm for aligning language models with instructions.
- Score: 84.9120606803906
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning has seen wide success in finetuning large language
models to better align with instructions via human feedback. The best-known such
algorithm, Reinforcement Learning with Human Feedback (RLHF), demonstrates
impressive performance on the GPT series models. However, the underlying
Reinforcement Learning (RL) algorithm is complex and requires an additional
training pipeline for reward and value networks. In this paper, we consider an
alternative approach: converting feedback into instructions by relabeling the
original ones and training the model for better alignment in a supervised
manner. Such an algorithm does not require any additional parameters except for
the original language model and maximally reuses the pretraining pipeline. To
achieve this, we formulate the instruction alignment problem for language models as
a goal-reaching problem in decision making. We propose Hindsight Instruction
Relabeling (HIR), a novel algorithm for aligning language models with
instructions. The resulting two-stage algorithm sheds light on a family of
reward-free approaches that utilize hindsight-relabeled instructions
based on feedback. We evaluate the performance of HIR extensively on 12
challenging BigBench reasoning tasks and show that HIR outperforms the baseline
algorithms and is comparable to or even surpasses supervised finetuning.
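To make the two-stage procedure above concrete, here is a minimal Python sketch of an HIR-style loop: sample outputs from the current model online, relabel each instruction in hindsight according to what the sampled output actually accomplishes, and then run standard supervised fine-tuning on the relabeled pairs. The helpers `sample_outputs`, `relabel_instruction`, and `supervised_update` are hypothetical placeholders for the model's decoder, the feedback-based relabeling rule, and a supervised training step; this is a sketch of the idea, not the authors' implementation.

```python
# Minimal sketch of a two-stage hindsight-relabeling loop (assumed structure,
# not the authors' code). Stage 1 samples outputs online; stage 2 relabels
# instructions in hindsight and fine-tunes the model in a supervised manner.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    instruction: str  # instruction the output is treated as satisfying
    output: str       # completion sampled from the current model


def hir_style_training(
    instructions: List[str],
    sample_outputs: Callable[[str], List[str]],          # hypothetical: decode from current model
    relabel_instruction: Callable[[str, str], str],      # hypothetical: hindsight relabeling from feedback
    supervised_update: Callable[[List[Example]], None],  # hypothetical: one supervised fine-tuning pass
    num_rounds: int = 3,
) -> None:
    """Alternate online sampling with supervised learning on relabeled data."""
    for _ in range(num_rounds):
        relabeled: List[Example] = []
        for instr in instructions:
            for out in sample_outputs(instr):
                # Even an output that fails the original instruction is a valid
                # answer to *some* instruction, so relabel and reuse it as
                # supervised data; no reward or value network is needed.
                relabeled.append(Example(relabel_instruction(instr, out), out))
        supervised_update(relabeled)
```

Framing alignment as goal reaching is what makes this relabeling valid: the relabeled instruction plays the role of the goal that the sampled output is known, in hindsight, to reach.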
Related papers
- Recursive Introspection: Teaching Language Model Agents How to Self-Improve [30.086494067593268]
We develop RISE: Recursive IntroSpEction, an approach for fine-tuning large language models.
Our experiments show that RISE enables Llama2, Llama3, and Mistral models to improve themselves with more turns on math reasoning tasks.
arXiv Detail & Related papers (2024-07-25T17:35:59Z)
- Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models [63.36637269634553]
We present a novel method of further improving performance by requiring models to compare multiple reasoning chains.
We find that instruction tuning on DCoT datasets boosts the performance of even smaller, and therefore more accessible, language models.
arXiv Detail & Related papers (2024-07-03T15:01:18Z)
- Adversarial Contrastive Decoding: Boosting Safety Alignment of Large Language Models via Opposite Prompt Optimization [34.29833630422768]
Adversarial Contrastive Decoding (ACD) is an optimization-based framework to generate two opposite system prompts for prompt-based contrastive decoding.
ACD achieves much better safety performance than previous model training-free decoding methods without sacrificing original generation ability.
arXiv Detail & Related papers (2024-06-24T15:51:30Z)
- Contrastive Preference Learning: Learning from Human Feedback without RL [71.77024922527642]
We introduce Contrastive Preference Learning (CPL), an algorithm for learning optimal policies from preferences without learning reward functions.
CPL is fully off-policy, uses only a simple contrastive objective, and can be applied to arbitrary MDPs.
arXiv Detail & Related papers (2023-10-20T16:37:56Z)
- Fine-tune Language Models to Approximate Unbiased In-context Learning [8.609157988755896]
We introduce a reweighted algorithm called RICL (Reweighted In-context Learning).
This algorithm fine-tunes language models using an unbiased validation set to determine the optimal weight for each input-output example.
We also introduce LARICL, a low-cost reweighting algorithm based on a linear approximation of the optimal weights.
arXiv Detail & Related papers (2023-10-05T06:16:01Z)
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
"CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extend the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
arXiv Detail & Related papers (2022-07-05T02:42:15Z)
- Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization [94.4409074435894]
We propose a novel and effective fine-tuning framework named Layerwise Noise Stability Regularization (LNSR).
Specifically, we propose to inject standard Gaussian noise and regularize the hidden representations of the fine-tuned model (a toy sketch of this idea appears after the list below).
We demonstrate the advantages of the proposed method over other state-of-the-art algorithms including L2-SP, Mixout and SMART.
arXiv Detail & Related papers (2022-06-12T04:42:49Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
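As a toy illustration of the noise-stability regularization mentioned in the LNSR entry above, the sketch below perturbs the input of a small network with Gaussian noise and penalizes the drift of its hidden representation alongside the task loss. The two-layer network, noise scale, and penalty weight are illustrative assumptions and do not reproduce the paper's exact layerwise formulation for pretrained transformers.

```python
# Toy sketch of a noise-stability penalty (illustrative assumption, not the
# LNSR paper's exact method): keep hidden representations stable under
# Gaussian input noise while minimizing the task loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerNet(nn.Module):
    def __init__(self, d_in: int, d_hidden: int, n_classes: int):
        super().__init__()
        self.layer1 = nn.Linear(d_in, d_hidden)
        self.layer2 = nn.Linear(d_hidden, n_classes)

    def forward(self, x: torch.Tensor):
        h = torch.tanh(self.layer1(x))  # hidden representation to stabilize
        return self.layer2(h), h


def noise_stability_loss(model: TwoLayerNet, x: torch.Tensor, y: torch.Tensor,
                         sigma: float = 0.1, lam: float = 1.0) -> torch.Tensor:
    logits, h_clean = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Inject Gaussian noise into the input and regularize the hidden
    # representation toward its clean counterpart.
    _, h_noisy = model(x + sigma * torch.randn_like(x))
    stability = (h_clean - h_noisy).pow(2).mean()
    return task_loss + lam * stability
```

In the paper itself the regularizer is applied layerwise to a fine-tuned pretrained language model; the toy network here only conveys the shape of the objective.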
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.