Training Language Models with Language Feedback at Scale
- URL: http://arxiv.org/abs/2303.16755v3
- Date: Thu, 22 Feb 2024 22:29:10 GMT
- Title: Training Language Models with Language Feedback at Scale
- Authors: Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez
- Abstract summary: We introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback.
ILF consists of three steps that are applied iteratively: conditioning the language model on the input, an initial LM output, and feedback to generate refinements; selecting the refinement that incorporates the most feedback; and finetuning the language model on the chosen refinement.
We show theoretically that ILF can be viewed as Bayesian Inference, similar to Reinforcement Learning from human feedback.
- Score: 50.70091340506957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pretrained language models often generate outputs that are not in line with
human preferences, such as harmful text or factually incorrect summaries.
Recent work approaches the above issues by learning from a simple form of human
feedback: comparisons between pairs of model-generated outputs. However,
comparison feedback only conveys limited information about human preferences.
In this paper, we introduce Imitation learning from Language Feedback (ILF), a
new approach that utilizes more informative language feedback. ILF consists of
three steps that are applied iteratively: first, conditioning the language
model on the input, an initial LM output, and feedback to generate refinements.
Second, selecting the refinement incorporating the most feedback. Third,
finetuning the language model to maximize the likelihood of the chosen
refinement given the input. We show theoretically that ILF can be viewed as
Bayesian Inference, similar to Reinforcement Learning from human feedback. We
evaluate ILF's effectiveness on a carefully-controlled toy task and a realistic
summarization task. Our experiments demonstrate that large language models
accurately incorporate feedback and that finetuning with ILF scales well with
the dataset size, even outperforming finetuning on human summaries. Learning
from both language and comparison feedback outperforms learning from each
alone, achieving human-level summarization performance.
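The three ILF steps described in the abstract amount to a refine-select-finetune loop. The sketch below is a minimal, hypothetical outline of that loop: the prompt template, the dataset field names, and the helper callables (sample_fn, score_fn, finetune_fn) are assumptions standing in for real components, not the authors' implementation. In particular, score_fn abstracts however one chooses to rank refinements (for example, an LM prompted as a judge, or human selection).

```python
# Minimal sketch of the ILF refine-select-finetune loop described in the abstract.
# All callables are assumed stand-ins:
#   sample_fn(prompt) -> str          : draws one completion from the language model
#   score_fn(x, y0, fb, y) -> float   : estimates how well refinement y incorporates feedback fb
#   finetune_fn(pairs) -> None        : supervised finetuning on (input, target) pairs

def ilf_iteration(sample_fn, score_fn, finetune_fn, dataset, num_refinements=4):
    """One ILF round: generate refinements, select the best one, finetune on it."""
    finetune_pairs = []
    for example in dataset:
        x, y0, feedback = example["input"], example["initial_output"], example["feedback"]

        # Step 1: condition on the input, the initial LM output, and the feedback
        # to propose several candidate refinements.
        prompt = f"Input: {x}\nInitial output: {y0}\nFeedback: {feedback}\nRefinement:"
        refinements = [sample_fn(prompt) for _ in range(num_refinements)]

        # Step 2: select the refinement that incorporates the feedback best.
        best = max(refinements, key=lambda y: score_fn(x, y0, feedback, y))

        # Step 3: finetune to maximize the likelihood of the chosen refinement given the input.
        finetune_pairs.append((x, best))

    finetune_fn(finetune_pairs)
```

Applying this round repeatedly, with fresh feedback on the newly finetuned model's outputs, gives the iterative procedure the abstract describes.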
Related papers
- LLMs are Superior Feedback Providers: Bootstrapping Reasoning for Lie Detection with Self-Generated Feedback [33.14770105185958]
Large Language Models (LLMs) excel at generating human-like dialogues and comprehending text.
We propose a bootstrapping framework that leverages self-generated feedback to enhance LLM reasoning capabilities for lie detection.
We investigate the application of the proposed framework for detecting betrayal and deception in Diplomacy games, and compare it with feedback from professional human players.
arXiv Detail & Related papers (2024-08-25T18:47:55Z)
- Constructive Large Language Models Alignment with Diverse Feedback [76.9578950893839]
We introduce Constructive and Diverse Feedback (CDF) as a novel method to enhance large language models alignment.
We exploit critique feedback for easy problems, refinement feedback for medium problems, and preference feedback for hard problems.
By training our model with this diversified feedback, we achieve enhanced alignment performance while using less training data.
arXiv Detail & Related papers (2023-10-10T09:20:14Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Improving Code Generation by Training with Natural Language Feedback [69.52985513422381]
We formalize an algorithm for learning from natural language feedback at training time instead, which we call Imitation learning from Language Feedback (ILF).
ILF requires only a small amount of human-written feedback during training and does not require the same feedback at test time, making it both user-friendly and sample-efficient.
We use ILF to improve a CodeGen-Mono 6.1B model's pass@1 rate by 38% relative (and 10% absolute) on the Mostly Basic Python Problems (MBPP) benchmark.
arXiv Detail & Related papers (2023-03-28T16:15:31Z)
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model.
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors; a minimal sketch of this feedback-to-text formatting appears after this list.
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
- Training Language Models with Natural Language Feedback [51.36137482891037]
We learn from language feedback on model outputs using a three-step learning algorithm.
In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements.
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.
arXiv Detail & Related papers (2022-04-29T15:06:58Z)
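As referenced in the Chain of Hindsight entry above, that method's core move is to verbalize feedback of any polarity as plain text and fold it into the sequences the model is finetuned on. The snippet below is a minimal, hypothetical illustration of that idea; the exact templates, field names, and wording are assumptions, not the paper's released data format.

```python
# Hypothetical illustration of Chain-of-Hindsight-style data construction:
# feedback of either polarity is turned into natural-language text and paired
# with model outputs, so the model learns to generate conditioned on feedback.

def build_hindsight_example(prompt, good_output, bad_output):
    """Format a comparison into a single finetuning sequence (assumed template)."""
    return (
        f"{prompt}\n"
        f"A good response is: {good_output}\n"
        f"A bad response is: {bad_output}"
    )

def build_language_feedback_example(prompt, output, feedback):
    """Format free-form language feedback into a finetuning sequence (assumed template)."""
    return (
        f"{prompt}\n"
        f"Response: {output}\n"
        f"Feedback: {feedback}"
    )

# Usage: such strings would be collected into a finetuning corpus; at inference
# time the model can be prompted with the positive cue (e.g. "A good response is:").
print(build_hindsight_example(
    "Summarize the article.",
    "A concise, accurate summary.",
    "A summary that misses the main point.",
))
```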