Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language
- URL: http://arxiv.org/abs/2311.14543v1
- Date: Fri, 24 Nov 2023 15:20:36 GMT
- Title: Data-Efficient Alignment of Large Language Models with Human Feedback Through Natural Language
- Authors: Di Jin, Shikib Mehri, Devamanyu Hazarika, Aishwarya Padmakumar, Sungjin Lee, Yang Liu, Mahdi Namazifar
- Abstract summary: We investigate the data efficiency of modeling human feedback expressed in natural language.
We fine-tune an open-source LLM, e.g., Falcon-40B-Instruct, on a relatively small amount of human feedback in natural language.
We show that this model is able to improve the quality of responses from even some of the strongest LLMs.
- Score: 31.0723480021355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from human feedback is a prominent technique to align the output of
large language models (LLMs) with human expectations. Reinforcement learning
from human feedback (RLHF) leverages human preference signals that are in the
form of rankings of response pairs to perform this alignment. However, human
preferences on LLM outputs can come in much richer forms, including natural
language, which may provide detailed feedback on the strengths and weaknesses
of a given response. In this work we investigate the data efficiency of
modeling human feedback that is in natural language. Specifically, we fine-tune
an open-source LLM, e.g., Falcon-40B-Instruct, on a relatively small amount
(1,000 records or even fewer) of human feedback in natural language in the form
of critiques and revisions of responses. We show that this model is able to
improve the quality of responses from even some of the strongest LLMs such as
ChatGPT, BARD, and Vicuna, through critique and revision of those responses.
For instance, after one iteration of revision of ChatGPT responses, the revised
responses have a 56.6% win rate over the original ones, and this win rate can
be further improved to 65.9% after five iterations of revision.
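To make the critique-and-revise procedure above concrete, the following is a minimal sketch assuming a hypothetical API on the fine-tuned feedback model (the `generate` method and the prompt templates are illustrative, not the authors' exact implementation):

```python
# Illustrative sketch of iterative critique-and-revision, not the authors' code.
# `feedback_model` stands in for an open-source LLM (e.g., Falcon-40B-Instruct)
# fine-tuned on ~1,000 natural-language critiques and revisions.

def critique_and_revise(feedback_model, prompt: str, response: str) -> tuple[str, str]:
    """Ask the fine-tuned model for a critique, then for a revised response."""
    critique = feedback_model.generate(
        f"Instruction: {prompt}\nResponse: {response}\nCritique of the response:"
    )
    revision = feedback_model.generate(
        f"Instruction: {prompt}\nResponse: {response}\n"
        f"Critique: {critique}\nRevised response:"
    )
    return critique, revision


def iterative_revision(feedback_model, prompt: str, response: str, n_iters: int = 5) -> str:
    """Apply critique-and-revision repeatedly; the abstract reports a 56.6% win
    rate after one iteration and 65.9% after five."""
    for _ in range(n_iters):
        _, response = critique_and_revise(feedback_model, prompt, response)
    return response
```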
Related papers
- Learning from Naturally Occurring Feedback [25.266461597402056]
We propose a scalable method for extracting feedback that users naturally include when interacting with chat models.
We manually annotated conversation data to confirm the presence of naturally occurring feedback.
We apply our method to over 1M conversations to obtain hundreds of thousands of feedback samples.
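As a hedged illustration of what extracting naturally occurring feedback from chat logs could look like (the paper's actual extraction method is not reproduced here; `is_feedback` is a placeholder for whatever classifier or heuristic it uses):

```python
# Hypothetical sketch: scan conversations for user turns that react to the
# assistant's previous reply, yielding (assistant_reply, user_feedback) pairs.

def extract_feedback(conversations, is_feedback):
    """conversations: iterable of turn lists, each turn a dict with 'role' and 'text'."""
    for convo in conversations:
        for prev, turn in zip(convo, convo[1:]):
            if prev["role"] == "assistant" and turn["role"] == "user" and is_feedback(turn["text"]):
                yield prev["text"], turn["text"]


def naive_is_feedback(text: str) -> bool:
    """Toy cue-phrase heuristic for illustration only; a learned classifier
    would replace this in practice."""
    cues = ("that's wrong", "not what i asked", "thanks, that works", "you misunderstood")
    return any(cue in text.lower() for cue in cues)
```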
arXiv Detail & Related papers (2024-07-15T17:41:34Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
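As a rough sketch of how scaled AI feedback can be collected (the actual UltraFeedback pipeline is more elaborate; the judge prompt and the `llm_judge` / `candidate_models` objects below are assumptions):

```python
# Illustrative sketch: a strong LLM acts as a judge over responses from several
# models, producing critique/score annotations that form a preference dataset.

def collect_ai_feedback(prompts, candidate_models, llm_judge):
    dataset = []
    for prompt in prompts:
        scored = []
        for model in candidate_models:
            response = model.generate(prompt)
            verdict = llm_judge.generate(
                "Rate the response to the instruction on a 1-10 scale and explain briefly.\n"
                f"Instruction: {prompt}\nResponse: {response}\nRating:"
            )
            scored.append({"response": response, "judge_output": verdict})
        dataset.append({"prompt": prompt, "candidates": scored})
    return dataset
```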
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for ...
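For the survey's distinction between using feedback at training time versus decoding time, a decoding-time use of a trained feedback model can be as simple as reranking sampled candidates; this generic sketch is not tied to any particular system covered by the survey:

```python
# Generic sketch: a trained feedback (reward) model scores candidates at
# decoding time instead of updating the generator's weights.
# `feedback_model.score` is a hypothetical scalar-quality scorer.

def rerank_with_feedback_model(generator, feedback_model, prompt, n_samples=8):
    candidates = [generator.generate(prompt) for _ in range(n_samples)]
    return max(candidates, key=lambda c: feedback_model.score(prompt, c))
```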
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Training Language Models with Language Feedback at Scale [50.70091340506957]
We introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback.
ILF consists of three steps that are applied iteratively: first, conditioning the language model on the input, an initial LM output, and feedback to generate refinements.
We show theoretically that ILF can be viewed as Bayesian Inference, similar to Reinforcement Learning from human feedback.
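A compressed sketch of one ILF iteration as summarized above (generate refinements conditioned on the input, initial output, and feedback; select the best refinement; fine-tune on it). The sampling count, selection criterion, and helper names are assumptions, not the paper's exact procedure:

```python
# Sketch of one ILF iteration: refine -> select -> fine-tune.
# `lm`, `select_best`, and `finetune` are stand-ins for the paper's components.

def ilf_iteration(lm, dataset, select_best, finetune):
    training_pairs = []
    for ex in dataset:  # each ex holds an input, an initial LM output, and language feedback
        refinements = [
            lm.generate(
                f"Input: {ex['input']}\nInitial output: {ex['output']}\n"
                f"Feedback: {ex['feedback']}\nRefined output:"
            )
            for _ in range(4)  # sample several candidate refinements
        ]
        best = select_best(ex, refinements)  # e.g., the one that best incorporates the feedback
        training_pairs.append((ex["input"], best))
    return finetune(lm, training_pairs)  # supervised fine-tuning on (input, refinement) pairs
```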
arXiv Detail & Related papers (2023-03-28T17:04:15Z)
- Improving Code Generation by Training with Natural Language Feedback [69.52985513422381]
We formalize an algorithm for learning from natural language feedback at training time instead, which we call Imitation learning from Language Feedback (ILF).
ILF requires only a small amount of human-written feedback during training and does not require the same feedback at test time, making it both user-friendly and sample-efficient.
We use ILF to improve a CodeGen-Mono 6.1B model's pass@1 rate by 38% relative (and 10% absolute) on the Mostly Basic Python Problems (MBPP) benchmark.
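For the code-generation setting, a hedged sketch of how feedback-conditioned refinements might be filtered into fine-tuning data; using the task's unit tests as the filter reflects the MBPP setup, but the prompt template and helper names are hypothetical:

```python
# Sketch: generate code refinements from natural-language feedback and keep
# only those that pass the task's unit tests as fine-tuning targets.
# `code_lm.generate` and `run_tests` are illustrative stand-ins.

def build_code_finetuning_set(code_lm, tasks, run_tests):
    pairs = []
    for task in tasks:  # task: prompt, incorrect program, human feedback, unit tests
        refinement = code_lm.generate(
            f"Problem: {task['prompt']}\nIncorrect program:\n{task['program']}\n"
            f"Feedback: {task['feedback']}\nCorrected program:\n"
        )
        if run_tests(refinement, task["tests"]):  # only verified fixes become training data
            pairs.append((task["prompt"], refinement))
    return pairs
```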
arXiv Detail & Related papers (2023-03-28T16:15:31Z)
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model.
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors.
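A small sketch of the "feedback as sequences of sentences" idea: a preferred and a dispreferred response are placed in one training string, each introduced by a hindsight phrase, and the result is used as an ordinary language-modeling target. The template wording is illustrative, not the paper's verbatim prompts:

```python
# Sketch of Chain-of-Hindsight-style data formatting: both responses are kept,
# each prefixed by natural-language feedback about its quality.

def chain_of_hindsight_example(prompt: str, good: str, bad: str) -> str:
    return (
        f"{prompt}\n"
        f"A helpful answer: {good}\n"
        f"An unhelpful answer: {bad}"
    )

# At inference time the model is conditioned on the positive phrase
# (e.g., "A helpful answer:") to elicit the preferred behavior.
```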
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
- Training Language Models with Natural Language Feedback [51.36137482891037]
We learn from language feedback on model outputs using a three-step learning algorithm.
In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements.
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.
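One way to read the "accurately incorporate feedback" step is as a selection problem: among several candidate refinements, keep the one that best reflects the written feedback. The embedding-similarity scorer below is a placeholder, not the paper's actual criterion:

```python
# Sketch: choose the refinement most similar to the feedback text.
# `embed` is a hypothetical sentence-embedding function returning a float vector.

def pick_refinement(feedback: str, refinements: list[str], embed) -> str:
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
        return dot / norm if norm else 0.0

    fb_vec = embed(feedback)
    return max(refinements, key=lambda r: cosine(embed(r), fb_vec))
```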
arXiv Detail & Related papers (2022-04-29T15:06:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.