Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural
Language Generation
- URL: http://arxiv.org/abs/2305.00955v2
- Date: Thu, 1 Jun 2023 01:24:53 GMT
- Title: Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural
Language Generation
- Authors: Patrick Fernandes, Aman Madaan, Emmy Liu, António Farinhas, Pedro
Henrique Martins, Amanda Bertsch, José G. C. de Souza, Shuyan Zhou,
Tongshuang Wu, Graham Neubig, André F. T. Martins
- Abstract summary: This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
- Score: 68.9440575276396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent advances in natural language generation have been fueled by
training large language models on internet-scale data. However, this paradigm
can lead to models that generate toxic, inaccurate, and unhelpful content, and
automatic evaluation metrics often fail to identify these behaviors. As models
become more capable, human feedback is an invaluable signal for evaluating and
improving models. This survey aims to provide an overview of the recent
research that has leveraged human feedback to improve natural language
generation. First, we introduce an encompassing formalization of feedback, and
identify and organize existing research into a taxonomy following this
formalization. Next, we discuss how feedback can be described by its format and
objective, and cover the two approaches proposed to use feedback (either for
training or decoding): directly using the feedback or training feedback models.
We also discuss existing datasets for human-feedback data collection, and
concerns surrounding feedback collection. Finally, we provide an overview of
the nascent field of AI feedback, which exploits large language models to make
judgments based on a set of principles and minimize the need for human
intervention.
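
To make the survey's formalization concrete, the sketch below encodes feedback along the axes the abstract names: its format, its objective, whether it is used for training or decoding, and whether it is used directly or to train a feedback model. This is a minimal illustrative sketch, not the paper's notation; all class and field names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encoding of the survey's taxonomy axes.
class Format(Enum):
    NUMERICAL = "numerical"      # e.g. a 1-5 rating
    RANKING = "ranking"          # preference over candidate outputs
    NATURAL_LANGUAGE = "text"    # free-form critique

class Objective(Enum):
    HELPFULNESS = "helpfulness"
    HARMLESSNESS = "harmlessness"
    FACTUALITY = "factuality"

class Stage(Enum):
    TRAINING = "training"        # feedback shapes model parameters
    DECODING = "decoding"        # feedback steers generation at inference

@dataclass
class Feedback:
    source_input: str            # the prompt the model was given
    model_output: str            # the generation being judged
    format: Format
    objective: Objective
    stage: Stage
    direct: bool                 # True: use the feedback itself; False: train a feedback model on it
    value: object                # a score, a ranking, or critique text

# Example: a free-form critique collected to train a feedback (reward/critique) model.
fb = Feedback(
    source_input="Summarize the article...",
    model_output="The article says...",
    format=Format.NATURAL_LANGUAGE,
    objective=Objective.FACTUALITY,
    stage=Stage.TRAINING,
    direct=False,
    value="The summary omits the main finding.",
)
```

Representing feedback this way mirrors how the survey's taxonomy lets otherwise different methods be compared along the same axes.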
Related papers
- Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models [46.09562860220433]
We introduce GazeReward, a novel framework that integrates implicit feedback, specifically eye-tracking (ET) data, into the Reward Model (RM); a toy version of this fusion appears after this entry.
Our approach significantly improves the accuracy of the RM on established human preference datasets.
arXiv Detail & Related papers (2024-10-02T13:24:56Z)
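
As a rough illustration of the GazeReward idea, the sketch below fuses a text embedding with eye-tracking features inside a scalar reward head. This is a minimal sketch under assumed dimensions and a simple concatenation-based fusion; it is not the authors' architecture.

```python
import torch
import torch.nn as nn

class GazeAugmentedRewardModel(nn.Module):
    """Toy reward head that fuses a text embedding with eye-tracking (ET)
    features such as fixation counts or dwell times. Illustrative only;
    dimensions and the fusion strategy are assumptions."""

    def __init__(self, text_dim: int = 768, gaze_dim: int = 8, hidden: int = 256):
        super().__init__()
        self.gaze_proj = nn.Linear(gaze_dim, text_dim)  # lift gaze features to text space
        self.scorer = nn.Sequential(
            nn.Linear(2 * text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar reward
        )

    def forward(self, text_emb: torch.Tensor, gaze_feats: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_emb, self.gaze_proj(gaze_feats)], dim=-1)
        return self.scorer(fused).squeeze(-1)

# Usage with dummy inputs: a batch of 4 responses.
rm = GazeAugmentedRewardModel()
rewards = rm(torch.randn(4, 768), torch.randn(4, 8))  # shape: (4,)
```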
- Learning from Naturally Occurring Feedback [25.266461597402056]
We propose a scalable method for extracting feedback that users naturally include when interacting with chat models; a toy extractor is sketched below.
We manually annotated conversation data to confirm the presence of naturally occurring feedback.
We apply our method to over 1M conversations to obtain hundreds of thousands of feedback samples.
arXiv Detail & Related papers (2024-07-15T17:41:34Z)
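
A toy version of the extraction idea: treat user turns that follow an assistant turn and contain correction or approval phrases as naturally occurring feedback. The cue phrases and labels below are hypothetical stand-ins for the paper's richer taxonomy.

```python
import re

# Hypothetical cue phrases; the paper's actual extraction scheme is richer.
NEGATIVE_CUES = re.compile(r"that'?s (wrong|not right)|\bno,|\bactually\b|you misunderstood", re.I)
POSITIVE_CUES = re.compile(r"\bthanks\b|\bperfect\b|\bexactly\b|great answer", re.I)

def extract_feedback(conversation: list) -> list:
    """Scan user turns that follow an assistant turn and tag naturally
    occurring feedback. Each turn is a dict: {'role': ..., 'content': ...}."""
    samples = []
    for prev, turn in zip(conversation, conversation[1:]):
        if prev["role"] != "assistant" or turn["role"] != "user":
            continue
        if NEGATIVE_CUES.search(turn["content"]):
            samples.append({"response": prev["content"], "feedback": turn["content"], "label": "negative"})
        elif POSITIVE_CUES.search(turn["content"]):
            samples.append({"response": prev["content"], "feedback": turn["content"], "label": "positive"})
    return samples

convo = [
    {"role": "user", "content": "Who wrote Dune?"},
    {"role": "assistant", "content": "Isaac Asimov."},
    {"role": "user", "content": "No, that's wrong, it was Frank Herbert."},
]
print(extract_feedback(convo))  # one negative feedback sample
```

Applied over millions of logged conversations, even low-recall pattern matching of this kind can yield large feedback corpora, which is the scalability argument the paper makes.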
- The Past, Present and Better Future of Feedback Learning in Large Language Models for Subjective Human Preferences and Values [16.62409302626101]
We survey existing approaches for learning from human feedback, drawing on 95 papers primarily from the ACL and arXiv repositories.
We give an overview of present techniques and practices, as well as the motivations for using feedback.
We encourage a better future of feedback learning in Large Language Models by raising five unresolved conceptual and practical challenges.
arXiv Detail & Related papers (2023-10-11T16:18:13Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset; a generic judge-prompt sketch follows this entry.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
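
The general AI-feedback recipe behind datasets like UltraFeedback can be sketched as prompting a strong LLM to rate responses against explicit principles. This is a generic sketch, not the paper's prompts or pipeline; `query_llm` is a hypothetical stand-in for any chat-completion call, and the principle list only approximates the aspects such datasets use.

```python
PRINCIPLES = ["helpfulness", "truthfulness", "honesty", "instruction-following"]

def build_judge_prompt(instruction: str, response: str, principle: str) -> str:
    return (
        f"Rate the response on {principle} from 1 (worst) to 5 (best).\n"
        f"Instruction: {instruction}\n"
        f"Response: {response}\n"
        "Reply with a single integer."
    )

def ai_feedback(instruction: str, response: str, query_llm) -> dict:
    """Collect one scalar AI-feedback score per principle.
    `query_llm` is a hypothetical callable: prompt str -> completion str."""
    scores = {}
    for principle in PRINCIPLES:
        reply = query_llm(build_judge_prompt(instruction, response, principle))
        digits = [c for c in reply if c.isdigit()]
        scores[principle] = int(digits[0]) if digits else None  # crude parse
    return scores

# Usage with a stub judge that always answers "4".
print(ai_feedback("Explain photosynthesis.", "Plants convert light...", lambda p: "4"))
```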
- System-Level Natural Language Feedback [83.24259100437965]
We show how to use feedback to formalize system-level design decisions in a human-in-the-loop process.
We conduct two case studies of this approach for improving search query and dialog response generation.
We show the combination of system-level and instance-level feedback brings further gains.
arXiv Detail & Related papers (2023-06-23T16:21:40Z)
- Training Language Models with Language Feedback at Scale [50.70091340506957]
We introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback.
ILF consists of three steps, applied iteratively: first, conditioning the language model on the input, an initial LM output, and feedback to generate refinements; second, selecting the refinement that best incorporates the feedback; third, fine-tuning the model on the chosen refinement given the input (a minimal version of this loop is sketched below).
We show theoretically that ILF can be viewed as Bayesian inference, similar to Reinforcement Learning from Human Feedback (RLHF).
arXiv Detail & Related papers (2023-03-28T17:04:15Z)
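
A minimal sketch of one ILF round as described above. `model.generate`, `model.refine`, `model.finetune`, `get_feedback`, and `score_fn` are hypothetical stand-ins for the paper's components, not its actual code.

```python
def ilf_iteration(model, prompts, get_feedback, score_fn, k: int = 4):
    """One ILF-style round: refine outputs using feedback, keep the best
    refinement per prompt, then fine-tune on the kept pairs. All helper
    objects are hypothetical and duck-typed."""
    training_pairs = []
    for prompt in prompts:
        initial = model.generate(prompt)
        feedback = get_feedback(prompt, initial)  # human or model critique
        # Step 1: condition on (input, initial output, feedback) to sample refinements.
        refinements = [model.refine(prompt, initial, feedback) for _ in range(k)]
        # Step 2: select the refinement that best incorporates the feedback.
        best = max(refinements, key=lambda r: score_fn(r, feedback))
        training_pairs.append((prompt, best))
    # Step 3: fine-tune on the selected refinements given the original inputs.
    model.finetune(training_pairs)
    return model
```

Iterating this loop is what lets each round start from a model already improved by the previous round's feedback.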
- Chain of Hindsight Aligns Language Models with Feedback [62.68665658130472]
We propose a novel technique, Chain of Hindsight, that is easy to optimize and can learn from any form of feedback, regardless of its polarity.
We convert all types of feedback into sequences of sentences, which are then used to fine-tune the model; a toy data-construction example follows this entry.
By doing so, the model is trained to generate outputs based on feedback, while learning to identify and correct negative attributes or errors.
arXiv Detail & Related papers (2023-02-06T10:28:16Z)
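
The sketch below shows the flavor of Chain of Hindsight data construction: a preferred and a dispreferred output are packed into one training sequence, each tagged with a hindsight feedback phrase, so the model learns the contrast between them. The templates are illustrative assumptions, not the paper's exact wording.

```python
def chain_of_hindsight_example(prompt: str, good: str, bad: str) -> str:
    """Build one training sequence presenting both outputs with hindsight
    feedback tags. Tag templates are illustrative assumptions."""
    return (
        f"{prompt}\n"
        f"A helpful answer: {good}\n"
        f"An unhelpful answer: {bad}"
    )

# At inference time, prompting with the positive tag steers generation:
#   f"{prompt}\nA helpful answer:"
example = chain_of_hindsight_example(
    prompt="How do I boil an egg?",
    good="Simmer the egg for 7-9 minutes, then cool it in ice water.",
    bad="Eggs cannot be boiled.",
)
print(example)
```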
- Training Language Models with Natural Language Feedback [51.36137482891037]
We learn from language feedback on model outputs using a three-step learning algorithm.
In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements.
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.
arXiv Detail & Related papers (2022-04-29T15:06:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.