Constructive Large Language Models Alignment with Diverse Feedback
- URL: http://arxiv.org/abs/2310.06450v2
- Date: Wed, 11 Oct 2023 07:04:04 GMT
- Title: Constructive Large Language Models Alignment with Diverse Feedback
- Authors: Tianshu Yu, Ting-En Lin, Yuchuan Wu, Min Yang, Fei Huang, Yongbin Li
- Abstract summary: We introduce Constructive and Diverse Feedback (CDF) as a novel method to enhance large language model alignment.
We exploit critique feedback for easy problems, refinement feedback for medium problems, and preference feedback for hard problems.
By training our model with this diversified feedback, we achieve enhanced alignment performance while using less training data.
- Score: 76.9578950893839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent research on large language models (LLMs), there has been a growing
emphasis on aligning these models with human values to reduce the impact of
harmful content. However, current alignment methods often rely solely on
singular forms of human feedback, such as preferences, annotated labels, or
natural language critiques, overlooking the potential advantages of combining
these feedback types. This limitation leads to suboptimal performance, even
when ample training data is available. In this paper, we introduce Constructive
and Diverse Feedback (CDF) as a novel method to enhance LLM alignment, inspired
by constructivist learning theory. Our approach involves collecting three
distinct types of feedback tailored to problems of varying difficulty levels
within the training dataset. Specifically, we exploit critique feedback for
easy problems, refinement feedback for medium problems, and preference feedback
for hard problems. By training our model with this diversified feedback, we
achieve enhanced alignment performance while using less training data. To
assess the effectiveness of CDF, we evaluate it against previous methods in
three downstream tasks: question answering, dialog generation, and text
summarization. Experimental results demonstrate that CDF achieves superior
performance even with a smaller training dataset.
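The difficulty-based routing described in the abstract can be pictured with a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the difficulty score, the thresholds, and the names `estimate_difficulty`, `route_feedback`, and `collectors` are hypothetical placeholders; the abstract only specifies the mapping of easy problems to critique feedback, medium problems to refinement feedback, and hard problems to preference feedback.

```python
# Minimal sketch of CDF-style feedback routing.
# Hypothetical names and thresholds; the abstract only specifies the
# easy -> critique, medium -> refinement, hard -> preference mapping.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class FeedbackExample:
    prompt: str
    feedback_type: str  # "critique" | "refinement" | "preference"
    feedback: dict      # payload depends on the feedback type


def estimate_difficulty(prompt: str) -> float:
    """Placeholder difficulty score in [0, 1]; the paper defines its own criterion."""
    return min(len(prompt) / 1000.0, 1.0)


def route_feedback(prompts: List[str],
                   collectors: Dict[str, Callable[[str], dict]],
                   easy_max: float = 0.33,
                   medium_max: float = 0.66) -> List[FeedbackExample]:
    """Assign each training problem a feedback type according to its difficulty band."""
    examples = []
    for prompt in prompts:
        d = estimate_difficulty(prompt)
        if d <= easy_max:
            kind = "critique"     # easy problems: natural-language critiques
        elif d <= medium_max:
            kind = "refinement"   # medium problems: refined responses
        else:
            kind = "preference"   # hard problems: pairwise preferences
        examples.append(FeedbackExample(prompt, kind, collectors[kind](prompt)))
    return examples
```

The resulting mixed-feedback dataset would then be used for alignment training; the actual difficulty criterion and feedback-collection procedures are those defined in the paper.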
Related papers
- RLVF: Learning from Verbal Feedback without Overgeneralization [94.19501420241188]
We study the problem of incorporating verbal feedback without overgeneralizing it to irrelevant contexts.
We develop a new method, Contextualized Critiques with Constrained Preference Optimization (C3PO).
Our approach effectively applies verbal feedback to relevant scenarios while preserving existing behaviors for other contexts.
arXiv Detail & Related papers (2024-02-16T18:50:24Z)
- Sample Efficient Reinforcement Learning from Human Feedback via Active Exploration [29.935758027209292]
Preference-based feedback is important for many applications in reinforcement learning.
In this work, we take advantage of the fact that one can often choose contexts to obtain human feedback.
We show that our method is able to reach better performance with fewer samples of human preferences than multiple baselines.
arXiv Detail & Related papers (2023-12-01T00:54:02Z)
- CritiqueLLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation [87.44350003888646]
Eval-Instruct can acquire pointwise grading critiques with pseudo references and revise these critiques via multi-path prompting.
CritiqueLLM is empirically shown to outperform ChatGPT and all the open-source baselines.
arXiv Detail & Related papers (2023-11-30T16:52:42Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Training Language Models with Language Feedback at Scale [50.70091340506957]
We introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback.
ILF consists of three steps applied iteratively, the first of which conditions the language model on the input, an initial LM output, and the feedback in order to generate refinements.
We show theoretically that ILF can be viewed as Bayesian inference, similar to Reinforcement Learning from Human Feedback.
arXiv Detail & Related papers (2023-03-28T17:04:15Z)
- Make The Most of Prior Data: A Solution for Interactive Text Summarization with Preference Feedback [15.22874706089491]
We introduce a new framework to train summarization models with preference feedback interactively.
By properly leveraging offline data and a novel reward model, we improve performance in terms of ROUGE scores and sample efficiency.
arXiv Detail & Related papers (2022-04-12T03:56:59Z)