A Critical Evaluation of AI Feedback for Aligning Large Language Models
- URL: http://arxiv.org/abs/2402.12366v1
- Date: Mon, 19 Feb 2024 18:53:54 GMT
- Title: A Critical Evaluation of AI Feedback for Aligning Large Language Models
- Authors: Archit Sharma, Sedrick Keh, Eric Mitchell, Chelsea Finn, Kushal Arora,
Thomas Kollar
- Abstract summary: We show that simple supervised fine-tuning with GPT-4 as the teacher outperforms existing RLAIF pipelines.
More generally, we find that the gains from RLAIF vary substantially across base model families, test-time evaluation protocols, and critic models.
- Score: 60.42291111149438
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning with AI feedback (RLAIF) is a popular paradigm for
improving the instruction-following abilities of powerful pre-trained language
models. RLAIF first performs supervised fine-tuning (SFT) using demonstrations
from a teacher model and then further fine-tunes the model with reinforcement
learning (RL), using feedback from a critic model. While recent popular
open-source models have demonstrated substantial improvements in performance
from the RL step, in this paper we question whether the complexity of this RL
step is truly warranted for AI feedback. We show that the improvements from the
RL step are almost entirely due to the widespread practice of using a weaker
teacher model (e.g., GPT-3.5) for SFT data collection than the critic (e.g.,
GPT-4) used for AI feedback generation. Specifically, we show that simple
supervised fine-tuning with GPT-4 as the teacher outperforms existing RLAIF
pipelines. More generally, we find that the gains from RLAIF vary substantially
across base model families, test-time evaluation protocols, and critic models.
Finally, we provide a mechanistic explanation for when SFT may outperform the
full two-step RLAIF pipeline as well as suggestions for making RLAIF maximally
useful in practice.
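To make the comparison concrete, the following is a minimal Python sketch of the two pipeline configurations the abstract contrasts. Every helper here (collect_demonstrations, collect_preferences, sft, rl) is a hypothetical stand-in, not the authors' code; the model names only mark which system plays the teacher or critic role.

```python
# Minimal sketch of the two pipelines the paper compares. All helpers are
# hypothetical stand-ins, not the authors' code.

def collect_demonstrations(teacher, prompts):
    """Stand-in: the teacher model writes one completion per prompt."""
    return [(p, f"{teacher} answer to: {p}") for p in prompts]

def collect_preferences(critic, prompts):
    """Stand-in: the critic ranks pairs of the policy's own samples."""
    return [(p, "preferred sample", "rejected sample") for p in prompts]

def sft(base, demos):
    """Stand-in for supervised fine-tuning on (prompt, completion) pairs."""
    return f"{base}+SFT({len(demos)} demos)"

def rl(model, prefs):
    """Stand-in for the RL step (e.g., PPO- or DPO-style) on preferences."""
    return f"{model}+RL({len(prefs)} prefs)"

prompts = ["Summarize RLAIF in one sentence."]

# Common practice the paper questions: weak teacher for SFT, strong critic for RL.
rlaif = rl(sft("base", collect_demonstrations("gpt-3.5", prompts)),
           collect_preferences("gpt-4", prompts))

# The paper's headline comparison: plain SFT on the strong model's
# demonstrations, with no RL step at all.
sft_only = sft("base", collect_demonstrations("gpt-4", prompts))

print(rlaif)     # base+SFT(1 demos)+RL(1 prefs)
print(sft_only)  # base+SFT(1 demos)
```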
Related papers
- Self-Evolved Reward Learning for LLMs [45.6910747154447]
Reinforcement Learning from Human Feedback (RLHF) is a crucial technique for aligning language models with human preferences.
We propose Self-Evolved Reward Learning (SER), a novel approach where the RM generates additional training data to iteratively improve itself.
Our results demonstrate that even with limited human-annotated data, learning from self-feedback can robustly enhance RM performance.
arXiv Detail & Related papers (2024-11-01T07:29:03Z)
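A hedged sketch of the self-evolution loop summarized above: the reward model labels unlabeled comparison pairs itself, keeps only its confident labels, and retrains on them. The scoring rule, margin threshold, and update below are illustrative assumptions, not the paper's implementation.

```python
# Toy self-evolved reward-learning loop: the RM labels unlabeled pairs
# itself and retrains on its confident self-labels.

def rm_score(rm, response):
    """Stand-in reward model: a learned bias plus a length feature."""
    return rm["bias"] + 0.01 * len(response)

def self_label(rm, pairs, margin=0.1):
    """Keep only pairs the current RM separates by a clear margin."""
    labeled = []
    for a, b in pairs:
        gap = rm_score(rm, a) - rm_score(rm, b)
        if abs(gap) > margin:
            labeled.append((a, b) if gap > 0 else (b, a))
    return labeled

def retrain(rm, labeled):
    """Stand-in update: nudge the RM toward its own chosen responses."""
    for chosen, rejected in labeled:
        rm["bias"] += 0.001 * (len(chosen) - len(rejected))
    return rm

rm = {"bias": 0.0}
unlabeled = [("a long detailed answer", "short"), ("ok", "a rambling reply")]
for _ in range(3):  # iterative self-evolution rounds
    rm = retrain(rm, self_label(rm, unlabeled))
print(rm)
```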
- Improve Vision Language Model Chain-of-thought Reasoning [86.83335752119741]
Chain-of-thought (CoT) reasoning in vision language models (VLMs) is crucial for improving interpretability and trustworthiness.
We show that training VLM on short answers does not generalize well to reasoning tasks that require more detailed responses.
arXiv Detail & Related papers (2024-10-21T17:00:06Z)
- Training Language Models to Critique With Multi-agent Feedback [102.42751835338233]
The MultiCritique pipeline improves the critique ability of LLMs by utilizing multi-agent feedback.
The pipeline aggregates high-quality critiques from multiple agents instead of relying on a single model.
Our fine-tuned 7B model significantly surpasses other advanced 7B-13B open-source models.
arXiv Detail & Related papers (2024-10-20T04:57:45Z)
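As a rough illustration of the aggregation idea above (not the authors' pipeline), the sketch below has several stand-in critic agents flag issues in the same response and keeps only the critiques a majority of agents raise; the agents and rules are made up.

```python
# Multi-agent critique aggregation, illustrative only.
from collections import Counter

def agent_critique(agent, response):
    """Stand-in critics: each flags the issues it detects."""
    issues = []
    if agent != "lenient" and "step" not in response:
        issues.append("missing intermediate steps")
    if len(response) < 20:
        issues.append("answer too brief")
    return issues

agents = ["strict", "balanced", "lenient"]
response = "The answer is 42."
votes = Counter(issue for a in agents for issue in agent_critique(a, response))

quorum = len(agents) // 2 + 1  # keep critiques a majority agrees on
merged = [issue for issue, n in votes.items() if n >= quorum]
print(merged)  # the (response, merged critique) pair becomes training data
```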
- LLMs-as-Instructors: Learning from Errors Toward Automating Model Improvement [93.38736019287224]
"LLMs-as-Instructors" framework autonomously enhances the training of smaller target models.
Inspired by the theory of "Learning from Errors", this framework employs an instructor LLM to meticulously analyze the specific errors within a target model.
Within this framework, we implement two strategies: "Learning from Error," which focuses solely on incorrect responses to tailor training data, and "Learning from Error by Contrast," which uses contrastive learning to analyze both correct and incorrect responses for a deeper understanding of errors.
arXiv Detail & Related papers (2024-06-29T17:16:04Z)
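The two strategies can be pictured with a toy sketch; the instructor "analysis" below is a string template, and everything in it is an illustrative assumption rather than the paper's implementation.

```python
# Toy sketch of the two learning-from-errors strategies.

def instructor_analyze(question, wrong, right=None):
    """Stand-in instructor LLM that turns an error into a training example."""
    if right is None:  # "Learning from Error": incorrect responses only
        return f"Q: {question}\nAvoid this mistake: {wrong}"
    # "Learning from Error by Contrast": contrast incorrect with correct
    return f"Q: {question}\nWrong: {wrong}\nRight: {right}\nExplain the gap."

records = [("2 + 2 * 2 = ?", "8", "6"), ("Capital of France?", "Lyon", "Paris")]

from_error = [instructor_analyze(q, w) for q, w, r in records]
by_contrast = [instructor_analyze(q, w, r) for q, w, r in records]
print(by_contrast[0])  # both lists would serve as tailored SFT data
```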
- ARES: Alternating Reinforcement Learning and Supervised Fine-Tuning for Enhanced Multi-Modal Chain-of-Thought Reasoning Through Diverse AI Feedback [13.154512864498912]
We propose a two-stage algorithm, ARES, that Alternates REinforcement Learning (RL) and Supervised Fine-Tuning (SFT).
First, we request the Teacher to score how much each sentence contributes to solving the problem in a Chain-of-Thought (CoT).
Second, we ask the Teacher to correct the wrong reasoning after the RL stage. With the correction feedback, we stabilize the RL fine-tuned model through SFT.
arXiv Detail & Related papers (2024-06-25T07:20:11Z)
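A compact sketch of the alternation described above, with the Teacher's sentence scoring and correction stubbed out; all names and update rules are assumptions, not the authors' code.

```python
# Alternating RL and SFT stages, illustrative only.

def teacher_score_sentences(cot):
    """Stand-in: score each sentence's contribution to the solution."""
    return [(s, 1.0 if "therefore" in s else 0.2) for s in cot]

def rl_step(model, sentence_rewards):
    return model + "+RL"   # fine-grained sentence rewards drive the RL update

def teacher_correct(cot):
    """Stand-in: the Teacher rewrites flawed reasoning after the RL stage."""
    return [s.replace("wrong", "corrected") for s in cot]

def sft_step(model, corrected_cot):
    return model + "+SFT"  # corrections stabilize the RL-tuned model

model, cot = "base", ["assume x is wrong", "therefore x = 3"]
for _ in range(2):         # alternate the two stages
    model = rl_step(model, teacher_score_sentences(cot))
    model = sft_step(model, teacher_correct(cot))
print(model)               # base+RL+SFT+RL+SFT
```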
- Teaching Language Models to Self-Improve by Learning from Language Feedback [40.649677201161744]
We present Self-Refinement Tuning (SRT), a method that leverages model feedback for alignment.
SRT uses a base language model (e.g., Tulu2) to generate initial responses, which are critiqued and refined by a more advanced model.
SRT further optimizes the model by learning from its self-generated feedback and refinements, creating a feedback loop that promotes model improvement.
arXiv Detail & Related papers (2024-06-11T11:20:05Z)
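A minimal sketch of the generate-critique-refine loop summarized above; both models and the data format are stand-in assumptions, not the paper's implementation.

```python
# Self-refinement tuning loop, illustrative only.

def base_generate(prompt):
    """Stand-in for the base model (e.g., Tulu2) drafting a response."""
    return f"draft answer to: {prompt}"

def advanced_critique_and_refine(response):
    """Stand-in for the stronger model's critique plus refinement."""
    critique = "too vague; add a concrete example"
    return critique, response + " [refined with a concrete example]"

def finetune(model, triples):
    return f"{model}+SRT({len(triples)} triples)"

triples = []
for p in ["What is RLAIF?"]:
    draft = base_generate(p)
    critique, refined = advanced_critique_and_refine(draft)
    triples.append((draft, critique, refined))  # feedback-loop training data

print(finetune("tulu2", triples))
```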
- Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
arXiv Detail & Related papers (2024-05-28T07:11:05Z)
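A toy numeric sketch of the joint idea above: a linear reward is fit to prefer the demonstration over the current policy sample, while the sample climbs the learned reward. The constants and update rules are illustrative, not the paper's algorithm.

```python
# Joint reward and policy learning from a demonstration, illustrative only.

demo = 1.0     # feature value of the human demonstration
w = 0.0        # reward-model weight
sample = -1.0  # feature value of the current policy sample

for _ in range(50):
    w = 0.9 * w + 0.1 * (demo - sample)  # IRL step: reward prefers the demo
    sample += 0.1 * w                    # policy step: ascend the reward

print(round(sample, 3))  # the policy sample moves toward the demonstration
```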
- HRLAIF: Improvements in Helpfulness and Harmlessness in Open-domain Reinforcement Learning From AI Feedback [47.12549302721597]
We propose Hybrid Reinforcement Learning from AI Feedback (HRLAIF).
This method enhances the accuracy of AI annotations for responses, making the model's helpfulness more robust during training.
HRLAIF inherits the ability of RLAIF to enhance human preference for outcomes at a low cost while also improving the satisfaction rate of responses.
arXiv Detail & Related papers (2024-03-13T07:38:20Z)
- SALMON: Self-Alignment with Instructable Reward Models [80.83323636730341]
This paper presents a novel approach, namely SALMON, to align base language models with minimal human supervision.
We develop an AI assistant named Dromedary-2 with only 6 exemplars for in-context learning and 31 human-defined principles.
arXiv Detail & Related papers (2023-10-09T17:56:53Z)
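As a loose illustration of an instructable reward model (not SALMON's implementation), the sketch below scores a response directly against human-written principles instead of learned preference data; the principles and keyword checks are made up for this example.

```python
# Principle-conditioned reward, illustrative only.

principles = ["be concise", "justify the answer", "avoid harmful content"]

def principled_reward(response):
    """Stand-in judge: +1 for each principle the response appears to meet."""
    checks = {
        "be concise": len(response) < 200,
        "justify the answer": "because" in response,
        "avoid harmful content": "harm" not in response.lower(),
    }
    return sum(checks[p] for p in principles)

# Such principle-conditioned scores would stand in for preference labels
# when training the policy with RL.
print(principled_reward("x = 3 because 2x + 1 = 7."))  # 3
```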