Learning to summarize from human feedback
- URL: http://arxiv.org/abs/2009.01325v3
- Date: Tue, 15 Feb 2022 19:09:36 GMT
- Title: Learning to summarize from human feedback
- Authors: Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe,
Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano
- Abstract summary: We show that it is possible to significantly improve summary quality by training a model to optimize for human preferences.
We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone.
Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning.
- Score: 18.964548137315333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As language models become more powerful, training and evaluation are
increasingly bottlenecked by the data and metrics used for a particular task.
For example, summarization models are often trained to predict human reference
summaries and evaluated using ROUGE, but both of these metrics are rough
proxies for what we really care about -- summary quality. In this work, we show
that it is possible to significantly improve summary quality by training a
model to optimize for human preferences. We collect a large, high-quality
dataset of human comparisons between summaries, train a model to predict the
human-preferred summary, and use that model as a reward function to fine-tune a
summarization policy using reinforcement learning. We apply our method to a
version of the TL;DR dataset of Reddit posts and find that our models
significantly outperform both human reference summaries and much larger models
fine-tuned with supervised learning alone. Our models also transfer to CNN/DM
news articles, producing summaries nearly as good as the human reference
without any news-specific fine-tuning. We conduct extensive analyses to
understand our human feedback dataset and fine-tuned models. We establish that
our reward model generalizes to new datasets, and that optimizing our reward
model results in better summaries than optimizing ROUGE according to humans. We
hope the evidence from our paper motivates machine learning researchers to pay
closer attention to how their training loss affects the model behavior they
actually want.
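The pipeline the abstract describes (human comparisons, a learned reward model, then RL fine-tuning) can be made concrete with a minimal sketch of the pairwise reward-model objective. The class and field names below are illustrative assumptions, not the authors' code:

```python
# Minimal sketch of the pairwise reward-model objective described in the
# abstract. Given a human comparison between two summaries of the same post,
# we train a scalar reward model so the preferred summary scores higher.
# `RewardModel`, the encoder interface, and the tensor shapes are assumptions
# for illustration, not the authors' actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """A pretrained LM body with a linear head that emits one scalar reward."""

    def __init__(self, encoder: nn.Module, hidden_size: int):
        super().__init__()
        self.encoder = encoder  # assumed to map (batch, seq) ids -> (batch, seq, hidden)
        self.value_head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(input_ids)                    # (batch, seq, hidden)
        return self.value_head(hidden[:, -1]).squeeze(-1)   # reward at final token

def preference_loss(model: RewardModel,
                    chosen_ids: torch.Tensor,
                    rejected_ids: torch.Tensor) -> torch.Tensor:
    """Maximize log-likelihood that the human-preferred summary wins the comparison."""
    reward_gap = model(chosen_ids) - model(rejected_ids)
    return -F.logsigmoid(reward_gap).mean()
```

The trained reward model then supplies the reward for reinforcement-learning fine-tuning of the summarization policy; the paper uses PPO with a KL penalty toward the supervised baseline to keep generations on-distribution.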
Related papers
- Model-based Preference Optimization in Abstractive Summarization without Human Feedback [5.438770095369458]
We introduce Model-based Preference Optimization (MPO) to fine-tune Large Language Models for improved summarization abilities without any human feedback.
Our experiments on standard summarization datasets and various metrics demonstrate that our proposed MPO significantly enhances the quality of generated summaries without relying on human feedback.
arXiv Detail & Related papers (2024-09-27T10:35:45Z)
- Weak Reward Model Transforms Generative Models into Robust Causal Event Extraction Systems [17.10762463903638]
We train evaluation models to approximate human evaluation, achieving high agreement.
We propose a weak-to-strong supervision method that uses a fraction of the annotated data to train an evaluation model.
arXiv Detail & Related papers (2024-06-26T10:48:14Z)
- RewardBench: Evaluating Reward Models for Language Modeling [100.28366840977966]
We present RewardBench, a benchmark dataset and code-base for evaluation of reward models.
The dataset is a collection of prompt-chosen-rejected trios spanning chat, reasoning, and safety.
On the RewardBench leaderboard, we evaluate reward models trained with a variety of methods.
arXiv Detail & Related papers (2024-03-20T17:49:54Z)
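RewardBench's prompt-chosen-rejected trios (above) suggest a simple record layout and a natural accuracy metric. The field names and reward-function interface here are assumptions for illustration, not the benchmark's actual schema or API:

```python
# Hypothetical shape of one RewardBench-style comparison record plus the
# natural accuracy metric over a set of them. Field names and the reward
# function interface are illustrative assumptions, not the benchmark's schema.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PreferenceTrio:
    prompt: str    # a chat, reasoning, or safety prompt
    chosen: str    # the response annotators preferred
    rejected: str  # the dispreferred response

def accuracy(reward_fn: Callable[[str, str], float],
             trios: list[PreferenceTrio]) -> float:
    """Fraction of trios where the model scores `chosen` above `rejected`."""
    wins = sum(reward_fn(t.prompt, t.chosen) > reward_fn(t.prompt, t.rejected)
               for t in trios)
    return wins / len(trios)
```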
- Information-Theoretic Distillation for Reference-less Summarization [67.51150817011617]
We present a novel framework to distill a powerful summarizer based on an information-theoretic objective for summarization.
We start from Pythia-2.8B as the teacher model, which is not yet capable of summarization.
We arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT.
arXiv Detail & Related papers (2024-03-20T17:42:08Z)
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
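The "contrastive learning" line in the entry above can be read as widening the score gap between chosen and rejected responses. A minimal margin-based sketch of that idea follows; it is an assumption about the general technique, not the paper's actual loss:

```python
# One plausible reading of "contrastive learning" for reward modeling: a
# margin-ranking loss that is only satisfied once the chosen response
# outscores the rejected one by at least `margin`. Illustrative assumption,
# not the loss from the paper above.
import torch
import torch.nn.functional as F

def margin_preference_loss(r_chosen: torch.Tensor,
                           r_rejected: torch.Tensor,
                           margin: float = 1.0) -> torch.Tensor:
    """Zero loss once r_chosen - r_rejected >= margin; linear penalty otherwise."""
    return F.relu(margin - (r_chosen - r_rejected)).mean()
```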
- Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models [115.501751261878]
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice.
We investigate whether we can go beyond human data on tasks where we have access to scalar feedback.
We find that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data.
arXiv Detail & Related papers (2023-12-11T18:17:43Z)
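The ReST$^{EM}$ summary above amounts to an expectation-maximization-style self-training loop: sample candidates, keep those that score well under scalar feedback, fine-tune on the survivors, and repeat. A schematic sketch, in which every callable is a placeholder assumption rather than the paper's implementation:

```python
# Schematic ReST-EM-style self-training loop as described in the summary:
# generate samples (E-step), filter them with scalar feedback, fine-tune on
# the survivors (M-step), and repeat. All callables are placeholder
# assumptions, not the paper's actual code.
from typing import Callable

def rest_em_loop(generate: Callable[[str, int], list[str]],     # prompt, k -> samples
                 score: Callable[[str, str], float],            # prompt, sample -> feedback
                 fine_tune: Callable[[list[tuple[str, str]]], None],
                 prompts: list[str],
                 threshold: float,
                 n_iters: int = 3,
                 k_samples: int = 8) -> None:
    for _ in range(n_iters):
        # E-step: sample candidates and keep those passing the feedback filter.
        kept = [(p, s)
                for p in prompts
                for s in generate(p, k_samples)
                if score(p, s) >= threshold]
        # M-step: supervised fine-tuning on the kept (prompt, sample) pairs.
        fine_tune(kept)
```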
- Inverse Reinforcement Learning for Text Summarization [52.765898203824975]
We introduce inverse reinforcement learning (IRL) as an effective paradigm for training abstractive summarization models.
Experimental results across datasets in different domains demonstrate the superiority of our proposed IRL model for summarization over MLE and RL baselines.
arXiv Detail & Related papers (2022-12-19T23:45:05Z)
- Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation [101.26235068460551]
Models pretrained with self-supervised objectives on large text corpora achieve state-of-the-art performance on English text summarization tasks.
Models are typically fine-tuned on hundreds of thousands of data points, an infeasible requirement when applying summarization to new, niche domains.
We introduce a novel and generalizable method, called WikiTransfer, for fine-tuning pretrained models for summarization in an unsupervised, dataset-specific manner.
arXiv Detail & Related papers (2020-10-24T08:36:49Z)
- Learning by Semantic Similarity Makes Abstractive Summarization Better [13.324006587838522]
We compare summaries generated by a recent language model, BART, with the reference summaries from a benchmark dataset, CNN/DM.
Interestingly, model-generated summaries receive higher scores relative to reference summaries.
arXiv Detail & Related papers (2020-02-18T17:59:02Z)
- Abstractive Summarization for Low Resource Data using Domain Transfer and Data Synthesis [1.148539813252112]
We explore using domain transfer and data synthesis to improve the performance of recent abstractive summarization methods.
We show that tuning a state-of-the-art model trained on newspaper data can boost performance on student reflection data.
We propose a template-based model to synthesize new data, which, when incorporated into training, further increases ROUGE scores.
arXiv Detail & Related papers (2020-02-09T17:49:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.