On Faithfulness and Factuality in Abstractive Summarization
- URL: http://arxiv.org/abs/2005.00661v1
- Date: Sat, 2 May 2020 00:09:16 GMT
- Title: On Faithfulness and Factuality in Abstractive Summarization
- Authors: Joshua Maynez and Shashi Narayan and Bernd Bohnet and Ryan McDonald
- Abstract summary: We analyzed the limitations of neural text generation models for abstractive document summarization.
We found that these models are highly prone to hallucinate content that is unfaithful to the input document.
We show that textual entailment measures better correlate with faithfulness than standard metrics.
- Score: 17.261247316769484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well known that the standard likelihood training and approximate
decoding objectives in neural text generation models lead to less human-like
responses for open-ended tasks such as language modeling and story generation.
In this paper we have analyzed limitations of these models for abstractive
document summarization and found that these models are highly prone to
hallucinate content that is unfaithful to the input document. We conducted a
large scale human evaluation of several neural abstractive summarization
systems to better understand the types of hallucinations they produce. Our
human annotators found substantial amounts of hallucinated content in all model
generated summaries. However, our analysis does show that pretrained models are
better summarizers not only in terms of raw metrics, i.e., ROUGE, but also in
generating faithful and factual summaries as evaluated by humans. Furthermore,
we show that textual entailment measures better correlate with faithfulness
than standard metrics, potentially leading the way to automatic evaluation
metrics as well as training and decoding criteria.
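The entailment-based evaluation suggested above can be approximated with off-the-shelf components: ask a pretrained NLI model how strongly the source document entails the summary, and contrast that with an n-gram overlap score. The sketch below is a minimal illustration; the roberta-large-mnli checkpoint, the rouge-score package, and the toy texts are assumptions of this example, not the paper's exact models or protocol.

```python
# Entailment-based faithfulness scoring vs. n-gram overlap (illustrative sketch).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from rouge_score import rouge_scorer

nli_name = "roberta-large-mnli"  # any NLI checkpoint with an "entailment" label
tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name).eval()

def entailment_score(document: str, summary: str) -> float:
    """Probability that the document (premise) entails the summary (hypothesis)."""
    inputs = tokenizer(document, summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
    label2id = {k.lower(): v for k, v in nli_model.config.label2id.items()}
    return probs[label2id["entailment"]].item()

def rouge_l(reference: str, candidate: str) -> float:
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    return scorer.score(reference, candidate)["rougeL"].fmeasure

document = "The company reported record profits in 2019 and expanded into Europe."
reference = "The company had record profits in 2019."
summary = "The company reported record losses in 2019."  # hallucinated "losses"

print("entailment (doc -> summary):", entailment_score(document, summary))  # low
print("ROUGE-L (summary vs reference):", rouge_l(reference, summary))       # still fairly high
```

A low entailment probability flags the hallucinated detail even though the overlap score looks reasonable, which is the behaviour the abstract attributes to entailment-based measures.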
Related papers
- Assessment of Transformer-Based Encoder-Decoder Model for Human-Like Summarization [0.05852077003870416]
This work leverages the transformer-based BART model for human-like summarization.
After training and fine-tuning, the encoder-decoder model is tested on diverse sample articles.
The fine-tuned model's performance is compared with that of the baseline pretrained model.
Empirical results on BBC News articles highlight that gold-standard summaries written by humans are 17% more factually consistent.
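As a rough illustration of this kind of comparison (assuming the public facebook/bart-large-cnn checkpoint and the rouge-score package; the article, reference, and any fine-tuned checkpoint path are placeholders, not the paper's data):

```python
# Summarize an article with a pretrained BART checkpoint and score the output
# against a human-written reference with ROUGE; repeating the run with a
# fine-tuned checkpoint path gives the baseline-vs-fine-tuned comparison.
from transformers import pipeline
from rouge_score import rouge_scorer

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The city council voted on Tuesday to expand the cycling network, "
    "adding 40 kilometres of protected lanes over the next two years."
)
reference_summary = "The council approved a two-year expansion of protected cycling lanes."

generated = summarizer(article, max_length=60, min_length=10)[0]["summary_text"]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(generated)
print(scorer.score(reference_summary, generated))
```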
arXiv Detail & Related papers (2024-10-22T09:25:04Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Leveraging Pretrained Models for Automatic Summarization of Doctor-Patient Conversations [9.184616102949228]
We show that fluent and adequate summaries can be generated with limited training data by fine-tuning BART.
With a carefully chosen fine-tuning dataset, the method is also shown to handle longer conversations effectively.
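A compact sketch of what such fine-tuning can look like with the Hugging Face Seq2SeqTrainer, assuming local JSON files with "dialogue" and "summary" fields; the file names, field names, and hyperparameters are illustrative and not taken from the paper:

```python
# Fine-tune BART on a small dialogue-summary dataset (illustrative setup).
from datasets import load_dataset
from transformers import (BartTokenizerFast, BartForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "facebook/bart-large"
tokenizer = BartTokenizerFast.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Hypothetical files: each record is {"dialogue": "...", "summary": "..."}.
data = load_dataset("json", data_files={"train": "train.json", "validation": "dev.json"})

def preprocess(batch):
    model_inputs = tokenizer(batch["dialogue"], max_length=1024, truncation=True)
    labels = tokenizer(batch["summary"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = data.map(preprocess, batched=True, remove_columns=["dialogue", "summary"])

args = Seq2SeqTrainingArguments(
    output_dir="bart-dialogue-summarizer",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # small effective batch size for limited data
    learning_rate=3e-5,
    num_train_epochs=5,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```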
arXiv Detail & Related papers (2021-09-24T20:18:59Z)
- Improving Faithfulness in Abstractive Summarization with Contrast Candidate Generation and Selection [54.38512834521367]
We study contrast candidate generation and selection as a model-agnostic post-processing technique.
We learn a discriminative correction model by generating alternative candidate summaries.
This model is then used to select the best candidate as the final output summary.
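One way to picture the pipeline, as a rough sketch rather than the authors' trained correction model: build contrast candidates by swapping named entities in a draft summary with entities from the source, then keep the candidate that a pretrained NLI model says the source entails most strongly. The spaCy pipeline and MNLI checkpoint below are stand-in assumptions.

```python
# Contrast candidate generation and selection as post-processing (illustrative).
# Setup: pip install spacy transformers torch && python -m spacy download en_core_web_sm
import spacy
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

nlp = spacy.load("en_core_web_sm")
nli_name = "roberta-large-mnli"
tok = AutoTokenizer.from_pretrained(nli_name)
nli = AutoModelForSequenceClassification.from_pretrained(nli_name).eval()

def entailment(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(nli(**inputs).logits, dim=-1)[0]
    label2id = {k.lower(): v for k, v in nli.config.label2id.items()}
    return probs[label2id["entailment"]].item()

def correct(source: str, draft_summary: str) -> str:
    src_ents = [e.text for e in nlp(source).ents]
    sum_ents = [e.text for e in nlp(draft_summary).ents]
    candidates = [draft_summary]
    for wrong in sum_ents:            # swap each summary entity ...
        for repl in src_ents:         # ... with each entity found in the source
            candidates.append(draft_summary.replace(wrong, repl))
    # Select the candidate the source most strongly entails.
    return max(candidates, key=lambda c: entailment(source, c))

source = "Apple was founded by Steve Jobs and Steve Wozniak in 1976."
draft = "Apple was founded by Bill Gates in 1976."
print(correct(source, draft))  # ideally restores one of the true founders
```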
arXiv Detail & Related papers (2021-04-19T05:39:24Z)
- SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization [14.787106201073154]
SummVis is an open-source tool for visualizing abstractive summaries.
It enables fine-grained analysis of the models, data, and evaluation metrics associated with text summarization.
arXiv Detail & Related papers (2021-04-15T17:13:00Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
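A loose analogue of that span-replacement idea (not the Span-Fact models themselves) can be sketched by detecting entities in a summary, querying an extractive QA model over the source, and substituting the selected span; the spaCy model and the deepset/roberta-base-squad2 checkpoint are assumed stand-ins.

```python
# Entity-level correction via extractive QA over the source (illustrative only).
# Setup: pip install spacy transformers torch && python -m spacy download en_core_web_sm
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def correct_entities(source: str, summary: str) -> str:
    corrected = summary
    for ent in nlp(summary).ents:
        # Blank the entity out and let the QA model pick a span from the source.
        query = summary.replace(ent.text, "what")
        answer = qa(question=query, context=source).get("answer", "")
        if answer:
            corrected = corrected.replace(ent.text, answer)
    return corrected

source = "The merger was approved by regulators in Brussels on Tuesday."
summary = "The merger was approved by regulators in London on Tuesday."
print(correct_entities(source, summary))  # ideally swaps "London" for "Brussels"
```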
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
- SummEval: Re-evaluating Summarization Evaluation [169.622515287256]
We re-evaluate 14 automatic evaluation metrics in a comprehensive and consistent fashion.
We benchmark 23 recent summarization models using the aforementioned automatic evaluation metrics.
We assemble the largest collection of summaries generated by models trained on the CNN/DailyMail news dataset.
arXiv Detail & Related papers (2020-07-24T16:25:19Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
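The synthetic-pair construction can be sketched as follows; simple word dropout and local shuffling stand in for the segment- and document-level noising used in the paper, so treat this only as the shape of the procedure.

```python
# Build (noisy reviews, pseudo-summary) training pairs from an unlabeled corpus.
import random

def noisy_copy(text: str, drop_prob: float = 0.15, shuffle_window: int = 3) -> str:
    tokens = text.split()
    kept = [t for t in tokens if random.random() > drop_prob]  # random word dropout
    for i in range(0, len(kept), shuffle_window):               # local word shuffling
        window = kept[i:i + shuffle_window]
        random.shuffle(window)
        kept[i:i + shuffle_window] = window
    return " ".join(kept)

def make_synthetic_pair(corpus, num_noisy: int = 8):
    pseudo_summary = random.choice(corpus)        # a sampled review acts as the summary
    noisy_reviews = [noisy_copy(pseudo_summary) for _ in range(num_noisy)]
    return noisy_reviews, pseudo_summary          # (inputs, target) for a denoising model

corpus = [
    "Great battery life and a sharp screen, though the camera is mediocre.",
    "The laptop runs quietly and the keyboard feels excellent.",
]
inputs, target = make_synthetic_pair(corpus)
print(target)
print(inputs[0])
```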
arXiv Detail & Related papers (2020-04-21T16:54:57Z)
- Learning by Semantic Similarity Makes Abstractive Summarization Better [13.324006587838522]
We compare summaries generated by a recent pretrained language model, BART, with the reference summaries from a benchmark dataset, CNN/DM.
Interestingly, the model-generated summaries receive higher scores than the reference summaries.
arXiv Detail & Related papers (2020-02-18T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.