Factual Error Correction for Abstractive Summarization Models
- URL: http://arxiv.org/abs/2010.08712v2
- Date: Thu, 1 Apr 2021 22:56:36 GMT
- Title: Factual Error Correction for Abstractive Summarization Models
- Authors: Meng Cao, Yue Dong, Jiapeng Wu, Jackie Chi Kit Cheung
- Abstract summary: We propose a post-editing corrector module to correct factual errors in generated summaries.
We show that our model is able to correct factual errors in summaries generated by other neural summarization models.
We also find that transferring from artificial error correction to downstream settings is still very challenging.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural abstractive summarization systems have achieved promising progress,
thanks to the availability of large-scale datasets and models pre-trained with
self-supervised methods. However, ensuring the factual consistency of the
generated summaries for abstractive summarization systems is a challenge. We
propose a post-editing corrector module to address this issue by identifying
and correcting factual errors in generated summaries. The neural corrector
model is pre-trained on artificial examples that are created by applying a
series of heuristic transformations on reference summaries. These
transformations are inspired by an error analysis of state-of-the-art
summarization model outputs. Experimental results show that our model is able
to correct factual errors in summaries generated by other neural summarization
models and outperforms previous models on factual consistency evaluation on the
CNN/DailyMail dataset. We also find that transferring from artificial error
correction to downstream settings is still very challenging.
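The heuristic transformations described above can be illustrated with a minimal sketch. The entity-swap function below is a hypothetical illustration of one such corruption (swapping a named entity in a reference summary for an entity drawn from elsewhere in the corpus), not the authors' exact heuristics; the function name and inputs are assumptions for this example.

```python
import random

def entity_swap(summary, entities, replacement_pool):
    """Corrupt a reference summary by swapping one named entity
    for a different entity drawn from the rest of the corpus.

    summary: reference summary text
    entities: entities found in the summary (e.g. via an NER tagger)
    replacement_pool: entities from other documents to swap in
    """
    if not entities or not replacement_pool:
        return summary, summary  # nothing to corrupt
    target = random.choice(entities)
    candidates = [e for e in replacement_pool if e != target]
    if not candidates:
        return summary, summary
    corrupted = summary.replace(target, random.choice(candidates), 1)
    # (corrupted, summary) forms one artificial training pair:
    # the corrector learns to map the erroneous summary back
    # to the factually consistent reference.
    return corrupted, summary

corrupted, reference = entity_swap(
    "Obama visited Paris on Tuesday.",
    entities=["Obama", "Paris", "Tuesday"],
    replacement_pool=["Merkel", "London", "Friday"],
)
```

In this setup, a corrector model pre-trained on many such (corrupted, reference) pairs can then be applied post hoc to summaries produced by other systems.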
Related papers
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling [56.70682379371534]
We show that our approach vastly outperforms prior methods in correcting erroneous summaries.
Our model -- FactEdit -- improves factuality scores by over 11 points on CNN/DM and over 31 points on XSum.
arXiv Detail & Related papers (2022-10-22T07:16:19Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors [105.12462629663757]
In this work, we aggregate factuality error annotations from nine existing datasets and stratify them according to the underlying summarization model.
We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.
arXiv Detail & Related papers (2022-05-25T15:26:48Z)
- CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in Abstractive Summarization [6.017006996402699]
We study generating abstractive summaries that are faithful and factually consistent with the given articles.
A novel contrastive learning formulation is presented, which leverages both reference summaries, as positive training data, and automatically generated erroneous summaries, as negative training data, to train summarization systems that are better at distinguishing between them.
arXiv Detail & Related papers (2021-09-19T20:05:21Z)
- Annotating and Modeling Fine-grained Factuality in Summarization [36.88018450067003]
A major barrier to the practical use of summarization models is their propensity to output summaries that are not faithful to the input and that contain factual errors.
We explore both synthetic and human-labeled data sources for training models to identify factual errors in summarization.
We show that our best factuality detection model enables training of more factual XSum summarization models by allowing us to identify non-factual tokens in the training data.
arXiv Detail & Related papers (2021-04-09T11:20:44Z)
- Recoding latent sentence representations -- Dynamic gradient-based activation modification in RNNs [0.0]
In RNNs, encoding information in a suboptimal way can impact the quality of representations based on later elements in the sequence.
I propose an augmentation to standard RNNs in the form of a gradient-based correction mechanism.
I conduct different experiments in the context of language modeling, where the impact of using such a mechanism is examined in detail.
arXiv Detail & Related papers (2021-01-03T17:54:17Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.