CO2Sum: Contrastive Learning for Factual-Consistent Abstractive
Summarization
- URL: http://arxiv.org/abs/2112.01147v1
- Date: Thu, 2 Dec 2021 11:52:01 GMT
- Title: CO2Sum: Contrastive Learning for Factual-Consistent Abstractive
Summarization
- Authors: Wei Liu, Huanqin Wu, Wenjing Mu, Zhen Li, Tao Chen, Dan Nie
- Abstract summary: CO2Sum is a contrastive learning scheme that can be easily applied to sequence-to-sequence models.
Experiments on public benchmarks demonstrate that CO2Sum improves the faithfulness of large pre-trained language models.
- Score: 11.033949886530225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating factually consistent summaries is a challenging task for
abstractive summarization. Previous works mainly encode factual information or
perform post-correction/re-ranking after decoding. In this paper, we provide a
factual-consistency solution from the perspective of contrastive learning,
which is a natural extension of previous works. We propose CO2Sum (Contrastive
for Consistency), a contrastive learning scheme that can be easily applied to
sequence-to-sequence models for factual-consistent abstractive summarization,
showing that a model can be made fact-aware without modifying its architecture.
CO2Sum applies contrastive learning on the encoder, which helps the model
attend to the factual information contained in the input article, or on the
decoder, which encourages the model to generate factually correct summaries.
Moreover, these two schemes are orthogonal and can be combined to further
improve faithfulness. Comprehensive experiments on public benchmarks
demonstrate that CO2Sum improves the faithfulness of large pre-trained language
models and reaches results competitive with other strong factual-consistent
summarization baselines.
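The abstract describes contrastive learning applied to encoder or decoder representations, but gives no formula here. A minimal, hedged sketch of the kind of sequence-level contrastive objective such schemes typically use is the InfoNCE-style loss below; the function name and the assumption of L2-normalized embedding vectors are illustrative, not taken from the paper.

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch, not the
    paper's exact objective): pull the embedding of the factually
    consistent summary toward the article embedding and push
    fact-corrupted negatives away. Vectors are assumed L2-normalized,
    so dot products are cosine similarities."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    pos = dot(anchor, positive) / temperature
    negs = [dot(anchor, n) / temperature for n in negatives]
    # Cross-entropy with the positive treated as the correct "class".
    return -pos + math.log(sum(math.exp(s) for s in [pos] + negs))
```

The loss is near zero when the positive is far more similar to the anchor than any negative, and grows as negatives become competitive, which is the behavior any encoder- or decoder-side contrastive scheme would rely on.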
Related papers
- Improving Factuality of Abstractive Summarization via Contrastive Reward
Learning [77.07192378869776]
We propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics.
Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics.
arXiv Detail & Related papers (2023-07-10T12:01:18Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of
Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate across the cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - Correcting Diverse Factual Errors in Abstractive Summarization via
Post-Editing and Language Model Infilling [56.70682379371534]
We show that our approach vastly outperforms prior methods in correcting erroneous summaries.
Our model -- FactEdit -- improves factuality scores by over 11 points on CNN/DM and over 31 points on XSum.
arXiv Detail & Related papers (2022-10-22T07:16:19Z) - Towards Improving Faithfulness in Abstractive Summarization [37.19777407790153]
We propose a Faithfulness Enhanced Summarization model (FES) to improve fidelity in abstractive summarization.
Our model outperforms strong baselines in experiments on CNN/DM and XSum.
arXiv Detail & Related papers (2022-10-04T19:52:09Z) - FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for
Abstractive Summarization [91.46015013816083]
We present FactPEGASUS, an abstractive summarization model that addresses the problem of factuality during pre-training and fine-tuning.
Our analysis suggests FactPEGASUS is more factual than using the original pre-training objective in zero-shot and few-shot settings.
arXiv Detail & Related papers (2022-05-16T17:39:14Z) - CLIFF: Contrastive Learning for Improving Faithfulness and Factuality in
Abstractive Summarization [6.017006996402699]
We study generating abstractive summaries that are faithful and factually consistent with the given articles.
A novel contrastive learning formulation is presented, which leverages both reference summaries, as positive training data, and automatically generated erroneous summaries, as negative training data, to train summarization systems that are better at distinguishing between them.
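The CLIFF summary mentions automatically generated erroneous summaries used as negative training data. One common way to build such negatives is entity swapping; the sketch below illustrates only that one strategy, with a hypothetical function name (CLIFF itself combines several perturbation strategies).

```python
import random

def make_negative_summary(summary, entities, seed=0):
    """Illustrative sketch of fact corruption by entity swapping:
    replace one entity mention in the reference summary with a
    different entity drawn from the source article. Returns None
    when no swap is possible."""
    rng = random.Random(seed)
    # Entities from the article that actually appear in the summary.
    present = [e for e in entities if e in summary]
    if not present or len(entities) < 2:
        return None  # nothing to corrupt
    target = rng.choice(present)
    replacement = rng.choice([e for e in entities if e != target])
    # Swap only the first mention to keep the corruption localized.
    return summary.replace(target, replacement, 1)
```

Pairing each reference summary with such corrupted variants gives the positive/negative training data a contrastive objective needs to learn to tell them apart.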
arXiv Detail & Related papers (2021-09-19T20:05:21Z) - Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z) - Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven
Cloze Reward [42.925345819778656]
We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders (a sequential document encoder and a graph-structured encoder) to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without the knowledge graph as input on both the New York Times and CNN/Daily Mail datasets.
arXiv Detail & Related papers (2020-05-03T18:23:06Z) - Enhancing Factual Consistency of Abstractive Summarization [57.67609672082137]
We propose a fact-aware summarization model FASum to extract and integrate factual relations into the summary generation process.
We then design a factual corrector model FC to automatically correct factual errors from summaries generated by existing systems.
arXiv Detail & Related papers (2020-03-19T07:36:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences arising from its use.