The Current State of Summarization
- URL: http://arxiv.org/abs/2305.04853v2
- Date: Tue, 1 Aug 2023 17:42:59 GMT
- Title: The Current State of Summarization
- Authors: Fabian Retkowski
- Abstract summary: This work aims to concisely indicate the current state of the art in abstractive text summarization.
We outline the current paradigm shifts towards pre-trained encoder-decoder models and large autoregressive language models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the explosive growth of textual information, summarization systems have
become increasingly important. This work aims to concisely indicate the current
state of the art in abstractive text summarization. As part of this, we outline
the current paradigm shifts towards pre-trained encoder-decoder models and
large autoregressive language models. Additionally, we delve further into the
challenges of evaluating summarization systems and the potential of
instruction-tuned models for zero-shot summarization. Finally, we provide a
brief overview of how summarization systems are currently being integrated into
commercial applications.
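To make the zero-shot setting concrete: a pre-trained encoder-decoder summarizer can be applied to unseen text without any task-specific fine-tuning. The minimal sketch below uses the Hugging Face transformers library; the checkpoint name is an illustrative choice, not one prescribed by this work.

```python
# Minimal zero-shot abstractive summarization sketch, assuming the
# `transformers` library. "facebook/bart-large-cnn" is an illustrative
# pre-trained encoder-decoder checkpoint, not one named in the paper.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "With the explosive growth of textual information, summarization "
    "systems have become increasingly important. Recent work has shifted "
    "towards pre-trained encoder-decoder models and large autoregressive "
    "language models."
)

# max_length / min_length bound the generated summary in tokens.
result = summarizer(article, max_length=50, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```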
Related papers
- Controllable Topic-Focused Abstractive Summarization [57.8015120583044]
Controlled abstractive summarization focuses on producing condensed versions of a source article to cover specific aspects.
This paper presents a new Transformer-based architecture capable of producing topic-focused summaries.
arXiv Detail & Related papers (2023-11-12T03:51:38Z)
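The paper above contributes a dedicated Transformer architecture for topic control. As a much weaker but easy-to-run stand-in, a topic hint can simply be prepended to the input of a generic summarizer; the model choice and prompt format below are assumptions, not the paper's method.

```python
# Loose illustration of steering a summary toward a topic via an
# input prefix. This is NOT the paper's architecture (which builds
# topic awareness into the Transformer itself).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def topic_focused_summary(article: str, topic: str) -> str:
    # Prepending the topic biases the model toward topic-relevant
    # content; dedicated architectures give far stronger control.
    out = summarizer(f"Topic: {topic}. {article}",
                     max_length=60, min_length=10, do_sample=False)
    return out[0]["summary_text"]
```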
- Improving Factuality of Abstractive Summarization via Contrastive Reward Learning [77.07192378869776]
We propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics.
Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics.
arXiv Detail & Related papers (2023-07-10T12:01:18Z)
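A generic way to realize such a contrastive objective is a pairwise ranking loss that rewards the model for assigning higher likelihood to candidates a factuality metric prefers. The sketch below is one plausible formulation under that assumption, not the authors' exact framework.

```python
# Pairwise ranking ("contrastive") loss over candidate summaries,
# sketched with PyTorch. log_probs are the summarizer's
# log-likelihoods for N candidates; factuality_scores come from an
# external metric. Generic formulation, not the paper's exact loss.
import torch

def contrastive_factuality_loss(log_probs: torch.Tensor,
                                factuality_scores: torch.Tensor,
                                margin: float = 0.1) -> torch.Tensor:
    loss = log_probs.new_zeros(())
    pairs = 0
    for i in range(len(log_probs)):
        for j in range(len(log_probs)):
            if factuality_scores[i] > factuality_scores[j]:
                # The more factual candidate should out-score the less
                # factual one by at least `margin` in log-likelihood.
                loss = loss + torch.clamp(
                    margin - (log_probs[i] - log_probs[j]), min=0.0)
                pairs += 1
    return loss / max(pairs, 1)
```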
- Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback [57.816210168909286]
We leverage recent progress on textual entailment models to address the problem of factual inconsistency in abstractive summarization systems.
We use reinforcement learning with reference-free, textual entailment rewards to optimize for factual consistency.
Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
arXiv Detail & Related papers (2023-05-31T21:04:04Z)
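A reference-free entailment reward of this kind can be approximated with an off-the-shelf NLI model: treat the source as premise and the generated summary as hypothesis, and use the entailment probability as the reward. The checkpoint below is an assumption for illustration.

```python
# Sketch of a reference-free textual entailment reward, assuming the
# `transformers` library. "roberta-large-mnli" is an illustrative NLI
# checkpoint, not necessarily the one used in the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailment_reward(source: str, summary: str) -> float:
    # Premise = source document, hypothesis = generated summary.
    inputs = tokenizer(source, summary, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    # For this checkpoint the labels are (contradiction, neutral,
    # entailment), so index 2 is the entailment probability.
    return probs[2].item()
```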
- SummIt: Iterative Text Summarization via ChatGPT [12.966825834765814]
We propose SummIt, an iterative text summarization framework based on large language models like ChatGPT.
Our framework enables the model to refine the generated summary iteratively through self-evaluation and feedback.
We also conduct a human evaluation to validate the effectiveness of the iterative refinements and identify a potential issue of over-correction.
arXiv Detail & Related papers (2023-05-24T07:40:06Z)
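The iterative refine-through-feedback loop can be sketched independently of any particular LLM. Below, `llm` is a hypothetical text-in/text-out callable and the prompts are illustrative, not the paper's; the early stop is one way to limit the over-correction issue the authors observe.

```python
# Skeleton of a summarize-evaluate-refine loop in the spirit of
# SummIt. `llm` is a hypothetical stand-in for any chat-capable LLM;
# prompts are illustrative, not the paper's.
from typing import Callable

def iterative_summarize(llm: Callable[[str], str], document: str,
                        max_rounds: int = 3) -> str:
    summary = llm(f"Summarize the following document:\n{document}")
    for _ in range(max_rounds):
        critique = llm(
            "Critique this summary for faithfulness and coverage; "
            "reply 'NO CHANGES' if it is already good.\n"
            f"Document:\n{document}\nSummary:\n{summary}"
        )
        # Stopping early limits the over-correction failure mode noted
        # in the paper's human evaluation.
        if "no changes" in critique.lower():
            break
        summary = llm(
            f"Rewrite the summary, applying this critique:\n{critique}\n"
            f"Document:\n{document}\nSummary:\n{summary}"
        )
    return summary
```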
- Enriching and Controlling Global Semantics for Text Summarization [11.037667460077813]
Transformer-based models have proven effective in the abstractive summarization task by creating fluent and informative summaries.
We introduce a neural topic model empowered with normalizing flow to capture the global semantics of the document, which are then integrated into the summarization model.
Our method outperforms state-of-the-art summarization models on five common text summarization datasets.
arXiv Detail & Related papers (2021-09-22T09:31:50Z)
- StreamHover: Livestream Transcript Summarization and Annotation [54.41877742041611]
We present StreamHover, a framework for annotating and summarizing livestream transcripts.
With a total of over 500 hours of videos annotated with both extractive and abstractive summaries, our benchmark dataset is significantly larger than currently existing annotated corpora.
We show that our model generalizes better and improves performance over strong baselines.
arXiv Detail & Related papers (2021-09-11T02:19:37Z)
- Dialogue Summarization with Supporting Utterance Flow Modeling and Fact Regularization [58.965859508695225]
We propose an end-to-end neural model for dialogue summarization with two novel modules.
The supporting utterance flow modeling module helps to generate a coherent summary by smoothly shifting the focus from earlier utterances to later ones.
The fact regularization module encourages the generated summary to be factually consistent with the ground-truth summary during model training.
arXiv Detail & Related papers (2021-08-03T03:09:25Z)
- Semantic Extractor-Paraphraser based Abstractive Summarization [40.05739160204135]
We propose an extractor-paraphraser based abstractive summarization system that exploits semantic overlap.
Our model outperforms the state-of-the-art baselines in terms of ROUGE, METEOR and word mover similarity (WMS).
arXiv Detail & Related papers (2021-05-04T05:24:28Z)
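ROUGE, the first of the metrics reported above, can be computed with Google's rouge-score package; a minimal sketch with toy strings:

```python
# Minimal ROUGE scoring sketch, assuming the `rouge-score` package
# (pip install rouge-score). The strings are toy examples.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
reference = "the cat sat on the mat"
candidate = "a cat was sitting on the mat"

# score() returns per-type tuples of precision, recall and F1.
for name, s in scorer.score(reference, candidate).items():
    print(f"{name}: P={s.precision:.2f} R={s.recall:.2f} "
          f"F1={s.fmeasure:.2f}")
```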
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
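The mask-then-fill idea behind such correction models can be loosely illustrated with an ordinary masked language model: mask a suspect entity in the summary and re-predict it in the context of the source. Span-Fact itself uses QA-derived span-selection models; the stand-in below only conveys the masking mechanics.

```python
# Loose mask-then-fill illustration, assuming the `transformers`
# library. This is NOT Span-Fact (which uses QA-derived span
# selection); it only shows entity re-prediction in source context.
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

source = "Apple reported record revenue in its fiscal first quarter."
masked_summary = f"{fill.tokenizer.mask_token} reported record revenue."

# Conditioning on the source nudges the masked LM toward the entity
# that is actually supported by the document.
predictions = fill(f"{source} {masked_summary}")
print(predictions[0]["token_str"])  # ideally recovers "Apple"
```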