A New Approach to Overgenerating and Scoring Abstractive Summaries
- URL: http://arxiv.org/abs/2104.01726v1
- Date: Mon, 5 Apr 2021 00:29:45 GMT
- Title: A New Approach to Overgenerating and Scoring Abstractive Summaries
- Authors: Kaiqiang Song and Bingqing Wang and Zhe Feng and Fei Liu
- Abstract summary: We propose a two-stage strategy to generate a diverse set of candidate summaries from the source text in stage one, then score and select admissible ones in stage two.
Our generator gives precise control over the length of the summary, which is especially useful when space is limited.
Our selectors are designed to predict the optimal summary length and put special emphasis on faithfulness to the original text.
- Score: 9.060597430218378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach to generate multiple variants of the target summary
with diverse content and varying lengths, then score and select admissible ones
according to users' needs. Abstractive summarizers trained on single reference
summaries may struggle to produce outputs that achieve multiple desirable
properties, i.e., capturing the most important information, remaining faithful
to the original, and being grammatical and fluent. In this paper, we propose a
two-stage
strategy to generate a diverse set of candidate summaries from the source text
in stage one, then score and select admissible ones in stage two. Importantly,
our generator gives precise control over the length of the summary, which is
especially well-suited when space is limited. Our selectors are designed to
predict the optimal summary length and put special emphasis on faithfulness to
the original text. Both stages can be effectively trained, optimized and
evaluated. Our experiments on benchmark summarization datasets suggest that
this paradigm can achieve state-of-the-art performance.
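To make the two-stage paradigm concrete, below is a minimal sketch of an overgenerate-then-select loop, assuming a generic Hugging Face seq2seq summarizer. The selector here is a crude length-and-overlap heuristic standing in for the paper's trained length and faithfulness selectors; it illustrates the control flow, not the authors' implementation.

```python
# Sketch of overgenerate-then-select. Stage one overgenerates candidates
# at several length budgets; stage two scores and picks one. The model
# name and the heuristic selector are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

def overgenerate(source, length_budgets=((10, 30), (30, 60), (60, 120))):
    """Stage one: beam-search candidates under several length budgets."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    candidates = []
    for min_len, max_len in length_budgets:
        outputs = model.generate(
            **inputs,
            num_beams=4,
            num_return_sequences=4,
            min_length=min_len,
            max_length=max_len,
        )
        candidates += [tokenizer.decode(o, skip_special_tokens=True)
                       for o in outputs]
    return candidates

def select(source, candidates, target_len=40):
    """Stage two (placeholder): reward token overlap with the source
    (a crude proxy for faithfulness) and closeness to the target length."""
    src_tokens = set(source.lower().split())
    def score(candidate):
        tokens = candidate.lower().split()
        overlap = sum(t in src_tokens for t in tokens) / max(len(tokens), 1)
        return overlap / (1.0 + abs(len(tokens) - target_len))
    return max(candidates, key=score)
```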
Related papers
- Hit the Sweet Spot! Span-Level Ensemble for Large Language Models [8.34562564266839]
We propose SweetSpan, a span-level ensemble method that effectively balances the need for real-time adjustments and the information required for accurate ensemble decisions.
Our approach involves two key steps: First, we have each candidate model independently generate candidate spans based on the shared prefix.
Second, we calculate perplexity scores to facilitate mutual evaluation among the candidate models and achieve robust span selection by filtering out unfaithful scores.
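A hedged sketch of this span-level ensemble idea follows, using two small causal LMs that share a vocabulary as stand-ins for the candidate models; the paper additionally filters outlier scores before selection, which this toy version omits.

```python
# Toy span-level ensemble: each model proposes a span from the shared
# prefix, every model scores every span by perplexity, and the span
# with the lowest mean perplexity wins. Models and span length are
# illustrative; the paper's score-filtering step is omitted here.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # shared vocabulary
models = [AutoModelForCausalLM.from_pretrained(n).eval()
          for n in ("gpt2", "distilgpt2")]

def propose_span(model, prefix_ids, span_len=16):
    """One candidate span, generated greedily from the shared prefix."""
    out = model.generate(prefix_ids, max_new_tokens=span_len,
                         do_sample=False, pad_token_id=tokenizer.eos_token_id)
    return out[:, prefix_ids.shape[1]:]

@torch.no_grad()
def span_perplexity(model, prefix_ids, span_ids):
    """Perplexity of the span tokens only, conditioned on the prefix."""
    ids = torch.cat([prefix_ids, span_ids], dim=1)
    labels = ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # ignore prefix positions
    return torch.exp(model(ids, labels=labels).loss).item()

prefix_ids = tokenizer("The committee concluded that",
                       return_tensors="pt").input_ids
spans = [propose_span(m, prefix_ids) for m in models]
mean_ppl = [sum(span_perplexity(m, prefix_ids, s) for m in models) / len(models)
            for s in spans]
best_span = spans[mean_ppl.index(min(mean_ppl))]
print(tokenizer.decode(best_span[0], skip_special_tokens=True))
```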
arXiv Detail & Related papers (2024-09-27T09:41:29Z)
- Attributable and Scalable Opinion Summarization [79.87892048285819]
We generate abstractive summaries by decoding frequent encodings, and extractive summaries by selecting the sentences assigned to the same frequent encodings.
Our method is attributable, because the model identifies sentences used to generate the summary as part of the summarization process.
It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens.
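A minimal sketch of this aggregate-in-latent-space idea is below, with TF-IDF vectors and k-means codes standing in for the paper's learned discrete sentence encodings; it shows the frequency-based selection and the attribution bookkeeping only.

```python
# Cluster sentence vectors into discrete "codes", keep the most frequent
# codes, and extract one representative sentence per code, remembering
# which sentences support it. TF-IDF + k-means are stand-ins for the
# paper's learned latent encodings.
from collections import Counter
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def extract_with_attribution(sentences, n_codes=8, n_select=3):
    vectors = TfidfVectorizer().fit_transform(sentences)
    codes = KMeans(n_clusters=n_codes, n_init=10).fit_predict(vectors)
    frequent = [code for code, _ in Counter(codes).most_common(n_select)]
    summary, attribution = [], {}
    for code in frequent:
        members = [i for i, c in enumerate(codes) if c == code]
        summary.append(sentences[members[0]])         # one representative
        attribution[sentences[members[0]]] = members  # supporting sentences
    return summary, attribution
```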
arXiv Detail & Related papers (2023-05-19T11:30:37Z)
- UniSumm and SummZoo: Unified Model and Diverse Benchmark for Few-Shot Summarization [54.59104881168188]
UniSumm is a unified few-shot summarization model pre-trained with multiple summarization tasks.
SummZoo is a new benchmark to better evaluate few-shot summarizers.
arXiv Detail & Related papers (2022-11-17T18:54:47Z)
- Controlled Text Reduction [15.102190738450092]
We formalize Controlled Text Reduction as a standalone task.
A model then needs to generate a coherent text that includes all and only the target information.
arXiv Detail & Related papers (2022-10-24T17:59:03Z)
- Towards Summary Candidates Fusion [26.114829566197976]
We propose a new paradigm in second-stage abstractive summarization called SummaFusion.
It fuses several summary candidates to produce a novel abstractive second-stage summary.
Our method works well on several summarization datasets, improving both the ROUGE scores and qualitative properties of fused summaries.
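To illustrate the data flow of second-stage fusion, the sketch below concatenates the candidates and re-summarizes them with an off-the-shelf model; SummaFusion itself trains a dedicated fusion encoder-decoder, so the model choice and separator here are assumptions.

```python
# Concatenate the first-stage candidates and generate one fused summary.
# An off-the-shelf summarizer stands in for SummaFusion's trained
# fusion model; the "</s>" separator is likewise an assumption.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def fuse_candidates(candidates, max_length=60):
    joined = " </s> ".join(candidates)  # mark candidate boundaries
    result = summarizer(joined, max_length=max_length, min_length=10)
    return result[0]["summary_text"]
```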
arXiv Detail & Related papers (2022-10-17T06:48:05Z)
- Unsupervised Summarization with Customized Granularities [76.26899748972423]
We propose the first unsupervised multi-granularity summarization framework, GranuSum.
By inputting different numbers of events, GranuSum is capable of producing multi-granular summaries in an unsupervised manner.
arXiv Detail & Related papers (2022-01-29T05:56:35Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Controllable Abstractive Dialogue Summarization with Sketch Supervision [56.59357883827276]
Our model achieves state-of-the-art performance on SAMSum, the largest dialogue summarization corpus, with a ROUGE-L score as high as 50.79.
arXiv Detail & Related papers (2021-05-28T19:05:36Z)
- Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries [46.183289748907804]
We propose SOE, a pipelined system that summarizes, outlines, and elaborates for long-text generation.
SOE produces long texts of significantly better quality and converges faster.
arXiv Detail & Related papers (2020-10-14T13:22:20Z)
- Exploring Explainable Selection to Control Abstractive Summarization [51.74889133688111]
We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
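A small sketch of building a pair-wise sentence matrix and deriving a centrality score from it follows, with TF-IDF cosine similarity and degree centrality as stand-ins for the framework's learned interaction and attribute scores.

```python
# Pair-wise sentence interaction matrix plus a degree-centrality score.
# TF-IDF cosine similarity replaces the learned interactions; the
# paper's attribute scores are not modeled here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def centrality_scores(sentences):
    vectors = TfidfVectorizer().fit_transform(sentences)
    interactions = cosine_similarity(vectors)   # pair-wise matrix
    np.fill_diagonal(interactions, 0.0)         # ignore self-similarity
    centrality = interactions.sum(axis=1)       # degree centrality
    top = centrality.max()
    return centrality / top if top > 0 else centrality
```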
arXiv Detail & Related papers (2020-04-24T14:39:34Z)