Better Highlighting: Creating Sub-Sentence Summary Highlights
- URL: http://arxiv.org/abs/2010.10566v1
- Date: Tue, 20 Oct 2020 18:57:42 GMT
- Title: Better Highlighting: Creating Sub-Sentence Summary Highlights
- Authors: Sangwoo Cho and Kaiqiang Song and Chen Li and Dong Yu and Hassan Foroosh and Fei Liu
- Abstract summary: We present a new method to produce self-contained highlights that are understandable on their own to avoid confusion.
Our method combines determinantal point processes and deep contextualized representations to identify an optimal set of sub-sentence segments.
To demonstrate the flexibility and modeling power of our method, we conduct extensive experiments on summarization datasets.
- Score: 40.46639471959677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Highlighting is among the best means of summarizing. In this paper, we aim to generate summary highlights to be overlaid on the original documents, making it easier for readers to sift through a large amount of text. The method allows summaries to be understood in context, preventing a summarizer from distorting the original meaning, a failure mode that abstractive summarizers are prone to. In particular, we present a new method to produce self-contained highlights that are understandable on their own, avoiding confusion. Our method combines determinantal point processes and deep contextualized representations to identify an optimal set of sub-sentence segments that are both important and non-redundant, which together form the summary highlights. To demonstrate the flexibility and modeling power of our method, we conduct extensive experiments on summarization datasets. Our analysis provides evidence that highlighting is a promising avenue for future summarization research.
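The selection step named in the abstract can be sketched concretely. A determinantal point process (DPP) assigns high probability to subsets of segments that are individually important and mutually dissimilar, and a common approximation is greedy MAP inference over an L-ensemble kernel built from quality scores and embedding similarities. The NumPy sketch below is a minimal illustration under those standard assumptions, not the authors' implementation; the names (greedy_dpp_select, quality, embeddings) are ours, and the paper's actual kernel construction and segment extraction differ.

```python
import numpy as np

def greedy_dpp_select(embeddings, quality, k):
    """Greedy MAP inference for an L-ensemble DPP.

    embeddings : (n, d) unit-normalized segment vectors
                 (e.g. pooled from a contextualized encoder such as BERT).
    quality    : (n,) positive importance scores per segment.
    k          : maximum number of segments to highlight.
    Returns indices of segments that are individually important and
    mutually dissimilar, which is what the DPP objective rewards.
    """
    # L-ensemble kernel: L_ij = q_i * <e_i, e_j> * q_j
    sim = embeddings @ embeddings.T
    L = quality[:, None] * sim * quality[None, :]

    selected, remaining = [], list(range(len(quality)))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # log-determinant of the candidate subset's kernel submatrix
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:  # no candidate keeps the determinant positive
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: 6 random "segments" with random importance scores
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 32))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
print(greedy_dpp_select(emb, rng.uniform(0.1, 1.0, size=6), k=3))
```

This naive loop re-evaluates a determinant per candidate each round; practical DPP implementations update a Cholesky factorization incrementally, but the selection returned is the same.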
Related papers
- Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback [57.816210168909286]
We leverage recent progress on textual entailment models to address factual inconsistency in abstractive summarization systems.
We use reinforcement learning with reference-free, textual entailment rewards to optimize for factual consistency.
Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.
arXiv Detail & Related papers (2023-05-31T21:04:04Z)
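As a concrete illustration of the entry above, a reference-free entailment reward can be computed with any off-the-shelf NLI model. The sketch below uses roberta-large-mnli purely as a convenient assumption (the paper's entailment model and reward shaping may differ), and entailment_reward is a hypothetical name.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: any off-the-shelf NLI model can play the reward model here.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def entailment_reward(source: str, summary: str) -> float:
    """Reference-free reward: P(source entails summary) under the NLI model."""
    inputs = tokenizer(source, summary, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = nli_model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    return probs[2].item()  # roberta-large-mnli index 2 = ENTAILMENT
```

In RL fine-tuning, this scalar would serve as the sequence-level reward for a policy-gradient method such as PPO.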
- SummIt: Iterative Text Summarization via ChatGPT [12.966825834765814]
We propose SummIt, an iterative text summarization framework based on large language models like ChatGPT.
Our framework enables the model to refine the generated summary iteratively through self-evaluation and feedback.
We also conduct a human evaluation to validate the effectiveness of the iterative refinements and identify a potential issue of over-correction.
arXiv Detail & Related papers (2023-05-24T07:40:06Z)
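The iterate-evaluate-refine loop that SummIt describes can be sketched generically. Below, llm stands for any prompt-to-text callable (e.g. a ChatGPT wrapper); the prompts and the crude stopping rule are illustrative assumptions rather than the framework's actual ones, and capping iterations is one simple guard against the over-correction issue noted above.

```python
def iterative_summarize(document: str, llm, max_iters: int = 3) -> str:
    """Generic iterate-evaluate-refine loop in the spirit of SummIt.

    `llm` is any callable mapping a prompt string to a completion string.
    Prompts and the stopping rule below are illustrative assumptions.
    """
    summary = llm(f"Summarize the following document:\n{document}")
    for _ in range(max_iters):
        feedback = llm(
            "Evaluate this summary for faithfulness and coverage and suggest "
            f"concrete fixes, or reply NO CHANGES.\nDocument:\n{document}\n"
            f"Summary:\n{summary}"
        )
        if "no changes" in feedback.lower():
            break  # stop refining; the iteration cap also limits over-correction
        summary = llm(
            f"Revise the summary according to the feedback.\nDocument:\n{document}\n"
            f"Summary:\n{summary}\nFeedback:\n{feedback}"
        )
    return summary
```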
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143]
We propose a novel summarization approach with flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON).
SEASON utilizes the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different levels of abstractiveness.
arXiv Detail & Related papers (2022-10-22T02:13:44Z)
- A General Contextualized Rewriting Framework for Text Summarization [15.311467109946571]
Existing rewriting systems take each extractive sentence as the only input, which is relatively focused but can lose necessary background knowledge and discourse context.
We formalize contextualized rewriting as a seq2seq problem with group-tag alignments, identifying extractive sentences through content-based addressing.
Results show that our approach significantly outperforms non-contextualized rewriting systems without requiring reinforcement learning.
arXiv Detail & Related papers (2022-07-13T03:55:57Z)
- A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization [18.763290930749235]
Summarization is the process of computationally shortening a set of textual data to create a subset (a summary).
Existing summarization methods can be roughly divided into two types: extractive and abstractive.
An extractive summarizer explicitly selects text snippets from the source document, while an abstractive summarizer generates novel text snippets to convey the most salient concepts prevalent in the source.
arXiv Detail & Related papers (2022-04-20T14:56:36Z)
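The extractive/abstractive distinction drawn in the survey can be made tangible with a toy extractive summarizer: score each sentence against the document's TF-IDF centroid and keep the top k. This is a deliberately simple sketch of "explicitly selecting text snippets from the source", not a method from the survey; modern extractive systems use learned scorers.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(sentences, k=3):
    """Toy extractive summarizer: keep the k sentences whose TF-IDF
    vectors are most similar to the document centroid."""
    tfidf = TfidfVectorizer().fit_transform(sentences)  # (n_sents, vocab)
    centroid = np.asarray(tfidf.mean(axis=0)).ravel()   # average sentence vector
    scores = tfidf.dot(centroid)                        # relevance of each sentence
    top = np.argsort(scores)[::-1][:k]
    return [sentences[i] for i in sorted(top)]          # preserve original order

doc = [
    "The committee approved the new budget on Monday.",
    "The budget increases funding for public transit.",
    "Reporters asked about unrelated zoning disputes.",
    "Transit funding was the most contested item in the budget.",
]
print(extractive_summary(doc, k=2))
```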
- Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach [89.56158561087209]
We study summarizing on arbitrary aspects relevant to the document.
Due to the lack of supervision data, we develop a new weak supervision construction method and an aspect modeling scheme.
Experiments show our approach achieves performance boosts on summarizing both real and synthetic documents.
arXiv Detail & Related papers (2020-10-14T03:20:46Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
- Screenplay Summarization Using Latent Narrative Structure [78.45316339164133]
We propose to explicitly incorporate the underlying structure of narratives into general unsupervised and supervised extractive summarization models.
We formalize narrative structure in terms of key narrative events (turning points) and treat it as latent in order to summarize screenplays.
Experimental results on the CSI corpus of TV screenplays, which we augment with scene-level summarization labels, show that latent turning points correlate with important aspects of a CSI episode.
arXiv Detail & Related papers (2020-04-27T11:54:19Z)