EASE: Extractive-Abstractive Summarization with Explanations
- URL: http://arxiv.org/abs/2105.06982v1
- Date: Fri, 14 May 2021 17:45:06 GMT
- Title: EASE: Extractive-Abstractive Summarization with Explanations
- Authors: Haoran Li, Arash Einolghozati, Srinivasan Iyer, Bhargavi Paranjape,
Yashar Mehdad, Sonal Gupta, Marjan Ghazvininejad
- Abstract summary: We present an explainable summarization system based on the Information Bottleneck principle.
Inspired by previous research showing that humans use a two-stage framework to summarize long documents, our framework first extracts a pre-defined number of evidence spans as explanations.
We show that explanations from our framework are more relevant than simple baselines, without substantially sacrificing the quality of the generated summary.
- Score: 18.046254486733186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current abstractive summarization systems outperform their extractive
counterparts, but their widespread adoption is inhibited by the inherent lack
of interpretability. To achieve the best of both worlds, we propose EASE, an
extractive-abstractive framework for evidence-based text generation and apply
it to document summarization. We present an explainable summarization system
based on the Information Bottleneck principle that is jointly trained for
extraction and abstraction in an end-to-end fashion. Inspired by previous
research showing that humans use a two-stage framework to summarize long
documents (Jing and McKeown, 2000), our framework first extracts a pre-defined
number of evidence spans as explanations and then generates a summary using only the
evidence. Using automatic and human evaluations, we show that explanations from
our framework are more relevant than simple baselines, without substantially
sacrificing the quality of the generated summary.
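The two-stage extract-then-abstract idea can be illustrated with a toy sketch. The snippet below uses a crude frequency-based scorer to pick top-k sentences as "evidence" and then builds the summary from the evidence alone — a stand-in for EASE's jointly trained, Information-Bottleneck-based extractor and generator, not the paper's actual model; all function names and the scoring heuristic are illustrative assumptions.

```python
import re
from collections import Counter

def extract_evidence(document, k=2):
    """Toy extractive stage: score sentences by content-word frequency
    and keep the top-k as evidence spans (a stand-in for EASE's learned,
    Information-Bottleneck-trained extractor)."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    words = re.findall(r"[a-z]+", document.lower())
    freq = Counter(w for w in words if len(w) > 3)  # crude content-word filter
    def score(sent):
        toks = re.findall(r"[a-z]+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    ranked = sorted(sentences, key=score, reverse=True)
    top = set(ranked[:k])
    # Preserve document order, as evidence spans appear in context.
    return [s for s in sentences if s in top]

def summarize(document, k=2):
    """Abstractive-stage stand-in: the generator sees only the evidence,
    which is what makes the evidence an explanation of the summary."""
    evidence = extract_evidence(document, k)
    return " ".join(evidence)
```

The key design point mirrored here is the information flow: the second stage conditions only on the extracted spans, so every summary statement is traceable to explicit evidence.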
Related papers
- Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning [36.93582300019002]
We propose an efficient and interpretable compressive summarisation method using unsupervised dual-agent reinforcement learning.
Our model achieves promising performance and a significant improvement on Newsroom in terms of the ROUGE metric.
arXiv Detail & Related papers (2023-06-06T05:30:49Z)
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143]
We propose a novel summarization approach with flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON).
SEASON uses the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different levels of abstractiveness.
arXiv Detail & Related papers (2022-10-22T02:13:44Z)
- A Survey on Neural Abstractive Summarization Methods and Factual Consistency of Summarization [18.763290930749235]
Summarization is the process of computationally shortening a set of textual data to create a subset (a summary).
Existing summarization methods can be roughly divided into two types: extractive and abstractive.
An extractive summarizer explicitly selects text snippets from the source document, while an abstractive summarizer generates novel text snippets to convey the most salient concepts prevalent in the source.
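The extractive/abstractive distinction described above can be checked mechanically: an extractive summary reuses source text verbatim, while an abstractive one may contain novel phrasing. The diagnostic below is a simple illustrative sketch (not a method from the survey); the function name and sentence-splitting regex are assumptions.

```python
import re

def is_extractive(summary, source):
    """Return True if every summary sentence occurs verbatim in the source,
    i.e. the summary could have been produced by a purely extractive
    system. A simple diagnostic sketch, not a method from the survey."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", summary) if s.strip()]
    return all(s in source for s in sents)
```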
arXiv Detail & Related papers (2022-04-20T14:56:36Z)
- Leveraging Information Bottleneck for Scientific Document Summarization [26.214930773343887]
This paper presents an unsupervised extractive approach to summarize scientific long documents.
Inspired by previous work which uses the Information Bottleneck principle for sentence compression, we extend it to document level summarization.
arXiv Detail & Related papers (2021-10-04T09:43:47Z)
- Eider: Evidence-enhanced Document-level Relation Extraction [56.71004595444816]
Document-level relation extraction (DocRE) aims at extracting semantic relations among entity pairs in a document.
We propose a three-stage evidence-enhanced DocRE framework consisting of joint relation and evidence extraction, evidence-centered relation extraction (RE), and fusion of extraction results.
arXiv Detail & Related papers (2021-06-16T09:43:16Z)
- Abstractive Query Focused Summarization with Query-Free Resources [60.468323530248945]
In this work, we consider the problem of leveraging only generic summarization resources to build an abstractive QFS system.
We propose Marge, a Masked ROUGE Regression framework composed of a novel unified representation for summaries and queries.
Despite learning from minimal supervision, our system achieves state-of-the-art results in the distantly supervised setting.
arXiv Detail & Related papers (2020-12-29T14:39:35Z)
- Constrained Abstractive Summarization: Preserving Factual Consistency with Constrained Generation [93.87095877617968]
We propose Constrained Abstractive Summarization (CAS), a general setup that preserves the factual consistency of abstractive summarization.
We adopt lexically constrained decoding, a technique generally applicable to autoregressive generative models, to fulfill CAS.
We observe up to 13.8 ROUGE-2 gains when only one manual constraint is used in interactive summarization.
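Several entries in this list report results in terms of ROUGE-2, which measures bigram overlap between a candidate summary and a reference. The sketch below is a minimal version of the metric over whitespace tokens; published scores use the standard ROUGE toolkit with stemming and other preprocessing, so this is for intuition only.

```python
from collections import Counter

def rouge_2(candidate, reference):
    """Bigram-overlap ROUGE-2 (precision, recall, F1) over whitespace
    tokens. A minimal sketch of the metric, not the official toolkit."""
    def bigrams(text):
        toks = text.lower().split()
        return Counter(zip(toks, toks[1:]))
    c, r = bigrams(candidate), bigrams(reference)
    if not c or not r:
        return 0.0, 0.0, 0.0
    overlap = sum((c & r).values())  # multiset intersection of bigrams
    p = overlap / sum(c.values())
    rec = overlap / sum(r.values())
    f1 = 2 * p * rec / (p + rec) if p + rec else 0.0
    return p, rec, f1
```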
arXiv Detail & Related papers (2020-10-24T00:27:44Z)
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality in terms of both automatic metrics and human evaluation.
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
- At Which Level Should We Extract? An Empirical Analysis on Extractive Document Summarization [110.54963847339775]
We show that extracting full sentences introduces unnecessary and redundant content.
We propose extracting sub-sentential units based on the constituency parsing tree.
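Extracting sub-sentential units from a constituency parse can be sketched as follows: parse a bracketed tree and collect the token spans of selected constituent types. This is a toy illustration, not the paper's method; the bracketed-tree format, the `NP`/`VP` label set, and all function names are assumptions.

```python
import re

def parse(s):
    """Parse a bracketed constituency tree like '(S (NP the cat) (VP sat))'
    into nested (label, children) tuples; leaves are plain strings."""
    tokens = re.findall(r"\(|\)|[^\s()]+", s)
    pos = 0
    def node():
        nonlocal pos
        pos += 1                       # consume '('
        label = tokens[pos]; pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                children.append(tokens[pos]); pos += 1
        pos += 1                       # consume ')'
        return (label, children)
    return node()

def extract_units(tree, labels=frozenset({"NP", "VP"})):
    """Collect token spans of sub-sentential constituents with the given
    labels (a hypothetical label set chosen for illustration)."""
    def leaves(t):
        if isinstance(t, str):
            return [t]
        return [w for c in t[1] for w in leaves(c)]
    spans = []
    def walk(t):
        if isinstance(t, str):
            return
        label, children = t
        if label in labels:
            spans.append(" ".join(leaves(t)))
        for c in children:
            walk(c)
    walk(tree)
    return spans
```

Selecting such constituent spans instead of whole sentences is what lets an extractor drop the unnecessary or redundant parts of a sentence while keeping its salient core.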
arXiv Detail & Related papers (2020-04-06T13:35:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.