Exploring Explainable Selection to Control Abstractive Summarization
- URL: http://arxiv.org/abs/2004.11779v2
- Date: Mon, 14 Dec 2020 10:17:34 GMT
- Title: Exploring Explainable Selection to Control Abstractive Summarization
- Authors: Wang Haonan, Gao Yang, Bai Yu, Mirella Lapata, Huang Heyan
- Abstract summary: We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
- Score: 51.74889133688111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Like humans, document summarization models can interpret a document's
contents in a number of ways. Unfortunately, the neural models of today are
largely black boxes that provide little explanation of how or why they
generated a summary in the way they did. Therefore, to begin prying open the
black box and to inject a level of control into the substance of the final
summary, we developed ESCA, a novel select-and-generate framework that focuses on
explainability. By revealing the latent centrality and interactions between
sentences, along with scores for sentence novelty and relevance, users are
given a window into the choices a model is making and an opportunity to guide
those choices in a more desirable direction. A novel pair-wise matrix captures
the sentence interactions, centrality, and attribute scores, and a mask with
tunable attribute thresholds allows the user to control which sentences are
likely to be included in the extraction. A sentence-deployed attention
mechanism in the abstractor ensures the final summary emphasizes the desired
content. Additionally, the encoder is adaptable, supporting both Transformer-
and BERT-based configurations. In a series of experiments assessed with ROUGE
metrics and two human evaluations, ESCA outperformed eight state-of-the-art
models on the CNN/DailyMail and NYT50 benchmark datasets.
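To make the selection mechanism concrete, the sketch below is a minimal illustration, not the authors' code: it shows how a pair-wise interaction matrix, centrality and relevance scores, and tunable attribute thresholds could combine into an extraction mask. The embedding source, the scoring functions, and the threshold values are all assumptions.
```python
import numpy as np

def extract(embs, relevance_thr=0.1, novelty_thr=0.2, k=3):
    """Greedy extraction driven by a pair-wise interaction matrix.

    embs: (n, d) sentence embeddings (assumed; any sentence encoder would do).
    The thresholds act as the tunable mask over attribute scores.
    """
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    M = embs @ embs.T                      # pair-wise interaction matrix
    np.fill_diagonal(M, 0.0)
    centrality = M.mean(axis=1)            # how connected each sentence is
    centroid = embs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    relevance = embs @ centroid            # similarity to the document as a whole

    selected = []
    for _ in range(k):
        # novelty: distance from everything already selected
        if selected:
            novelty = 1.0 - M[:, selected].max(axis=1)
        else:
            novelty = np.ones(len(embs))
        # the mask: attribute thresholds decide which sentences are eligible
        mask = (relevance >= relevance_thr) & (novelty >= novelty_thr)
        mask[selected] = False
        if not mask.any():
            break
        scores = np.where(mask, centrality + novelty, -np.inf)
        selected.append(int(scores.argmax()))
    return selected

# toy run with random "sentence embeddings"
rng = np.random.default_rng(0)
print(extract(rng.normal(size=(10, 16))))
```
Raising `relevance_thr` or `novelty_thr` tightens the mask, which is exactly the user-facing control knob the abstract describes.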
Related papers
- Controllable Topic-Focused Abstractive Summarization [57.8015120583044]
Controlled abstractive summarization focuses on producing condensed versions of a source article to cover specific aspects.
This paper presents a new Transformer-based architecture capable of producing topic-focused summaries.
arXiv Detail & Related papers (2023-11-12T03:51:38Z)
- Attributable and Scalable Opinion Summarization [79.87892048285819]
We generate abstractive summaries by decoding frequent encodings, and extractive summaries by selecting the sentences assigned to the same frequent encodings.
Our method is attributable, because the model identifies sentences used to generate the summary as part of the summarization process.
It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens.
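A minimal sketch of that latent-space aggregation, with assumptions rather than the paper's model: k-means stands in for the quantizer, and the sentences assigned to the most frequent codes become the attributable extract.
```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def attributable_extract(sent_embs, sentences, n_codes=8, top=2):
    """Quantize sentence encodings to discrete codes, then keep the
    sentences assigned to the most frequent codes (dominant opinions)."""
    codes = KMeans(n_clusters=n_codes, n_init=10, random_state=0).fit_predict(sent_embs)
    frequent = [c for c, _ in Counter(codes).most_common(top)]
    # attribution comes for free: each output sentence keeps its source index
    return [(i, sentences[i]) for i, c in enumerate(codes) if c in frequent]

# toy data: 12 "review sentences" in a 5-d embedding space
rng = np.random.default_rng(1)
embs = rng.normal(size=(12, 5))
sents = [f"review sentence {i}" for i in range(12)]
for idx, s in attributable_extract(embs, sents):
    print(idx, s)
```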
arXiv Detail & Related papers (2023-05-19T11:30:37Z)
- MACSum: Controllable Summarization with Mixed Attributes [56.685735509260276]
MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
arXiv Detail & Related papers (2022-11-09T17:17:37Z)
- Reinforcing Semantic-Symmetry for Document Summarization [15.113768658584979]
Document summarization condenses a long document into a short version with salient information and accurate semantic descriptions.
This paper introduces a new reinforcing semantic-symmetry learning model for document summarization.
A series of experiments has been conducted on two widely used benchmark datasets, CNN/Daily Mail and BigPatent.
arXiv Detail & Related papers (2021-12-14T17:41:37Z)
- Transductive Learning for Abstractive News Summarization [24.03781438153328]
We propose the first application of transductive learning to summarization.
We show that our approach yields state-of-the-art results on CNN/DM and NYT datasets.
arXiv Detail & Related papers (2021-04-17T17:33:12Z)
- CTRLsum: Towards Generic Controllable Text Summarization [54.69190421411766]
We present CTRLsum, a novel framework for controllable summarization.
Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system.
Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time.
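The blurb leaves the control mechanism unspecified; one common scheme, assumed here purely for illustration, is to prepend user keywords to the source and let a keyword-conditioned seq2seq model honor them. The separator format and the `generate` stub below are hypothetical.
```python
def build_controlled_input(keywords, source, sep=" => "):
    """Assumed input format: control keywords prepended to the source.
    A model fine-tuned on such pairs learns to honor the keywords."""
    return " | ".join(keywords) + sep + source

def generate(model_input):
    # stand-in for a trained seq2seq model's decode step
    return f"<summary conditioned on: {model_input[:40]}...>"

src = "The city council approved the new transit budget after months of debate."
print(generate(build_controlled_input(["budget", "transit"], src)))
```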
arXiv Detail & Related papers (2020-12-08T08:54:36Z)
- Topic-Guided Abstractive Text Summarization: a Joint Learning Approach [19.623946402970933]
We introduce a new approach for abstractive text summarization, Topic-Guided Abstractive Summarization.
The idea is to incorporate neural topic modeling with a Transformer-based sequence-to-sequence (seq2seq) model in a joint learning framework.
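One plausible reading of the joint learning framework, as an assumption rather than the paper's released code, is a combined objective: the summarizer's token-level negative log-likelihood plus a weighted neural-topic-model ELBO over the same batch. A PyTorch-flavored sketch:
```python
import torch
import torch.nn.functional as F

def joint_loss(seq_logits, target_ids, doc_bow, topic_recon,
               topic_mu, topic_logvar, alpha=0.5):
    """Joint objective sketch: summarization NLL + topic-model ELBO.

    seq_logits: (batch, len, vocab) decoder outputs
    doc_bow / topic_recon: bag-of-words document and its NTM reconstruction
    alpha: task-weighting hyper-parameter (assumed)
    """
    nll = F.cross_entropy(seq_logits.transpose(1, 2), target_ids)
    recon = -(doc_bow * torch.log(topic_recon + 1e-10)).sum(dim=1).mean()
    kld = -0.5 * (1 + topic_logvar - topic_mu.pow(2)
                  - topic_logvar.exp()).sum(dim=1).mean()
    return nll + alpha * (recon + kld)

# shape check with dummy tensors
B, L, V, K = 2, 7, 50, 10
loss = joint_loss(
    torch.randn(B, L, V), torch.randint(V, (B, L)),
    torch.rand(B, V), torch.softmax(torch.randn(B, V), dim=1),
    torch.randn(B, K), torch.randn(B, K),
)
print(loss.item())
```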
arXiv Detail & Related papers (2020-10-20T14:45:25Z)
- Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward [42.925345819778656]
We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders (a sequential document encoder and a graph-structured encoder) to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without the knowledge graph as input on both the New York Times and CNN/Daily Mail datasets.
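A minimal sketch of the dual-encoder idea; the GRU and one-hop graph layer, the dimensions, and the concatenation fusion are all assumptions for illustration, not ASGARD's actual architecture.
```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Sequential encoder for global context + graph encoder for entities."""
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.seq = nn.GRU(dim, dim, batch_first=True)   # document encoder
        self.gcn = nn.Linear(dim, dim)                  # one-hop graph layer

    def forward(self, token_ids, node_feats, adj):
        # global context from the token sequence
        _, h_seq = self.seq(self.emb(token_ids))        # (1, B, dim)
        # local entity structure: normalized one-hop message passing
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h_graph = torch.relu(self.gcn(adj @ node_feats / deg)).mean(dim=1)
        return torch.cat([h_seq.squeeze(0), h_graph], dim=-1)

# dummy batch: 2 docs, 12 tokens each, 5 entity nodes per graph
enc = DualEncoder(vocab=100)
out = enc(torch.randint(100, (2, 12)),
          torch.randn(2, 5, 64),
          torch.rand(2, 5, 5).round())
print(out.shape)  # torch.Size([2, 128])
```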
arXiv Detail & Related papers (2020-05-03T18:23:06Z)
- Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.