GSum: A General Framework for Guided Neural Abstractive Summarization
- URL: http://arxiv.org/abs/2010.08014v3
- Date: Mon, 19 Apr 2021 09:39:30 GMT
- Title: GSum: A General Framework for Guided Neural Abstractive Summarization
- Authors: Zi-Yi Dou, Pengfei Liu, Hiroaki Hayashi, Zhengbao Jiang, Graham Neubig
- Abstract summary: We propose a general and extensible guided summarization framework (GSum) that can effectively take different kinds of external guidance as input.
Experiments demonstrate that this model is effective, achieving state-of-the-art performance according to ROUGE on 4 popular summarization datasets.
- Score: 102.29593069542976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural abstractive summarization models are flexible and can produce coherent
summaries, but they are sometimes unfaithful and can be difficult to control.
While previous studies attempt to provide different types of guidance to
control the output and increase faithfulness, it is not clear how these
strategies compare and contrast to each other. In this paper, we propose a
general and extensible guided summarization framework (GSum) that can
effectively take different kinds of external guidance as input, and we perform
experiments across several different varieties. Experiments demonstrate that
this model is effective, achieving state-of-the-art performance according to
ROUGE on 4 popular summarization datasets when using highlighted sentences as
guidance. In addition, we show that our guided model can generate more faithful
summaries and demonstrate how different types of guidance generate
qualitatively different summaries, lending a degree of controllability to the
learned models.
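
The core architectural idea is a model that encodes the source document and a guidance signal (e.g., highlighted sentences) separately and lets the decoder cross-attend to both. The sketch below is a minimal, hypothetical PyTorch rendering of that idea: the layer structure, attention ordering, and hyperparameters are illustrative assumptions, not the paper's exact configuration, which is instantiated with large pretrained Transformers.

```python
# Minimal sketch of the guided-decoder idea, assuming one cross-attention
# block for the guidance memory and one for the source memory. All shapes
# and module choices here are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class GuidedDecoderLayer(nn.Module):
    """One decoder layer that attends to a guidance memory and a source memory."""
    def __init__(self, d_model=512, nhead=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.guide_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.source_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 2048), nn.ReLU(),
                                nn.Linear(2048, d_model))
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

    def forward(self, tgt, guidance_mem, source_mem):
        x = self.norms[0](tgt + self.self_attn(tgt, tgt, tgt)[0])
        # cross-attend to the encoded guidance signal first ...
        x = self.norms[1](x + self.guide_attn(x, guidance_mem, guidance_mem)[0])
        # ... then to the encoded source document
        x = self.norms[2](x + self.source_attn(x, source_mem, source_mem)[0])
        return self.norms[3](x + self.ff(x))

layer = GuidedDecoderLayer()
tgt = torch.randn(2, 7, 512)    # partial summary states
guide = torch.randn(2, 5, 512)  # e.g., encoded highlighted sentences
src = torch.randn(2, 30, 512)   # encoded source document
print(layer(tgt, guide, src).shape)  # torch.Size([2, 7, 512])
```

Because the guidance memory is a separate input, swapping in different guidance (keywords, relations, retrieved summaries) changes the output without retraining the architecture, which is what lends the model its controllability.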
Related papers
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Improving Factuality of Abstractive Summarization via Contrastive Reward Learning [77.07192378869776]
We propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics.
Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics.
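One common way to realize such a contrastive reward objective is a margin ranking loss over candidate summaries scored by an external factuality metric; the sketch below assumes that BRIO-style formulation and a simple scoring interface, which may differ from the paper's exact design.

```python
# Hypothetical contrastive reward loss: the model's length-normalized
# log-probabilities should rank candidates in the same order as an external
# factuality metric. The rank-dependent margin is an assumption.
import torch

def contrastive_reward_loss(logprobs, reward, margin=0.01):
    """logprobs: (n,) model scores for n candidate summaries of one document.
    reward: (n,) factuality-metric scores; higher is better."""
    order = torch.argsort(reward, descending=True)
    s = logprobs[order]  # model scores sorted by decreasing reward
    loss = torch.tensor(0.0)
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            # candidate i has higher reward than j, so its model score
            # should exceed j's by a rank-dependent margin
            loss = loss + torch.clamp(margin * (j - i) - (s[i] - s[j]), min=0)
    return loss

print(float(contrastive_reward_loss(torch.randn(4), torch.rand(4))))
```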
arXiv Detail & Related papers (2023-07-10T12:01:18Z)
- Salience Allocation as Guidance for Abstractive Summarization [61.31826412150143]
We propose a novel summarization approach with flexible and reliable salience guidance, namely SEASON (SaliencE Allocation as Guidance for Abstractive SummarizatiON).
SEASON utilizes the allocation of salience expectation to guide abstractive summarization and adapts well to articles with different levels of abstractiveness.
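One toy way to realize salience allocation as guidance is to assign each source sentence a discrete salience bucket and embed the bucket ids as extra inputs to the model. The bucketing below, based on lexical overlap with the reference at training time, is a simplification assumed for illustration, not SEASON's actual allocation scheme.

```python
# Hypothetical salience bucketing: score each source sentence by token
# overlap with the reference, then discretize into n_buckets levels that
# could be embedded and added to the sentence's token representations.
def salience_buckets(src_sentences, reference, n_buckets=4):
    ref_tokens = set(reference.lower().split())
    scores = []
    for sent in src_sentences:
        toks = sent.lower().split()
        scores.append(sum(t in ref_tokens for t in toks) / max(len(toks), 1))
    hi = max(scores) or 1.0  # avoid division by zero when nothing overlaps
    return [min(int(s / hi * n_buckets), n_buckets - 1) for s in scores]

sents = ["The cat sat on the mat.", "Stocks fell sharply today.", "It purred."]
print(salience_buckets(sents, "A cat sat on a mat and purred."))
```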
arXiv Detail & Related papers (2022-10-22T02:13:44Z)
- Improving the Faithfulness of Abstractive Summarization via Entity Coverage Control [27.214742188672464]
We propose a method to remedy entity-level hallucinations with Entity Coverage Control (ECC).
ECC computes entity coverage precision and prepends the corresponding control code to each training example.
We show that the proposed method leads to more faithful and salient abstractive summarization in supervised fine-tuning and zero-shot settings.
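The control-code mechanism is straightforward to sketch: compute entity coverage precision (the fraction of summary entities that also appear in the source), discretize it, and prepend the resulting code token to the training input. In the toy version below, entities are approximated by capitalized tokens; the actual method would rely on a proper NER system, and the token format and bin boundaries are assumptions.

```python
# Toy Entity Coverage Control: bucket the coverage precision of a training
# (source, summary) pair into a control code prepended to the input.
def entity_coverage_code(source, summary, n_bins=3):
    ents = lambda text: {t for t in text.split() if t[:1].isupper()}
    src_ents, sum_ents = ents(source), ents(summary)
    precision = len(sum_ents & src_ents) / max(len(sum_ents), 1)
    return f"<cov_{min(int(precision * n_bins), n_bins - 1)}>"

src = "Apple hired Tim Cook. The company is based in Cupertino."
summ = "Tim Cook joined Apple in Cupertino."
code = entity_coverage_code(src, summ)
print(code, "->", code + " " + src)  # control code prepended to the input
```

At inference time, always prepending the highest-coverage code then asks the model for summaries whose entities are grounded in the source.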
arXiv Detail & Related papers (2022-07-05T18:52:19Z)
- Sequence Level Contrastive Learning for Text Summarization [49.01633745943263]
We propose a contrastive learning model for supervised abstractive text summarization.
Our model achieves better faithfulness ratings compared to its counterpart without contrastive objectives.
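A common way to set up a sequence-level contrastive objective is an InfoNCE loss that aligns each document representation with its own summary representation against in-batch negatives; the sketch below assumes that formulation and a cosine-similarity interface, which may differ from the paper's exact design.

```python
# Hypothetical sequence-level InfoNCE: the i-th document should be most
# similar to the i-th reference summary within the batch.
import torch
import torch.nn.functional as F

def seq_contrastive_loss(doc_vecs, summ_vecs, temperature=0.1):
    doc = F.normalize(doc_vecs, dim=-1)
    summ = F.normalize(summ_vecs, dim=-1)
    logits = doc @ summ.t() / temperature  # (batch, batch) similarities
    labels = torch.arange(len(doc))        # diagonal entries are positives
    return F.cross_entropy(logits, labels)

print(float(seq_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))))
```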
arXiv Detail & Related papers (2021-09-08T08:00:36Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
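The controller mechanism can be sketched as simple token prepending: synthetic training pairs are tagged with aspect tokens, so that at inference time swapping the tokens steers generation toward a different aspect. The token format and helper below are hypothetical.

```python
# Toy aspect controller: tag a (review, summary) training pair with
# aspect tokens that condition the summarizer on the target aspects.
def make_training_pair(review, summary, aspects):
    controller = " ".join(f"<asp:{a}>" for a in aspects)
    return f"{controller} {review}", summary

inp, tgt = make_training_pair(
    "The rooms were clean but the food was bland.",
    "Clean rooms.",
    ["rooms"],
)
print(inp)  # "<asp:rooms> The rooms were clean but the food was bland."
```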
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- Subjective Bias in Abstractive Summarization [11.675414451656568]
We formulate the differences among possible multiple expressions summarizing the same content as subjective bias and examine the role of this bias in the context of abstractive summarization.
Results of summarization models trained on style-clustered datasets show that there are certain types of styles that lead to better convergence, abstraction and generalization.
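The style-clustering setup can be approximated with any embed-and-cluster pipeline; the toy sketch below uses TF-IDF features and k-means as stand-ins for the paper's actual clustering choices, which are not specified here.

```python
# Hypothetical style clustering: group reference summaries so that separate
# models can be trained per cluster.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

summaries = [
    "Stocks fell sharply on Monday.",
    "Monday saw a sharp decline in stocks.",
    "The cat sat quietly on the mat.",
    "A quiet cat rested on the mat.",
]
X = TfidfVectorizer().fit_transform(summaries)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignments used to split the training data
```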
arXiv Detail & Related papers (2021-06-18T12:17:55Z)
- Topic-Guided Abstractive Text Summarization: a Joint Learning Approach [19.623946402970933]
We introduce a new approach for abstractive text summarization, Topic-Guided Abstractive Summarization.
The idea is to incorporate neural topic modeling with a Transformer-based sequence-to-sequence (seq2seq) model in a joint learning framework.
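Joint learning of this kind typically combines the seq2seq negative log-likelihood with a VAE-style topic-model loss (bag-of-words reconstruction plus a Gaussian KL term). The sketch below assumes that standard formulation; the weighting and the way the topic vector conditions the decoder are illustrative, not the paper's exact recipe.

```python
# Hypothetical joint objective: summarization NLL plus a weighted neural
# topic model (VAE over bag-of-words) loss.
import torch
import torch.nn.functional as F

def topic_model_loss(bow, mu, logvar, recon_logits):
    # bag-of-words reconstruction plus standard Gaussian KL
    recon = -(bow * F.log_softmax(recon_logits, dim=-1)).sum(-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return recon + kl

def joint_loss(summ_nll, bow, mu, logvar, recon_logits, lam=0.5):
    return summ_nll + lam * topic_model_loss(bow, mu, logvar, recon_logits)

V, K = 100, 20  # toy vocabulary size and topic dimension
loss = joint_loss(torch.tensor(2.3), torch.rand(4, V),
                  torch.zeros(4, K), torch.zeros(4, K), torch.randn(4, V))
print(float(loss))
```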
arXiv Detail & Related papers (2020-10-20T14:45:25Z)
- Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven Cloze Reward [42.925345819778656]
We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders (a sequential document encoder and a graph-structured encoder) to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without the knowledge graph as input on both the New York Times and CNN/Daily Mail datasets.
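The graph-structured side of such a dual encoder can be sketched as simple message passing over an entity graph, run alongside a sequential encoder whose memory the decoder also attends to. The one-round mean-neighbor update below is an assumption for illustration, not ASGARD's exact design.

```python
# Hypothetical graph encoder: one round of mean-neighbor message passing
# over an entity graph; its output would serve as a second decoder memory.
import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        self.proj = nn.Linear(2 * d, d)

    def forward(self, node_feats, adj):
        # adj: (n, n) 0/1 adjacency; average neighbor features, then mix
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        neigh = adj @ node_feats / deg
        return torch.relu(self.proj(torch.cat([node_feats, neigh], dim=-1)))

enc = SimpleGraphEncoder()
nodes = torch.randn(5, 256)             # 5 entity nodes
adj = (torch.rand(5, 5) > 0.5).float()  # toy entity graph
print(enc(nodes, adj).shape)            # torch.Size([5, 256])
```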
arXiv Detail & Related papers (2020-05-03T18:23:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.