Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven
Cloze Reward
- URL: http://arxiv.org/abs/2005.01159v1
- Date: Sun, 3 May 2020 18:23:06 GMT
- Title: Knowledge Graph-Augmented Abstractive Summarization with Semantic-Driven
Cloze Reward
- Authors: Luyang Huang, Lingfei Wu, Lu Wang
- Abstract summary: We present ASGARD, a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders---a sequential document encoder and a graph-structured encoder---to maintain the global context and local characteristics of entities.
Results show that our models produce significantly higher ROUGE scores than a variant without knowledge graph as input on both New York Times and CNN/Daily Mail datasets.
- Score: 42.925345819778656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequence-to-sequence models for abstractive summarization have been studied
extensively, yet the generated summaries commonly suffer from fabricated
content, and are often found to be near-extractive. We argue that, to address
these issues, the summarizer should acquire semantic interpretation over input,
e.g., via structured representation, to allow the generation of more
informative summaries. In this paper, we present ASGARD, a novel framework for
Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.
We propose the use of dual encoders---a sequential document encoder and a
graph-structured encoder---to maintain the global context and local
characteristics of entities, complementing each other. We further design a
reward based on a multiple choice cloze test to drive the model to better
capture entity interactions. Results show that our models produce significantly
higher ROUGE scores than a variant without knowledge graph as input on both New
York Times and CNN/Daily Mail datasets. We also obtain better or comparable
performance compared to systems that are fine-tuned from large pretrained
language models. Human judges further rate our model outputs as more
informative and containing fewer unfaithful errors.
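The multiple-choice cloze reward described above can be illustrated with a toy sketch. ASGARD uses a trained QA component to answer entity-centric cloze questions from the generated summary; the stand-in below (function name and lexical-overlap "QA model" are illustrative assumptions, not the paper's implementation) simply picks the candidate with the largest token overlap with the summary and returns the fraction answered correctly.

```python
def cloze_reward(summary_tokens, questions):
    """Score a summary by how many multiple-choice cloze questions it can
    answer. Toy stand-in: the "QA model" picks the candidate answer with
    the largest token overlap with the summary; the actual ASGARD system
    uses a trained QA component instead."""
    correct = 0
    for candidates, answer in questions:
        overlaps = [len(set(c.split()) & set(summary_tokens)) for c in candidates]
        picked = candidates[overlaps.index(max(overlaps))]
        correct += picked == answer
    return correct / len(questions)

summary = "the graph encoder captures entity interactions".split()
questions = [
    (["graph encoder", "random baseline"], "graph encoder"),
    (["entity interactions", "image pixels"], "entity interactions"),
]
reward = cloze_reward(summary, questions)
```

A scalar reward of this shape can then drive reinforcement-style fine-tuning, encouraging summaries from which entity relations are recoverable.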
Related papers
- GLIMMER: Incorporating Graph and Lexical Features in Unsupervised Multi-Document Summarization [13.61818620609812]
We propose a lightweight yet effective unsupervised approach called GLIMMER: a Graph and LexIcal features based unsupervised Multi-docuMEnt summaRization approach.
It first constructs a sentence graph from the source documents, then automatically identifies semantic clusters by mining low-level features from raw texts.
Experiments conducted on Multi-News, Multi-XScience and DUC-2004 demonstrate that our approach outperforms existing unsupervised approaches.
arXiv Detail & Related papers (2024-08-19T16:01:48Z)
- GEGA: Graph Convolutional Networks and Evidence Retrieval Guided Attention for Enhanced Document-level Relation Extraction [15.246183329778656]
Document-level relation extraction (DocRE) aims to extract relations between entities from unstructured document text.
To overcome these challenges, we propose GEGA, a novel model for DocRE.
We evaluate the GEGA model on three widely used benchmark datasets: DocRED, Re-DocRED, and Revisit-DocRED.
arXiv Detail & Related papers (2024-07-31T07:15:33Z)
- Improving Sequence-to-Sequence Models for Abstractive Text Summarization Using Meta Heuristic Approaches [0.0]
Humans have a unique ability to create abstractions.
The use of sequence-to-sequence (seq2seq) models for neural abstractive text summarization has been growing in prevalence.
In this article, we aim toward enhancing the present architectures and models for abstractive text summarization.
arXiv Detail & Related papers (2024-03-24T17:39:36Z)
- Improving the Robustness of Summarization Systems with Dual Augmentation [68.53139002203118]
A robust summarization system should be able to capture the gist of the document, regardless of the specific word choices or noise in the input.
We first explore the summarization models' robustness against perturbations including word-level synonym substitution and noise.
We propose a SummAttacker, which is an efficient approach to generating adversarial samples based on language models.
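The word-level synonym-substitution perturbation mentioned above can be sketched minimally. SummAttacker itself derives substitution candidates from a language model; the toy version below (the `SYNONYMS` table and `perturb` function are illustrative assumptions) swaps each word that has a known synonym with a given probability.

```python
import random

# Toy synonym table (hypothetical); SummAttacker draws candidates from a
# language model rather than a fixed lexicon.
SYNONYMS = {"big": ["large", "huge"], "quick": ["fast", "rapid"]}

def perturb(tokens, rate=1.0, rng=None):
    """Word-level synonym substitution: replace each word that has known
    synonyms with probability `rate`; other words pass through unchanged."""
    rng = rng or random.Random(0)
    return [rng.choice(SYNONYMS[t]) if t in SYNONYMS and rng.random() < rate
            else t
            for t in tokens]

perturbed = perturb("a big dog made a quick move".split())
```

Feeding such perturbed inputs to a summarizer and comparing its outputs against the clean-input summaries is one simple way to probe robustness.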
arXiv Detail & Related papers (2023-06-01T19:04:17Z)
- Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z)
- Long Document Summarization with Top-down and Bottom-up Inference [113.29319668246407]
We propose a principled inference framework to improve summarization models on two aspects.
Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency.
We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets.
arXiv Detail & Related papers (2022-03-15T01:24:51Z)
- StreamHover: Livestream Transcript Summarization and Annotation [54.41877742041611]
We present StreamHover, a framework for annotating and summarizing livestream transcripts.
With a total of over 500 hours of videos annotated with both extractive and abstractive summaries, our benchmark dataset is significantly larger than currently existing annotated corpora.
We show that our model generalizes better and improves performance over strong baselines.
arXiv Detail & Related papers (2021-09-11T02:19:37Z)
- MeetSum: Transforming Meeting Transcript Summarization using Transformers! [2.1915057426589746]
We utilize a Transformer-based Pointer Generator Network to generate abstract summaries for meeting transcripts.
This model uses two LSTMs as an encoder and a decoder, a Pointer network which copies words (including out-of-vocabulary ones) from the input text, and a Generator network which produces words from a fixed vocabulary.
We show that training the model on a news summary dataset and evaluating it zero-shot on the meeting dataset produces better results than training it on the AMI meeting dataset.
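The copy mechanism of the pointer-generator model described above can be sketched as a mixture of two distributions. This simplified version (function name and arguments are illustrative; it assumes all source tokens are in-vocabulary, omitting the OOV extension of the full model) mixes the generator's vocabulary distribution with a copy distribution derived from attention over source positions.

```python
def pointer_generator_dist(p_gen, vocab_dist, attention, src_ids):
    """Pointer-generator mixture, simplified:
        P(w) = p_gen * P_vocab(w) + (1 - p_gen) * (attention mass on
               source positions holding w)
    vocab_dist: generator's distribution over the fixed vocabulary.
    attention:  attention weights over source positions (sums to 1).
    src_ids:    vocabulary id of the token at each source position."""
    final = [p_gen * p for p in vocab_dist]
    for a, wid in zip(attention, src_ids):
        final[wid] += (1.0 - p_gen) * a
    return final

# p_gen = 0.5: half the mass comes from generation, half from copying.
dist = pointer_generator_dist(0.5, [0.6, 0.4], [0.8, 0.2], [1, 0])
```

Because both inputs are probability distributions and p_gen is a convex weight, the mixture remains a valid distribution.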
arXiv Detail & Related papers (2021-08-13T16:34:09Z)
- Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
arXiv Detail & Related papers (2020-05-20T13:39:47Z)
- Neural Entity Summarization with Joint Encoding and Weak Supervision [29.26714907483851]
In knowledge graphs, an entity is often described by a large number of triple facts.
Existing solutions to entity summarization are mainly unsupervised.
We present a supervised approach that is based on our novel neural model.
arXiv Detail & Related papers (2020-05-01T00:14:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.