An analysis of document graph construction methods for AMR summarization
- URL: http://arxiv.org/abs/2111.13993v1
- Date: Sat, 27 Nov 2021 22:12:50 GMT
- Title: An analysis of document graph construction methods for AMR summarization
- Authors: Fei-Tzin Lee, Chris Kedzie, Nakul Verma, Kathleen McKeown
- Abstract summary: We present a novel dataset consisting of human-annotated alignments between the nodes of paired documents and summaries.
We apply these two forms of evaluation to prior work as well as a new method for node merging and show that our new method has significantly better performance than prior work.
- Score: 2.055054374525828
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Abstract Meaning Representation (AMR) is a graph-based semantic representation for
sentences, composed of collections of concepts linked by semantic relations.
AMR-based approaches have found success in a variety of applications, but a
challenge to using AMR in tasks that require document-level context is that it
only represents individual sentences. Prior work in AMR-based summarization has
automatically merged the individual sentence graphs into a document graph, but
the method of merging and its effects on summary content selection have not
been independently evaluated. In this paper, we present a novel dataset
consisting of human-annotated alignments between the nodes of paired documents
and summaries which may be used to evaluate (1) merge strategies; and (2) the
performance of content selection methods over nodes of a merged or unmerged AMR
graph. We apply these two forms of evaluation to prior work as well as a new
method for node merging and show that our new method has significantly better
performance than prior work.
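For intuition only, the snippet below is a minimal sketch of the simplest kind of node merging discussed above: collapsing sentence-level AMR nodes that share a concept label into a single document-graph node. The dict-based graph format and the merge-by-identical-concept rule are illustrative assumptions, not the merge strategy proposed in the paper.

```python
# Minimal sketch of concept-based node merging across sentence-level AMR graphs.
# The input format and the merge rule are illustrative assumptions only.

def merge_sentence_graphs(sentence_graphs):
    """Merge sentence AMR graphs into one document graph.

    Each sentence graph is a dict with:
      "instances": {variable: concept_label}
      "edges":     [(source_var, role, target_var), ...]
    Nodes with identical concept labels are collapsed into one document node,
    and all edges are re-pointed to the merged nodes.
    """
    concept_to_doc_node = {}   # concept label -> document node id
    var_to_doc_node = {}       # (sentence index, variable) -> document node id
    doc_edges = set()

    for sent_idx, graph in enumerate(sentence_graphs):
        for var, concept in graph["instances"].items():
            if concept not in concept_to_doc_node:
                concept_to_doc_node[concept] = f"n{len(concept_to_doc_node)}"
            var_to_doc_node[(sent_idx, var)] = concept_to_doc_node[concept]
        for src, role, tgt in graph["edges"]:
            doc_edges.add((var_to_doc_node[(sent_idx, src)],
                           role,
                           var_to_doc_node[(sent_idx, tgt)]))

    return {"nodes": {v: c for c, v in concept_to_doc_node.items()},
            "edges": sorted(doc_edges)}

if __name__ == "__main__":
    # Toy graphs for "The dog barked." and "The dog slept."
    s1 = {"instances": {"b": "bark-01", "d": "dog"}, "edges": [("b", ":ARG0", "d")]}
    s2 = {"instances": {"s": "sleep-01", "d": "dog"}, "edges": [("s", ":ARG0", "d")]}
    doc = merge_sentence_graphs([s1, s2])
    print(doc["nodes"])   # the two "dog" nodes collapse into one document node
    print(doc["edges"])
```

A merged graph of this kind is what the content selection evaluation operates over; the paper's point is that the choice of merge rule itself affects downstream summary quality.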
Related papers
- Hypergraph based Understanding for Document Semantic Entity Recognition [65.84258776834524]
We build a novel hypergraph attention document semantic entity recognition framework, HGA, which uses hypergraph attention to focus on entity boundaries and entity categories at the same time.
Our results on FUNSD, CORD, XFUNDIE show that our method can effectively improve the performance of semantic entity recognition tasks.
arXiv Detail & Related papers (2024-07-09T14:35:49Z) - An AMR-based Link Prediction Approach for Document-level Event Argument
Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE)
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - DocAMR: Multi-Sentence AMR Representation and Evaluation [19.229112468305267]
We introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging.
We also present a pipeline approach combining the top performing AMR and coreference resolution systems, providing a strong baseline for future research.
arXiv Detail & Related papers (2021-12-15T22:38:26Z) - SgSum: Transforming Multi-document Summarization into Sub-graph
Selection [27.40759123902261]
Most existing extractive multi-document summarization (MDS) methods score each sentence individually and extract salient sentences one by one to compose a summary.
We propose a novel MDS framework (SgSum) to formulate the MDS task as a sub-graph selection problem.
Our model can produce significantly more coherent and informative summaries compared with traditional MDS methods.
arXiv Detail & Related papers (2021-10-25T05:12:10Z) - Augmented Abstractive Summarization With Document-Level Semantic Graph [3.0272794341021667]
Previous abstractive methods apply sequence-to-sequence structures to generate summaries without a module for document-level semantics.
We utilize a semantic graph to boost generation performance.
A novel neural decoder is presented to leverage the information of such entity graphs.
arXiv Detail & Related papers (2021-09-13T15:12:34Z) - BASS: Boosting Abstractive Summarization with Unified Semantic Graph [49.48925904426591]
BASS is a framework for Boosting Abstractive Summarization based on a unified Semantic graph.
A graph-based encoder-decoder model is proposed to improve both the document representation and summary generation process.
Empirical results show that the proposed architecture brings substantial improvements for both long-document and multi-document summarization tasks.
arXiv Detail & Related papers (2021-05-25T16:20:48Z) - Coarse-to-Fine Entity Representations for Document-level Relation
Extraction [28.39444850200523]
Document-level Relation Extraction (RE) requires extracting relations expressed within and across sentences.
Recent works show that graph-based methods, usually constructing a document-level graph that captures document-aware interactions, can obtain useful entity representations.
We propose the Coarse-to-Fine Entity Representation model (CFER), which adopts a coarse-to-fine strategy.
arXiv Detail & Related papers (2020-12-04T10:18:59Z) - SummPip: Unsupervised Multi-Document Summarization with Sentence Graph
Compression [61.97200991151141]
SummPip is an unsupervised method for multi-document summarization.
We convert the original documents into a sentence graph, taking both linguistic and deep representations into account.
We then apply spectral clustering to obtain multiple clusters of sentences, and finally compress each cluster to generate the final summary (a rough sketch of this pipeline appears after this list).
arXiv Detail & Related papers (2020-07-17T13:01:15Z) - Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
arXiv Detail & Related papers (2020-05-20T13:39:47Z) - Extractive Summarization as Text Matching [123.09816729675838]
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).
arXiv Detail & Related papers (2020-04-19T08:27:57Z) - A Divide-and-Conquer Approach to the Summarization of Long Documents [4.863209463405628]
We present a novel divide-and-conquer method for the neural summarization of long documents.
Our method exploits the discourse structure of the document and uses sentence similarity to split the problem into smaller summarization problems.
We demonstrate that this approach paired with different summarization models, including sequence-to-sequence RNNs and Transformers, can lead to improved summarization performance.
arXiv Detail & Related papers (2020-04-13T20:38:49Z)
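As referenced in the SummPip entry above, the following is a rough sketch of that general pipeline shape, namely a sentence similarity graph, spectral clustering, and a per-cluster reduction step, built from off-the-shelf scikit-learn components. The TF-IDF cosine similarity graph and the "keep the longest sentence" reduction are stand-in assumptions, not the components used in the actual paper.

```python
# SummPip-style pipeline sketch: sentence graph -> spectral clustering -> per-cluster reduction.
# TF-IDF similarity and longest-sentence selection are illustrative stand-ins.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import SpectralClustering

def summarize(sentences, n_clusters=2):
    # Sentence graph: nodes are sentences, edge weights are TF-IDF cosine similarities.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    affinity = cosine_similarity(tfidf)

    # Spectral clustering over the precomputed affinity matrix.
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed",
                                random_state=0).fit_predict(affinity)

    # Naive stand-in for cluster compression: keep the longest sentence per cluster.
    summary = []
    for cluster in range(n_clusters):
        members = [s for s, label in zip(sentences, labels) if label == cluster]
        summary.append(max(members, key=len))
    return " ".join(summary)

if __name__ == "__main__":
    docs = ["The merger was announced on Monday.",
            "Officials confirmed the merger in a Monday statement.",
            "Shares rose sharply after the news.",
            "The stock price climbed following the announcement."]
    print(summarize(docs, n_clusters=2))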