DocAMR: Multi-Sentence AMR Representation and Evaluation
- URL: http://arxiv.org/abs/2112.08513v1
- Date: Wed, 15 Dec 2021 22:38:26 GMT
- Title: DocAMR: Multi-Sentence AMR Representation and Evaluation
- Authors: Tahira Naseem, Austin Blodgett, Sadhana Kumaravel, Tim O'Gorman,
Young-Suk Lee, Jeffrey Flanigan, Ramón Fernandez Astudillo, Radu Florian,
Salim Roukos, Nathan Schneider
- Abstract summary: We introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging.
We also present a pipeline approach combining the top performing AMR and coreference resolution systems, providing a strong baseline for future research.
- Score: 19.229112468305267
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Despite extensive research on parsing of English sentences into Abstract
Meaning Representation (AMR) graphs, which are compared to gold graphs via the
Smatch metric, full-document parsing into a unified graph representation lacks
well-defined representation and evaluation. Taking advantage of a
super-sentential level of coreference annotation from previous work, we
introduce a simple algorithm for deriving a unified graph representation,
avoiding the pitfalls of information loss from over-merging and lack of
coherence from under-merging. Next, we describe improvements to the Smatch
metric to make it tractable for comparing document-level graphs, and use it to
re-evaluate the best published document-level AMR parser. We also present a
pipeline approach combining the top performing AMR parser and coreference
resolution systems, providing a strong baseline for future research.
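The abstract describes deriving a unified document-level graph from sentence-level AMRs plus cross-sentence coreference annotation. As a rough illustration of the core idea (not the paper's actual DocAMR algorithm, which is specifically designed to avoid information loss from over-merging and incoherence from under-merging), the following sketch unions per-sentence graphs and collapses each coreference chain onto one canonical node. The graph encoding (concept dict plus relation triples) and the choose-the-first-mention policy are illustrative assumptions.

```python
def merge_document_amr(sent_graphs, coref_chains):
    """Toy merge of sentence AMRs into one document graph.

    sent_graphs: list of (nodes, edges) per sentence, where nodes maps
        variable -> concept and edges is a list of (src, relation, tgt).
    coref_chains: lists of sentence-namespaced node ids (e.g. "s0.x")
        that corefer; the first mention serves as the canonical node.
    """
    nodes, edges = {}, []
    for i, (g_nodes, g_edges) in enumerate(sent_graphs):
        # Namespace variables by sentence index so they never collide.
        for n, concept in g_nodes.items():
            nodes[f"s{i}.{n}"] = concept
        for src, rel, tgt in g_edges:
            edges.append((f"s{i}.{src}", rel, f"s{i}.{tgt}"))

    # Map every mention in a chain to the chain's representative.
    canon = {n: chain[0] for chain in coref_chains for n in chain}

    merged_nodes = {}
    for n in nodes:
        rep = canon.get(n, n)
        merged_nodes.setdefault(rep, nodes[rep])

    # Redirect edges to representatives; deduplicate the result.
    merged_edges = sorted({(canon.get(s, s), r, canon.get(t, t))
                           for s, r, t in edges})
    return merged_nodes, merged_edges
```

For example, if sentence 0 mentions a person and sentence 1 refers back with a pronoun, a chain `["s0.x", "s1.a"]` collapses both mentions into `s0.x`, so both sentences' predicates attach to the same entity node in the document graph.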
Related papers
- An AMR-based Link Prediction Approach for Document-level Event Argument
Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
- Scientific Paper Extractive Summarization Enhanced by Citation Graphs [50.19266650000948]
We focus on leveraging citation graphs to improve scientific paper extractive summarization under different settings.
Preliminary results demonstrate that citation graph is helpful even in a simple unsupervised framework.
Motivated by this, we propose a Graph-based Supervised Summarization model (GSS) to achieve more accurate results on the task when large-scale labeled data are available.
arXiv Detail & Related papers (2022-12-08T11:53:12Z)
- Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR).
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to better state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z)
- FactGraph: Evaluating Factuality in Summarization with Semantic Graph Representations [114.94628499698096]
We propose FactGraph, a method that decomposes the document and the summary into structured meaning representations (MRs).
MRs describe core semantic concepts and their relations, aggregating the main content in both document and summary in a canonical form, and reducing data sparsity.
Experiments on different benchmarks for evaluating factuality show that FactGraph outperforms previous approaches by up to 15%.
arXiv Detail & Related papers (2022-04-13T16:45:33Z)
- An analysis of document graph construction methods for AMR summarization [2.055054374525828]
We present a novel dataset consisting of human-annotated alignments between the nodes of paired documents and summaries.
We apply these two forms of evaluation to prior work as well as a new method for node merging and show that our new method has significantly better performance than prior work.
arXiv Detail & Related papers (2021-11-27T22:12:50Z)
- SgSum: Transforming Multi-document Summarization into Sub-graph Selection [27.40759123902261]
Most existing extractive multi-document summarization (MDS) methods score each sentence individually and extract salient sentences one by one to compose a summary.
We propose a novel MDS framework (SgSum) to formulate the MDS task as a sub-graph selection problem.
Our model can produce significantly more coherent and informative summaries compared with traditional MDS methods.
arXiv Detail & Related papers (2021-10-25T05:12:10Z)
- BASS: Boosting Abstractive Summarization with Unified Semantic Graph [49.48925904426591]
BASS is a framework for Boosting Abstractive Summarization based on a unified Semantic graph.
A graph-based encoder-decoder model is proposed to improve both the document representation and summary generation process.
Empirical results show that the proposed architecture brings substantial improvements for both long-document and multi-document summarization tasks.
arXiv Detail & Related papers (2021-05-25T16:20:48Z)
- Analysis of GraphSum's Attention Weights to Improve the Explainability of Multi-Document Summarization [2.626095252463179]
Modern multi-document summarization (MDS) methods are based on transformer architectures.
They generate state-of-the-art summaries but lack explainability.
We aim to improve the explainability of graph-based MDS methods by analyzing their attention weights.
arXiv Detail & Related papers (2021-05-19T08:18:59Z)
- Leveraging Graph to Improve Abstractive Multi-Document Summarization [50.62418656177642]
We develop a neural abstractive multi-document summarization (MDS) model which can leverage well-known graph representations of documents.
Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.
Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.
arXiv Detail & Related papers (2020-05-20T13:39:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.