Strictly Breadth-First AMR Parsing
- URL: http://arxiv.org/abs/2211.03922v1
- Date: Tue, 8 Nov 2022 00:42:27 GMT
- Title: Strictly Breadth-First AMR Parsing
- Authors: Chen Yu, Daniel Gildea
- Abstract summary: We focus on the breadth-first strategy of AMR parsing, which was proposed recently and achieved better performance than other strategies.
We propose a new architecture that guarantees that the parsing will strictly follow the breadth-first order.
With the help of this new architecture and some other improvements in the sentence and graph encoders, our model obtains better performance on both the AMR 1.0 and 2.0 datasets.
- Score: 14.465679158068498
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AMR parsing is the task that maps a sentence to an AMR semantic graph
automatically. We focus on the breadth-first strategy of this task, which was
proposed recently and achieved better performance than other strategies.
However, current models under this strategy only encourage the model to
produce the AMR graph in breadth-first order but cannot guarantee this.
To solve this problem, we propose a new architecture that guarantees that
the parsing strictly follows the breadth-first order. In each parsing step,
we introduce a focused parent vertex and use this vertex to guide the
generation. With the help of this new architecture and some other
improvements in the sentence and graph encoders, our model obtains better
performance on both the AMR 1.0 and 2.0 datasets.
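As a rough illustration of the idea (not the authors' model), the sketch below generates a graph strictly in breadth-first order by keeping already-generated vertices in a FIFO queue and treating the queue front as the focused parent. The `decode_children` callable is a hypothetical stand-in for the paper's neural decoder.

```python
from collections import deque

def generate_bfs(decode_children):
    """Generate a rooted graph strictly in breadth-first order.

    A FIFO queue of already-generated vertices enforces the order:
    the vertex at the front of the queue is the "focused parent",
    and all of its children are emitted before the next parent is
    considered. `decode_children` stands in for the neural decoder
    (returning [] plays the role of the end-of-children decision).
    """
    root = "root"
    graph = {root: []}            # adjacency: parent -> list of children
    queue = deque([(root, 0)])    # (vertex, depth), FIFO

    while queue:
        parent, depth = queue.popleft()   # the focused parent vertex
        for child in decode_children(parent, depth):
            graph.setdefault(child, [])
            graph[parent].append(child)
            queue.append((child, depth + 1))
    return graph

# Toy decoder: every vertex above depth 2 gets two children.
def toy_decoder(parent, depth):
    return [f"{parent}.{i}" for i in range(2)] if depth < 2 else []

print(generate_bfs(toy_decoder))
```

Because the queue is strictly first-in, first-out, no vertex at depth d+1 can be expanded before every vertex at depth d has been, which is exactly the guarantee the architecture aims for.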
Related papers
- NAMER: Non-Autoregressive Modeling for Handwritten Mathematical Expression Recognition [80.22784377150465]
Handwritten Mathematical Expression Recognition (HMER) has gained considerable attention in pattern recognition for its diverse applications in document understanding.
This paper makes the first attempt to build a novel bottom-up Non-AutoRegressive Modeling approach for HMER, called NAMER.
NAMER comprises a Visual Aware Tokenizer (VAT) and a Parallel Graph Decoder (PGD).
arXiv Detail & Related papers (2024-07-16T04:52:39Z) - AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z) - Guiding AMR Parsing with Reverse Graph Linearization [45.37129580211495]
We propose a novel Reverse Graph Linearization (RGL) framework for AMR parsing.
RGL defines both default and reverse linearization orders of an AMR graph, where most structures at the back part of the default order appear at the front part of the reversed order and vice versa.
Our analysis shows that our proposed method significantly mitigates the problem of structure loss accumulation, outperforming the previous best AMR parsing model by 0.8 and 0.5 Smatch points on the AMR 2.0 and AMR 3.0 datasets, respectively.
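As a simplified illustration of the two orders (not RGL's exact definition), the sketch below linearizes a toy rooted graph depth-first; visiting children in the opposite order moves structures from the back of the default sequence toward the front. The `linearize` helper and the toy graph are illustrative assumptions.

```python
def linearize(graph, node, reverse=False):
    """Linearize a rooted graph into a bracketed token sequence.

    With reverse=True, children are visited in the opposite order, so
    material from the back of the default sequence appears near the
    front (a simplified stand-in for RGL's reversed linearization).
    """
    children = graph.get(node, [])
    order = reversed(children) if reverse else children
    tokens = [node]
    for child in order:
        tokens += ["("] + linearize(graph, child, reverse) + [")"]
    return tokens

# Toy graph for "the boy wants to go".
g = {"want-01": ["boy", "go-02"], "go-02": ["boy"]}
print(" ".join(linearize(g, "want-01")))                # default order
print(" ".join(linearize(g, "want-01", reverse=True)))  # reversed order
```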
arXiv Detail & Related papers (2023-10-13T05:03:13Z) - AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR
Parsing [38.731641198934646]
We show how ensemble models can exploit weaknesses of the SMATCH metric to obtain higher scores, but can sometimes produce corrupted graphs.
We propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing computational time.
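For context, one simple selection-style ensemble (shown below as a hedged sketch, not necessarily either of the paper's two strategies) picks the candidate graph that agrees most with the other parsers' outputs. Here `triple_f1` is a toy stand-in for SMATCH over (head, relation, dependent) triples, and both function names are illustrative assumptions.

```python
def triple_f1(pred, gold):
    """Toy stand-in for SMATCH: F1 over exact triple overlap."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    p = len(pred & gold) / len(pred)
    r = len(pred & gold) / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def select_by_consensus(candidates, score=triple_f1):
    """Pick the candidate graph most similar, on average, to the others."""
    def avg(i):
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(score(candidates[i], o) for o in others) / len(others)
    return max(range(len(candidates)), key=avg)

# Candidate graphs from three parsers, as (head, relation, dependent) triples.
cands = [
    [("w", ":arg0", "b"), ("w", ":arg1", "g")],
    [("w", ":arg0", "b"), ("w", ":arg1", "g"), ("g", ":arg0", "b")],
    [("w", ":arg0", "b")],
]
print(select_by_consensus(cands))  # index of the consensus candidate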
arXiv Detail & Related papers (2023-06-19T08:58:47Z) - An AMR-based Link Prediction Approach for Document-level Event Argument
Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - Graph Pre-training for AMR Parsing and Generation [14.228434699363495]
We investigate graph self-supervised training to improve structure awareness of PLMs over AMR graphs.
We introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.
arXiv Detail & Related papers (2022-03-15T12:47:00Z) - Hierarchical Memory Learning for Fine-Grained Scene Graph Generation [49.39355372599507]
This paper proposes a novel Hierarchical Memory Learning (HML) framework that trains the model from simple to complex.
After the autonomous partition of coarse and fine predicates, the model is first trained on the coarse predicates and then learns the fine predicates.
arXiv Detail & Related papers (2022-03-14T08:01:14Z) - Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text
Generation [56.73834525802723]
We propose Lightweight Dynamic Graph Convolutional Networks (LDGCNs).
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity.
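The sketch below illustrates the two parameter-saving ideas in isolation, under simplifying assumptions (dense adjacency, plain ReLU, no dynamic fusion): one weight matrix reused across layers, and channel groups that each aggregate over a different power of the adjacency matrix to pull in higher-order neighbourhood information. It is a toy rendition, not the LDGCN layer itself; all function names are illustrative.

```python
import numpy as np

def weight_tied_gcn(A, X, W, layers=3):
    """Reuse a single weight matrix W across all layers (weight tying),
    cutting the parameter count by a factor of `layers`."""
    H = X
    for _ in range(layers):
        H = np.maximum(A @ H @ W, 0.0)  # ReLU(A H W)
    return H

def group_gcn(A, X, Ws):
    """Group graph convolution: channels are split into groups, and
    group k aggregates over A^(k+1), mixing in higher-order
    neighbourhood information with smaller per-group weights."""
    groups = np.split(X, len(Ws), axis=1)
    outs = []
    for k, (G, W) in enumerate(zip(groups, Ws)):
        Ak = np.linalg.matrix_power(A, k + 1)
        outs.append(np.maximum(Ak @ G @ W, 0.0))
    return np.concatenate(outs, axis=1)

rng = np.random.default_rng(0)
n, d = 4, 8
A = np.eye(n) + (rng.random((n, n)) > 0.7)   # toy adjacency with self-loops
X = rng.standard_normal((n, d))
print(weight_tied_gcn(A, X, rng.standard_normal((d, d))).shape)   # (4, 8)
print(group_gcn(A, X, [rng.standard_normal((d // 2, d // 2))
                       for _ in range(2)]).shape)                 # (4, 8)
```

With two groups, each group's weight matrix is (d/2) x (d/2), so the grouped layer uses half the parameters of a full d x d convolution.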
arXiv Detail & Related papers (2020-10-09T06:03:46Z) - Improving AMR Parsing with Sequence-to-Sequence Pre-training [39.33133978535497]
In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing.
We propose a seq2seq pre-training approach to build pre-trained models in both single and joint settings.
Experiments show that both the single and joint pre-trained models significantly improve performance.
arXiv Detail & Related papers (2020-10-05T04:32:47Z) - AMR Parsing via Graph-Sequence Iterative Inference [62.85003739964878]
We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.
These decisions answer two questions: which part of the input sequence to abstract, and where in the output graph to construct the new concept. We show that the answers to the two questions are mutually causal.
We design a model based on iterative inference that helps achieve better answers in both perspectives, leading to greatly improved parsing accuracy.
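As a toy rendition of this alternation (with random vectors standing in for learned representations, and a single attention readout in place of the model's multi-round reasoning blocks), the sketch below lets the two decisions refine a shared state in turn. The `iterative_inference` function and its inputs are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def iterative_inference(seq, graph, state, rounds=3):
    """Alternate between the two dual decisions: (1) which input token
    to abstract next (attention over the sequence) and (2) where in
    the graph to attach the new concept (attention over graph nodes).
    Each round folds one answer into the state that conditions the
    other, so the two decisions inform each other."""
    for _ in range(rounds):
        a_seq = softmax(seq @ state)      # question 1: what to abstract
        state = state + seq.T @ a_seq
        a_graph = softmax(graph @ state)  # question 2: where to attach
        state = state + graph.T @ a_graph
    return a_seq.argmax(), a_graph.argmax()

rng = np.random.default_rng(0)
seq = rng.standard_normal((6, 16))    # token representations
graph = rng.standard_normal((3, 16))  # current graph-node representations
print(iterative_inference(seq, graph, rng.standard_normal(16)))
```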
arXiv Detail & Related papers (2020-04-12T09:15:21Z)