Probabilistic, Structure-Aware Algorithms for Improved Variety,
Accuracy, and Coverage of AMR Alignments
- URL: http://arxiv.org/abs/2106.06002v1
- Date: Thu, 10 Jun 2021 18:46:32 GMT
- Title: Probabilistic, Structure-Aware Algorithms for Improved Variety,
Accuracy, and Coverage of AMR Alignments
- Authors: Austin Blodgett and Nathan Schneider
- Abstract summary: We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences.
We leverage unsupervised learning in combination with heuristics, taking the best of both worlds from previous AMR aligners.
Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy.
- Score: 9.74672460306765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present algorithms for aligning components of Abstract Meaning
Representation (AMR) graphs to spans in English sentences. We leverage
unsupervised learning in combination with heuristics, taking the best of both
worlds from previous AMR aligners. Our unsupervised models, however, are more
sensitive to graph substructures, without requiring a separate syntactic parse.
Our approach covers a wider variety of AMR substructures than previously
considered, achieves higher coverage of nodes and edges, and does so with
higher accuracy. We will release our LEAMR datasets and aligner for use in
research on AMR parsing, generation, and evaluation.
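To make the alignment task concrete, below is a minimal, IBM-Model-1-style EM sketch of unsupervised node-to-token alignment. The toy corpus, uniform initialization, and relative-frequency model are illustrative assumptions; LEAMR's actual models are structure-sensitive and combined with heuristics, which this sketch does not capture.

```python
from collections import defaultdict

# Toy parallel data: AMR node concepts paired with sentence tokens.
corpus = [
    (["boy", "run-02"],   ["boy", "runs"]),
    (["boy", "sleep-01"], ["boy", "sleeps"]),
    (["dog", "run-02"],   ["dog", "runs"]),
]

# Initialize P(token | concept) uniformly over the token vocabulary.
vocab = {tok for _, toks in corpus for tok in toks}
prob = defaultdict(lambda: 1.0 / len(vocab))

for _ in range(20):  # EM iterations
    count, total = defaultdict(float), defaultdict(float)
    for concepts, tokens in corpus:
        for tok in tokens:
            norm = sum(prob[(c, tok)] for c in concepts)
            for c in concepts:
                frac = prob[(c, tok)] / norm  # E-step: expected alignments
                count[(c, tok)] += frac
                total[c] += frac
    # M-step: re-estimate P(token | concept) from expected counts.
    prob = defaultdict(
        float, {(c, t): v / total[c] for (c, t), v in count.items()})

# Decode: align every concept to its most probable token.
for concepts, tokens in corpus:
    links = {c: max(tokens, key=lambda t: prob[(c, t)]) for c in concepts}
    print(links)
```

A real aligner must also handle relation (edge) alignments, subgraph-to-span alignments, and reentrancies, which is where the paper's structure-aware modeling comes in.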
Related papers
- Hierarchical Indexing for Retrieval-Augmented Opinion Summarization [60.5923941324953]
We propose a method for unsupervised abstractive opinion summarization that combines the attributability and scalability of extractive approaches with the coherence and fluency of Large Language Models (LLMs).
Our method, HIRO, learns an index structure that maps sentences to a path through a semantically organized discrete hierarchy.
At inference time, we populate the index and use it to identify and retrieve clusters of sentences containing popular opinions from input reviews; a toy version of this flow is sketched below.
arXiv Detail & Related papers (2024-03-01T10:38:07Z)
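As a rough illustration of HIRO's index-and-retrieve flow (not the paper's learned model), the sketch below routes review sentences to paths in a hand-built hierarchy, with keyword matching as a stand-in for the learned routing, populates the index, and retrieves the most populated cluster.

```python
from collections import defaultdict

# Hypothetical hand-built hierarchy; HIRO learns its hierarchy instead.
ROUTES = {
    "battery": ("hardware", "battery"),
    "charge":  ("hardware", "battery"),
    "screen":  ("hardware", "display"),
    "support": ("service", "support"),
}

def route(sentence):
    """Map a sentence to a path through the hierarchy (keyword stand-in
    for the learned routing)."""
    for kw, path in ROUTES.items():
        if kw in sentence.lower():
            return path
    return ("other",)

reviews = [
    "The battery lasts two full days.",
    "Battery life is excellent.",
    "It takes ages to charge.",
    "The screen is too dim outdoors.",
    "Customer support never replied.",
]

# Populate the index: leaf path -> cluster of sentences.
index = defaultdict(list)
for sent in reviews:
    index[route(sent)].append(sent)

# Retrieve the largest cluster as the most "popular" opinion.
path, cluster = max(index.items(), key=lambda kv: len(kv[1]))
print("/".join(path), "->", cluster)
```

An LLM would then verbalize the retrieved cluster into a fluent summary sentence, which is where the coherence and fluency come from.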
- AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and a pointer mechanism.
Experiments show that our model outperforms baseline models on four of five benchmarks in the setting with no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z)
- Leveraging Denoised Abstract Meaning Representation for Grammatical Error Correction [53.55440811942249]
Grammatical Error Correction (GEC) is the task of rewriting erroneous sentences into grammatically correct, semantically consistent, and coherent ones.
We propose AMR-GEC, a seq-to-seq model that incorporates denoised AMR as additional knowledge; one possible input encoding is sketched below.
arXiv Detail & Related papers (2023-07-05T09:06:56Z)
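For the AMR-GEC entry above, here is one common way to feed AMR into a seq2seq model, shown as input construction only. Whether AMR-GEC concatenates a linearized graph like this is an assumption, and the `<AMR>` separator token is made up.

```python
# Toy linearization of AMR triples (not PENMAN notation).
def linearize(triples):
    return " ".join(f"{s} {r} {t}" for s, r, t in triples)

sentence = "She go to school yesterday ."
amr = [("g", ":instance", "go-02"), ("g", ":ARG0", "s"),
       ("s", ":instance", "she"), ("g", ":time", "yesterday")]

# Encoder input: source sentence plus the (denoised) AMR behind a
# separator; the decoder target would be the corrected sentence.
seq2seq_input = sentence + " <AMR> " + linearize(amr)
print(seq2seq_input)
```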
- AMRs Assemble! Learning to Ensemble with Autoregressive Models for AMR Parsing [38.731641198934646]
We show how ensemble models can exploit weaknesses of the SMATCH metric to obtain higher scores but sometimes produce corrupted graphs.
We propose two novel ensemble strategies based on Transformer models, improving robustness to structural constraints, while also reducing computational time.
arXiv Detail & Related papers (2023-06-19T08:58:47Z)
- An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document; the link-prediction framing is sketched below.
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
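A toy rendering of the link-prediction framing from the entry above: nodes of the document AMR graph get vector representations, and argument extraction reduces to scoring candidate trigger-to-span edges. The embeddings, cosine scorer, and threshold are stand-ins for TAG's learned components, and the compression step is not shown.

```python
import math

def score(u, v):
    """Cosine similarity as a stand-in for a learned edge scorer."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical node embeddings for one event trigger and span candidates.
nodes = {
    "attack-01 (trigger)": [0.9, 0.1, 0.3],
    "soldier":             [0.8, 0.2, 0.4],
    "tuesday":             [0.1, 0.9, 0.2],
    "city":                [0.7, 0.1, 0.5],
}

trigger = nodes["attack-01 (trigger)"]
for name, vec in nodes.items():
    if "trigger" in name:
        continue
    s = score(trigger, vec)
    # A trained model would also predict the role label (e.g. :ARG0).
    print(f"link({name}) = {s:.2f}", "-> argument" if s > 0.8 else "")
```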
- Inducing and Using Alignments for Transition-based AMR Parsing [51.35194383275297]
We propose a neural aligner for AMR that learns node-to-word alignments without relying on complex pipelines.
We attain a new state of the art for gold-only trained models, matching silver-trained performance without the need for beam search, on AMR 3.0.
arXiv Detail & Related papers (2022-05-03T12:58:36Z)
- DocAMR: Multi-Sentence AMR Representation and Evaluation [19.229112468305267]
We introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging; a toy version of the merge step is sketched below.
We also present a pipeline approach combining the top-performing AMR and coreference resolution systems, providing a strong baseline for future research.
arXiv Detail & Related papers (2021-12-15T22:38:26Z)
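To illustrate the unification step described above (not DocAMR's exact algorithm), this sketch merges two sentence-level AMRs into one document graph by collapsing coreferent variables from an oracle coreference cluster. Over-merging would collapse non-coreferent nodes; under-merging would leave the duplicates unconnected.

```python
# Per-sentence AMR edges as (source, relation, target) triples.
sent_graphs = [
    [("w", ":instance", "want-01"), ("w", ":ARG0", "b"),
     ("b", ":instance", "boy")],
    [("s", ":instance", "sleep-01"), ("s", ":ARG0", "b2"),
     ("b2", ":instance", "boy")],
]

# Oracle coreference: the two 'boy' variables denote the same entity.
coref_cluster = {"b", "b2"}
canonical = min(coref_cluster)  # pick one representative variable

def unify(var):
    return canonical if var in coref_cluster else var

doc_graph, seen = [], set()
for graph in sent_graphs:
    for src, rel, tgt in graph:
        edge = (unify(src), rel, unify(tgt))
        if edge not in seen:  # drop duplicated :instance edges, etc.
            seen.add(edge)
            doc_graph.append(edge)

for edge in doc_graph:
    print(edge)
```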
- Ensembling Graph Predictions for AMR Parsing [28.625065956013778]
In many machine learning tasks, models are trained to predict structured data such as graphs.
In this work, we formalize this problem as mining the largest graph that is best supported by a collection of graph predictions; a simple edge-voting stand-in is sketched below.
We show that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create predictions that are more accurate than those of any individual model on five standard benchmark datasets.
arXiv Detail & Related papers (2021-10-18T09:35:39Z)
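A simple stand-in for "mining the graph best supported by a collection of predictions": vote over triples and keep those appearing in a majority of predicted graphs. The paper formalizes this as an optimization over whole graphs; the parser outputs below are hypothetical.

```python
from collections import Counter

predictions = [  # triples from three hypothetical parsers
    {("w", ":ARG0", "b"), ("w", ":ARG1", "g"), ("b", ":mod", "little")},
    {("w", ":ARG0", "b"), ("w", ":ARG1", "g")},
    {("w", ":ARG0", "b"), ("b", ":mod", "little"), ("w", ":time", "t")},
]

# Count how many predicted graphs contain each triple.
support = Counter(t for graph in predictions for t in graph)

# Keep triples with majority support across the ensemble.
threshold = len(predictions) / 2
ensemble = {t for t, c in support.items() if c > threshold}
print(sorted(ensemble))
```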
- A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing [75.36126971685034]
We treat alignment and segmentation as latent variables in our model and induce them as part of end-to-end training; the relaxation idea is sketched below.
Our method also approaches the performance of a model that relies on the segmentation rules of Lyu and Titov (2018), which were hand-crafted to handle individual AMR constructions.
arXiv Detail & Related papers (2020-10-23T21:22:50Z)
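The core relaxation idea from the entry above, in one step: replace a hard, non-differentiable argmax alignment with a softmax, so the aligned representation is a smooth function of the scores and gradients can flow end to end. The scores and vectors below are made up; the paper uses learned scores inside a full parser.

```python
import math

def softmax(xs, temp=1.0):
    """Temperature-controlled softmax over a list of scores."""
    exps = [math.exp(x / temp) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

scores = [2.0, 0.5, -1.0]  # hypothetical node-to-word alignment scores
word_vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

weights = softmax(scores)  # relaxed (soft) alignment
expected = [sum(w * v[d] for w, v in zip(weights, word_vecs))
            for d in range(2)]
print(weights, expected)   # a hard argmax would pick word 0 only
```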
- Improving AMR Parsing with Sequence-to-Sequence Pre-training [39.33133978535497]
In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing.
We propose a seq2seq pre-training approach to build pre-trained models in both single and joint settings.
Experiments show that both the single and joint pre-trained models significantly improve the performance.
arXiv Detail & Related papers (2020-10-05T04:32:47Z)
- GPT-too: A language-model-first approach for AMR-to-text generation [22.65728041544785]
We propose an approach that combines a strong pre-trained language model with cycle-consistency-based re-scoring; the re-scoring step is sketched below.
Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques.
arXiv Detail & Related papers (2020-05-18T22:50:26Z)
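Cycle-consistency re-scoring in miniature: each candidate generation is re-parsed back to AMR, and candidates are ranked by how well the round-trip graph matches the input graph. The generator and parser are stubbed here as fixed dictionaries; GPT-too uses a fine-tuned language model and a real AMR parser, and the triple-overlap F1 below is a crude stand-in for Smatch.

```python
def triple_f1(gold, pred):
    """Smatch-like F1 over graph triples (exact-match toy version)."""
    if not pred:
        return 0.0
    overlap = len(gold & pred)
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

input_amr = {("w", ":instance", "want-01"), ("w", ":ARG0", "b"),
             ("b", ":instance", "boy")}

# Hypothetical candidates from a language model, each paired with the
# graph a parser would assign back to it (stubbed).
candidates = {
    "The boy wants to.": {("w", ":instance", "want-01"),
                          ("w", ":ARG0", "b"), ("b", ":instance", "boy")},
    "A boy is wanted.":  {("w", ":instance", "want-01"),
                          ("w", ":ARG1", "b"), ("b", ":instance", "boy")},
}

# Pick the candidate whose round-trip graph best matches the input.
best = max(candidates, key=lambda c: triple_f1(input_amr, candidates[c]))
print(best)
```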