Guiding AMR Parsing with Reverse Graph Linearization
- URL: http://arxiv.org/abs/2310.08860v1
- Date: Fri, 13 Oct 2023 05:03:13 GMT
- Title: Guiding AMR Parsing with Reverse Graph Linearization
- Authors: Bofei Gao, Liang Chen, Peiyi Wang, Zhifang Sui, Baobao Chang
- Abstract summary: We propose a novel Reverse Graph Linearization (RGL) framework for AMR parsing.
RGL defines both default and reverse linearization orders of an AMR graph, where most structures at the back part of the default order appear at the front part of the reversed order and vice versa.
Our analysis shows that our proposed method significantly mitigates the problem of structure loss accumulation, outperforming the previous best AMR parsing model by 0.8 and 0.5 Smatch points on the AMR 2.0 and AMR 3.0 datasets, respectively.
- Score: 45.37129580211495
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Abstract Meaning Representation (AMR) parsing aims to extract an abstract
semantic graph from a given sentence. The sequence-to-sequence approaches,
which linearize the semantic graph into a sequence of nodes and edges and
generate the linearized graph directly, have achieved good performance.
However, we observed that these approaches suffer from structure loss
accumulation during the decoding process, leading to a much lower F1-score for
nodes and edges decoded later compared to those decoded earlier. To address
this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced
framework. RGL defines both default and reverse linearization orders of an AMR
graph, where most structures at the back part of the default order appear at
the front part of the reversed order and vice versa. RGL incorporates the
reversed linearization into the original AMR parser through a two-pass
self-distillation mechanism, which guides the model when it generates the
default linearization. Our analysis shows that the proposed method
significantly mitigates the problem of structure loss accumulation,
outperforming the previous best AMR parsing model by 0.8 and 0.5 Smatch
points on the AMR 2.0 and AMR 3.0 datasets, respectively. The code is
available at https://github.com/pkunlp-icler/AMR_reverse_graph_linearization.
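To make the idea concrete, below is a minimal sketch of default versus reversed linearization on a toy AMR graph. The graph encoding, function names, and the choice of reversing the child-visit order are illustrative assumptions, not the paper's released implementation (see the linked repository for that).

```python
# Illustrative sketch only: a toy AMR for "The boy wants to go",
# stored as {variable: (concept, [(relation, child), ...])}.
AMR = {
    "w": ("want-01", [(":ARG0", "b"), (":ARG1", "g")]),
    "b": ("boy", []),
    "g": ("go-02", [(":ARG0", "b")]),
}

def linearize(graph, root, reverse=False):
    """Depth-first linearization; reverse=True visits children in
    reversed order, so structures emitted late in the default order
    tend to appear early in the reversed one, and vice versa."""
    tokens, seen = [], set()

    def visit(var):
        if var in seen:          # re-entrant node: emit the variable only
            tokens.append(var)
            return
        seen.add(var)
        concept, edges = graph[var]
        tokens.extend(["(", var, "/", concept])
        for rel, child in (reversed(edges) if reverse else edges):
            tokens.append(rel)
            visit(child)
        tokens.append(")")

    visit(root)
    return " ".join(tokens)

print(linearize(AMR, "w"))                # ( w / want-01 :ARG0 ( b / boy ) ... )
print(linearize(AMR, "w", reverse=True))  # ( w / want-01 :ARG1 ( g / go-02 ... ) ... )
```

Under the two-pass self-distillation described in the abstract, the parser's reversed-order pass guides its default-order pass, so structures that would otherwise be decoded late, and suffer accumulated errors, receive guidance from a pass in which they are decoded early.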
Related papers
- Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation [9.395697548237333]
Graph autoencoders (GAEs) rely on distance-based criteria, such as mean squared error (MSE), to reconstruct the input graph.
Relying solely on a single reconstruction criterion, however, may lead to a loss of distinctiveness in the reconstructed graph.
We have developed a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph.
arXiv Detail & Related papers (2024-06-25T12:54:35Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z)
- GLAN: A Graph-based Linear Assignment Network [29.788755291070462]
We propose a learnable linear assignment solver based on deep graph networks.
The experimental results on a synthetic dataset reveal that our method outperforms state-of-the-art baselines.
We also embed the proposed solver into a popular multi-object tracking (MOT) framework to train the tracker in an end-to-end manner.
arXiv Detail & Related papers (2022-01-05T13:18:02Z)
- Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing [20.67024416678313]
We explore the integration of general pre-trained sequence-to-sequence language models and a structure-aware transition-based approach.
We propose a simplified transition set, designed to better exploit pre-trained language models for structured fine-tuning.
We show that the proposed parsing architecture retains the desirable properties of previous transition-based approaches, while being simpler and reaching the new state of the art for AMR 2.0, without the need for graph re-categorization.
arXiv Detail & Related papers (2021-10-29T04:36:31Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer (a generic sketch of this pattern appears after this list).
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z)
- Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on convex-independent deep ADMM (alternating direction method of multipliers).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z)
- A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing [75.36126971685034]
We treat alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.
Our method's performance also approaches that of a model that relies on the segmentation rules of Lyu and Titov (2018), which were hand-crafted to handle individual AMR constructions.
arXiv Detail & Related papers (2020-10-23T21:22:50Z)
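As a companion to the two algorithm-unrolling entries above (GDPA linearization and nested deep algorithm unrolling), here is a generic sketch of the pattern: each iteration of a model-based solver becomes one neural layer with trainable parameters. The quadratic objective, class name, and per-layer step sizes are illustrative assumptions; neither paper's actual unrolled algorithm is reproduced.

```python
# Generic algorithm-unrolling sketch (assumes PyTorch is available).
import torch
import torch.nn as nn

class UnrolledGD(nn.Module):
    """K unrolled gradient-descent steps on f(x) = 0.5 * ||Ax - b||^2,
    with one learnable step size per layer (i.e., per iteration)."""

    def __init__(self, num_layers: int = 5):
        super().__init__()
        self.steps = nn.Parameter(0.1 * torch.ones(num_layers))

    def forward(self, A, b):
        x = torch.zeros(A.shape[1])
        for alpha in self.steps:         # one "layer" per solver iteration
            grad = A.T @ (A @ x - b)     # gradient of the quadratic objective
            x = x - alpha * grad
        return x

A, b = torch.randn(8, 4), torch.randn(8)
x_hat = UnrolledGD()(A, b)  # step sizes are trainable end-to-end via autograd
```

Because every layer mirrors one solver iteration, the resulting network stays interpretable and parsimonious, which is the property both unrolling entries above emphasize.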