Incorporating Graph Information in Transformer-based AMR Parsing
- URL: http://arxiv.org/abs/2306.13467v1
- Date: Fri, 23 Jun 2023 12:12:08 GMT
- Title: Incorporating Graph Information in Transformer-based AMR Parsing
- Authors: Pavlo Vasylenko, Pere-Lluís Huguet Cabot, Abelardo Carlos Martínez Lorenzo, Roberto Navigli
- Abstract summary: LeakDistill is a model and method that explores a modification to the Transformer architecture.
We show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing.
- Score: 34.461828101932184
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that
aims at providing a semantic graph abstraction representing a given text.
Current approaches are based on autoregressive language models such as BART or
T5, fine-tuned through Teacher Forcing to obtain a linearized version of the
AMR graph from a sentence. In this paper, we present LeakDistill, a model and
method that explores a modification to the Transformer architecture, using
structural adapters to explicitly incorporate graph information into the
learned representations and improve AMR parsing performance. Our experiments
show how, by employing word-to-node alignment to embed graph structural
information into the encoder at training time, we can obtain state-of-the-art
AMR parsing through self-knowledge distillation, even without the use of
additional data. We release the code at
http://www.github.com/sapienzanlp/LeakDistill.
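To make the mechanism concrete, below is a minimal sketch (not the authors' released code) of a structural adapter: node states are mixed along graph edges that have been projected onto encoder tokens via word-to-node alignment, and a bottleneck adapter projects the result back into the encoder's hidden space. All names, dimensions, and the choice of a plain graph-convolution step are illustrative assumptions.

```python
import torch
import torch.nn as nn

class StructuralAdapter(nn.Module):
    """Sketch of a structural adapter: a graph-convolutional bottleneck
    added on top of an encoder layer (illustrative only)."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # down-projection
        self.graph = nn.Linear(bottleneck, bottleneck)   # weights of a simple GCN step
        self.up = nn.Linear(bottleneck, hidden_size)     # up-projection
        self.act = nn.ReLU()

    def forward(self, hidden: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # hidden:    (batch, seq_len, hidden_size) encoder states
        # adjacency: (batch, seq_len, seq_len) graph structure projected onto
        #            tokens via word-to-node alignment (row-normalised)
        z = self.act(self.down(hidden))
        z = self.act(adjacency @ self.graph(z))          # propagate along graph edges
        return hidden + self.up(z)                       # residual connection

# Usage sketch: at training time the adjacency comes from the aligned AMR graph
# (the "leaked" structure); at inference it can fall back to the identity, with
# the gap closed through self-knowledge distillation.
adapter = StructuralAdapter(hidden_size=768)
states = torch.randn(2, 10, 768)
adj = torch.eye(10).expand(2, -1, -1)
out = adapter(states, adj)  # (2, 10, 768)
```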
Related papers
- AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z)
- An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
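As a rough illustration of the link-prediction formulation (not the paper's actual model), a bilinear scorer can rate whether an edge should hold between an event-trigger node and each candidate argument node of an AMR-derived graph; the names and dimensions below are assumptions.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Illustrative bilinear link-prediction head: scores trigger-argument
    node pairs on an AMR-derived graph (dimensions are assumptions)."""

    def __init__(self, node_dim: int = 256):
        super().__init__()
        self.bilinear = nn.Bilinear(node_dim, node_dim, 1)

    def forward(self, trigger: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
        # trigger:    (node_dim,) embedding of the event-trigger node
        # candidates: (num_nodes, node_dim) embeddings of candidate argument nodes
        trig = trigger.expand_as(candidates)
        return self.bilinear(trig, candidates).squeeze(-1)  # one logit per candidate edge

scorer = EdgeScorer()
logits = scorer(torch.randn(256), torch.randn(12, 256))  # (12,)
```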
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
- Investigating the Effect of Relative Positional Embeddings on AMR-to-Text Generation with Structural Adapters [5.468547489755107]
We investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text generation.
Through ablation studies, graph attack and link prediction, we reveal that RPE might be partially encoding input graphs.
We suggest that further research on the role of RPE will provide valuable insights for Graph-to-Text generation.
arXiv Detail & Related papers (2023-02-12T12:43:36Z)
- What does Transformer learn about source code? [26.674180481543264]
Transformer-based representation models have achieved state-of-the-art (SOTA) performance on many tasks.
We propose the aggregated attention score, a method to investigate the structural information learned by the transformer.
We also put forward the aggregated attention graph, a new way to extract program graphs from the pre-trained models automatically.
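A minimal sketch of the general idea (not the paper's exact procedure): average the attention maps over layers and heads, then keep token pairs whose aggregated weight exceeds a threshold as edges of an extracted graph. The simple averaging and the threshold value are assumptions.

```python
import torch

def aggregated_attention_graph(attentions: list[torch.Tensor], threshold: float = 0.1):
    """attentions: list over layers of (num_heads, seq_len, seq_len) attention
    maps from a pre-trained Transformer. Returns edges of a thresholded graph."""
    # Average over layers and heads to get one (seq_len, seq_len) score matrix.
    stacked = torch.stack([layer.mean(dim=0) for layer in attentions])
    scores = stacked.mean(dim=0)
    # Keep token pairs whose aggregated attention exceeds the threshold.
    src, dst = torch.nonzero(scores > threshold, as_tuple=True)
    return list(zip(src.tolist(), dst.tolist()))

# Example with random stand-in "attention" maps: 4 layers, 8 heads, 6 tokens.
fake = [torch.softmax(torch.randn(8, 6, 6), dim=-1) for _ in range(4)]
edges = aggregated_attention_graph(fake, threshold=0.2)
```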
arXiv Detail & Related papers (2022-07-18T09:33:04Z)
- Relphormer: Relational Graph Transformer for Knowledge Graph Representations [25.40961076988176]
We propose a new variant of Transformer for knowledge graph representations dubbed Relphormer.
We propose a novel structure-enhanced self-attention mechanism to encode the relational information and keep the semantic information within entities and relations.
Experimental results on six datasets show that Relphormer obtains better performance than the baselines.
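As a hedged sketch of what structure-enhanced self-attention can look like in general (not Relphormer's actual formulation), a learned bias derived from the relation type between each pair of nodes can be added to the attention logits before the softmax; the bias construction below is an assumption.

```python
import math
import torch
import torch.nn as nn

class StructureBiasedAttention(nn.Module):
    """Single-head self-attention with an additive structural bias
    (illustrative; not the Relphormer architecture itself)."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learned scalar bias per relation type between node pairs.
        self.rel_bias = nn.Embedding(num_relations, 1)

    def forward(self, x: torch.Tensor, relations: torch.Tensor) -> torch.Tensor:
        # x:         (num_nodes, dim) node/entity representations
        # relations: (num_nodes, num_nodes) integer relation-type ids
        scores = self.q(x) @ self.k(x).T / math.sqrt(x.size(-1))
        scores = scores + self.rel_bias(relations).squeeze(-1)  # structural bias
        return torch.softmax(scores, dim=-1) @ self.v(x)

attn = StructureBiasedAttention(dim=64, num_relations=5)
out = attn(torch.randn(7, 64), torch.randint(0, 5, (7, 7)))  # (7, 64)
```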
arXiv Detail & Related papers (2022-05-22T15:30:18Z)
- Graph Pre-training for AMR Parsing and Generation [14.228434699363495]
We investigate graph self-supervised training to improve structure awareness of PLMs over AMR graphs.
We introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.
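One of the simplest forms such graph auto-encoding can take, sketched here under the assumption of a linearized AMR and a generic seq2seq model, is denoising: mask a fraction of graph tokens and train the model to reconstruct the original linearization. The masking rate and token format are assumptions, not the paper's specific strategies.

```python
import random

def mask_linearized_graph(tokens: list[str], rate: float = 0.15, mask: str = "<mask>"):
    """Corrupt a linearized AMR graph for auto-encoding pre-training:
    a seq2seq model is trained to map the noised sequence back to `tokens`."""
    noised = [mask if random.random() < rate else t for t in tokens]
    return noised, tokens  # (model input, reconstruction target)

graph = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 :ARG0 boy ) )".split()
source, target = mask_linearized_graph(graph, rate=0.3)
```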
arXiv Detail & Related papers (2022-03-15T12:47:00Z)
- Dependency Parsing based Semantic Representation Learning with Graph Neural Network for Enhancing Expressiveness of Text-to-Speech [49.05471750563229]
We propose a semantic representation learning method based on a graph neural network that considers the dependency relations of a sentence.
We show that our proposed method outperforms the baseline using vanilla BERT features on both the LJSpeech and Blizzard Challenge 2013 datasets.
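A minimal sketch of the underlying idea, with an off-the-shelf one-layer graph convolution standing in for the paper's specific network and all dimensions assumed: word features (e.g. BERT embeddings) are propagated along dependency edges to produce syntax-aware representations that can condition a TTS front end.

```python
import torch
import torch.nn as nn

class DependencyGCN(nn.Module):
    """Illustrative one-layer graph convolution over a dependency tree."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, word_feats: torch.Tensor, heads: list[int]) -> torch.Tensor:
        # word_feats: (num_words, dim), e.g. BERT word embeddings
        # heads: dependency head index per word (-1 for the root)
        n = word_feats.size(0)
        adj = torch.eye(n)                        # self-loops
        for child, head in enumerate(heads):
            if head >= 0:                         # undirected edge child <-> head
                adj[child, head] = adj[head, child] = 1.0
        adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalise
        return torch.relu(adj @ self.proj(word_feats))

gcn = DependencyGCN(dim=768)
feats = torch.randn(5, 768)                       # e.g. "the boy wants to go"
enriched = gcn(feats, heads=[1, 2, -1, 4, 2])     # toy dependency heads
```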
arXiv Detail & Related papers (2021-04-14T13:09:51Z)
- Structural Adapters in Pretrained Language Models for AMR-to-text Generation [59.50420985074769]
Previous work on text generation from graph-structured data relies on pretrained language models (PLMs).
We propose StructAdapt, an adapter method to encode graph structure into PLMs.
arXiv Detail & Related papers (2021-03-16T15:06:50Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions that enrich the quality of models' implicit graph encodings through graph-denoising scaffold objectives.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing [75.36126971685034]
We treat alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.
Our method also approaches the performance of a model that relies on the segmentation rules of Lyu and Titov (2018), which were hand-crafted to handle individual AMR constructions.
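To give intuition for treating alignment as a latent variable, here is a minimal sketch (not the paper's model) of one possible differentiable relaxation: instead of a hard word-to-node assignment, a softmax over similarity scores yields soft alignment weights through which gradients flow end-to-end. The dot-product similarity and the temperature are assumptions.

```python
import torch

def soft_alignment(word_states: torch.Tensor, node_states: torch.Tensor,
                   temperature: float = 1.0) -> torch.Tensor:
    """Differentiable word-to-node alignment: each AMR node attends softly
    over the words instead of committing to one hard anchor (illustrative)."""
    # (num_nodes, num_words) similarity scores, relaxed with a softmax.
    scores = node_states @ word_states.T / temperature
    align = torch.softmax(scores, dim=-1)
    # Soft node representations: alignment-weighted sums of word states.
    return align @ word_states

words = torch.randn(6, 128)   # contextual word representations
nodes = torch.randn(4, 128)   # AMR node embeddings
node_reprs = soft_alignment(words, nodes, temperature=0.5)  # (4, 128)
```

Lowering the temperature sharpens the alignment distribution toward a hard assignment while keeping the whole computation differentiable.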
arXiv Detail & Related papers (2020-10-23T21:22:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.