Investigating the Effect of Relative Positional Embeddings on
AMR-to-Text Generation with Structural Adapters
- URL: http://arxiv.org/abs/2302.05900v1
- Date: Sun, 12 Feb 2023 12:43:36 GMT
- Title: Investigating the Effect of Relative Positional Embeddings on
AMR-to-Text Generation with Structural Adapters
- Authors: Sebastien Montella, Alexis Nasr, Johannes Heinecke, Frederic Bechet,
Lina M. Rojas-Barahona
- Abstract summary: We investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text generation.
Through ablation studies, graph attack and link prediction, we reveal that RPE might be partially encoding input graphs.
We suggest further research regarding the role of RPE will provide valuable insights for Graph-to-Text generation.
- Score: 5.468547489755107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text generation from Abstract Meaning Representation (AMR) has substantially
benefited from the popularity of Pretrained Language Models (PLMs). Myriad
approaches have linearized the input graph as a sequence of tokens to fit the
PLM tokenization requirements. Nevertheless, this transformation jeopardizes
the structural integrity of the graph and is therefore detrimental to its
resulting representation. To overcome this issue, Ribeiro et al. have recently
proposed StructAdapt, a structure-aware adapter which injects the input graph
connectivity within PLMs using Graph Neural Networks (GNNs). In this paper, we
investigate the influence of Relative Position Embeddings (RPE) on AMR-to-Text,
and, in parallel, we examine the robustness of StructAdapt. Through ablation
studies, graph attack and link prediction, we reveal that RPE might be
partially encoding input graphs. We suggest further research regarding the role
of RPE will provide valuable insights for Graph-to-Text generation.
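The abstract contrasts two channels through which a PLM can see graph structure: implicitly, via the Relative Position Embeddings applied to the linearized graph, and explicitly, via a structure-aware adapter such as StructAdapt. As a rough, hypothetical illustration only (plain PyTorch, a scalar learned bias per clipped relative distance in the spirit of T5/Shaw-style relative positions, not necessarily the exact RPE variant or code used in the paper), relative positions enter self-attention roughly like this:

```python
# Minimal sketch (not the paper's code): a learned scalar bias per clipped
# relative distance is added to the self-attention scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelPosSelfAttention(nn.Module):
    """Single-head self-attention with a learned bias per relative distance."""
    def __init__(self, d_model=64, max_rel_dist=8):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5
        self.max_rel_dist = max_rel_dist
        # one learned scalar per clipped relative distance in [-max, +max]
        self.rel_bias = nn.Embedding(2 * max_rel_dist + 1, 1)

    def forward(self, x):                      # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = torch.matmul(q, k.transpose(-1, -2)) * self.scale
        pos = torch.arange(x.size(1), device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(
            -self.max_rel_dist, self.max_rel_dist) + self.max_rel_dist
        scores = scores + self.rel_bias(rel).squeeze(-1)   # (seq, seq) bias
        return torch.matmul(F.softmax(scores, dim=-1), v)

x = torch.randn(2, 10, 64)                     # 2 linearized graphs, 10 tokens
print(RelPosSelfAttention()(x).shape)          # torch.Size([2, 10, 64])
```

A structure-aware adapter instead injects connectivity by running a small graph operation over the node states inside a bottleneck adapter. The sketch below is a simplified reading of that idea, not the released StructAdapt implementation (the original's GNN type, adjacency handling and placement differ):

```python
# Illustrative sketch (not the StructAdapt release): a residual bottleneck
# adapter whose inner step is a mean-aggregating graph convolution, so the
# AMR connectivity reaches the PLM through the adapter rather than via RPE.
import torch
import torch.nn as nn

class StructuralAdapterSketch(nn.Module):
    """Bottleneck adapter whose inner step is a mean-aggregating graph conv."""
    def __init__(self, d_model=64, d_bottleneck=16):
        super().__init__()
        self.down = nn.Linear(d_model, d_bottleneck)
        self.up = nn.Linear(d_bottleneck, d_model)
        self.act = nn.ReLU()

    def forward(self, h, adj):
        # h:   (batch, nodes, d_model)  hidden states of the graph tokens
        # adj: (batch, nodes, nodes)    0/1 adjacency with self-loops
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        msg = torch.matmul(adj / deg, self.down(h))   # average over neighbours
        return h + self.up(self.act(msg))             # residual, as in adapters

h = torch.randn(2, 5, 64)
adj = torch.eye(5).expand(2, 5, 5)                    # toy graph: self-loops only
print(StructuralAdapterSketch()(h, adj).shape)        # torch.Size([2, 5, 64])
```

The paper's ablations, graph attacks and link-prediction probes essentially ask how much structural signal each of these two channels carries.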
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Graph-level Protein Representation Learning by Structure Knowledge Refinement [50.775264276189695]
This paper focuses on learning representation on the whole graph level in an unsupervised manner.
We propose a novel framework called Structure Knowledge Refinement (SKR) which uses data structure to determine the probability of whether a pair is positive or negative.
arXiv Detail & Related papers (2024-01-05T09:05:33Z)
- Incorporating Graph Information in Transformer-based AMR Parsing [34.461828101932184]
LeakDistill is a model and method that explores a modification to the Transformer architecture.
We show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing.
arXiv Detail & Related papers (2023-06-23T12:12:08Z)
- An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
- Graph Pre-training for AMR Parsing and Generation [14.228434699363495]
We investigate graph self-supervised training to improve structure awareness of PLMs over AMR graphs.
We introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.
arXiv Detail & Related papers (2022-03-15T12:47:00Z)
- Structural Adapters in Pretrained Language Models for AMR-to-text Generation [59.50420985074769]
Previous work on text generation from graph-structured data relies on pretrained language models (PLMs).
We propose StructAdapt, an adapter method to encode graph structure into PLMs.
arXiv Detail & Related papers (2021-03-16T15:06:50Z)
- Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
Lightweight Dynamic Graph Convolutional Networks (LDGCNs) are proposed.
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies based on group graph convolutions and weight-tied convolutions to reduce memory usage and model complexity (a rough sketch of the weight-tying idea follows this entry).
arXiv Detail & Related papers (2020-10-09T06:03:46Z)
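For the parameter-saving idea mentioned in the LDGCN entry above, the sketch below is a rough, hypothetical illustration of weight tying across graph-convolution hops: several hops reuse one shared linear layer, so a wider receptive field over the input graph costs no extra parameters. It is not the LDGCN code, and the actual layers also involve group graph convolutions and other mechanisms not shown here.

```python
# Rough sketch of weight tying across graph-convolution hops (illustrative
# only; the published LDGCN layers are more elaborate than this).
import torch
import torch.nn as nn

class TiedGraphConv(nn.Module):
    """Stacks several graph-conv hops that reuse one shared linear layer."""
    def __init__(self, d_model=64, hops=3):
        super().__init__()
        self.lin = nn.Linear(d_model, d_model)    # shared by every hop
        self.hops = hops

    def forward(self, h, adj):
        # h: (batch, nodes, d_model); adj: (batch, nodes, nodes) with self-loops
        norm_adj = adj / adj.sum(-1, keepdim=True).clamp(min=1.0)
        for _ in range(self.hops):                  # wider receptive field,
            h = torch.relu(norm_adj @ self.lin(h))  # zero extra parameters
        return h

h = torch.randn(2, 5, 64)
adj = torch.eye(5).expand(2, 5, 5)                  # toy graph: self-loops only
print(TiedGraphConv()(h, adj).shape)                # torch.Size([2, 5, 64])
```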