ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs
- URL: http://arxiv.org/abs/2204.08875v2
- Date: Wed, 20 Apr 2022 14:05:40 GMT
- Title: ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs
- Authors: Liang Chen, Peiyi Wang, Runxin Xu, Tianyu Liu, Zhifang Sui, Baobao
Chang
- Abstract summary: Auxiliary tasks that are semantically or formally related to AMR can better enhance AMR parsing.
From an empirical perspective, we propose a principled method for involving auxiliary tasks to boost AMR parsing.
- Score: 34.55175412186001
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As Abstract Meaning Representation (AMR) implicitly involves compound
semantic annotations, we hypothesize that auxiliary tasks which are semantically
or formally related can better enhance AMR parsing. We find that 1) semantic role
labeling (SRL) and dependency parsing (DP) bring more performance gain than other
tasks, e.g., MT and summarization, in the text-to-AMR transition, even with much
less data. 2) To make a better fit for AMR, data from auxiliary tasks should be
properly "AMRized" into PseudoAMR before training; knowledge from shallow-level
parsing tasks transfers better to AMR parsing via structure transformation.
3) Intermediate-task learning is a better paradigm for introducing auxiliary
tasks to AMR parsing than multitask learning. From an empirical perspective, we
propose a principled method for involving auxiliary tasks to boost AMR parsing.
Extensive experiments show that our method achieves new state-of-the-art
performance on different benchmarks, especially in topology-related scores.
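The abstract only sketches the "AMRization" step at a high level. As a rough, hypothetical illustration of what turning a shallow parse into a PseudoAMR target might look like, the Python sketch below linearizes a toy dependency tree into a PENMAN-style string that could serve as a seq2seq target during intermediate-task training. The DepNode class, the relation labels, and the recursion scheme are illustrative assumptions, not the authors' actual transformation rules.

```python
# Illustrative sketch only: a toy "AMRization" of a dependency parse into a
# PENMAN-style PseudoAMR string, in the spirit of the structure transform the
# abstract describes. The exact rules used by ATP are not reproduced here.

from dataclasses import dataclass, field
from typing import List


@dataclass
class DepNode:
    """A node of a dependency tree: a token plus its outgoing arcs."""
    token: str
    relation: str = "root"                       # label of the arc from the parent
    children: List["DepNode"] = field(default_factory=list)


def dep_to_pseudo_amr(node: DepNode) -> str:
    """Linearize a dependency subtree as a PENMAN-like PseudoAMR string.

    Each token becomes a concept and each dependency label becomes an edge
    (prefixed with ':'), so the output superficially resembles an AMR graph
    and can be used as a seq2seq target.
    """
    if not node.children:
        return f"( {node.token} )"
    inner = " ".join(
        f":{child.relation} {dep_to_pseudo_amr(child)}" for child in node.children
    )
    return f"( {node.token} {inner} )"


if __name__ == "__main__":
    # "The boy wants to go" -- a hand-built toy parse, for illustration only.
    tree = DepNode(
        "wants",
        children=[
            DepNode("boy", "nsubj", [DepNode("the", "det")]),
            DepNode("go", "xcomp", [DepNode("to", "mark")]),
        ],
    )
    print(dep_to_pseudo_amr(tree))
    # ( wants :nsubj ( boy :det ( the ) ) :xcomp ( go :mark ( to ) ) )
```

In the setup the abstract describes, such PseudoAMR pairs would be used for an intermediate training stage of the seq2seq parser before fine-tuning on gold AMR annotations.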
Related papers
- Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find that it is difficult to predict for which input examples AMR will help or hurt, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z) - An AMR-based Link Prediction Approach for Document-level Event Argument
Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE)
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - Retrofitting Multilingual Sentence Embeddings with Abstract Meaning
Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR)
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to better state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z) - Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing [10.495114898741205]
We develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures.
In this paper, we revisit a simple two-step baseline and enhance it with a strong NMT system and a strong AMR parser.
Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages.
arXiv Detail & Related papers (2021-06-08T17:52:48Z) - Learning to Relate Depth and Semantics for Unsupervised Domain
Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z) - Pushing the Limits of AMR Parsing with Self-Learning [24.998016423211375]
We show how trained models can be applied to improve AMR parsing performance.
We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results.
arXiv Detail & Related papers (2020-10-20T23:45:04Z) - Improving AMR Parsing with Sequence-to-Sequence Pre-training [39.33133978535497]
In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing.
We propose a seq2seq pre-training approach to build pre-trained models in both single and joint ways.
Experiments show that both the single and joint pre-trained models significantly improve the performance.
arXiv Detail & Related papers (2020-10-05T04:32:47Z) - Meta Reinforcement Learning with Autonomous Inference of Subtask
Dependencies [57.27944046925876]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph.
Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference.
Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter.
arXiv Detail & Related papers (2020-01-01T17:34:00Z)