Hierarchical Curriculum Learning for AMR Parsing
- URL: http://arxiv.org/abs/2110.07855v1
- Date: Fri, 15 Oct 2021 04:45:15 GMT
- Title: Hierarchical Curriculum Learning for AMR Parsing
- Authors: Peiyi Wang, Liang Chen, Tianyu Liu, Baobao Chang, Zhifang Sui
- Abstract summary: Flat sentence-to-AMR training impedes the representation learning of concepts and relations in the deeper AMR sub-graph.
We propose a hierarchical curriculum learning (HCL) which consists of structure-level curriculum (SC) and instance-level curriculum (IC).
- Score: 29.356258263403646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstract Meaning Representation (AMR) parsing translates sentences to the
semantic representation with a hierarchical structure, which is recently
empowered by pretrained encoder-decoder models. However, the flat
sentence-to-AMR training paradigm impedes the representation learning of
concepts and relations in the deeper AMR sub-graph. To make the
sequence-to-sequence models better adapt to the inherent AMR structure, we
propose a hierarchical curriculum learning (HCL) which consists of (1)
structure-level curriculum (SC) and (2) instance-level curriculum (IC). SC
switches progressively from shallow to deep AMR sub-graphs while IC transits
from easy to hard AMR instances during training. Extensive experiments show
that BART trained with HCL achieves state-of-the-art performance on the
AMR-2.0 and AMR-3.0 benchmarks and significantly outperforms baselines on
structure-dependent evaluation metrics and hard instances.
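The two curricula described in the abstract can be sketched in a few lines. The depth measure, the three-stage schedule, and the graph encoding below are illustrative assumptions for the sketch, not the paper's implementation:

```python
def subgraph_depth(amr_graph):
    """Depth of an AMR graph encoded as {node: [children]} with root 'root'.

    Hypothetical difficulty proxy: deeper sub-graphs are assumed harder.
    """
    def depth(node, seen):
        if node in seen:                      # guard against re-entrant edges
            return 0
        seen = seen | {node}
        children = amr_graph.get(node, [])
        return 1 + max((depth(c, seen) for c in children), default=0)
    return depth("root", frozenset())

def hierarchical_curriculum(instances, num_stages=3):
    """Yield (stage, batch) pairs, combining both curricula.

    Structure-level curriculum (SC): early stages only expose instances whose
    AMR depth falls under a growing cap. Instance-level curriculum (IC):
    within each stage, instances are ordered from easy (shallow) to hard (deep).
    """
    max_depth = max(subgraph_depth(g) for _, g in instances)
    for stage in range(1, num_stages + 1):
        depth_cap = max(1, round(max_depth * stage / num_stages))  # SC: raise cap
        eligible = [(s, g) for s, g in instances
                    if subgraph_depth(g) <= depth_cap]
        eligible.sort(key=lambda sg: subgraph_depth(sg[1]))        # IC: easy-to-hard
        yield stage, eligible
```

A trainer would iterate over the yielded stages and fine-tune the seq2seq model on each stage's batch in turn, so the model sees shallow sub-graphs before deep ones.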
Related papers
- MSRS: Training Multimodal Speech Recognition Models from Scratch with Sparse Mask Optimization [49.00754561435518]
MSRS achieves competitive results in VSR and AVSR with 21.1% and 0.9% WER on the LRS3 benchmark, while reducing training time by at least 2x.
We explore other sparse approaches and show that only MSRS enables training from scratch by implicitly masking the weights affected by vanishing gradients.
arXiv Detail & Related papers (2024-06-25T15:00:43Z)
- AMR-RE: Abstract Meaning Representations for Retrieval-Based In-Context Learning in Relation Extraction [9.12646853282321]
We propose an AMR-enhanced retrieval-based ICL method for relation extraction.
Our model retrieves in-context examples based on semantic structure similarity between task inputs and training samples.
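Retrieval by semantic structure similarity can be illustrated with a triple-overlap score in the spirit of Smatch. This is a simplified sketch: it assumes graphs are already reduced to (head, relation, tail) triple sets with consistent concept names, and it omits the variable-mapping search that full Smatch performs:

```python
def triple_f1(triples_a, triples_b):
    """F1 over the (head, relation, tail) triples of two AMR graphs."""
    a, b = set(triples_a), set(triples_b)
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    if overlap == 0:
        return 0.0
    precision = overlap / len(a)
    recall = overlap / len(b)
    return 2 * precision * recall / (precision + recall)

def retrieve_examples(query_triples, pool, k=2):
    """Return the k pool sample ids whose AMR is most similar to the query's."""
    scored = sorted(pool,
                    key=lambda item: triple_f1(query_triples, item[1]),
                    reverse=True)
    return [sample_id for sample_id, _ in scored[:k]]
```

The retrieved ids would then index into the training set to assemble the in-context prompt.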
arXiv Detail & Related papers (2024-06-14T22:36:08Z)
- Sequential Visual and Semantic Consistency for Semi-supervised Text Recognition [56.968108142307976]
Scene text recognition (STR) is a challenging task that requires large-scale annotated data for training.
Most existing STR methods resort to synthetic data, which may introduce domain discrepancy and degrade the performance of STR models.
This paper proposes a novel semi-supervised learning method for STR that incorporates word-level consistency regularization from both visual and semantic aspects.
arXiv Detail & Related papers (2024-02-24T13:00:54Z)
- AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z)
- An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE)
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
- Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z)
- A Survey: Neural Networks for AMR-to-Text [2.3924114046608627]
AMR-to-Text is one of the key techniques in the NLP community that aims at generating sentences from the Abstract Meaning Representation (AMR) graphs.
Since AMR was proposed in 2013, the study of AMR-to-Text has become increasingly prevalent as an essential branch of structured-data-to-text generation.
arXiv Detail & Related papers (2022-06-15T07:20:28Z)
- ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs [34.55175412186001]
We find that auxiliary tasks which are semantically or formally related can better enhance AMR parsing.
From an empirical perspective, we propose a principled method to involve auxiliary tasks to boost AMR parsing.
arXiv Detail & Related papers (2022-04-19T13:15:59Z)
- CoPHE: A Count-Preserving Hierarchical Evaluation Metric in Large-Scale Multi-Label Text Classification [70.554573538777]
We argue for hierarchical evaluation of the predictions of neural LMTC models.
We describe a structural issue in the representation of the structured label space in prior art.
We propose a set of metrics for hierarchical evaluation using the depth-based representation.
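A depth-based hierarchical evaluation can be sketched as follows. The hierarchy encoding and the per-depth precision/recall computation below are an illustrative, count-preserving approximation of the idea, not the metric's reference implementation: each label contributes its ancestors (with multiplicity) at every depth, and overlap is computed on counts rather than sets:

```python
from collections import Counter

def ancestor_chain(label, parent, depth_of):
    """Walk from a label to the root, yielding one (depth, node) per level."""
    node = label
    while node is not None:
        yield depth_of[node], node
        node = parent.get(node)

def depth_based_scores(gold, pred, parent, depth_of):
    """Per-depth (precision, recall), preserving label counts at each level."""
    def per_depth(labels):
        counts = {}
        for lab in labels:
            for d, node in ancestor_chain(lab, parent, depth_of):
                counts.setdefault(d, Counter())[node] += 1
        return counts
    g, p = per_depth(gold), per_depth(pred)
    scores = {}
    for d in sorted(set(g) | set(p)):
        # Counter intersection keeps the minimum count per node (count-preserving)
        overlap = sum((g.get(d, Counter()) & p.get(d, Counter())).values())
        n_pred = sum(p.get(d, Counter()).values())
        n_gold = sum(g.get(d, Counter()).values())
        scores[d] = (overlap / n_pred if n_pred else 0.0,
                     overlap / n_gold if n_gold else 0.0)
    return scores
```

With this representation, a prediction that picks the wrong leaf under the right parent still earns credit at shallower depths, which is the behavior a flat metric cannot express.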
arXiv Detail & Related papers (2021-09-10T13:09:12Z)
- Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments [9.74672460306765]
We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences.
We leverage unsupervised learning in combination with graph structure, combining the strengths of previous AMR aligners.
Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy.
arXiv Detail & Related papers (2021-06-10T18:46:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.