Pushing the Limits of AMR Parsing with Self-Learning
- URL: http://arxiv.org/abs/2010.10673v1
- Date: Tue, 20 Oct 2020 23:45:04 GMT
- Title: Pushing the Limits of AMR Parsing with Self-Learning
- Authors: Young-Suk Lee, Ramon Fernandez Astudillo, Tahira Naseem, Revanth Gangi Reddy, Radu Florian, Salim Roukos
- Abstract summary: We show how trained models can be applied to improve AMR parsing performance.
We show that, without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results.
- Score: 24.998016423211375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Abstract Meaning Representation (AMR) parsing has experienced a notable
growth in performance in the last two years, due both to the impact of transfer
learning and the development of novel architectures specific to AMR. At the
same time, self-learning techniques have helped push the performance boundaries
of other natural language processing applications, such as machine translation
or question answering. In this paper, we explore different ways in which
trained models can be applied to improve AMR parsing performance, including
generation of synthetic text and AMR annotations as well as refinement of
the actions oracle. We show that, without any additional human annotations, these
techniques improve an already performant parser and achieve state-of-the-art
results on AMR 1.0 and AMR 2.0.
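The abstract's self-learning techniques lend themselves to a short illustration. Below is a minimal sketch of the two data-augmentation ideas (synthetic AMR annotations and synthetic text); the `parse` and `generate` callables are hypothetical stand-ins for the trained text-to-AMR and AMR-to-text models, not the authors' actual interfaces, and the oracle-refinement step is omitted.

```python
# Minimal sketch of the self-learning data augmentation described in the
# abstract. `parse` and `generate` are assumed interfaces to a trained
# text-to-AMR parser and an AMR-to-text generator; they are not the
# authors' actual API.
from typing import Callable, List, Tuple

def augment_training_set(
    parse: Callable[[str], str],      # text -> AMR graph (assumed interface)
    generate: Callable[[str], str],   # AMR graph -> text (assumed interface)
    gold: List[Tuple[str, str]],      # human-annotated (sentence, AMR) pairs
    unlabeled: List[str],             # raw sentences with no annotations
) -> List[Tuple[str, str]]:
    """Combine gold pairs with two kinds of synthetic pairs."""
    synthetic: List[Tuple[str, str]] = []

    # Synthetic AMR annotations: self-annotate unlabeled text with the
    # current parser.
    for sentence in unlabeled:
        synthetic.append((sentence, parse(sentence)))

    # Synthetic text: re-generate a sentence from each gold graph and pair
    # it with that graph.
    for _, graph in gold:
        synthetic.append((generate(graph), graph))

    # The parser is then retrained on gold + synthetic (retraining itself
    # is outside this sketch), with no additional human annotation involved.
    return gold + synthetic
```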
Related papers
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning [50.73666458313015]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications.
Mixture-of-Experts (MoE) has emerged as a promising solution with its sparse architecture for effective task decoupling.
Intuition-MoR1E achieves superior efficiency and 2.15% overall accuracy improvement across 14 public datasets.
arXiv Detail & Related papers (2024-04-13T12:14:58Z)
- RA-DIT: Retrieval-Augmented Dual Instruction Tuning [90.98423540361946]
Retrieval-augmented language models (RALMs) improve performance by accessing long-tail and up-to-date knowledge from external data stores.
Existing approaches require either expensive retrieval-specific modifications to LM pre-training or use post-hoc integration of the data store that leads to suboptimal performance.
We introduce Retrieval-Augmented Dual Instruction Tuning (RA-DIT), a lightweight fine-tuning methodology that provides a third option.
arXiv Detail & Related papers (2023-10-02T17:16:26Z)
- Cross-domain Generalization for AMR Parsing [30.34105706152887]
We evaluate five representative AMR parsers on five domains and analyze challenges to cross-domain AMR parsing.
Based on our observation, we investigate two approaches to reduce the domain distribution divergence of text and AMR features.
arXiv Detail & Related papers (2022-10-22T13:24:13Z)
- Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR).
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to new state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z)
- A Survey : Neural Networks for AMR-to-Text [2.3924114046608627]
AMR-to-Text is one of the key techniques in the NLP community that aims at generating sentences from the Abstract Meaning Representation (AMR) graphs.
Since AMR was proposed in 2013, research on AMR-to-Text has become increasingly prevalent as an essential branch of structured-data-to-text generation.
arXiv Detail & Related papers (2022-06-15T07:20:28Z)
- ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs [34.55175412186001]
Auxiliary tasks which are semantically or formally related can better enhance AMR parsing.
From an empirical perspective, we propose a principled method to involve auxiliary tasks to boost AMR parsing.
arXiv Detail & Related papers (2022-04-19T13:15:59Z)
- Maximum Bayes Smatch Ensemble Distillation for AMR Parsing [15.344108027018006]
We show that it is possible to overcome the diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation (a minimal sketch of the ensembling step appears after this list).
We attain a new state-of-the-art for cross-lingual AMR parsing for Chinese, German, Italian and Spanish.
arXiv Detail & Related papers (2021-12-14T23:29:37Z)
- A Comparative Study on Non-Autoregressive Modelings for Speech-to-Text Generation [59.64193903397301]
Non-autoregressive (NAR) models generate multiple outputs of a sequence simultaneously, which significantly reduces inference time at the cost of an accuracy drop compared to autoregressive baselines.
We conduct a comparative study of various NAR modeling methods for end-to-end automatic speech recognition (ASR).
The results on various tasks provide interesting findings for developing an understanding of NAR ASR, such as the accuracy-speed trade-off and robustness against long-form utterances.
arXiv Detail & Related papers (2021-10-11T13:05:06Z)
- Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation [55.117031558677674]
We study different techniques for automatically generating AMR annotations.
Our models trained on gold AMR with silver (machine translated) sentences outperform approaches which leverage generated silver AMR.
Our models surpass the previous state of the art for German, Italian, Spanish, and Chinese by a large margin.
arXiv Detail & Related papers (2021-09-08T17:55:46Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
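As referenced in the Maximum Bayes Smatch Ensemble Distillation entry above, the ensembling step can be read as minimum-Bayes-risk selection under Smatch: among candidate parses from several models, keep the one with the highest average Smatch against the rest, then distill a single parser on the selected graphs. A minimal sketch, assuming a hypothetical pairwise `smatch_f1` scorer rather than the paper's code:

```python
# Minimal sketch of Smatch-based, MBR-style candidate selection. The pairwise
# scorer `smatch_f1` is an assumed callable (e.g., a wrapper around a Smatch
# implementation), not the paper's actual code.
from typing import Callable, List

def mbr_select(
    candidates: List[str],                   # candidate AMR graphs, one per parser
    smatch_f1: Callable[[str, str], float],  # assumed pairwise Smatch F1 scorer
) -> str:
    """Return the candidate with the highest average Smatch to the others."""
    def avg_agreement(i: int) -> float:
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(smatch_f1(candidates[i], o) for o in others) / max(len(others), 1)

    best = max(range(len(candidates)), key=avg_agreement)
    return candidates[best]

# The selected graphs can then serve as silver training targets for a single
# distilled parser (the "ensemble distillation" step).
```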
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.