A Survey: Neural Networks for AMR-to-Text
- URL: http://arxiv.org/abs/2206.07328v2
- Date: Sat, 11 May 2024 07:49:20 GMT
- Title: A Survey: Neural Networks for AMR-to-Text
- Authors: Hongyu Hao, Guangtong Li, Zhiming Hu, Huafeng Wang
- Abstract summary: AMR-to-Text is a key technique in the NLP community that aims at generating sentences from Abstract Meaning Representation (AMR) graphs.
Since AMR was proposed in 2013, the study of AMR-to-Text has become increasingly prevalent as an essential branch of structured-data-to-text generation.
- Score: 2.3924114046608627
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: AMR-to-Text is one of the key techniques in the NLP community, aiming at generating sentences from Abstract Meaning Representation (AMR) graphs. Since AMR was proposed in 2013, the study of AMR-to-Text has become increasingly prevalent as an essential branch of structured-data-to-text generation, owing to the unique advantages of AMR as a high-level semantic description of natural language. In this paper, we provide a brief survey of AMR-to-Text. First, we introduce the current state of this technique and point out its difficulties. Second, we roughly divide the methods used in previous studies into five categories according to their respective mechanisms: Rule-based, Seq-to-Seq-based, Graph-to-Seq-based, Transformer-based, and Pre-trained Language Model (PLM)-based. In particular, we detail the neural network-based methods and present the latest progress of AMR-to-Text, including AMR reconstruction and decoder optimization. Furthermore, we present the benchmarks and evaluation methods of AMR-to-Text. Finally, we provide a summary of current techniques and an outlook for future research.
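To make the Seq-to-Seq-based category concrete, the sketch below illustrates the preprocessing step those models share: depth-first linearization of an AMR graph into a PENMAN-style token sequence that an encoder-decoder can consume. The nested-tuple graph encoding and the example sentence are illustrative assumptions, not details from the survey.

```python
# Minimal sketch: depth-first linearization of an AMR graph, the usual
# preprocessing step for Seq-to-Seq AMR-to-Text models. The graph format
# (nested tuples) and the example are illustrative assumptions.

def linearize(node):
    """Recursively flatten an AMR node into a PENMAN-style token list."""
    var, concept, edges = node
    tokens = ["(", var, "/", concept]
    for role, child in edges:
        tokens.append(role)
        if isinstance(child, tuple):      # nested AMR node
            tokens.extend(linearize(child))
        else:                             # constant or re-entrant variable
            tokens.append(str(child))
    tokens.append(")")
    return tokens

# AMR for "The boy wants to go." (re-entrancy: the boy is also the goer)
amr = ("w", "want-01", [
    (":ARG0", ("b", "boy", [])),
    (":ARG1", ("g", "go-02", [(":ARG0", "b")])),
])

sequence = " ".join(linearize(amr))
print(sequence)
# ( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )
# A Seq-to-Seq model is then trained to map this token sequence to the
# surface sentence "The boy wants to go."
```

Graph-to-Seq-based models skip this flattening and encode the graph structure directly, which is precisely what motivates that category.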
Related papers
- AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z)
- A Comprehensive Survey on Relation Extraction: Recent Advances and New Frontiers [76.51245425667845]
Relation extraction (RE) involves identifying the relations between entities from underlying content.
Deep neural networks have dominated the field of RE and made noticeable progress.
This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems.
arXiv Detail & Related papers (2023-06-03T08:39:25Z)
- An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z)
- TMR: Text-to-Motion Retrieval Using Contrastive 3D Human Motion Synthesis [59.465092047829835]
We present TMR, a simple yet effective approach for text-to-3D human motion retrieval.
Our method extends the state-of-the-art text-to-motion synthesis model TEMOS.
We show that maintaining the motion generation loss, along with the contrastive training, is crucial to obtain good performance.
arXiv Detail & Related papers (2023-05-02T17:52:41Z)
- An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods [0.0]
Relation Extraction is a foundational task of natural language processing.
The history of RE methods can be roughly organized into four phases: pattern-based RE, statistical-based RE, neural-based RE, and large language model-based RE.
arXiv Detail & Related papers (2022-07-17T21:02:04Z)
- Hierarchical Curriculum Learning for AMR Parsing [29.356258263403646]
Flat sentence-to-AMR training impedes the representation learning of concepts and relations in the deeper AMR sub-graph.
We propose hierarchical curriculum learning (HCL), which consists of a structure-level curriculum (SC) and an instance-level curriculum (IC).
arXiv Detail & Related papers (2021-10-15T04:45:15Z)
- Neural Model Reprogramming with Similarity Based Mapping for Low-Resource Spoken Command Recognition [71.96870151495536]
We propose a novel adversarial reprogramming (AR) approach for low-resource spoken command recognition (SCR).
The AR procedure aims to modify the acoustic signals (from the target domain) to repurpose a pretrained SCR model.
We evaluate the proposed AR-SCR system on three low-resource SCR datasets, including Arabic, Lithuanian, and dysarthric Mandarin speech.
arXiv Detail & Related papers (2021-10-08T05:07:35Z)
- Smelting Gold and Silver for Improved Multilingual AMR-to-Text Generation [55.117031558677674]
We study different techniques for automatically generating AMR annotations.
Our models trained on gold AMR with silver (machine translated) sentences outperform approaches which leverage generated silver AMR.
Our models surpass the previous state of the art for German, Italian, Spanish, and Chinese by a large margin.
arXiv Detail & Related papers (2021-09-08T17:55:46Z)
- Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments [9.74672460306765]
We present algorithms for aligning components of Abstract Meaning Representation (AMR) graphs to spans in English sentences.
We leverage unsupervised learning in combination with graph structure, taking the best of both worlds from previous AMR aligners.
Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy.
arXiv Detail & Related papers (2021-06-10T18:46:32Z)
- Towards a Decomposable Metric for Explainable Evaluation of Text Generation from AMR [22.8438857884398]
AMR-to-text systems are typically evaluated using metrics that compare the generated texts to reference texts from which the input meaning representations were constructed.
We show that besides well-known issues from which such metrics suffer, an additional problem arises when applying these metrics for AMR-to-text evaluation.
We show that fulfillment of both principles offers benefits for AMR-to-text evaluation, including explainability of scores.
arXiv Detail & Related papers (2020-08-20T11:25:26Z)
- GPT-too: A language-model-first approach for AMR-to-text generation [22.65728041544785]
We propose an approach that combines a strong pre-trained language model with cycle consistency-based re-scoring.
Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques.
arXiv Detail & Related papers (2020-05-18T22:50:26Z)
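As a rough illustration of the cycle-consistency re-scoring idea described in the GPT-too entry above, the sketch below generates several candidate sentences from an input AMR, parses each candidate back into a graph, and keeps the candidate whose round-trip parse is most similar to the input. The generate, parse, and smatch callables are hypothetical placeholders for a pretrained language model, an AMR parser, and a Smatch-style similarity metric; none of them are the paper's actual interfaces.

```python
from typing import Callable, List

def cycle_consistency_rescore(
    amr: str,
    generate: Callable[[str, int], List[str]],  # hypothetical: LM maps AMR -> n sentences
    parse: Callable[[str], str],                # hypothetical: AMR parser, sentence -> AMR
    smatch: Callable[[str, str], float],        # hypothetical: graph similarity in [0, 1]
    n_candidates: int = 5,
) -> str:
    """Return the candidate sentence whose round-trip AMR parse
    is most similar to the input AMR (cycle consistency)."""
    candidates = generate(amr, n_candidates)
    # Score each sentence by how faithfully it preserves the input semantics.
    scored = [(smatch(amr, parse(sentence)), sentence) for sentence in candidates]
    best_score, best_sentence = max(scored)
    return best_sentence
```

The re-scoring step is model-agnostic: any generator that can emit multiple candidates can be combined with any parser and graph-similarity metric in this way.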