Parsing Indonesian Sentence into Abstract Meaning Representation using
Machine Learning Approach
- URL: http://arxiv.org/abs/2103.03730v1
- Date: Fri, 5 Mar 2021 15:01:59 GMT
- Title: Parsing Indonesian Sentence into Abstract Meaning Representation using
Machine Learning Approach
- Authors: Adylan Roaffa Ilmy and Masayu Leylia Khodra
- Abstract summary: We develop a system that parses Indonesian sentences into AMR using a machine learning approach.
Our system consists of three steps: pair prediction, label prediction, and graph construction.
Our model achieved a SMATCH score of 0.820 on simple-sentence test data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Abstract Meaning Representation (AMR) captures rich information about
a sentence, such as semantic relations, coreference, and named entity relations,
in a single representation. However, research on AMR parsing for Indonesian
sentences is fairly limited. In this paper, we develop a system that parses
Indonesian sentences into AMR using a machine learning approach. Based on the
work of Zhang et al., our system consists of three steps: pair prediction,
label prediction, and graph construction. Pair prediction uses a dependency
parsing component to obtain the edges between words for the AMR. The result of
pair prediction is passed to the label prediction step, which uses a supervised
learning algorithm to predict the labels of the AMR edges. We used a
simple-sentence dataset gathered from articles and news sentences. Our model
achieved a SMATCH score of 0.820 on the simple-sentence test data.
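The three-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`predict_pairs`, `predict_labels`, `build_graph`), the toy classifier, and the example sentence are all assumptions for illustration.

```python
# Hypothetical sketch of the three-step AMR parsing pipeline:
# pair prediction -> label prediction -> graph construction.
# All names here are illustrative, not from the paper.

def predict_pairs(dep_edges):
    """Pair prediction: reuse dependency edges as candidate AMR edges."""
    return [(head, dep) for head, dep in dep_edges]

def predict_labels(pairs, classifier):
    """Label prediction: a supervised classifier assigns a relation
    label (e.g. :ARG0, :ARG1) to each candidate edge."""
    return {pair: classifier(pair) for pair in pairs}

def build_graph(labeled_edges):
    """Graph construction: assemble labeled edges into an
    adjacency-list representation of the AMR graph."""
    graph = {}
    for (head, dep), label in labeled_edges.items():
        graph.setdefault(head, []).append((label, dep))
    return graph

# Toy run on "Anak itu makan nasi" ("The child eats rice"),
# with a stand-in classifier instead of a trained model:
dep_edges = [("makan", "anak"), ("makan", "nasi")]
toy_classifier = lambda pair: {"anak": ":ARG0", "nasi": ":ARG1"}[pair[1]]
pairs = predict_pairs(dep_edges)
graph = build_graph(predict_labels(pairs, toy_classifier))
# graph == {"makan": [(":ARG0", "anak"), (":ARG1", "nasi")]}
```

In the actual system the stand-in classifier would be replaced by the trained supervised model, and the dependency edges would come from an Indonesian dependency parser.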
Related papers
- An AMR-based Link Prediction Approach for Document-level Event Argument
Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z) - SBERT studies Meaning Representations: Decomposing Sentence Embeddings
into Explainable AMR Meaning Features [22.8438857884398]
We create similarity metrics that are highly effective, while also providing an interpretable rationale for their rating.
Our approach works in two steps: We first select AMR graph metrics that measure meaning similarity of sentences with respect to key semantic facets.
Second, we employ these metrics to induce Semantically Structured Sentence BERT embeddings, which are composed of different meaning aspects captured in different sub-spaces.
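The idea of reading one embedding facet by facet can be sketched as below. This is a toy illustration under stated assumptions, not the S3BERT code: the slice layout, facet names, and vectors are invented for the example.

```python
# Illustrative sketch: treat named slices of a sentence embedding as
# semantic sub-spaces and report cosine similarity per facet.
# Facet names and slice boundaries are hypothetical.
import math

def facet_similarities(emb_a, emb_b, facet_slices):
    """Compare two embeddings facet by facet; each (lo, hi) slice is
    one sub-space, scored by cosine similarity."""
    sims = {}
    for name, (lo, hi) in facet_slices.items():
        a, b = emb_a[lo:hi], emb_b[lo:hi]
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        sims[name] = dot / (norm_a * norm_b)
    return sims

# Two toy 4-d embeddings split into two 2-d facets:
emb_a = [1.0, 0.0, 0.0, 1.0]
emb_b = [1.0, 0.0, 1.0, 0.0]
sims = facet_similarities(emb_a, emb_b,
                          {"frames": (0, 2), "negation": (2, 4)})
# sims == {"frames": 1.0, "negation": 0.0}
```

The per-facet scores are what make the overall similarity rating interpretable: a pair of sentences can agree on one semantic facet while disagreeing on another.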
arXiv Detail & Related papers (2022-06-14T17:37:18Z) - Auto-ABSA: Cross-Domain Aspect Detection and Sentiment Analysis Using Auxiliary Sentences [1.368483823700914]
We propose a method that uses an auxiliary sentence about the aspects a sentence contains to help sentiment prediction.
The first step is aspect detection, which uses a multi-aspect detection model to predict all aspects that the sentence has.
The second is out-of-domain aspect-based sentiment analysis (ABSA): training a sentiment classification model on one dataset and validating it on another.
arXiv Detail & Related papers (2022-01-05T04:23:29Z) - DocAMR: Multi-Sentence AMR Representation and Evaluation [19.229112468305267]
We introduce a simple algorithm for deriving a unified graph representation, avoiding the pitfalls of information loss from over-merging and lack of coherence from under-merging.
We also present a pipeline approach combining the top performing AMR and coreference resolution systems, providing a strong baseline for future research.
arXiv Detail & Related papers (2021-12-15T22:38:26Z) - Ensembling Graph Predictions for AMR Parsing [28.625065956013778]
In many machine learning tasks, models are trained to predict structured data such as graphs.
In this work, we formalize this problem as mining the largest graph that is most supported by a collection of graph predictions.
We show that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create new predictions that are more accurate than any individual model on five standard benchmark datasets.
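A simple support-voting approximation of this idea can be sketched as follows. This is not the paper's algorithm, only a minimal illustration of "the graph most supported by the predictions": graphs are sets of labeled triples, and an edge survives if enough parsers agree on it. The triples and threshold are invented for the example.

```python
# Toy edge-voting ensemble over AMR graphs represented as sets of
# (head, relation, dependent) triples. Illustrative only.
from collections import Counter

def ensemble_graphs(graphs, min_support=2):
    """Keep each labeled edge that appears in at least `min_support`
    of the predicted graphs."""
    votes = Counter(edge for g in graphs for edge in g)
    return {edge for edge, n in votes.items() if n >= min_support}

# Three hypothetical parser outputs for "The boy wants to go":
g1 = {("want", ":ARG0", "boy"), ("want", ":ARG1", "go")}
g2 = {("want", ":ARG0", "boy"), ("want", ":ARG1", "go"),
      ("go", ":ARG0", "girl")}
g3 = {("want", ":ARG0", "boy"), ("go", ":ARG0", "boy")}
merged = ensemble_graphs([g1, g2, g3])
# merged == {("want", ":ARG0", "boy"), ("want", ":ARG1", "go")}
```

Edges predicted by only one parser (here the two conflicting `:ARG0` edges for "go") are dropped, which is the intuition behind preferring the most-supported graph over any single model's output.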
arXiv Detail & Related papers (2021-10-18T09:35:39Z) - Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing [88.08581016329398]
We argue that the misprediction of concepts is due to the high relevance between English tokens and AMR concepts.
We introduce bilingual input, namely the translated texts as well as non-English texts, in order to enable the model to predict more accurate concepts.
arXiv Detail & Related papers (2021-06-09T05:14:54Z) - A Differentiable Relaxation of Graph Segmentation and Alignment for AMR
Parsing [75.36126971685034]
We treat alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.
Our method also approaches the performance of a model that relies on the segmentation rules of Lyu and Titov (2018), which were hand-crafted to handle individual AMR constructions.
arXiv Detail & Related papers (2020-10-23T21:22:50Z) - Constructing interval variables via faceted Rasch measurement and
multitask deep learning: a hate speech application [63.10266319378212]
We propose a method for measuring complex variables on a continuous, interval spectrum by combining supervised deep learning with the Constructing Measures approach to faceted Rasch item response theory (IRT).
We demonstrate this new method on a dataset of 50,000 social media comments sourced from YouTube, Twitter, and Reddit and labeled by 11,000 U.S.-based Amazon Mechanical Turk workers.
arXiv Detail & Related papers (2020-09-22T02:15:05Z) - Extractive Summarization as Text Matching [123.09816729675838]
This paper creates a paradigm shift with regard to the way we build neural extractive summarization systems.
We formulate the extractive summarization task as a semantic text matching problem.
We have driven the state-of-the-art extractive result on CNN/DailyMail to a new level (44.41 in ROUGE-1).
arXiv Detail & Related papers (2020-04-19T08:27:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.