Widely Interpretable Semantic Representation: Frameless Meaning
Representation for Broader Applicability
- URL: http://arxiv.org/abs/2309.06460v1
- Date: Tue, 12 Sep 2023 17:44:40 GMT
- Title: Widely Interpretable Semantic Representation: Frameless Meaning
Representation for Broader Applicability
- Authors: Lydia Feng, Gregor Williamson, Han He, Jinho D. Choi
- Abstract summary: This paper presents a novel semantic representation, WISeR, that overcomes challenges for Abstract Meaning Representation (AMR).
Despite its strengths, AMR is not easily applied to languages or domains without predefined semantic frames.
We create a new corpus of 1K English dialogue sentences annotated in both WISeR and AMR.
- Score: 10.710058244695128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel semantic representation, WISeR, that overcomes
challenges for Abstract Meaning Representation (AMR). Despite its strengths,
AMR is not easily applied to languages or domains without predefined semantic
frames, and its use of numbered arguments results in semantic role labels,
which are not directly interpretable and are semantically overloaded for
parsers. We examine the numbered arguments of predicates in AMR and convert
them to thematic roles that do not require reference to semantic frames. We
create a new corpus of 1K English dialogue sentences annotated in both WISeR
and AMR. WISeR shows stronger inter-annotator agreement for beginner and
experienced annotators, with beginners becoming proficient in WISeR annotation
more quickly. Finally, we train a state-of-the-art parser on the AMR 3.0 corpus
and a WISeR corpus converted from AMR 3.0. The parser is evaluated on these
corpora and our dialogue corpus. The WISeR model exhibits higher accuracy than
its AMR counterpart across the board, demonstrating that WISeR is easier for
parsers to learn.
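The conversion at the heart of the paper is easy to picture. Below is a minimal sketch (not the authors' code) of rewriting numbered arguments into thematic roles with the penman library; the ROLE_MAP shown is hypothetical, since WISeR's actual role inventory and per-predicate mapping are defined in the paper.

```python
# Minimal sketch (not the authors' code) of the conversion behind WISeR:
# replace AMR's frame-specific numbered arguments (:ARG0, :ARG1, ...)
# with thematic roles that need no frame lookup. The ROLE_MAP below is
# illustrative; the paper defines the actual role inventory and the
# per-predicate mapping.
import penman

ROLE_MAP = {
    ":ARG0": ":actor",       # hypothetical role choices
    ":ARG1": ":theme",
    ":ARG2": ":recipient",
}

def amr_to_wiser(amr_str: str) -> str:
    graph = penman.decode(amr_str)
    renamed = [(src, ROLE_MAP.get(role, role), tgt)
               for src, role, tgt in graph.triples]
    return penman.encode(penman.Graph(renamed))

amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
print(amr_to_wiser(amr))
# (w / want-01 :actor (b / boy) :theme (g / go-02 :actor b))
```

Because the rewrite operates on role labels alone, an existing corpus such as AMR 3.0 can be converted wholesale, which is how the WISeR training corpus described above was derived.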
Related papers
- Scope-enhanced Compositional Semantic Parsing for DRT [52.657454970993086]
We introduce the AMS parser, a compositional, neurosymbolic semantic parser for Discourse Representation Theory (DRT).
We show that the AMS parser reliably produces well-formed outputs and performs well on DRT parsing, especially on complex sentences.
arXiv Detail & Related papers (2024-07-02T02:50:15Z)
- AMR Parsing is Far from Solved: GrAPES, the Granular AMR Parsing Evaluation Suite [18.674172788583967]
We present the Granular AMR Parsing Evaluation Suite (GrAPES).
GrAPES reveals in depth the abilities and shortcomings of current AMR parsers.
arXiv Detail & Related papers (2023-12-06T13:19:56Z)
- "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation [60.863629647985526]
We examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning structure.
We find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure.
Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.
arXiv Detail & Related papers (2023-10-26T21:47:59Z)
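For a concrete picture of the evaluation setup used in studies like the one above, here is a hedged sketch of prompting a chat model for an AMR parse with the OpenAI Python client; the prompt wording, model name, and decoding settings are assumptions, not the paper's exact protocol.

```python
# Hedged sketch of probing a chat model for AMR annotation; the prompt,
# model name, and settings are assumptions, not the paper's protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are an expert linguistic annotator. Produce the Abstract "
    "Meaning Representation (AMR) of the following sentence in PENMAN "
    "notation, with no commentary.\n\nSentence: The boy wants to go."
)

resp = client.chat.completions.create(
    model="gpt-4",  # model choice is an assumption
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,  # deterministic decoding for evaluation
)
print(resp.choices[0].message.content)
```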
- Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR).
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to improved state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z)
- Transition-based Abstract Meaning Representation Parsing with Contextual Embeddings [0.0]
We study a way of combining two of the most successful routes to the meaning of language, statistical language models and symbolic semantics formalisms, in the task of semantic parsing.
We explore the utility of incorporating pretrained context-aware word embeddings, such as BERT and RoBERTa, into the parsing problem (see the sketch below).
arXiv Detail & Related papers (2022-06-13T15:05:24Z)
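A minimal sketch of the idea referenced above, using Hugging Face transformers with roberta-base (the checkpoint choice is an assumption): extract one contextual vector per word, the kind of feature a transition-based parser could attach to its stack and buffer items.

```python
# Minimal sketch: one contextual vector per word from RoBERTa, the kind
# of feature a transition-based parser could attach to stack/buffer
# items. Checkpoint choice is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

enc = tokenizer("The boy wants to go.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state  # (1, num_subwords, 768)

# Mean-pool subword vectors into word vectors via the fast tokenizer's
# word alignment (None entries are special tokens).
word_ids = enc.word_ids()
num_words = max(i for i in word_ids if i is not None) + 1
features = torch.stack([
    hidden[0, [j for j, w in enumerate(word_ids) if w == i]].mean(dim=0)
    for i in range(num_words)
])
print(features.shape)  # torch.Size([num_words, 768])
```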
- Dialogue Meaning Representation for Task-Oriented Dialogue Systems [51.91615150842267]
We propose Dialogue Meaning Representation (DMR), a flexible and easily extendable representation for task-oriented dialogue.
Our representation contains a set of nodes and edges, with an inheritance hierarchy, to represent rich compositional semantics and task-specific concepts.
We propose two tasks to evaluate machine-learning-based dialogue models, and further propose a novel coreference resolution model, GNNCoref, for the graph-based coreference resolution task.
arXiv Detail & Related papers (2022-04-23T04:17:55Z)
- Making Better Use of Bilingual Information for Cross-Lingual AMR Parsing [88.08581016329398]
We argue that the misprediction of concepts is due to the high relevance between English tokens and AMR concepts.
We introduce bilingual input, namely the translated texts as well as non-English texts, in order to enable the model to predict more accurate concepts.
arXiv Detail & Related papers (2021-06-09T05:14:54Z)
- Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing [10.495114898741205]
We develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures.
In this paper, we revisit a simple two-step baseline and enhance it with a strong NMT system and a strong AMR parser (see the sketch below).
Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages.
arXiv Detail & Related papers (2021-06-08T17:52:48Z)
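The two-step baseline above is simple enough to sketch end to end: compose an off-the-shelf NMT model with a pretrained English AMR parser. The checkpoints used here (Helsinki-NLP/opus-mt-de-en, amrlib's default model) are illustrative stand-ins, not the paper's stronger components.

```python
# Hedged sketch of the translate-then-parse (T+P) baseline: translate to
# English with an off-the-shelf NMT model, then run an English AMR
# parser. Both checkpoints are illustrative stand-ins, not the paper's
# stronger components.
import amrlib
from transformers import MarianMTModel, MarianTokenizer

# Step 1: translate (German -> English; the language pair is an example).
mt_name = "Helsinki-NLP/opus-mt-de-en"
mt_tok = MarianTokenizer.from_pretrained(mt_name)
mt_model = MarianMTModel.from_pretrained(mt_name)
batch = mt_tok(["Der Junge will gehen."], return_tensors="pt")
english = mt_tok.batch_decode(mt_model.generate(**batch),
                              skip_special_tokens=True)[0]

# Step 2: parse the English translation (requires a downloaded
# sentence-to-graph model for amrlib).
stog = amrlib.load_stog_model()
graph, = stog.parse_sents([english])
print(graph)  # PENMAN-format AMR of the translated sentence
```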
- Pareto Probing: Trading Off Accuracy for Complexity [87.09294772742737]
We argue for a probe metric that reflects the fundamental trade-off between probe complexity and performance.
Our experiments with dependency parsing reveal a wide gap in syntactic knowledge between contextual and non-contextual representations.
arXiv Detail & Related papers (2020-10-05T17:27:31Z)
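One way to make the complexity/performance trade-off above concrete is to fit probes of increasing capacity on the same representations and compare accuracy against parameter count. The toy sketch below uses synthetic data and is not the paper's experimental setup.

```python
# Toy sketch of the accuracy/complexity trade-off (synthetic data, not
# the paper's setup): fit probes of increasing capacity on the same
# representations and compare accuracy against parameter count.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))             # stand-in for embeddings
y = (X[:, :8].sum(axis=1) > 0).astype(int)  # synthetic target property
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

probes = {
    "linear": LogisticRegression(max_iter=1000),
    "mlp-16": MLPClassifier((16,), max_iter=2000, random_state=0),
    "mlp-128": MLPClassifier((128,), max_iter=2000, random_state=0),
}
for name, probe in probes.items():
    probe.fit(X_tr, y_tr)
    if hasattr(probe, "coefs_"):            # MLP: list of weight matrices
        n_params = sum(w.size for w in probe.coefs_)
    else:                                   # linear probe
        n_params = probe.coef_.size
    print(f"{name}: acc={probe.score(X_te, y_te):.3f} params={n_params}")
# A probe is Pareto-optimal if no other probe beats it on both axes.
```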
This list is automatically generated from the titles and abstracts of the papers on this site.