Scope-enhanced Compositional Semantic Parsing for DRT
- URL: http://arxiv.org/abs/2407.01899v2
- Date: Wed, 09 Oct 2024 14:25:00 GMT
- Title: Scope-enhanced Compositional Semantic Parsing for DRT
- Authors: Xiulin Yang, Jonas Groschwitz, Alexander Koller, Johan Bos
- Abstract summary: We introduce the AMS parser, a compositional, neurosymbolic semantic parser for Discourse Representation Theory (DRT).
We show that the AMS parser reliably produces well-formed outputs and performs well on DRT parsing, especially on complex sentences.
- Score: 52.657454970993086
- Abstract: Discourse Representation Theory (DRT) distinguishes itself from other semantic representation frameworks by its ability to model complex semantic and discourse phenomena through structural nesting and variable binding. While seq2seq models hold the state of the art on DRT parsing, their accuracy degrades with the complexity of the sentence, and they sometimes struggle to produce well-formed DRT representations. We introduce the AMS parser, a compositional, neurosymbolic semantic parser for DRT. It rests on a novel mechanism for predicting quantifier scope. We show that the AMS parser reliably produces well-formed outputs and performs well on DRT parsing, especially on complex sentences.
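To make concrete what "structural nesting and variable binding" mean in DRT, here is the classic donkey sentence in box-style DRS notation; this is an illustrative textbook sketch (with conventional role labels), not output of the AMS parser:
```
Every farmer who owns a donkey feeds it.

[ x, y | farmer(x), donkey(y), own(x, y) ]
    ⇒
[ e | feed(e), Agent(e, x), Patient(e, y) ]
```
The pronoun "it" is resolved to the referent y introduced in the antecedent box, a binding pattern that flat, non-nested representations cannot express and that a seq2seq decoder must reproduce token by token.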
Related papers
- Complex Event Recognition with Symbolic Register Transducers: Extended Technical Report [51.86861492527722]
We present a system for Complex Event Recognition based on automata.
Our system is based on an automaton model, Symbolic Register Transducers (SRT), that combines symbolic and register automata.
We show how SRT can be used in CER to detect patterns over streams of events.
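As a rough illustration of the register-automaton idea (registers store values from past events, and transition guards compare incoming events against them), here is a minimal Python sketch; the pattern, event fields, and code are our own assumptions, not the authors' SRT implementation:
```python
from dataclasses import dataclass

@dataclass
class Event:
    kind: str
    account: str
    time: float

def detect(stream):
    """Yield (account, t1, t2) for two withdrawals by the same
    account within 60 seconds (an illustrative CER pattern)."""
    registers = []  # (account, time) of earlier withdrawals
    for e in stream:
        if e.kind != "withdraw":
            continue
        # Guard: compare the incoming event against register contents.
        for acc, t in registers:
            if e.account == acc and 0 < e.time - t <= 60:
                yield (acc, t, e.time)
        registers.append((e.account, e.time))

events = [Event("withdraw", "A", 0.0),
          Event("deposit", "A", 10.0),
          Event("withdraw", "A", 30.0)]
print(list(detect(events)))  # [('A', 0.0, 30.0)]
```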
arXiv Detail & Related papers (2024-07-03T07:59:13Z)
- Spatial Semantic Recurrent Mining for Referring Image Segmentation [63.34997546393106]
We propose S²RM to achieve high-quality cross-modality fusion.
It follows a three-part working strategy: distributing language features, spatial semantic recurrent coparsing, and parsed-semantic balancing.
Our proposed method performs favorably against other state-of-the-art algorithms.
arXiv Detail & Related papers (2024-05-15T00:17:48Z)
- "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of Abstract Meaning Representation [60.863629647985526]
We examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analyzing sentence meaning structure.
We find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure.
Overall, our findings indicate that these models can capture aspects of semantic structure out of the box, but key limitations remain in their ability to support fully accurate semantic analyses or parses.
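For reference, the "basic format" in question is AMR's PENMAN notation; the standard textbook example for "The boy wants to go" (PropBank sense labels such as want-01 are part of the formalism):
```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```
Note the reentrancy: the variable b fills :ARG0 of both want-01 and go-01, exactly the kind of argument structure the study probes.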
arXiv Detail & Related papers (2023-10-26T21:47:59Z)
- Widely Interpretable Semantic Representation: Frameless Meaning Representation for Broader Applicability [10.710058244695128]
This paper presents a novel semantic representation, WISeR, that overcomes challenges for Abstract Meaning Representation (AMR).
Despite its strengths, AMR is not easily applied to languages or domains without predefined semantic frames.
We create a new corpus of 1K English dialogue sentences annotated in both WISeR and AMR.
arXiv Detail & Related papers (2023-09-12T17:44:40Z)
- Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization [131.23966358405767]
We adapt TP-TRANSFORMER with the explicitly compositional Tensor-Product Representation (TPR) for the task of abstractive summarization.
A key feature of our model is a structural bias that we introduce by encoding two separate representations for each token.
We show that our TP-TRANSFORMER outperforms the Transformer and the original TP-TRANSFORMER significantly on several abstractive summarization datasets.
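The core TPR operation is role-filler binding: each token's filler vector ("what") is bound to a role vector ("where/how") via an outer product, and the bindings are superposed. A minimal numpy sketch of that operation under assumed orthonormal roles, not the paper's architecture:
```python
import numpy as np

rng = np.random.default_rng(0)
d_filler, d_role = 4, 3

fillers = rng.normal(size=(2, d_filler))  # token contents
roles = np.eye(d_role)[:2]                # orthonormal role vectors

# Bind each filler to its role with an outer product, then superpose.
T = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# With orthonormal roles, unbinding recovers the filler exactly.
print(np.allclose(T @ roles[0], fillers[0]))  # True
```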
arXiv Detail & Related papers (2021-06-02T17:32:33Z)
- Learning symbol relation tree for online mathematical expression recognition [7.868468656324007]
This paper proposes a method for recognizing online handwritten mathematical expressions (OnHME) by building a symbol relation tree (SRT) directly from a sequence of strokes.
A bidirectional recurrent neural network learns from multiple derived paths of SRT to predict both symbols and spatial relations between symbols using global context.
The recognition system achieves 44.12% and 41.76% expression recognition rates on the Competition on Recognition of Online Handwritten Mathematical expressions (CROHME) 2014 and 2016 testing sets.
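To make the SRT data structure concrete, here is a hypothetical tree for the expression "x^2 + 1", with children keyed by spatial relation; the node layout and relation names are our assumptions, not the paper's exact scheme:
```python
from dataclasses import dataclass, field

@dataclass
class SRTNode:
    symbol: str
    children: dict = field(default_factory=dict)  # relation -> child

# "2" stands in a superscript relation to "x"; "+" relates its operands.
x = SRTNode("x", {"sup": SRTNode("2")})
srt = SRTNode("+", {"left": x, "right": SRTNode("1")})
```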
arXiv Detail & Related papers (2021-05-13T05:18:17Z)
- DRS at MRP 2020: Dressing up Discourse Representation Structures as Graphs [4.21235641628176]
The paper describes the procedure of dressing up DRSs as directed labeled graphs so that DRT could be included as a new framework in the MRP 2020 shared task.
The conversion procedure was biased towards making the DRT graph framework somewhat similar to other graph-based meaning representation frameworks.
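The flavor of such a conversion can be shown with a toy example: boxes and discourse referents become nodes, and conditions become labeled edges. The encoding below is one natural scheme for illustration, not necessarily the exact MRP graph format:
```
DRS:   b1: [ x, e | farmer(x), laugh(e), Agent(e, x) ]

Graph: b1 --referent--> x
       b1 --referent--> e
       x  --instance--> farmer
       e  --instance--> laugh
       e  --Agent-->    x
```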
arXiv Detail & Related papers (2020-12-29T16:36:49Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
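An illustrative group of the kind described, structurally parallel but lexically disjoint (these sentences are our own examples, not drawn from the paper's generated data):
```
The chef who ran to the store was out of food.
The judge who drove to the lake was out of luck.
```
A structure-sensitive transformation should map the contextualized vectors of such sentences close together despite their different lexical semantics.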
arXiv Detail & Related papers (2020-10-11T15:13:18Z)