Normalizing Compositional Structures Across Graphbanks
- URL: http://arxiv.org/abs/2004.14236v2
- Date: Thu, 30 Apr 2020 10:04:12 GMT
- Title: Normalizing Compositional Structures Across Graphbanks
- Authors: Lucia Donatelli, Jonas Groschwitz, Alexander Koller, Matthias
Lindemann, Pia Weißenhorn
- Abstract summary: We present a methodology for normalizing discrepancies between MRs at the compositional level.
Our work significantly increases the match in compositional structure between MRs and improves multi-task learning.
- Score: 67.7047900945161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of a variety of graph-based meaning representations (MRs) has
sparked an important conversation about how to adequately represent semantic
structure. These MRs exhibit structural differences that reflect different
theoretical and design considerations, presenting challenges to uniform
linguistic analysis and cross-framework semantic parsing. Here, we ask the
question of which design differences between MRs are meaningful and
semantically-rooted, and which are superficial. We present a methodology for
normalizing discrepancies between MRs at the compositional level (Lindemann et
al., 2019), finding that we can normalize the majority of divergent phenomena
using linguistically-grounded rules. Our work significantly increases the match
in compositional structure between MRs and improves multi-task learning (MTL)
in a low-resource setting, demonstrating the usefulness of careful MR design
analysis and comparison.
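The kind of linguistically-grounded rewrite rule the abstract describes can be sketched on a toy example (this is an illustrative assumption, not the authors' actual rule inventory or compositional machinery): graphbanks often encode modification with edges in opposite directions, and a normalization rule maps one convention onto the other.

```python
# Toy sketch of a cross-graphbank normalization rewrite rule.
# The edge labels ("mod", "ARG1") and the rule itself are illustrative
# assumptions; the paper's rules operate on compositional structures.

def normalize_modifier_edges(edges):
    """Rewrite head --mod--> modifier edges into the inverse
    modifier --ARG1--> head convention; leave other edges intact."""
    out = []
    for src, label, tgt in edges:
        if label == "mod":
            out.append((tgt, "ARG1", src))  # flip direction, relabel
        else:
            out.append((src, label, tgt))
    return out

# "The loud noise stopped": one MR attaches the modifier as
# noise --mod--> loud; the target convention uses loud --ARG1--> noise.
mr = [("stop", "ARG0", "noise"), ("noise", "mod", "loud")]
print(normalize_modifier_edges(mr))
```

After normalization, both conventions yield identical edge sets, which is what lets the compositional structures of two graphbanks match more often.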
Related papers
- Systematic Abductive Reasoning via Diverse Relation Representations in Vector-symbolic Architecture [10.27696004820717]
We propose a Systematic Abductive Reasoning model with diverse relation representations (Rel-SAR) in Vector-symbolic Architecture (VSA).
To derive representations with symbolic reasoning potential, we introduce not only various types of atomic vectors that represent numeric, periodic, and logical semantics, but also a structured high-dimensional representation (S).
For systematic reasoning, we propose novel numerical and logical functions and perform rule abduction and generalization execution in a unified framework that integrates these relation representations.
arXiv Detail & Related papers (2025-01-21T05:17:08Z)
- A Variable Occurrence-Centric Framework for Inconsistency Handling (Extended Version) [13.706331473063882]
We introduce a framework for analyzing and handling inconsistencies in propositional bases.
We propose two dual concepts: Minimal Inconsistency Relation (MIR) and Maximal Consistency Relation (MCR).
arXiv Detail & Related papers (2024-12-16T15:22:10Z)
- GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
arXiv Detail & Related papers (2024-11-01T05:14:03Z)
- Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find it difficult to predict on which input examples AMR helps or hurts, but errors tend to arise with multi-word expressions.
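As a rough illustration of what AMR-driven chain-of-thought prompting might look like in practice (the template wording below is a hypothetical sketch, not the AMRCoT template from the paper; only the overall idea of prepending the sentence's AMR graph and asking for step-by-step reasoning follows the description above):

```python
# Hypothetical AMRCoT-style prompt builder; the template text is an
# assumption for illustration.
def build_amr_cot_prompt(sentence: str, amr: str, question: str) -> str:
    return (
        f"Sentence: {sentence}\n"
        f"AMR graph of the sentence:\n{amr}\n"
        "Reason step by step over the AMR graph, then answer.\n"
        f"Question: {question}"
    )

prompt = build_amr_cot_prompt(
    "The cat sleeps.",
    "(s / sleep-01 :ARG0 (c / cat))",
    "Who sleeps?",
)
print(prompt)
```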
arXiv Detail & Related papers (2024-05-02T17:32:59Z)
- Knowing Your Nonlinearities: Shapley Interactions Reveal the Underlying Structure of Data [8.029715695737567]
We use Shapley Taylor interaction indices (STII) to analyze the impact of underlying data structure on model representations.
Considering linguistic structure in masked and auto-regressive language models (MLMs and ALMs), we find that STII increases within idiomatic expressions.
Our speech model findings reflect the phonetic principle that the openness of the oral cavity determines how much a phoneme varies based on its context.
arXiv Detail & Related papers (2024-03-19T19:13:22Z)
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics inside the videos and language is the crucial factor to achieve compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR).
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to better state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z)
- SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable AMR Meaning Features [22.8438857884398]
We create similarity metrics that are highly effective, while also providing an interpretable rationale for their rating.
Our approach works in two steps: We first select AMR graph metrics that measure meaning similarity of sentences with respect to key semantic facets.
Second, we employ these metrics to induce Semantically Structured Sentence BERT embeddings, which are composed of different meaning aspects captured in different sub-spaces.
arXiv Detail & Related papers (2022-06-14T17:37:18Z)
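The sub-space idea in the last entry can be sketched as follows (a minimal illustration: the facet names, slice boundaries, and tiny vectors are assumptions, not the paper's AMR metrics or learned sub-spaces):

```python
# Hedged sketch: a sentence embedding is partitioned into contiguous
# slices, one per semantic facet, and similarity is reported per facet.
import math

# Illustrative facets and slice sizes (assumptions).
FACETS = {"concepts": slice(0, 2), "roles": slice(2, 4), "negation": slice(4, 6)}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def facet_similarities(emb_a, emb_b):
    """Return one cosine similarity per semantic-facet sub-space."""
    return {name: cosine(emb_a[s], emb_b[s]) for name, s in FACETS.items()}

# Two toy embeddings: identical in the first two facets,
# orthogonal in the "negation" facet.
a = [1.0, 0.0, 0.5, 0.5, 0.0, 1.0]
b = [1.0, 0.0, 0.5, 0.5, 1.0, 0.0]
print(facet_similarities(a, b))
```

The per-facet scores are what makes the overall similarity rating interpretable: a low score can be traced to a specific meaning aspect rather than to the embedding as a whole.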
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.