The Role of Semantic Parsing in Understanding Procedural Text
- URL: http://arxiv.org/abs/2302.06829v2
- Date: Thu, 18 May 2023 02:41:52 GMT
- Title: The Role of Semantic Parsing in Understanding Procedural Text
- Authors: Hossein Rajaby Faghihi, Parisa Kordjamshidi, Choh Man Teng, and James
Allen
- Abstract summary: We consider a deep semantic parser (TRIPS) and semantic role labeling as two sources of semantic parsing knowledge.
We propose PROPOLIS, a symbolic parsing-based procedural reasoning framework.
- Score: 15.318057744502822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate whether symbolic semantic representations,
extracted from deep semantic parsers, can help reasoning over the states of
involved entities in a procedural text. We consider a deep semantic
parser (TRIPS) and semantic role labeling as two sources of semantic parsing
knowledge. First, we propose PROPOLIS, a symbolic parsing-based procedural
reasoning framework. Second, we integrate semantic parsing information into
state-of-the-art neural models to conduct procedural reasoning. Our experiments
indicate that explicitly incorporating such semantic knowledge improves
procedural understanding. This paper presents new metrics for evaluating
procedural reasoning tasks that clarify the challenges and identify differences
among neural, symbolic, and integrated models.
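To make the pipeline concrete, here is a minimal sketch of rule-based state tracking over SRL-style frames, in the spirit of symbolic procedural reasoning; the frames, verb classes, and state labels are invented simplifications, not the PROPOLIS implementation:

```python
# Minimal sketch: rule-based entity-state tracking over SRL-style frames.
# The frames, verb classes, and state labels are hypothetical simplifications;
# PROPOLIS itself builds on TRIPS parses and a richer rule set.

CREATE_VERBS = {"form", "create", "produce"}
MOVE_VERBS = {"move", "flow", "travel"}
DESTROY_VERBS = {"consume", "destroy", "absorb"}

def track_states(frames):
    """Map a sequence of (predicate, patient, location) frames to entity states."""
    states = {}   # entity -> current location
    history = []
    for pred, patient, location in frames:
        if pred in CREATE_VERBS:
            states[patient] = location
            history.append((patient, "created", location))
        elif pred in MOVE_VERBS:
            states[patient] = location
            history.append((patient, "moved", location))
        elif pred in DESTROY_VERBS:
            states.pop(patient, None)
            history.append((patient, "destroyed", None))
    return history

# SRL-style frames for: "Water flows to the roots. The plant produces sugar."
frames = [("flow", "water", "roots"), ("produce", "sugar", "plant")]
for event in track_states(frames):
    print(event)
```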
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic (SQL) and semantic (textual) approaches in a two-stage process to address the limitations of using either approach alone.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
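A toy sketch of the two-stage symbolic-then-textual idea, using SQLite for the symbolic stage; the schema, query, and the template "reasoner" are invented for illustration, and H-STAR's actual pipeline is LLM-driven:

```python
# Toy sketch of a two-stage hybrid: symbolic SQL retrieval narrows the table,
# then a textual step answers over the retrieved rows. The schema, query, and
# the trivial template "reasoner" are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE medals (country TEXT, gold INTEGER)")
conn.executemany("INSERT INTO medals VALUES (?, ?)",
                 [("Norway", 16), ("Germany", 12), ("Canada", 4)])

# Stage 1 (symbolic): SQL selects only the evidence relevant to the question.
rows = conn.execute(
    "SELECT country, gold FROM medals ORDER BY gold DESC LIMIT 1").fetchall()

# Stage 2 (semantic/textual): reason over the evidence in natural language.
# Here a template stands in for the LLM-based reasoning step.
country, gold = rows[0]
print(f"Which country won the most golds? {country}, with {gold}.")
```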
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Neural Semantic Parsing with Extremely Rich Symbolic Meaning Representations [7.774674200374255]
We introduce a novel compositional symbolic representation for concepts based on their position in the taxonomical hierarchy.
This representation provides richer semantic information and enhances interpretability.
Our experimental findings demonstrate that the taxonomical model, trained on much richer and more complex meaning representations, performs slightly worse than the traditional model on standard evaluation metrics, but outperforms it on out-of-vocabulary concepts.
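As a concrete stand-in for such a representation, the sketch below encodes a concept by its hypernym path in WordNet; the paper's actual taxonomy, representation format, and training setup are not reproduced:

```python
# Sketch: represent a concept by its position in a taxonomical hierarchy,
# using WordNet hypernym paths as a stand-in for the paper's taxonomy.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def taxonomical_code(lemma, pos=wn.NOUN):
    """Return the hypernym chain of a concept's first sense, root first."""
    synset = wn.synsets(lemma, pos=pos)[0]
    path = synset.hypernym_paths()[0]  # one root-to-concept path
    return [s.name() for s in path]

print(taxonomical_code("dog"))
# e.g. ['entity.n.01', ..., 'canine.n.02', 'dog.n.01']
```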
arXiv Detail & Related papers (2024-04-19T08:06:01Z)
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually grounded text perturbations such as typos and word-order shuffling, which resonate with human cognitive patterns and allow perturbations to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
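A minimal sketch of such visually grounded perturbations; the specific edit operations and rates are illustrative assumptions, not the paper's recipe:

```python
# Sketch: visually grounded text perturbations (typos, word-order shuffles).
# The edit operations and rates are illustrative assumptions.
import random

def add_typos(text, rate=0.1, rng=random):
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

def shuffle_words(text, rng=random):
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

s = "the quick brown fox jumps over the lazy dog"
print(add_typos(s))      # small character-level noise, still readable
print(shuffle_words(s))  # word-order perturbation
```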
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Semantic Parsing for Question Answering over Knowledge Graphs [3.10647754288788]
We introduce a novel method with graph-to-segment mapping for question answering over knowledge graphs.
The method centers on semantic parsing, a key approach for interpreting natural language questions.
Our framework employs a combination of rule-based and neural-based techniques to parse and construct semantic segment sequences.
arXiv Detail & Related papers (2023-12-01T20:45:06Z)
- Agentività e telicità in GilBERTo: implicazioni cognitive (Agentivity and Telicity in GilBERTo: Cognitive Implications) [77.71680953280436]
The goal of this study is to investigate whether a Transformer-based neural language model infers lexical semantics.
The semantic properties considered are telicity (also combined with definiteness) and agentivity.
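One simple way to probe such properties is a fill-mask test over minimal pairs; in the sketch below, the Hugging Face model identifier and the Italian probe sentences are assumptions, not taken from the paper:

```python
# Sketch: probing a masked language model for lexical-semantic sensitivity
# via fill-mask predictions. The model id and the probe sentences are
# assumptions; substitute the GilBERTo checkpoint you have access to.
from transformers import pipeline

fill = pipeline("fill-mask", model="idb-ita/gilberto-uncased-from-camembert")

# Telic vs. atelic contexts: compare what the model predicts for the verb slot.
for sentence in [
    "ha <mask> il libro in un'ora.",    # telic frame ("in an hour")
    "ha <mask> il libro per un'ora.",   # atelic frame ("for an hour")
]:
    preds = fill(sentence, top_k=3)
    print(sentence, [p["token_str"] for p in preds])
```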
arXiv Detail & Related papers (2023-07-06T10:52:22Z)
- Supporting Vision-Language Model Inference with Confounder-pruning Knowledge Prompt [71.77504700496004]
Vision-language models are pre-trained by aligning image-text pairs in a common space to deal with open-set visual concepts.
To boost the transferability of the pre-trained models, recent works adopt fixed or learnable prompts.
However, how and which prompts improve inference performance remains unclear.
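For context, a minimal sketch of learnable ("soft") prompt vectors in the CoOp style; the dimensions and encoder stub are placeholders, and the paper's confounder-pruning method is not reproduced:

```python
# Sketch: learnable "soft" prompt vectors prepended to class-name embeddings,
# in the spirit of CoOp-style prompt tuning. Dimensions and the embedding
# stand-in are placeholders; the confounder-pruning method is not reproduced.
import torch
import torch.nn as nn

class LearnablePrompt(nn.Module):
    def __init__(self, n_ctx=4, dim=512, n_classes=10):
        super().__init__()
        # Context vectors are the only trainable part; the VLM stays frozen.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        self.class_emb = nn.Embedding(n_classes, dim)  # stand-in for token embeddings

    def forward(self, class_ids):
        cls = self.class_emb(class_ids).unsqueeze(1)                 # (B, 1, dim)
        ctx = self.ctx.unsqueeze(0).expand(len(class_ids), -1, -1)   # (B, n_ctx, dim)
        return torch.cat([ctx, cls], dim=1)                          # prompt sequence

prompt = LearnablePrompt()
print(prompt(torch.tensor([0, 3])).shape)  # torch.Size([2, 5, 512])
```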
arXiv Detail & Related papers (2022-05-23T07:51:15Z)
- Design considerations for a hierarchical semantic compositional framework for medical natural language understanding [3.7003326903946756]
We describe a framework inspired by mechanisms of human cognition in an attempt to jump the NLP performance curve.
The paper describes insights from four key aspects, including semantic memory, semantic composition, and semantic activation.
We discuss the design of a generative semantic model and an associated semantic parser used to transform a free-text sentence into a logical representation of its meaning.
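A toy illustration of transforming a free-text sentence into a logical form; the pattern and predicate vocabulary are invented, and the paper's generative semantic model is far richer:

```python
# Toy sketch: transform a free-text sentence into a logical representation.
# The pattern and predicate vocabulary are invented for illustration; the
# paper's generative semantic model and parser are far richer.
import re

PATTERN = re.compile(r"(?P<agent>\w+) (?P<verb>reduces|causes) (?P<theme>\w+)")

def to_logical_form(sentence):
    m = PATTERN.match(sentence.lower())
    if not m:
        return None
    return f"{m['verb']}({m['agent']}, {m['theme']})"

print(to_logical_form("Aspirin reduces fever"))   # reduces(aspirin, fever)
print(to_logical_form("Smoking causes cancer"))   # causes(smoking, cancer)
```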
arXiv Detail & Related papers (2022-04-05T09:04:34Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
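The sketch below shows the simplest form of a graph convolution over semantic-parse edges; the adjacency, dimensions, and single-layer design are illustrative rather than the paper's exact encoder:

```python
# Sketch: one graph-convolution layer over semantic-parse edges, mixing each
# token's embedding with its graph neighbors. Adjacency, dimensions, and the
# single-layer design are illustrative, not the paper's exact encoder.
import torch
import torch.nn as nn

def gcn_layer(h, adj, weight):
    """H' = ReLU(D^-1 (A + I) H W): normalized neighborhood averaging."""
    a = adj + torch.eye(adj.size(0))      # add self-loops
    a = a / a.sum(dim=1, keepdim=True)    # row-normalize
    return torch.relu(a @ h @ weight)

n_tokens, dim = 5, 16
h = torch.randn(n_tokens, dim)            # token embeddings from a PLM
adj = torch.zeros(n_tokens, n_tokens)
adj[0, 2] = adj[2, 0] = 1.0               # e.g. a predicate<->argument edge
w = nn.init.xavier_uniform_(torch.empty(dim, dim))

print(gcn_layer(h, adj, w).shape)         # torch.Size([5, 16])
```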
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) to meet such a motivation.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
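A tiny sketch of feeding explicit semantics alongside token embeddings for stepwise reasoning; the tag set, concatenation-based fusion, and GRU steps are assumptions, not SAIN's architecture:

```python
# Sketch: combine token embeddings with explicit semantic-role tag embeddings
# and run recurrent "reasoning" steps over them. The tag set, fusion by
# concatenation, and GRU steps are assumptions, not SAIN's architecture.
import torch
import torch.nn as nn

SRL_TAGS = {"O": 0, "ARG0": 1, "V": 2, "ARG1": 3}

class SemanticsAwareEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.tag_emb = nn.Embedding(len(SRL_TAGS), dim)
        self.fuse = nn.Linear(2 * dim, dim)
        self.reason = nn.GRU(dim, dim, batch_first=True)

    def forward(self, token_emb, tag_ids):
        sem = self.tag_emb(tag_ids)                        # explicit semantics
        h = self.fuse(torch.cat([token_emb, sem], -1))     # complementary input
        out, _ = self.reason(h)                            # stepwise reasoning
        return out

enc = SemanticsAwareEncoder()
tokens = torch.randn(1, 4, 32)        # embeddings for "cats chase mice ."
tags = torch.tensor([[1, 2, 3, 0]])   # ARG0 V ARG1 O
print(enc(tokens, tags).shape)        # torch.Size([1, 4, 32])
```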
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.