Identification of Entailment and Contradiction Relations between Natural Language Sentences: A Neurosymbolic Approach
- URL: http://arxiv.org/abs/2405.01259v1
- Date: Thu, 2 May 2024 13:06:24 GMT
- Title: Identification of Entailment and Contradiction Relations between Natural Language Sentences: A Neurosymbolic Approach
- Authors: Xuyao Feng, Anthony Hunter
- Abstract summary: Natural language inference (NLI) is an important aspect of natural language understanding.
To address the need for an explainable approach to RTE, we propose a novel pipeline that is based on translating text into an Abstract Meaning Representation (AMR) graph.
We then translate the AMR graph into propositional logic and use a SAT solver for automated reasoning.
- Score: 8.931767102433637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language inference (NLI), also known as Recognizing Textual Entailment (RTE), is an important aspect of natural language understanding. Most research now uses machine learning and deep learning to perform this task on specific datasets, meaning their solution is not explainable nor explicit. To address the need for an explainable approach to RTE, we propose a novel pipeline that is based on translating text into an Abstract Meaning Representation (AMR) graph. For this we use a pre-trained AMR parser. We then translate the AMR graph into propositional logic and use a SAT solver for automated reasoning. In text, often commonsense suggests that an entailment (or contradiction) relationship holds between a premise and a claim, but because different wordings are used, this is not identified from their logical representations. To address this, we introduce relaxation methods to allow replacement or forgetting of some propositions. Our experimental results show this pipeline performs well on four RTE datasets.
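The core of the pipeline — check entailment by testing whether premise AND NOT claim is unsatisfiable, check contradiction by testing whether premise AND claim is unsatisfiable, and relax by forgetting propositions — can be sketched in a few lines. This is a minimal illustration only: it brute-forces truth tables instead of calling a real SAT solver, and the clause encoding and `forget` function are assumptions for demonstration, not the paper's actual AMR-to-logic translation.

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT check over CNF clauses.
    A clause is a list of literals; a literal is (proposition, polarity)."""
    symbols = sorted({name for clause in clauses for name, _ in clause})
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(any(model[name] == pol for name, pol in clause) for clause in clauses):
            return True
    return False

def entails(premise, claim_literal):
    """Premise entails the claim iff premise AND NOT claim is unsatisfiable."""
    name, pol = claim_literal
    return not satisfiable(premise + [[(name, not pol)]])

def contradicts(premise, claim_literal):
    """Premise contradicts the claim iff premise AND claim is unsatisfiable."""
    return not satisfiable(premise + [[claim_literal]])

def forget(clauses, proposition):
    """Relaxation by forgetting: drop every clause mentioning the proposition."""
    return [c for c in clauses if all(name != proposition for name, _ in c)]

# Example: "it rains" and "rain implies wet" entail "wet",
# and contradict "not wet"; forgetting "rain" relaxes away the entailment.
premise = [[("rain", True)], [("rain", False), ("wet", True)]]
```

With this toy encoding, `entails(premise, ("wet", True))` holds, `contradicts(premise, ("wet", False))` holds, and after `forget(premise, "rain")` the entailment no longer goes through — mirroring how relaxation trades logical strictness for robustness to wording differences.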
Related papers
- Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z) - Enhancing Systematic Decompositional Natural Language Inference Using Informal Logic [53.363888563647976]
We develop a consistent and theoretically grounded approach to annotating decompositional entailment datasets.
We find that our resulting dataset, RDTE, has a substantially higher internal consistency (+9%) than prior decompositional entailment datasets.
We also find that training an RDTE-oriented entailment classifier via knowledge distillation and employing it in a modern neuro-symbolic reasoning engine significantly improves results.
arXiv Detail & Related papers (2024-02-22T18:55:17Z) - Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z) - An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning [27.224364543134094]
We introduce a novel logic-driven data augmentation approach, AMR-LDA.
AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph.
The modified AMR graphs are subsequently converted back into text to create augmented data.
arXiv Detail & Related papers (2023-05-21T23:16:26Z) - Enriching Relation Extraction with OpenIE [70.52564277675056]
Relation extraction (RE) is a sub-discipline of information extraction (IE).
In this work, we explore how recent approaches for open information extraction (OpenIE) may help to improve the task of RE.
Our experiments over two annotated corpora, KnowledgeNet and FewRel, demonstrate the improved accuracy of our enriched models.
arXiv Detail & Related papers (2022-12-19T11:26:23Z) - Towards Relation Extraction From Speech [56.36416922396724]
We propose a new listening information extraction task, i.e., speech relation extraction.
We construct the training dataset for speech relation extraction via text-to-speech systems, and we construct the testing dataset via crowd-sourcing with native English speakers.
We conduct comprehensive experiments to distinguish the challenges in speech relation extraction, which may shed light on future explorations.
arXiv Detail & Related papers (2022-10-17T05:53:49Z) - RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction [65.4337085607711]
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE).
Given an input sentence, each extracted triplet consists of the head entity, relation label, and tail entity, where the relation label is not seen at the training stage.
We propose to synthesize relation examples by prompting language models to generate structured texts.
arXiv Detail & Related papers (2022-03-17T05:55:14Z) - AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension [21.741085513119785]
Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text.
We present a neural-symbolic approach which, to predict an answer, passes messages over a graph representing logical relations between text units.
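The message-passing idea — nodes exchange information with their neighbours over a graph of logical relations — can be illustrated, very loosely, with a generic neighbourhood-averaging sketch. The scalar features and averaging rule below are placeholder assumptions for illustration; they are not the AdaLoGN architecture, which uses learned neural aggregation.

```python
def message_passing(features, edges, steps=2):
    """Generic sketch: each node repeatedly averages its own feature
    with those of its neighbours.
    features: dict node -> float; edges: list of undirected (u, v) pairs."""
    neighbours = {n: [] for n in features}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    for _ in range(steps):
        # Compute all updates from the previous round's features at once.
        features = {
            n: (features[n] + sum(features[m] for m in neighbours[n]))
               / (1 + len(neighbours[n]))
            for n in features
        }
    return features
```

After a few rounds, a node's value reflects its neighbourhood — the same intuition that lets logical relations between text units inform answer prediction.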
arXiv Detail & Related papers (2022-03-16T23:51:01Z) - Context-Dependent Semantic Parsing for Temporal Relation Extraction [2.5807659587068534]
We propose SMARTER, a neural semantic representation, to extract temporal information in text effectively.
In the inference phase, SMARTER generates a temporal relation graph by executing the logical form.
The accurate logical form representations of an event given context ensure the correctness of the extracted relations.
arXiv Detail & Related papers (2021-12-02T00:29:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.