Negation in Cognitive Reasoning
- URL: http://arxiv.org/abs/2012.12641v1
- Date: Wed, 23 Dec 2020 13:22:53 GMT
- Title: Negation in Cognitive Reasoning
- Authors: Claudia Schon, Sophie Siebert, Frieder Stolzenburg
- Abstract summary: Negation is an operation in formal logic and in natural language.
One task of cognitive reasoning is answering questions given by sentences in natural language.
- Score: 0.5801044612920815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Negation is an operation in both formal logic and natural language by
which a proposition is replaced by one stating the opposite, as by the addition
of "not" or another negation cue. Treating negation in an adequate way is
required for cognitive reasoning, which comprises commonsense reasoning and
text comprehension. One task of cognitive reasoning is answering questions
given by sentences in natural language. There are tools based on discourse
representation theory to convert sentences automatically into a formal logical
representation. However, since the knowledge in logical databases is in practice
always incomplete, forward reasoning of automated reasoning systems alone
does not suffice to derive answers to questions because, instead of complete
proofs, often only partial positive knowledge can be derived. Consequently,
negative information from negated expressions does not help in this context,
because only negative knowledge can be derived from it. Therefore, we aim at
reducing syntactic negation, more precisely the negated event or property,
to its inverse. This lays the basis for cognitive reasoning employing both logic
and machine learning for general question answering. In this paper, we describe
an effective procedure to determine the negated event or property in order to
replace it with its inverse, as well as our overall system for cognitive reasoning. We
demonstrate the procedure with examples and evaluate it with several
benchmarks.
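To make the reduction concrete, here is a minimal Python sketch, assuming a WordNet-based antonym lookup and a naive cue-plus-adjacent-word heuristic for negation scope; the procedure in the paper instead determines the negated event or property on the formal logical representation of the sentence.

```python
# Minimal sketch: reduce "not X" to the inverse of X via WordNet antonyms.
# Requires: pip install nltk && python -m nltk.downloader wordnet
# The cue set and adjacency heuristic are simplifications for illustration;
# the paper determines negation scope on a formal logical representation.
from nltk.corpus import wordnet as wn

NEGATION_CUES = {"not", "n't", "never", "no"}  # illustrative subset

def antonym(word: str) -> str | None:
    """Return a WordNet antonym of `word`, if one exists."""
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            if lemma.antonyms():
                return lemma.antonyms()[0].name()
    return None

def reduce_negation(tokens: list[str]) -> list[str]:
    """Replace a negation cue plus the following word with that word's inverse."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i] in NEGATION_CUES and i + 1 < len(tokens):
            inverse = antonym(tokens[i + 1])
            if inverse is not None:
                out.append(inverse)  # e.g. "not alive" -> "dead"
                i += 2
                continue
        out.append(tokens[i])
        i += 1
    return out

print(reduce_negation("the patient is not alive".split()))
# ['the', 'patient', 'is', 'dead']
```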
Related papers
- Paraphrasing in Affirmative Terms Improves Negation Understanding [9.818585902859363]
Negation is a common linguistic phenomenon.
We show improvements on CondaQA, a large corpus requiring reasoning with negation, and on five natural language understanding tasks.
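As a rough illustration of the idea, the sketch below pairs a negated input with an affirmative paraphrase before it is fed to a model; `paraphrase_affirmatively` is a hypothetical stand-in for however the paraphrases are actually obtained.

```python
# Illustrative sketch of affirmative paraphrasing as input augmentation.
# `paraphrase_affirmatively` is a hypothetical stand-in (toy lookup), not
# the paper's method for obtaining paraphrases.
def paraphrase_affirmatively(sentence: str) -> str:
    toy = {"The movie was not good.": "The movie was bad."}
    return toy.get(sentence, sentence)

def augment(sentence: str) -> str:
    """Concatenate the input with its affirmative paraphrase."""
    return f"{sentence} That is, {paraphrase_affirmatively(sentence)}"

print(augment("The movie was not good."))
# The movie was not good. That is, The movie was bad.
```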
arXiv Detail & Related papers (2024-06-11T17:30:03Z)
- Logical Negation Augmenting and Debiasing for Prompt-based Methods [19.879616265315637]
We focus on the effectiveness of prompt-based methods on first-order logical reasoning.
We find that the bottleneck lies in logical negation.
We propose a simple but effective method, Negation Augmenting and Negation Debiasing.
arXiv Detail & Related papers (2024-05-08T08:05:47Z)
- Contrastive Chain-of-Thought Prompting [74.10511560147293]
We propose contrastive chain of thought to enhance language model reasoning.
Compared to the conventional chain of thought, our approach provides both valid and invalid reasoning demonstrations.
Our experiments on reasoning benchmarks demonstrate that contrastive chain of thought can serve as a general enhancement of chain-of-thought prompting.
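The contrastive setup can be sketched as prompt construction: the prompt contains both a valid and an invalid reasoning demonstration before the query. The demonstrations below are invented for illustration and do not come from the paper.

```python
# Minimal sketch of contrastive chain-of-thought prompt construction:
# one valid and one invalid demonstration precede the actual question.
# Both demonstrations are invented examples, not taken from the paper.
VALID_DEMO = (
    "Q: All birds can fly. Tweety is a bird. Can Tweety fly?\n"
    "Correct reasoning: All birds can fly, and Tweety is a bird, "
    "so Tweety can fly. Answer: yes."
)
INVALID_DEMO = (
    "Q: All birds can fly. Tweety is a bird. Can Tweety fly?\n"
    "Incorrect reasoning: Some birds are yellow, "
    "so Tweety cannot fly. Answer: no."
)

def contrastive_prompt(question: str) -> str:
    """Build a prompt with contrasting valid/invalid demonstrations."""
    return f"{VALID_DEMO}\n\n{INVALID_DEMO}\n\nQ: {question}\nCorrect reasoning:"

print(contrastive_prompt("No fish can walk. Nemo is a fish. Can Nemo walk?"))
```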
arXiv Detail & Related papers (2023-11-15T18:54:01Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Implicit Chain of Thought Reasoning via Knowledge Distillation [58.80851216530288]
Instead of explicitly producing the chain of thought reasoning steps, we use the language model's internal hidden states to perform implicit reasoning.
We find that this approach enables solving tasks previously not solvable without explicit chain-of-thought, at a speed comparable to no chain-of-thought.
arXiv Detail & Related papers (2023-11-02T17:59:49Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- Logical Reasoning over Natural Language as Knowledge Representation: A Survey [43.29703101875716]
This paper provides an overview on a new paradigm of logical reasoning, which uses natural language as knowledge representation and pretrained language models as reasoners.
This new paradigm is promising since it not only alleviates many challenges of formal representation but also has advantages over end-to-end neural methods.
arXiv Detail & Related papers (2023-03-21T16:56:05Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- Conversational Negation using Worldly Context in Compositional Distributional Semantics [0.0]
Given a word, our framework can create its negation similarly to how humans perceive negation.
We propose and motivate a new logical negation using matrix inverse.
We conclude that the combination of subtraction negation and phaser in the basis of the negated word yields the highest Pearson correlation of 0.635 with human ratings.
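The two operations named in the summary can be sketched on toy density-matrix word representations: subtraction negation as I - M, and a logical negation based on the matrix inverse. The example matrix and the trace renormalization below are assumptions for illustration, and the paper's worldly-context component is omitted.

```python
# Hedged sketch of two negation operations on a toy density-matrix word
# representation: subtraction negation (I - M) and a logical negation via
# the matrix inverse. The example matrix and the trace renormalization are
# illustrative assumptions; the paper's worldly-context step is omitted.
import numpy as np

def subtraction_negation(m: np.ndarray) -> np.ndarray:
    """not(M) = I - M for a matrix with eigenvalues in [0, 1]."""
    return np.eye(m.shape[0]) - m

def inverse_negation(m: np.ndarray) -> np.ndarray:
    """Logical negation via the matrix inverse, renormalized to trace 1."""
    inv = np.linalg.inv(m)
    return inv / np.trace(inv)

word = np.array([[0.8, 0.1],
                 [0.1, 0.2]])  # toy density matrix for a word
print(subtraction_negation(word))
print(inverse_negation(word))
```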
arXiv Detail & Related papers (2021-05-12T16:04:36Z)
- LOREN: Logic Enhanced Neural Reasoning for Fact Verification [24.768868510218002]
We propose LOREN, a novel approach for fact verification that integrates Logic guided Reasoning and Neural inference.
Instead of directly validating a single reasoning unit, LOREN turns it into a question-answering task.
Experiments show that our proposed LOREN outperforms previously published methods and achieves a FEVER score of 73.43%.
arXiv Detail & Related papers (2020-12-25T13:57:04Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
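A single neuron of such a weighted real-valued logic can be sketched as a weighted Łukasiewicz-style conjunction clamped to the unit interval; the parameter values below are illustrative, not taken from the paper.

```python
# Minimal sketch of one neuron in a weighted real-valued (Lukasiewicz-style)
# logic: a conjunction whose operand weights and bias would be learnable.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def clamp(x: float) -> float:
    """Restrict a truth value to the unit interval [0, 1]."""
    return float(min(1.0, max(0.0, x)))

def weighted_and(inputs: np.ndarray, weights: np.ndarray, beta: float) -> float:
    """Weighted Lukasiewicz conjunction: f(beta - sum_i w_i * (1 - x_i))."""
    return clamp(beta - float(weights @ (1.0 - inputs)))

x = np.array([0.9, 0.8])  # truth values of the two operands
w = np.array([1.0, 1.0])  # per-operand importance weights
print(weighted_and(x, w, beta=1.0))  # 0.7: both operands mostly true
```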
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.