Logical Negation Augmenting and Debiasing for Prompt-based Methods
- URL: http://arxiv.org/abs/2405.04872v1
- Date: Wed, 8 May 2024 08:05:47 GMT
- Title: Logical Negation Augmenting and Debiasing for Prompt-based Methods
- Authors: Yitian Li, Jidong Tian, Hao He, Yaohui Jin
- Abstract summary: We focus on the effectiveness of prompt-based methods on first-order logical reasoning.
We find that the bottleneck lies in logical negation.
We propose a simple but effective method, Negation Augmenting and Negation Debiasing.
- Score: 19.879616265315637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prompt-based methods have gained increasing attention in NLP and have proven effective on many downstream tasks. Many works have mined these methods' potential for knowledge extraction, but few explore their ability to perform logical reasoning. In this work, we focus on the effectiveness of prompt-based methods for first-order logical reasoning and find that the bottleneck lies in logical negation. Our analysis shows that logical negation tends to induce spurious correlations with negative answers, while propositions without logical negation correlate with positive answers. To solve this problem, we propose a simple but effective method, Negation Augmenting and Negation Debiasing (NAND), which introduces negative propositions into prompt-based methods without updating parameters. Specifically, the negative propositions counteract the spurious correlations by supplying a "not" for every instance, so that models cannot decide answers solely by whether an expression contains a logical negation. Experiments on three datasets show that NAND not only calibrates the handling of logical negation but also significantly enhances the logical reasoning of prompt-based methods without model retraining.
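The core idea lends itself to a short sketch. The following minimal Python illustration (a sketch under assumptions, not the authors' released implementation) shows the two steps the abstract describes: augment every query with a negated counterpart so that each model call contains a "not", then contrast the two scores so the mere presence of negation cannot determine the answer. The names `negate` and `score_yes_no` are hypothetical stand-ins for a negation generator and a prompt-based scoring call.

```python
# Sketch of Negation Augmenting and Negation Debiasing (NAND) as described
# in the abstract. `score_yes_no` is a hypothetical stand-in for any
# prompt-based model call returning (log-)scores for "yes" and "no".
from typing import Callable, Tuple

def negate(proposition: str) -> str:
    """Naive negation for illustration; a real system would generate
    negations more carefully."""
    for verb in ("is", "are", "was", "were", "can", "does", "do"):
        target = f" {verb} "
        if target in proposition:
            return proposition.replace(target, f" {verb} not ", 1)
    return "it is not the case that " + proposition

def nand_predict(proposition: str,
                 score_yes_no: Callable[[str], Tuple[float, float]]) -> bool:
    """Score the proposition and its negation, so every call contains a
    'not', then contrast the scores to cancel the negation bias."""
    yes_p, no_p = score_yes_no(f"Is it true that {proposition}? yes or no:")
    yes_n, no_n = score_yes_no(f"Is it true that {negate(proposition)}? yes or no:")
    # Debias: accept the proposition only if the evidence for "yes" on the
    # original exceeds the evidence for "yes" on its negation.
    return (yes_p - no_p) > (yes_n - no_n)
```

The contrast in the final line is one plausible calibration; the paper's exact debiasing term may differ.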
Related papers
- Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models [10.106408289179463]
We propose Logic-of-Thought (LoT) prompting, which employs propositional logic to generate expanded logical information from the input context.
LoT boosts the performance of various prompting methods with a striking margin across five logical reasoning tasks.
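As a flavor of what such logical expansion might look like, the snippet below (an assumption-laden sketch, not the paper's implementation; the extraction step and the exact logical laws applied are invented here) closes a set of implications extracted from a context under transitivity and contraposition, then verbalises the derived facts as extra context.

```python
# Illustrative expansion in the spirit of Logic-of-Thought: close a set of
# extracted implications under transitivity and contraposition, then
# verbalise the derived facts for inclusion in the prompt.

def expand(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """implications: pairs (p, q) read as 'p implies q'."""
    derived = set(implications)
    changed = True
    while changed:  # transitive closure: (p -> q), (q -> r)  =>  (p -> r)
        changed = False
        for p, q in list(derived):
            for q2, r in list(derived):
                if q == q2 and (p, r) not in derived:
                    derived.add((p, r))
                    changed = True
    # contraposition: (p -> q)  =>  (not q -> not p)
    derived |= {(f"not {q}", f"not {p}") for p, q in derived}
    return derived

context = {("it rains", "the ground is wet"),
           ("the ground is wet", "the shoes get muddy")}
for p, q in sorted(expand(context)):
    print(f"If {p}, then {q}.")
```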
arXiv Detail & Related papers (2024-09-26T04:59:45Z)
- NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection [45.28949266878263]
We design a process to reliably detect logical fallacies by translating natural language to First-order Logic.
We then utilize Satisfiability Modulo Theory (SMT) solvers to reason about the validity of the formula.
Our approach is robust, interpretable and does not require training data or fine-tuning.
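The SMT step mentioned here rests on a standard reduction: a formula is valid exactly when its negation is unsatisfiable. Below is a minimal sketch with the Z3 Python bindings; the formula is hand-translated for illustration, and NL2FOL's actual natural-language-to-logic pipeline is not shown.

```python
# Validity check via SMT: a formula is valid iff its negation is
# unsatisfiable. Requires the `z3-solver` package; the propositions below
# are hand-written, standing in for an NL-to-logic translation.
from z3 import And, Bools, Implies, Not, Solver, unsat

rain, wet = Bools("rain wet")

# "If it rains, the ground is wet; the ground is wet; so it rained."
argument = Implies(And(Implies(rain, wet), wet), rain)

s = Solver()
s.add(Not(argument))  # valid iff this negation is unsatisfiable
if s.check() == unsat:
    print("valid argument")
else:
    print("invalid, counterexample:", s.model())
```

For this example the check is satisfiable (rain false, wet true), exposing the argument as the fallacy of affirming the consequent.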
arXiv Detail & Related papers (2024-04-18T00:20:48Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Logic Embeddings for Complex Query Answering [56.25151854231117]
We propose Logic Embeddings, a new approach to embedding complex queries that uses Skolemisation to eliminate existential variables for efficient querying.
We show that Logic Embeddings are competitively fast and accurate for query answering over large, incomplete knowledge graphs, outperform alternatives on negation queries, and, in particular, provide improved modeling of answer uncertainty.
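For readers unfamiliar with the term, Skolemisation is the standard transformation sketched below (a textbook example, not this paper's specific construction): each existential variable is replaced by a function of the universal variables that enclose it.

```latex
% Skolemisation: the existential variable y is eliminated by introducing
% a Skolem function f of the enclosing universal variable x.
\forall x\,\exists y\; \mathit{likes}(x, y)
\quad\leadsto\quad
\forall x\; \mathit{likes}(x, f(x))
```

In an embedding setting, one can plausibly realise f as a learned map from the embedding of x to an embedding for y, which is how eliminating existential variables can make querying efficient.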
arXiv Detail & Related papers (2021-02-28T07:52:37Z)
- LOREN: Logic Enhanced Neural Reasoning for Fact Verification [24.768868510218002]
We propose LOREN, a novel approach for fact verification that integrates Logic guided Reasoning and Neural inference.
Instead of directly validating a single reasoning unit, LOREN turns it into a question-answering task.
Experiments show that LOREN outperforms previously published methods and achieves a FEVER score of 73.43%.
arXiv Detail & Related papers (2020-12-25T13:57:04Z)
- Negation in Cognitive Reasoning [0.5801044612920815]
Negation is an operation in formal logic and in natural language.
One task of cognitive reasoning is answering questions given by sentences in natural language.
arXiv Detail & Related papers (2020-12-23T13:22:53Z)