Large Language Models as an Indirect Reasoner: Contrapositive and
Contradiction for Automated Reasoning
- URL: http://arxiv.org/abs/2402.03667v1
- Date: Tue, 6 Feb 2024 03:41:12 GMT
- Title: Large Language Models as an Indirect Reasoner: Contrapositive and
Contradiction for Automated Reasoning
- Authors: Yanfang Zhang, Yiliu Sun, Yibing Zhan, Dapeng Tao, Dacheng Tao, Chen
Gong
- Abstract summary: This paper proposes a novel Indirect Reasoning (IR) method that employs the logic of contrapositives and contradictions to tackle IR tasks such as factual reasoning and mathematical proof.
The experimental results on popular LLMs, such as GPT-3.5-turbo and Gemini-pro, show that our IR method enhances the overall accuracy of factual reasoning by 27.33% and mathematical proof by 31.43%.
- Score: 79.37150041259066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, increasing attention has been drawn to improving the
ability of Large Language Models (LLMs) to perform complex reasoning. However,
previous methods, such as Chain-of-Thought and Self-Consistency, mainly follow
Direct Reasoning (DR) frameworks, so they struggle with the many real-world
tasks that can hardly be solved via DR alone. Therefore, to
strengthen the reasoning power of LLMs, this paper proposes a novel Indirect
Reasoning (IR) method that employs the logic of contrapositives and
contradictions to tackle IR tasks such as factual reasoning and mathematical
proof. Specifically, our methodology comprises two steps. Firstly, we leverage
the logical equivalence of the contrapositive to augment the data and rules,
making them easier for LLMs to comprehend. Secondly, we design a set of prompt
templates to trigger LLMs to conduct IR based on proof by contradiction that is
logically equivalent to the original DR process. Our IR method is simple yet
effective and can be straightforwardly integrated with existing DR methods to
further boost the reasoning abilities of LLMs. The experimental results on
popular LLMs, such as GPT-3.5-turbo and Gemini-pro, show that our IR method
enhances the overall accuracy of factual reasoning by 27.33% and mathematical
proof by 31.43%, when compared with traditional DR methods. Moreover, the
methods combining IR and DR significantly outperform the methods solely using
IR or DR, further demonstrating the effectiveness of our strategy.
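As an illustration of the two steps described in the abstract, the Python sketch below augments a toy rule set with contrapositives and builds a proof-by-contradiction prompt. The rule representation, function names, and prompt wording are illustrative assumptions made for this sketch, not the paper's actual templates.

```python
# Minimal sketch (not the paper's code): contrapositive augmentation (step 1)
# and a proof-by-contradiction prompt template (step 2).

def contrapositive(rule):
    """Return the contrapositive of (premise, conclusion):
    'if not conclusion then not premise', which is logically equivalent."""
    premise, conclusion = rule
    return (f"NOT ({conclusion})", f"NOT ({premise})")

def augment_rules(rules):
    """Step 1: extend the rule set with contrapositives so either
    direction of each implication is stated explicitly."""
    return rules + [contrapositive(r) for r in rules]

def contradiction_prompt(facts, rules, goal):
    """Step 2: ask the LLM to assume the negation of the goal and
    derive a contradiction with the given facts and rules."""
    fact_text = "\n".join(f"- {f}" for f in facts)
    rule_text = "\n".join(f"- If {p}, then {c}." for p, c in rules)
    return (
        f"Facts:\n{fact_text}\n"
        f"Rules:\n{rule_text}\n"
        f"Question: is the statement '{goal}' true?\n"
        f"Assume the statement is false, i.e. assume NOT ({goal}), and reason "
        "step by step. If you reach a contradiction with a known fact, conclude "
        "that the statement is true; otherwise conclude it cannot be proven."
    )

# Toy usage: the contrapositive makes the modus-tollens step explicit.
rules = [("it rained", "the ground is wet")]
facts = ["NOT (the ground is wet)"]
print(contradiction_prompt(facts, augment_rules(rules), "it did not rain"))
```

After augmentation, the toy rule set contains both "If it rained, then the ground is wet." and its contrapositive "If NOT (the ground is wet), then NOT (it rained).", so the inference needed to answer the question is stated explicitly for the model rather than left implicit.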
Related papers
- SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs [48.28847964704554]
Chain-of-Thought (CoT) reasoning enables Large Language Models (LLMs) to solve complex reasoning tasks.
We propose a novel approach for continuous-space reasoning that does not require modifying the underlying LLM.
arXiv Detail & Related papers (2025-02-17T18:52:29Z)
- Toward Adaptive Reasoning in Large Language Models with Thought Rollback [33.714789952452094]
This paper proposes a new reasoning framework, called Thought Rollback (TR).
TR allows large language models (LLMs) to adaptively build thought structure while maintaining effective reasoning toward problem-solving under "hallucinations".
arXiv Detail & Related papers (2024-12-27T16:02:34Z)
- Critical-Questions-of-Thought: Steering LLM reasoning with Argumentative Querying [0.3659498819753633]
State-of-the-art Large Language Models (LLMs) continue to struggle when performing logical and mathematical reasoning.
This paper makes use of the notion of critical questions from the literature on argumentation theory, focusing in particular on Toulmin's model of argumentation.
We show that employing these critical questions can improve the reasoning capabilities of LLMs.
arXiv Detail & Related papers (2024-12-19T18:51:30Z)
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z)
- Break the Chain: Large Language Models Can be Shortcut Reasoners [18.047917626825548]
Chain-of-Thought (CoT) reasoning methods utilize complex modules but are hampered by high token consumption, limited applicability, and challenges in thinking.
This paper conducts a critical evaluation of CoT prompting, extending beyond arithmetic to include complex logical and commonsense reasoning tasks.
We propose the integration of human-like heuristics and shortcuts into language models (LMs) through "break the chain" strategies.
arXiv Detail & Related papers (2024-06-04T14:02:53Z)
- Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on answer frequency (a minimal sketch of this baseline appears after this list).
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
arXiv Detail & Related papers (2024-05-21T17:12:19Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, while the structural outputs can be naturally regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, rendering higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
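For context on the Aggregation of Reasoning entry above, the snippet below sketches the answer-frequency (self-consistency) baseline it criticizes: sample several reasoning chains and pick the most frequent final answer. The sampled answers here are made up for illustration; a real pipeline would obtain them from an LLM.

```python
# Minimal sketch of the answer-frequency (self-consistency) baseline
# discussed in the AoR entry above; the answers are made up for illustration.
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer across sampled reasoning chains."""
    return Counter(answers).most_common(1)[0][0]

# When the correct answer ("12") is in the minority, frequency-based
# ensembling picks the wrong one -- the failure mode AoR addresses by
# evaluating the reasoning chains themselves rather than counting answers.
sampled_answers = ["15", "15", "12", "15", "12"]
print(majority_vote(sampled_answers))  # -> "15" (incorrect)
```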