SymBa: Symbolic Backward Chaining for Multi-step Natural Language Reasoning
- URL: http://arxiv.org/abs/2402.12806v1
- Date: Tue, 20 Feb 2024 08:27:05 GMT
- Title: SymBa: Symbolic Backward Chaining for Multi-step Natural Language Reasoning
- Authors: Jinu Lee, Wonseok Hwang
- Abstract summary: We propose SymBa (Symbolic Backward Chaining) to address the limitations of current backward chaining implementations.
In SymBa, the symbolic top-down solver controls the entire proof process and the LLM is called to generate a single reasoning step only when the solver encounters a dead end.
Through this novel solver-LLM integration, SymBa achieves significant improvements in performance, proof faithfulness, and efficiency on diverse multi-step reasoning benchmarks.
- Score: 6.961946145048321
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models (LLMs) have recently demonstrated remarkable
reasoning ability, as seen in Chain-of-Thought prompting, but faithful
multi-step reasoning remains a challenge. We specifically focus on backward
chaining, where the query is recursively decomposed using logical rules until
it is proven. To address the limitations of current backward chaining
implementations, we propose SymBa (Symbolic Backward Chaining). In SymBa, a
symbolic top-down solver controls the entire proof process, and the LLM is
called to generate a single reasoning step only when the solver encounters a
dead end. Through this novel solver-LLM integration, SymBa produces an
interpretable, structured proof while achieving significant improvements in
performance, proof faithfulness, and efficiency on diverse multi-step
reasoning benchmarks (ProofWriter, Birds-Electricity, GSM8k, CLUTRR-TF,
ECtHR Article 6) compared to backward chaining baselines.
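To make the described control flow concrete, the sketch below pairs a toy Horn-clause solver with a stubbed LLM call, assuming a minimal rule encoding; every name here (prove, llm_single_step, unify) is hypothetical and not SymBa's actual interface.

```python
# Minimal sketch of the solver-LLM loop described in the abstract: a symbolic
# top-down solver drives the proof and queries the LLM for a single step only
# at dead ends. All names and the rule encoding are hypothetical.

facts = {"bird(tweety)"}                                # ground facts
rules = [("can_fly(X)", ["bird(X)", "has_wings(X)"])]   # (head, body) Horn clauses

def llm_single_step(goal):
    """Stand-in for one LLM call proposing a single fact for `goal`.
    Stubbed with a fixed answer for illustration."""
    return "has_wings(tweety)" if goal == "has_wings(tweety)" else None

def unify(head, goal):
    """Toy unification: bind the single variable X when predicates match."""
    if head.split("(")[0] != goal.split("(")[0]:
        return None
    return goal[goal.index("(") + 1 : -1]               # the binding for X

def prove(goal, depth=5):
    if depth == 0:
        return False
    if goal in facts:                                   # goal is a known fact
        return True
    for head, body in rules:                            # top-down rule search
        x = unify(head, goal)
        if x is not None and all(
            prove(sub.replace("X", x), depth - 1) for sub in body
        ):
            return True
    step = llm_single_step(goal)                        # dead end: ask the LLM once
    if step:
        facts.add(step)                                 # record the step, resume
        return prove(goal, depth)
    return False

print(prove("can_fly(tweety)"))  # True: has_wings(tweety) came from the LLM stub
```

The design point the abstract emphasizes is that the solver, not the LLM, owns the recursion: the model contributes at most one new statement per dead end, which keeps the resulting proof tree structured and checkable.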
Related papers
- Faithful Logical Reasoning via Symbolic Chain-of-Thought [39.94884827166363]
We propose SymbCoT, a framework that integrates symbolic expressions and logic rules with Chain-of-Thought prompting.
We show that SymbCoT consistently achieves striking improvements over the standard CoT method.
This is the first work to combine symbolic expressions and rules with CoT for logical reasoning with LLMs.
arXiv Detail & Related papers (2024-05-28T16:55:33Z)
- Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on the answer frequency.
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
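As a rough illustration of why frequency-based ensembling can fail, the toy comparison below contrasts majority voting with selection based on an evaluation of the reasoning chains; score_chain is a stub standing in for AoR's hierarchical evaluation, not the paper's actual procedure.

```python
# Toy contrast between frequency-based ensembling (self-consistency) and
# AoR-style selection by reasoning-chain evaluation. The chain contents and
# the scoring stub are purely illustrative.
from collections import Counter

chains = [  # (reasoning chain, final answer) samples from an LLM
    ("careful chain with valid steps", "42"),  # sound but minority answer
    ("chain with an arithmetic slip",  "40"),
    ("chain repeating the same slip",  "40"),  # wrong answer in the majority
]

def majority_vote(samples):
    """Standard self-consistency: pick the most frequent final answer."""
    return Counter(ans for _, ans in samples).most_common(1)[0][0]

def score_chain(chain):
    """Stub evaluator: pretend a judge model rated each chain's soundness."""
    return {"careful chain with valid steps": 0.9}.get(chain, 0.3)

def aor_select(samples):
    """AoR-style: rank answers by the quality of their supporting chains."""
    return max(samples, key=lambda s: score_chain(s[0]))[1]

print(majority_vote(chains))  # "40" -- the minority-correct answer is lost
print(aor_select(chains))     # "42" -- recovered by evaluating the chains
```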
arXiv Detail & Related papers (2024-05-21T17:12:19Z)
- Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [102.00359477458029]
We present a neuro-symbolic integration method, in which a neural LLM is used to represent the knowledge of the problem.
An LLM-free symbolic solver then performs deliberative reasoning over this knowledge.
arXiv Detail & Related papers (2023-11-16T11:26:21Z)
- LINC: A Neurosymbolic Approach for Logical Reasoning by Combining Language Models with First-Order Logic Provers [60.009969929857704]
Logical reasoning is an important task for artificial intelligence with potential impacts on science, mathematics, and society.
In this work, we reformulate such tasks as modular neurosymbolic programming, which we call LINC.
We observe significant performance gains on FOLIO and a balanced subset of ProofWriter for three different models in nearly all experimental conditions we evaluate.
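A heavily simplified sketch of this modular pipeline: the LLM stage is reduced to a lookup-table "semantic parser", and the prover stage is a toy forward-chaining loop rather than a real first-order theorem prover; all names are illustrative.

```python
# Sketch of the modular neurosymbolic pipeline: the LLM only translates
# natural language into logical forms, and a symbolic prover decides
# entailment. Both stages are stubbed; this is not LINC's actual code.

def llm_parse_to_logic(sentence):
    """Stand-in for the LLM semantic-parsing step (NL -> logical form)."""
    table = {
        "All birds can fly.": ("rule", "bird", "can_fly"),
        "Tweety is a bird.":  ("fact", "bird(tweety)"),
        "Tweety can fly.":    ("query", "can_fly(tweety)"),
    }
    return table[sentence]

def prove(premises, query):
    """Toy forward-chaining prover over unary predicates."""
    facts = {p[1] for p in premises if p[0] == "fact"}
    rules = [(p[1], p[2]) for p in premises if p[0] == "rule"]
    changed = True
    while changed:                      # saturate: apply rules until fixpoint
        changed = False
        for ante, cons in rules:
            for f in list(facts):
                pred, arg = f.split("(")[0], f[f.index("(") + 1 : -1]
                if pred == ante and f"{cons}({arg})" not in facts:
                    facts.add(f"{cons}({arg})")
                    changed = True
    return query in facts

premises = [llm_parse_to_logic(s)
            for s in ["All birds can fly.", "Tweety is a bird."]]
_, query = llm_parse_to_logic("Tweety can fly.")
print(prove(premises, query))  # True: the conclusion is entailed
```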
arXiv Detail & Related papers (2023-10-23T17:58:40Z)
- Resprompt: Residual Connection Prompting Advances Multi-Step Reasoning in Large Language Models [73.4425450752596]
Chain-of-thought (CoT) prompting has impressively unlocked the reasoning potential of large language models (LLMs).
Yet, the standard CoT is less effective in problems demanding multiple reasoning steps.
We propose RESPROMPT, a new prompting strategy that advances multi-step reasoning in LLMs.
arXiv Detail & Related papers (2023-10-07T08:56:28Z)
- Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models [81.01397924280612]
Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations.
We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts Prompting), an iterative bootstrapping approach for selecting exemplars and generating reasoning chains.
arXiv Detail & Related papers (2023-04-23T13:54:39Z)
- LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [11.096348678079574]
A backward chaining algorithm, called LAMBADA, decomposes reasoning into four sub-modules.
We show that LAMBADA achieves sizable accuracy boosts over state-of-the-art forward reasoning methods.
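The four sub-modules named in the paper are Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement; the control-flow sketch below stubs each one rather than implementing them as LLM-prompted modules, so treat every function body as illustrative.

```python
# Control-flow sketch of LAMBADA-style modular backward chaining. The module
# names follow the paper; the implementations are toy stubs, not the paper's
# LLM-prompted modules.

def fact_check(goal, facts):          # module 1: is the goal a stated fact?
    return goal in facts

def rule_selection(goal, rules):      # module 2: which rules conclude the goal?
    return [r for r in rules if r["conclusion"] == goal]

def goal_decomposition(rule):         # module 3: split a rule into sub-goals
    return rule["premises"]

def sign_agreement(goal, rule):       # module 4: do negations line up? (stubbed)
    return True

def backward_chain(goal, facts, rules, depth=4):
    if depth == 0:
        return False
    if fact_check(goal, facts):
        return True
    for rule in rule_selection(goal, rules):
        if sign_agreement(goal, rule) and all(
            backward_chain(g, facts, rules, depth - 1)
            for g in goal_decomposition(rule)
        ):
            return True
    return False

facts = {"the cat is small", "the cat is furry"}
rules = [{"premises": ["the cat is small", "the cat is furry"],
          "conclusion": "the cat is cute"}]
print(backward_chain("the cat is cute", facts, rules))  # True
```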
arXiv Detail & Related papers (2022-12-20T18:06:03Z)
- Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought [10.524051272257614]
Large language models (LLMs) have shown remarkable reasoning capabilities given chain-of-thought prompts.
We present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model.
This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis.
arXiv Detail & Related papers (2022-10-03T21:34:32Z)
- Complexity-Based Prompting for Multi-Step Reasoning [72.0057198610614]
We study the task of prompting large-scale language models to perform multi-step reasoning.
A central question is which reasoning examples make the most effective prompts.
We propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning.
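A minimal sketch of the selection idea, assuming reasoning complexity can be approximated by counting newline-separated steps; the counting proxy and all names are illustrative, not the paper's exact criterion.

```python
# Sketch of complexity-based exemplar selection: prefer few-shot examples
# whose reasoning chains have the most steps. Step counting is a toy proxy
# for the paper's notion of reasoning complexity.

candidates = [
    {"question": "Q1", "chain": "step 1\nstep 2\nanswer"},
    {"question": "Q2", "chain": "step 1\nstep 2\nstep 3\nstep 4\nanswer"},
    {"question": "Q3", "chain": "step 1\nanswer"},
]

def complexity(example):
    """Proxy for reasoning complexity: number of chain-of-thought lines."""
    return example["chain"].count("\n") + 1

def select_exemplars(pool, k=2):
    """Keep the k most complex exemplars for the few-shot prompt."""
    return sorted(pool, key=complexity, reverse=True)[:k]

prompt_examples = select_exemplars(candidates)
print([e["question"] for e in prompt_examples])  # ['Q2', 'Q1']
```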
arXiv Detail & Related papers (2022-10-03T05:33:27Z)