Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning
- URL: http://arxiv.org/abs/2401.17686v2
- Date: Sun, 4 Feb 2024 13:18:34 GMT
- Title: Deductive Beam Search: Decoding Deducible Rationale for Chain-of-Thought Reasoning
- Authors: Tinghui Zhu, Kai Zhang, Jian Xie, Yu Su
- Abstract summary: Previous methods fail to address reasoning errors in intermediate steps, leading to accumulative errors.
We propose Deductive Beam Search (DBS), which seamlessly integrates chain-of-thought reasoning with step-wise beam search for Large Language Models.
Our approach deploys a verifier that checks the deducibility of each reasoning step from its premises, thus alleviating error accumulation.
- Score: 11.866321562684535
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements have significantly augmented the reasoning capabilities
of Large Language Models (LLMs) through various methodologies, especially
chain-of-thought (CoT) reasoning. However, previous methods fail to address
reasoning errors in intermediate steps, leading to accumulative errors. In this
paper, we propose Deductive Beam Search (DBS), which seamlessly integrates CoT
and deductive reasoning with step-wise beam search for LLMs. Our approach
deploys a verifier that checks the deducibility of each reasoning step from its
premises, thus alleviating error accumulation. Furthermore, we introduce a
scalable and labor-free data construction method to amplify our model's
verification capabilities. Extensive experiments demonstrate that our approach
significantly enhances the base performance of LLMs of various scales (7B, 13B,
70B, and ChatGPT) across 8 reasoning datasets from 3 diverse reasoning genres,
including arithmetic, commonsense, and symbolic reasoning. Moreover, our
analysis shows that DBS can detect diverse and subtle reasoning errors and
remains robust across different model scales.
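As a rough illustration of the mechanism the abstract describes, the sketch below keeps a beam of partial rationales, samples candidate next steps, scores each step's deducibility with a verifier, and retains the top-scoring chains. The `propose_steps` and `deducibility_score` callables are hypothetical placeholders for an LLM sampler and the trained verifier, not the paper's actual interfaces.

```python
import heapq
from typing import Callable, List, Tuple

def deductive_beam_search(
    question: str,
    propose_steps: Callable[[str, List[str]], List[str]],
    deducibility_score: Callable[[str, List[str], str], float],
    beam_width: int = 3,
    max_steps: int = 8,
) -> List[str]:
    """Step-wise beam search that keeps only highly deducible rationales."""
    # Each beam item is (cumulative verifier score, partial chain of steps).
    beams: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(max_steps):
        expansions: List[Tuple[float, List[str]]] = []
        for score, chain in beams:
            # Sample candidate next steps for this partial rationale.
            for step in propose_steps(question, chain):
                # The verifier scores whether `step` is deducible from the
                # question plus the steps derived so far (its premises).
                s = deducibility_score(question, chain, step)
                expansions.append((score + s, chain + [step]))
        if not expansions:
            break
        # Keep only the top-scoring partial rationales.
        beams = heapq.nlargest(beam_width, expansions, key=lambda b: b[0])
        # Stop once the best chain has committed to a final answer
        # (an assumed convention for this toy sketch).
        if beams[0][1][-1].startswith("Answer:"):
            break
    return beams[0][1]

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    propose = lambda q, chain: [f"derive fact {len(chain) + 1}", "Answer: 42"]
    score = lambda q, chain, step: 1.0 if step.startswith("Answer:") else 0.4
    print(deductive_beam_search("What is 6 * 7?", propose, score))
```

Summing per-step verifier scores is one simple way to rank chains; a length-normalized mean is an equally plausible choice under these assumptions.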
Related papers
- Boosting Deductive Reasoning with Step Signals In RLHF [15.441793744822457]
We have developed an automated method, Multi-step Deduction (MuseD), for generating deductive reasoning data.
MuseD has allowed us to create training and testing datasets for multi-step reasoning.
Our training data has demonstrated significant improvements in logical capabilities for both in-domain and out-of-domain reasoning tasks.
arXiv Detail & Related papers (2024-10-12T13:19:11Z)
- Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z)
- Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models [63.36637269634553]
We present a novel method of further improving performance by requiring models to compare multiple reasoning chains.
We find that instruction tuning on DCoT datasets boosts the performance of even smaller, and therefore more accessible, language models.
arXiv Detail & Related papers (2024-07-03T15:01:18Z)
- General Purpose Verification for Chain of Thought Prompting [16.381123651223763]
We explore ways to improve the reasoning capabilities of Large Language Models (LLMs).
We propose three general principles that a model should adhere to while reasoning.
We apply these constraints to the reasoning steps generated by the LLM to improve the accuracy of the final generation.
arXiv Detail & Related papers (2024-04-30T21:15:17Z)
- PathFinder: Guided Search over Multi-Step Reasoning Paths [80.56102301441899]
We propose PathFinder, a tree-search-based reasoning path generation approach.
It enhances diverse branching and multi-hop reasoning through the integration of dynamic decoding.
Our model generalizes well to longer, unseen reasoning chains, with complexity comparable to beam search with large branching factors.
arXiv Detail & Related papers (2023-12-08T17:05:47Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Noisy Exemplars Make Large Language Models More Robust: A Domain-Agnostic Behavioral Analysis [10.06218778776515]
We introduce a systematic approach to test the robustness of large language models (LLMs) in multi-hop reasoning tasks via domain-agnostic perturbations.
We find that models are more sensitive to certain perturbations such as replacing words with their synonyms.
We also demonstrate that increasing the proportion of perturbed exemplars in the prompts improves the robustness of few-shot prompting methods.
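The synonym-replacement perturbation this summary mentions is easy to picture in code. Below is a minimal sketch using NLTK's WordNet; the library choice and the per-word replacement rate are illustrative assumptions, not necessarily the paper's actual procedure.

```python
import random

import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-time corpus fetch

def perturb_with_synonyms(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Replace roughly `rate` of the words with a WordNet synonym."""
    rng = random.Random(seed)
    words = text.split()
    for i, word in enumerate(words):
        if rng.random() >= rate:
            continue
        # Collect alternative surface forms from all synsets of the word.
        synonyms = {
            lemma.name().replace("_", " ")
            for syn in wn.synsets(word)
            for lemma in syn.lemmas()
        } - {word}
        if synonyms:
            words[i] = rng.choice(sorted(synonyms))
    return " ".join(words)

# e.g. perturbing a few-shot exemplar before it goes into the prompt
print(perturb_with_synonyms("The quick brown fox jumps over the lazy dog"))
```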
arXiv Detail & Related papers (2023-11-01T03:15:05Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge-distillation fine-tuning technique to assess the performance of the model.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models [81.01397924280612]
Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations.
We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts Prompting), an iterative bootstrapping approach for selecting exemplars and generating reasoning chains.
arXiv Detail & Related papers (2023-04-23T13:54:39Z)
- Faithful Reasoning Using Large Language Models [12.132449274592668]
We show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem.
Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs.
We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question-answering, showing that it outperforms baselines on final answer accuracy.
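To make the two-model chaining concrete, here is a minimal sketch: `select` and `infer` stand in for the two fine-tuned LMs (one picking premises, one deriving a conclusion), and their names, signatures, and stopping rule are illustrative assumptions rather than the paper's actual components.

```python
from typing import Callable, List

def faithful_reasoning_loop(
    question: str,
    facts: List[str],
    select: Callable[[str, List[str]], List[str]],  # hypothetical selection LM
    infer: Callable[[List[str]], str],              # hypothetical inference LM
    max_steps: int = 5,
) -> List[str]:
    """One reasoning step = one selection call followed by one inference call."""
    trace: List[str] = []
    known = list(facts)
    for _ in range(max_steps):
        premises = select(question, known)  # pick which facts to reason from
        conclusion = infer(premises)        # derive a new statement from them
        trace.append(f"{' & '.join(premises)} => {conclusion}")
        known.append(conclusion)            # conclusions become new premises
        if conclusion.lower().startswith("answer:"):  # assumed halting signal
            break
    return trace

# Toy demo with trivial stand-ins for the two models.
steps = faithful_reasoning_loop(
    "Is Socrates mortal?",
    ["Socrates is a man", "All men are mortal"],
    select=lambda q, known: known[-2:],
    infer=lambda premises: "Answer: Socrates is mortal",
)
print("\n".join(steps))
```

Because each conclusion is appended to the fact pool, the trace mirrors the logical structure of the problem, which is the causal-faithfulness property the summary highlights.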
arXiv Detail & Related papers (2022-08-30T13:44:41Z)