GRACE: Discriminator-Guided Chain-of-Thought Reasoning
- URL: http://arxiv.org/abs/2305.14934v2
- Date: Tue, 24 Oct 2023 01:21:05 GMT
- Title: GRACE: Discriminator-Guided Chain-of-Thought Reasoning
- Authors: Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
- Abstract summary: We propose Guiding chain-of-thought ReAsoning with a CorrectnEss Discriminator (GRACE) to steer the decoding process towards producing correct reasoning steps.
GRACE employs a discriminator trained with a contrastive loss over correct and incorrect steps, which is used during decoding to score next-step candidates.
- Score: 75.35436025709049
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the context of multi-step reasoning, e.g., with chain-of-thought, language
models (LMs) can easily assign a high likelihood to incorrect steps. As a
result, decoding strategies that optimize for solution likelihood often yield
incorrect solutions. To address this issue, we propose Guiding chain-of-thought
ReAsoning with a CorrectnEss Discriminator (GRACE), a stepwise decoding
approach that steers the decoding process towards producing correct reasoning
steps. GRACE employs a discriminator trained with a contrastive loss over
correct and incorrect steps, which is used during decoding to score next-step
candidates based on their correctness. Importantly, GRACE only requires
sampling from the LM, without the need for LM training or fine-tuning. Using
models from FLAN-T5 and LLaMA families, we evaluate GRACE over four math and
two symbolic reasoning tasks, where it exhibits substantial performance gains
compared to greedy decoding, verifiers, and self-consistency in most settings.
When further combined with self-consistency, GRACE outperforms all the
baselines by sizeable margins. Human and LLM evaluations over GSM8K show that
GRACE not only improves the final answer accuracy but also the correctness of
the intermediate reasoning. Our implementation can be accessed at
https://github.com/mukhal/grace.
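To make the training side concrete: below is a minimal sketch of a contrastive loss over correct and incorrect steps, written in the generic max-margin form. The function name step_contrastive_loss, the pairing of pos_scores/neg_scores (discriminator scores for matched correct/incorrect steps), and the margin value are illustrative assumptions; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def step_contrastive_loss(pos_scores: torch.Tensor,
                          neg_scores: torch.Tensor,
                          margin: float = 1.0) -> torch.Tensor:
    # Push each correct step's score above its paired incorrect step's
    # score by at least `margin`; pairs already separated by the margin
    # contribute zero loss. (Generic max-margin form; an assumption, not
    # necessarily the paper's exact objective.)
    return F.relu(margin - pos_scores + neg_scores).mean()
```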
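And the decoding side: a minimal sketch of discriminator-guided stepwise decoding under stated assumptions. The helper interfaces sample_next_steps and score_step, the additive score combination with weight alpha, and the stopping heuristic are all hypothetical placeholders, not the authors' API; see https://github.com/mukhal/grace for the actual implementation.

```python
from typing import Callable, List, Tuple

# Hypothetical interfaces (assumptions, not the paper's API):
#   sample_next_steps(question, prefix, k) -> [(step_text, lm_logprob), ...]
#   score_step(question, prefix, step_text) -> discriminator correctness score
SampleFn = Callable[[str, List[str], int], List[Tuple[str, float]]]
ScoreFn = Callable[[str, List[str], str], float]

def grace_decode(question: str,
                 sample_next_steps: SampleFn,
                 score_step: ScoreFn,
                 num_candidates: int = 20,
                 max_steps: int = 8,
                 alpha: float = 1.0) -> List[str]:
    """Stepwise decoding: sample candidate next steps from the LM, then keep
    the candidate with the best combined LM/discriminator score."""
    prefix: List[str] = []
    for _ in range(max_steps):
        # GRACE only samples from the LM; no LM training or fine-tuning.
        candidates = sample_next_steps(question, prefix, num_candidates)
        if not candidates:
            break
        # Re-rank candidates by LM log-likelihood plus a weighted
        # discriminator correctness score; the paper's exact scoring rule
        # may differ from this simple additive combination.
        best_step, _ = max(
            candidates,
            key=lambda c: c[1] + alpha * score_step(question, prefix, c[0]),
        )
        prefix.append(best_step)
        # Crude stopping heuristic (an assumption, not from the paper).
        if "answer" in best_step.lower():
            break
    return prefix
```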
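For the self-consistency combination mentioned in the abstract, multiple solutions are sampled and their final answers aggregated by majority vote; a minimal sketch, assuming final answers have already been extracted as strings:

```python
from collections import Counter
from typing import List

def self_consistency_vote(final_answers: List[str]) -> str:
    # Majority vote over final answers from multiple sampled (here,
    # GRACE-guided) solutions; ties resolve to the first-seen answer.
    return Counter(final_answers).most_common(1)[0][0]

# Example: two of three sampled solutions agree on "42".
print(self_consistency_vote(["42", "41", "42"]))  # -> "42"
```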
Related papers
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification [52.095460362197336]
Large language models (LLMs) struggle with consistent and accurate reasoning.
LLMs are trained primarily on correct solutions, reducing their ability to detect and learn from errors.
We propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
arXiv Detail & Related papers (2024-10-05T05:21:48Z)
- Automated Theorem Provers Help Improve Large Language Model Reasoning [0.18416014644193066]
We show how accuracy can be improved with a neuro-symbolic architecture.
We define a framework of syntactic and semantic error categories.
We extend our method with capabilities for automatically correcting syntactic and semantic errors.
arXiv Detail & Related papers (2024-08-07T01:03:56Z)
- Learning to Check: Unleashing Potentials for Self-Correction in Large Language Models [5.463333911506443]
We aim to enhance the self-checking capabilities of large language models (LLMs) by constructing training data for checking tasks.
We propose a specialized checking format called "Step CoT Check".
Experiments demonstrate that fine-tuning with the "Step CoT Check" format significantly improves the self-checking and self-correction abilities of LLMs.
arXiv Detail & Related papers (2024-02-20T14:23:23Z)
- Training Chain-of-Thought via Latent-Variable Inference [30.21067593018967]
Large language models (LLMs) solve problems more accurately and interpretably when instructed to work out the answer step by step using a "chain-of-thought" prompt.
Naively combining CoT with supervised tuning requires supervision not just of the correct answers, but also of detailed rationales that lead to those answers.
We propose a fine-tuning strategy that tries to maximize the marginal log-likelihood of generating a correct answer using CoT prompting.
arXiv Detail & Related papers (2023-11-28T17:47:32Z)
- SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs).
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z)
- Self-Evaluation Guided Beam Search for Reasoning [61.523627290397556]
We introduce a stepwise self-evaluation mechanism to guide and calibrate the reasoning process of large language models (LLMs).
We propose a decoding algorithm integrating the self-evaluation guidance via beam search.
Our approach surpasses the corresponding Codex-backboned baselines in few-shot accuracy by 6.34%, 9.56%, and 5.46% on GSM8K, AQuA, and StrategyQA, respectively.
arXiv Detail & Related papers (2023-05-01T02:37:59Z)
- Large Language Models are Better Reasoners with Self-Verification [48.534270563880845]
Large language models (LLMs) have shown strong reasoning ability in several natural language processing tasks.
LLMs with chain-of-thought (CoT) prompting require multi-step prompting and multi-token prediction, which makes them highly sensitive to individual mistakes.
We propose and prove that LLMs also have similar self-verification abilities.
arXiv Detail & Related papers (2022-12-19T15:51:52Z)
- LM-Critic: Language Models for Unsupervised Grammatical Error Correction [128.9174409251852]
We show how to leverage a pretrained language model (LM) to define an LM-Critic, which judges whether a sentence is grammatical.
We apply this LM-Critic and BIFI along with a large set of unlabeled sentences to bootstrap realistic ungrammatical/grammatical pairs for training a corrector.
arXiv Detail & Related papers (2021-09-14T17:06:43Z)