Maieutic Prompting: Logically Consistent Reasoning with Recursive
Explanations
- URL: http://arxiv.org/abs/2205.11822v1
- Date: Tue, 24 May 2022 06:36:42 GMT
- Title: Maieutic Prompting: Logically Consistent Reasoning with Recursive
Explanations
- Authors: Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra
Bhagavatula, Ronan Le Bras, Yejin Choi
- Abstract summary: We develop Maieutic Prompting, which infers a correct answer to a question even from the noisy and inconsistent generations of language models.
Maieutic Prompting achieves up to 20% better accuracy than state-of-the-art prompting methods.
- Score: 71.2950434944196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite their impressive capabilities, large pre-trained language models
(LMs) struggle with consistent reasoning; recently, prompting LMs to generate
explanations that self-guide the inference has emerged as a promising direction
to amend this. However, these approaches are fundamentally bounded by the
correctness of explanations, which themselves are often noisy and inconsistent.
In this work, we develop Maieutic Prompting, which infers a correct answer to a
question even from the noisy and inconsistent generations of LMs. Maieutic
Prompting induces a tree of explanations abductively (e.g. X is true, because
...) and recursively, then frames the inference as a satisfiability problem
over these explanations and their logical relations. We test Maieutic Prompting
for true/false QA on three challenging benchmarks that require complex
commonsense reasoning. Maieutic Prompting achieves up to 20% better accuracy
than state-of-the-art prompting methods, and as a fully unsupervised approach,
performs competitively with supervised models. We also show that Maieutic
Prompting improves robustness in inference while providing interpretable
rationales.
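To make the satisfiability framing concrete, below is a minimal sketch of the inference step, assuming a toy explanation tree and a brute-force weighted MAX-SAT solver; the node names, weights, and encoding are illustrative stand-ins, not the paper's actual implementation.

```python
# Maieutic-style inference, toy version: explanations form a tree, and the
# final answer is read off a weighted MAX-SAT solution over their logical
# relations. All weights and relations below are hypothetical examples.
from itertools import product

# Hypothetical explanation tree for a question Q ("X is true"): E_T supports
# Q being True, E_F supports Q being False, E_T1 abductively supports E_T.
nodes = ["Q", "E_T", "E_F", "E_T1"]

# Soft clauses as (list of literals, weight); a literal is (node, polarity).
# "E_T implies Q" is encoded as the clause (not E_T) or Q, and so on.
soft_clauses = [
    ([("E_T", False), ("Q", True)], 0.9),    # E_T -> Q
    ([("E_F", False), ("Q", False)], 0.4),   # E_F -> not Q
    ([("E_T1", False), ("E_T", True)], 0.8), # E_T1 -> E_T
    ([("E_T", True)], 0.7),                  # LM's belief in E_T
    ([("E_F", True)], 0.3),                  # LM's (weak) belief in E_F
    ([("E_T1", True)], 0.6),                 # LM's belief in E_T1
]

def weight(assignment):
    """Total weight of soft clauses satisfied by a truth assignment."""
    total = 0.0
    for clause, w in soft_clauses:
        if any(assignment[v] == pol for v, pol in clause):
            total += w
    return total

# Brute-force weighted MAX-SAT: fine for a toy tree, exponential in general.
best = max(
    (dict(zip(nodes, vals)) for vals in product([True, False], repeat=len(nodes))),
    key=weight,
)
print("Q =", best["Q"])  # the logically most consistent answer
```

Even when individual generations conflict (here, E_T and E_F point in opposite directions), the solver settles on the assignment that best satisfies the weighted relations, which is the intuition behind inferring a correct answer from noisy explanations.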
Related papers
- Towards More Faithful Natural Language Explanation Using Multi-Level
Contrastive Learning in VQA [7.141288053123662]
Natural language explanation in visual question answering (VQA-NLE) aims to explain the decision-making process of models by generating natural language sentences, increasing users' trust in black-box systems.
Existing post-hoc explanations are not always aligned with human logical inference, suffering from three issues: 1) deductive unsatisfiability, where the generated explanations do not logically lead to the answer; 2) factual inconsistency, where the model falsifies its counterfactual explanation for answers without considering the facts in images; and 3) semantic perturbation insensitivity, where the model cannot recognize the semantic changes caused by small perturbations.
arXiv Detail & Related papers (2023-12-21T05:51:55Z) - Neuro-Symbolic Integration Brings Causal and Reliable Reasoning Proofs [95.07757789781213]
Two lines of approaches are adopted for complex reasoning with LLMs.
One line of work prompts LLMs with various reasoning structures, whose structured outputs can naturally be regarded as intermediate reasoning steps.
The other line of work adopts LLM-free declarative solvers to do the reasoning task, rendering higher reasoning accuracy but lacking interpretability due to the black-box nature of the solvers.
We present a simple extension to the latter line of work. Specifically, we showcase that the intermediate search logs generated by Prolog interpreters can be accessed and interpreted into human-readable reasoning.
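As a rough illustration of that extension, the sketch below turns a Prolog-style debugger trace into readable proof steps; the trace lines and the English templates are hypothetical examples, not the paper's actual pipeline.

```python
# Interpreting a declarative solver's search log as human-readable reasoning.
# The trace imitates a standard Prolog debugger ("port") trace; the mapping
# from ports to English sentences is an illustrative assumption.
trace = [
    "Call: ancestor(alice, carol)",
    "Call: parent(alice, carol)",
    "Fail: parent(alice, carol)",
    "Call: parent(alice, bob)",
    "Exit: parent(alice, bob)",
    "Call: ancestor(bob, carol)",
    "Exit: ancestor(bob, carol)",
    "Exit: ancestor(alice, carol)",
]

TEMPLATES = {
    "Call": "Trying to prove {goal}.",
    "Exit": "Proved {goal}.",
    "Fail": "Could not prove {goal}; backtracking.",
}

for line in trace:
    port, goal = line.split(": ", 1)
    print(TEMPLATES[port].format(goal=goal))
```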
arXiv Detail & Related papers (2023-11-16T11:26:21Z) - Question Decomposition Improves the Faithfulness of Model-Generated
Reasoning [23.34325378824462]
It is difficult to verify the correctness and safety of the behavior of large language models (LLMs).
One approach is to prompt LLMs to externalize their reasoning, by having them generate step-by-step reasoning as they answer a question.
This approach relies on the stated reasoning faithfully reflecting the model's actual reasoning, which is not always the case.
Decomposition-based methods achieve strong performance on question-answering tasks, sometimes approaching that of chain-of-thought (CoT) prompting.
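A hedged sketch of the decomposition idea follows: each subquestion is answered in its own context, so the final answer can only depend on the stated steps. The `ask_lm` stub and its canned answers are hypothetical placeholders for real LLM calls.

```python
# Decomposition-based prompting, toy version: split the question into
# subquestions, answer each in isolation, then recompose the sub-answers.
def ask_lm(prompt):
    # Stub: canned answers standing in for separate LLM calls per context.
    canned = {
        "Who wrote 'Hamlet'?": "William Shakespeare",
        "In which century did William Shakespeare live?": "the 16th-17th century",
    }
    return canned.get(prompt, "unknown")

subquestions = [
    "Who wrote 'Hamlet'?",
    "In which century did William Shakespeare live?",
]

# Each subquestion is answered in its own context, then recomposed, so the
# stated steps are the only path from question to final answer.
sub_answers = [ask_lm(q) for q in subquestions]
print(f"'Hamlet' was written in {sub_answers[1]}.")
```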
arXiv Detail & Related papers (2023-07-17T00:54:10Z) - Boosting Language Models Reasoning with Chain-of-Knowledge Prompting [18.326858925174605]
Chain-of-Knowledge (CoK) prompting aims at eliciting explicit pieces of knowledge evidence in the form of structured triples.
Benefiting from CoK, we additionally introduce an F2-Verification method to estimate the reliability of the reasoning chains.
Extensive experiments demonstrate that our method can further improve the performance of commonsense, factual, symbolic, and arithmetic reasoning tasks.
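To illustrate the triple-based prompting format, here is a small sketch; the prompt template, the triple relations, and the example questions are assumptions made for illustration, not the paper's exact templates.

```python
# CoK-style prompt construction, toy version: the demonstration exposes
# explicit evidence triples (subject, relation, object) before the answer,
# and the test question is left open for the LM to continue.
def build_cok_prompt(demo_question, demo_triples, demo_answer, test_question):
    evidence = "\n".join(f"({s}, {r}, {o})" for s, r, o in demo_triples)
    return (
        f"Q: {demo_question}\n"
        f"Evidence triples:\n{evidence}\n"
        f"A: {demo_answer}\n\n"
        f"Q: {test_question}\n"
        f"Evidence triples:\n"
    )

prompt = build_cok_prompt(
    "Can a penguin fly?",
    [("penguin", "is_a", "bird"),
     ("penguin", "capable_of", "swimming"),
     ("penguin", "not_capable_of", "flying")],
    "No",
    "Do whales breathe air?",
)
print(prompt)  # would be passed to an LM in a real pipeline
```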
arXiv Detail & Related papers (2023-06-10T12:42:36Z) - Language Models with Rationality [57.37201135072838]
Large language models (LLMs) are proficient at question-answering (QA).
It is not always clear how (or even if) an answer follows from their latent "beliefs".
arXiv Detail & Related papers (2023-05-23T17:04:25Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought
Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - Towards Understanding Chain-of-Thought Prompting: An Empirical Study of
What Matters [82.84696222087396]
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs).
We show that CoT reasoning is possible even with invalid demonstrations.
arXiv Detail & Related papers (2022-12-20T05:20:54Z) - Faithful Reasoning Using Large Language Models [12.132449274592668]
We show how LMs can be made to perform faithful multi-step reasoning via a process whose causal structure mirrors the underlying logical structure of the problem.
Our approach works by chaining together reasoning steps, where each step results from calls to two fine-tuned LMs.
We demonstrate the effectiveness of our model on multi-step logical deduction and scientific question-answering, showing that it outperforms baselines on final answer accuracy.
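A minimal sketch of the chained two-model loop under stated assumptions: both LM calls are stubbed functions, and the halting check is a toy; in the paper these roles are played by two fine-tuned LMs.

```python
# Faithful multi-step reasoning, toy version: alternate between a "selection"
# model that picks relevant premises and an "inference" model that derives a
# new fact, appending each derivation so the trace mirrors the proof.
def select(context, question):
    # Stub: a fine-tuned LM would choose which premises to reason over.
    return context[:2]

def infer(premises):
    # Stub: a fine-tuned LM would derive a new statement from the premises.
    return "derived: " + " & ".join(premises)

def reason(context, question, max_steps=3):
    for _ in range(max_steps):
        premises = select(context, question)
        new_fact = infer(premises)
        context = context + [new_fact]  # the trace IS the causal structure
        if question.lower() in new_fact.lower():  # toy halting check
            break
    return context

trace = reason(["all birds lay eggs", "a robin is a bird"], "robins lay eggs")
print("\n".join(trace))
```

Because the final answer is produced only from the accumulated trace, the reasoning chain causally determines the answer rather than merely decorating it.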
arXiv Detail & Related papers (2022-08-30T13:44:41Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in
NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
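The check can be sketched as follows, with a stub NLI model and a string-level counterfactual generator standing in for the real components; both are hypothetical simplifications of the method.

```python
# Faithfulness-through-Counterfactuals, toy version: if an explanation says
# the label holds *because* of predicate P, then flipping P in the input
# should change the model's prediction; otherwise the explanation is not
# faithful to the model's actual reasoning.
def predict(premise, hypothesis):
    # Stub NLI model: "entails" iff the hypothesis text appears in the premise.
    return "entailment" if hypothesis in premise else "neutral"

def flip_predicate(text, predicate, negation):
    # Hypothetical counterfactual generator: negate the cited predicate.
    return text.replace(predicate, negation)

premise = "The dog is sleeping on the porch."
hypothesis = "The dog is sleeping"
# Suppose the explanation cites the predicate "sleeping" as the reason.
original = predict(premise, hypothesis)
counterfactual = predict(flip_predicate(premise, "sleeping", "awake"), hypothesis)

# A faithful explanation implies the prediction changes under the flip.
print("faithful" if original != counterfactual else "unfaithful")
```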
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.