Natural Language Deduction through Search over Statement Compositions
- URL: http://arxiv.org/abs/2201.06028v1
- Date: Sun, 16 Jan 2022 12:05:48 GMT
- Title: Natural Language Deduction through Search over Statement Compositions
- Authors: Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri and Greg Durrett
- Abstract summary: We propose a system for natural language deduction that decomposes the task into separate steps coordinated by best-first search.
Our experiments demonstrate that the proposed system can better distinguish verifiable hypotheses from unverifiable ones.
- Score: 43.93269297653265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In settings from fact-checking to question answering, we frequently want to
know whether a collection of evidence entails a hypothesis. Existing methods
primarily focus on end-to-end discriminative versions of this task, but less
work has treated the generative version in which a model searches over the
space of entailed statements to derive the hypothesis. We propose a system for
natural language deduction that decomposes the task into separate steps
coordinated by best-first search, producing a tree of intermediate conclusions
that faithfully reflects the system's reasoning process. Our experiments
demonstrate that the proposed system can better distinguish verifiable
hypotheses from unverifiable ones and produce natural language explanations
that are more internally consistent than those produced by an end-to-end T5
model.
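As a rough illustration of the approach the abstract describes, the sketch below runs a best-first search over pairwise statement compositions, recording parent pointers so the proof tree can be read back out. The `compose` and `score` functions are placeholders for the paper's learned step-generation and step-scoring models, which are not specified here.

```python
import heapq
import itertools

def compose(a: str, b: str) -> str:
    """Placeholder for a learned step model that generates a new
    statement entailed by the pair (a, b)."""
    return f"({a}) AND ({b})"

def score(statement: str, hypothesis: str) -> float:
    """Placeholder heuristic: fraction of hypothesis tokens covered
    by the statement; higher means more promising."""
    s, h = set(statement.split()), set(hypothesis.split())
    return len(s & h) / max(len(h), 1)

def best_first_deduce(premises, hypothesis, max_steps=50):
    """Best-first search over statement compositions, expanding the
    most promising pair first and keeping parent pointers so the
    tree of intermediate conclusions can be reconstructed."""
    pool = list(premises)
    parents = {}   # intermediate conclusion -> (statement_a, statement_b)
    frontier = []  # min-heap over negated scores = max-heap over scores
    for a, b in itertools.combinations(pool, 2):
        heapq.heappush(frontier, (-score(compose(a, b), hypothesis), a, b))
    for _ in range(max_steps):
        if not frontier:
            break
        _, a, b = heapq.heappop(frontier)
        conclusion = compose(a, b)
        if conclusion in parents:
            continue
        parents[conclusion] = (a, b)
        if score(conclusion, hypothesis) == 1.0:  # hypothesis derived
            return conclusion, parents
        for other in pool:
            pair_score = score(compose(conclusion, other), hypothesis)
            heapq.heappush(frontier, (-pair_score, conclusion, other))
        pool.append(conclusion)
    return None, parents  # hypothesis not verifiable within the budget
```

With real models substituted in, `parents` encodes exactly the kind of proof tree that the abstract says should faithfully reflect the system's reasoning process.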
Related papers
- QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios [15.193544498311603]
We present QUITE, a dataset of real-world Bayesian reasoning scenarios with categorical random variables and complex relationships.
We conduct an extensive set of experiments, finding that logic-based models outperform out-of-the-box large language models on all reasoning types.
Our results provide evidence that neuro-symbolic models are a promising direction for improving complex reasoning.
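For context, the simplest categorical case of the Bayesian reasoning QUITE targets reduces to posterior computation via Bayes' rule. The toy numbers below are invented for illustration and are not drawn from the dataset:

```python
# Toy categorical Bayes update: P(cause | evidence) from a prior over
# causes and per-cause likelihoods of the observed evidence.
prior = {"flu": 0.1, "cold": 0.3, "healthy": 0.6}
likelihood = {"flu": 0.9, "cold": 0.6, "healthy": 0.05}  # P(fever | cause)

unnormalized = {c: prior[c] * likelihood[c] for c in prior}
z = sum(unnormalized.values())
posterior = {c: p / z for c, p in unnormalized.items()}
print(posterior)  # {'flu': 0.3, 'cold': 0.6, 'healthy': 0.1}
```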
arXiv Detail & Related papers (2024-10-14T12:44:59Z)
- Hypothesis Testing Prompting Improves Deductive Reasoning in Large Language Models [19.879616265315637]
Hypothesis Testing Prompting adds conclusion assumptions, backward reasoning, and fact verification during intermediate reasoning steps.
Experiments show that Hypothesis Testing Prompting not only significantly improves performance, but also yields a more coherent and standardized reasoning process.
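A minimal sketch of what such a prompt might look like, assuming the three ingredients named above; the wording is illustrative, not the paper's actual template:

```python
def hypothesis_testing_prompt(facts: list[str], candidates: list[str]) -> str:
    """Builds a prompt that asks the model to assume each candidate
    conclusion, reason backward to the facts it requires, and verify
    those facts. (Illustrative wording, not the paper's template.)"""
    fact_list = "\n".join(f"- {f}" for f in facts)
    option_list = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"Facts:\n{fact_list}\n\n"
        f"Candidate conclusions:\n{option_list}\n\n"
        "For each candidate: assume it is true, reason backward to the "
        "facts it would require, and verify each required fact against "
        "the list above. State which candidate survives verification."
    )
```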
arXiv Detail & Related papers (2024-05-09T08:46:17Z)
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
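The refinement loop itself can be sketched abstractly as propose-test-refine; `propose` and `refine` stand in for the LM calls in the paper's setup:

```python
def iterative_refinement(examples, propose, refine, max_iters=5):
    """Propose a hypothesis (here, a callable rule), test it against
    observed input/output examples, and feed any failures back to the
    refiner until the hypothesis fits or the budget runs out."""
    hypothesis = propose(examples)
    for _ in range(max_iters):
        failures = [(x, y) for x, y in examples if hypothesis(x) != y]
        if not failures:
            return hypothesis  # consistent with all observations
        hypothesis = refine(hypothesis, failures)
    return hypothesis  # best effort after max_iters refinements
```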
arXiv Detail & Related papers (2023-10-12T17:51:10Z)
- Deductive Additivity for Planning of Natural Language Proofs [43.93269297653265]
We investigate whether efficient planning is possible via embedding spaces compatible with deductive reasoning.
Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being fully effective and lack the ability to model certain categories of reasoning.
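The additivity property is easy to state operationally: a conclusion's embedding should lie near the vector sum of its premises' embeddings. A minimal check, assuming some sentence encoder `embed` that maps text to a vector:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def additivity_score(embed, premise_a: str, premise_b: str, conclusion: str) -> float:
    """Deductive additivity in operational form: cosine similarity
    between the conclusion's embedding and the sum of the premises'
    embeddings. `embed` is any sentence encoder (an assumption here)."""
    pa, pb, c = embed(premise_a), embed(premise_b), embed(conclusion)
    summed = [x + y for x, y in zip(pa, pb)]
    return cosine(summed, c)
```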
arXiv Detail & Related papers (2023-07-05T17:45:48Z)
- Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
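The mutual-exclusivity signal can be sketched as follows: since at most one candidate explanation is correct for a context, raw compatibility scores can be normalized across the candidate set into a distribution usable as supervision without manual labels. A hedged sketch of that normalization:

```python
import math

def normalize_over_candidates(compatibility_scores: list[float]) -> list[float]:
    """Softmax over mutually exclusive candidate explanations: because
    at most one candidate is correct for the context, the normalized
    scores form a probability distribution that can serve as a training
    signal without per-example annotations. (Sketch of the general idea.)"""
    m = max(compatibility_scores)
    exps = [math.exp(s - m) for s in compatibility_scores]
    total = sum(exps)
    return [e / total for e in exps]
```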
arXiv Detail & Related papers (2023-05-24T01:35:10Z)
- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark [56.555662318619135]
We introduce a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
We expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer.
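A hypothetical rendering of what such a structured explanation might look like as a data structure; this is not STREET's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningStep:
    """One node in a structured explanation: an intermediate conclusion
    plus the premises or earlier steps it was derived from.
    (Hypothetical structure, not the benchmark's actual format.)"""
    conclusion: str
    derived_from: list = field(default_factory=list)

answer = ReasoningStep(
    conclusion="Therefore, the answer is (B).",
    derived_from=[
        ReasoningStep("The object sinks.", derived_from=["premise 1", "premise 2"]),
        ReasoningStep("Denser objects sink in water.", derived_from=["premise 3"]),
    ],
)
```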
arXiv Detail & Related papers (2023-02-13T22:34:02Z)
- Natural Language Deduction with Incomplete Information [43.93269297653265]
We propose a new system that can handle the underspecified setting where not all premises are stated at the outset.
By using a natural language generation model to abductively infer a premise given another premise and a conclusion, we can impute missing pieces of evidence needed for the conclusion to be true.
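The abductive step can be sketched as a single call to a generation model; `generate` stands in for the paper's natural language generator:

```python
def impute_missing_premise(generate, stated_premise: str, conclusion: str) -> str:
    """Abduction as generation: ask the model for a premise that,
    together with the stated one, would make the conclusion follow.
    `generate` is a placeholder for the paper's generation model."""
    prompt = (
        f"Premise: {stated_premise}\n"
        f"Conclusion: {conclusion}\n"
        "What additional premise is needed for the conclusion to follow?"
    )
    return generate(prompt)
```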
arXiv Detail & Related papers (2022-11-01T17:27:55Z)
- Probing via Prompting [71.7904179689271]
This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable to or better than diagnostic probes at extracting information.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
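A hedged sketch of the head-removal half of this procedure, using Hugging Face's `prune_heads` API; the model choice and the layer/head indices below are placeholders for whatever the prompting-based probe identifies as essential:

```python
# Assumes: pip install transformers torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Placeholder indices: in the described procedure, these would be the
# attention heads the probe found essential to the linguistic property.
heads_essential_to_property = {0: [3, 7], 5: [1]}  # layer -> head indices

model.prune_heads(heads_essential_to_property)
# The pruned model would then be re-evaluated on language modeling
# (e.g., perplexity) to measure how useful the property was.
```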
arXiv Detail & Related papers (2022-07-04T22:14:40Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the logic expressed in the explanation.
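The consistency test at the end can be sketched as a single check; `predict` stands in for the NLI model, and constructing the counterfactual from the explanation's predicates is the part of the pipeline not shown here:

```python
def is_faithful(predict, premise: str, counterfactual_hypothesis: str,
                expected_label: str) -> bool:
    """If the explanation's logic says the counterfactual hypothesis
    should receive `expected_label` (e.g., flipping 'entailment' to
    'contradiction'), the model's actual prediction must agree for the
    explanation to count as faithful. `predict` is a placeholder."""
    return predict(premise, counterfactual_hypothesis) == expected_label
```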
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- An Interpretable Neuro-Symbolic Reasoning Framework for Task-Oriented Dialogue Generation [21.106357884651363]
We introduce a neuro-symbolic framework that performs explicit reasoning, justifying model decisions through reasoning chains.
We propose a two-phase approach that consists of a hypothesis generator and a reasoner.
The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations.
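The two-phase structure can be sketched as generate-then-select; `generator` and `reasoner` stand in for the paper's trained components:

```python
def two_phase_reason(generator, reasoner, dialogue_context, n_candidates=5):
    """The hypothesis generator proposes candidate reasoning chains;
    the reasoner scores each against the dialogue context, and the
    best chain is kept as the justification for the response."""
    candidates = [generator(dialogue_context) for _ in range(n_candidates)]
    return max(candidates, key=lambda c: reasoner(dialogue_context, c))
```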
arXiv Detail & Related papers (2022-03-11T10:44:08Z)