Natural Language Deduction with Incomplete Information
- URL: http://arxiv.org/abs/2211.00614v1
- Date: Tue, 1 Nov 2022 17:27:55 GMT
- Title: Natural Language Deduction with Incomplete Information
- Authors: Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, Greg Durrett
- Abstract summary: We propose a new system that can handle the underspecified setting where not all premises are stated at the outset.
By using a natural language generation model to abductively infer a premise given another premise and a conclusion, we can impute missing pieces of evidence needed for the conclusion to be true.
- Score: 43.93269297653265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A growing body of work studies how to answer a question or verify a claim by
generating a natural language "proof": a chain of deductive inferences yielding
the answer based on a set of premises. However, these methods can only make
sound deductions when they follow from evidence that is given. We propose a new
system that can handle the underspecified setting where not all premises are
stated at the outset; that is, additional assumptions need to be materialized
to prove a claim. By using a natural language generation model to abductively
infer a premise given another premise and a conclusion, we can impute missing
pieces of evidence needed for the conclusion to be true. Our system searches
over two fringes in a bidirectional fashion, interleaving deductive
(forward-chaining) and abductive (backward-chaining) generation steps. We
sample multiple possible outputs for each step to achieve coverage of the
search space, at the same time ensuring correctness by filtering low-quality
generations with a round-trip validation procedure. Results on a modified
version of the EntailmentBank dataset and a new dataset called Everyday Norms:
Why Not? show that abductive generation with validation can recover premises
across in- and out-of-domain settings.
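The search procedure the abstract describes (interleaved forward deduction and backward abduction, sampling several candidates per step, and round-trip validation to filter low-quality generations) can be illustrated with a toy symbolic sketch. Here statements are frozensets of atomic facts standing in for natural language sentences, and `deduce`/`abduce_samples` are hypothetical stand-ins for the paper's generation models; the deliberately "noisy" abduction sample mimics a low-quality generation that validation should reject.

```python
from itertools import combinations

def deduce(p1, p2):
    """Deductive (forward-chaining) step: combine two premises."""
    return p1 | p2

def abduce_samples(premise, conclusion):
    """Abductive (backward-chaining) step: sample candidate missing
    premises. One sample is deliberately 'noisy' to mimic a low-quality
    generation that round-trip validation should filter out."""
    exact = conclusion - premise
    return [exact | frozenset({"spurious"}), exact]

def round_trip_valid(premise, candidate, conclusion):
    """Round-trip validation: keep a candidate only if deducing from it
    and the known premise actually recovers the conclusion."""
    return deduce(premise, candidate) == conclusion

def prove(premises, goal, max_steps=10):
    """Interleave deductive and abductive steps; return whether the goal
    was reached and which missing premises had to be imputed."""
    forward = set(premises)   # deductive fringe
    imputed = []              # materialized assumptions
    for step in range(max_steps):
        if goal in forward:
            return True, imputed
        if step % 2 == 0:     # deductive step over the forward fringe
            forward |= {deduce(a, b) for a, b in combinations(forward, 2)}
        else:                 # abductive step toward the goal
            valid = [c for p in forward
                     for c in abduce_samples(p, goal)
                     if round_trip_valid(p, c, goal) and c not in forward]
            if valid:         # prefer the weakest (smallest) assumption
                best = min(valid, key=lambda s: (len(s), sorted(s)))
                imputed.append(best)
                forward.add(best)
    return goal in forward, imputed

# Given premises {a} and {b} and goal {a, b, c}, the missing premise {c}
# must be abduced before the goal becomes deducible.
ok, missing = prove([frozenset({"a"}), frozenset({"b"})],
                    frozenset({"a", "b", "c"}))
```

This is only a sketch under the toy semantics above; the actual system scores and filters free-form text generations rather than sets of atoms, and searches over two fringes with many sampled outputs per step.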
Related papers
- Deductive Additivity for Planning of Natural Language Proofs [43.93269297653265]
We investigate whether efficient planning is possible via embedding spaces compatible with deductive reasoning.
Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective and cannot model certain categories of reasoning.
arXiv Detail & Related papers (2023-07-05T17:45:48Z) - Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z) - ReCEval: Evaluating Reasoning Chains via Correctness and Informativeness [67.49087159888298]
ReCEval is a framework that evaluates reasoning chains via two key properties: correctness and informativeness.
We show that ReCEval effectively identifies various error types and yields notable improvements compared to prior methods.
arXiv Detail & Related papers (2023-04-21T02:19:06Z) - Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present a novel stepwise method, NLProofS (Natural Language Proof Search).
NLProofS learns to generate relevant steps conditioning on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z) - Natural Language Deduction through Search over Statement Compositions [43.93269297653265]
We propose a system for natural language deduction that decomposes the task into separate steps coordinated by best-first search.
Our experiments demonstrate that the proposed system can better distinguish verifiable hypotheses from unverifiable ones.
arXiv Detail & Related papers (2022-01-16T12:05:48Z) - multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning [73.09791959325204]
We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules.
A recent work, named PRover, performs such reasoning by answering a question and also generating a proof graph that explains the answer.
In our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.
arXiv Detail & Related papers (2021-06-02T17:58:35Z) - ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language [19.917022148887273]
Transformers have been shown to emulate logical deduction over natural language theories.
We show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them.
arXiv Detail & Related papers (2020-12-24T00:55:46Z) - L2R2: Leveraging Ranking for Abductive Reasoning [65.40375542988416]
The abductive natural language inference task (αNLI) is proposed to evaluate the abductive reasoning ability of a learning system.
A novel approach, L2R2, is proposed under the learning-to-rank framework.
Experiments on the ART dataset achieve state-of-the-art results on the public leaderboard.
arXiv Detail & Related papers (2020-05-22T15:01:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.