ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
- URL: http://arxiv.org/abs/2012.13048v1
- Date: Thu, 24 Dec 2020 00:55:46 GMT
- Title: ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
- Authors: Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark
- Abstract summary: Transformers have been shown to emulate logical deduction over natural language theories.
We show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers have been shown to emulate logical deduction over natural
language theories (logical rules expressed in natural language), reliably
assigning true/false labels to candidate implications. However, their ability
to generate implications of a theory has not yet been demonstrated, and methods
for reconstructing proofs of answers are imperfect. In this work we show that a
generative model, called ProofWriter, can reliably generate both implications
of a theory and the natural language proof(s) that support them. In particular,
iterating a 1-step implication generator results in proofs that are highly
reliable, and represent actual model decisions (rather than post-hoc
rationalizations). On the RuleTaker dataset, the accuracy of ProofWriter's
proofs exceeds that of previous methods by +9% absolute, and in a way that generalizes
to proof depths unseen in training and on out-of-domain problems. We also show
that generative techniques can perform a type of abduction with high precision:
Given a theory and an unprovable conclusion, identify a missing fact that
allows the conclusion to be proved, along with a proof. These results
significantly improve the viability of neural methods for systematically
reasoning over natural language.
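The iterative strategy the abstract describes, i.e. repeatedly generating 1-step implications and assembling proofs from them, can be sketched symbolically. In the paper the 1-step generator is a trained T5 model producing natural-language conclusions; the `one_step` stub below is a hypothetical symbolic stand-in, and the example facts and rules are illustrative assumptions, not from the paper.

```python
# Sketch of ProofWriter's iterative strategy: run a 1-step implication
# generator to a fixpoint, record each new fact's supporting premises,
# and read proofs off that provenance. The neural generator is replaced
# here by a symbolic stub for illustration.

def one_step(facts, rules):
    """Hypothetical stand-in for the neural 1-step implication generator:
    return (conclusion, premises) pairs derivable in exactly one step."""
    out = []
    for premises, conclusion in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            out.append((conclusion, premises))
    return out

def prove_all(facts, rules):
    """Iterate the 1-step generator to a fixpoint, keeping provenance."""
    known = set(facts)
    provenance = {f: None for f in facts}  # None marks a given fact
    while True:
        new = one_step(known, rules)
        if not new:
            return known, provenance
        for conclusion, premises in new:
            known.add(conclusion)
            provenance.setdefault(conclusion, premises)

def proof(fact, provenance):
    """Unwind provenance into a flat natural-language proof."""
    if provenance.get(fact) is None:
        return fact  # a given fact proves itself
    premises = provenance[fact]
    return f"{fact} because " + " and ".join(
        proof(p, provenance) for p in premises)

facts = ["the cat is nice", "the cat is big"]
rules = [
    (("the cat is nice", "the cat is big"), "the cat is friendly"),
    (("the cat is friendly",), "the cat is liked"),
]
known, prov = prove_all(facts, rules)
print(proof("the cat is liked", prov))
```

Because each proved fact carries the premises that actually produced it, the emitted proof reflects the generator's own derivation rather than a post-hoc rationalization, which is the property the abstract emphasizes.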
Related papers
- Lean-STaR: Learning to Interleave Thinking and Proving [53.923617816215774]
We present Lean-STaR, a framework for training language models to produce informal thoughts prior to each step of a proof.
Lean-STaR achieves state-of-the-art results on the miniF2F-test benchmark within the Lean theorem proving environment.
arXiv Detail & Related papers (2024-07-14T01:43:07Z)
- Natural Language Deduction with Incomplete Information [43.93269297653265]
We propose a new system that can handle the underspecified setting where not all premises are stated at the outset.
By using a natural language generation model to abductively infer a premise given another premise and a conclusion, we can impute missing pieces of evidence needed for the conclusion to be true.
arXiv Detail & Related papers (2022-11-01T17:27:55Z)
- NaturalProver: Grounded Mathematical Proof Generation with Language Models [84.2064569475095]
Theorem proving in natural mathematical language plays a central role in mathematical advances and education.
We develop NaturalProver, a language model that generates proofs by conditioning on background references.
NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time.
arXiv Detail & Related papers (2022-05-25T17:01:18Z)
- Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present NLProofS (Natural Language Proof Search), a novel stepwise method.
NLProofS learns to generate relevant proof steps conditioned on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z)
- ProoFVer: Natural Logic Theorem Proving for Fact Verification [24.61301908217728]
We propose ProoFVer, a proof system for fact verification using natural logic.
The generation of proofs makes ProoFVer an explainable system.
We find that humans correctly simulate ProoFVer's decisions more often when given its proofs.
arXiv Detail & Related papers (2021-08-25T17:23:04Z)
- multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning [73.09791959325204]
We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules.
A recent work, named PRover, performs such reasoning by answering a question and also generating a proof graph that explains the answer.
In our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.
arXiv Detail & Related papers (2021-06-02T17:58:35Z)
- AmbiFC: Fact-Checking Ambiguous Claims with Evidence [57.7091560922174]
We present AmbiFC, a fact-checking dataset with 10k claims derived from real-world information needs.
We analyze disagreements arising from ambiguity when comparing claims against evidence in AmbiFC.
We develop models that predict veracity while handling this ambiguity via soft labels.
arXiv Detail & Related papers (2021-04-01T17:40:08Z)
- NaturalProofs: Mathematical Theorem Proving in Natural Language [132.99913141409968]
We develop NaturalProofs, a multi-domain corpus of mathematical statements and their proofs.
NaturalProofs unifies broad coverage, deep coverage, and low-resource mathematical sources.
We benchmark strong neural methods on mathematical reference retrieval and generation tasks.
arXiv Detail & Related papers (2021-03-24T03:14:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.