multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
- URL: http://arxiv.org/abs/2106.01354v1
- Date: Wed, 2 Jun 2021 17:58:35 GMT
- Title: multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning
- Authors: Swarnadeep Saha, Prateek Yadav, Mohit Bansal
- Abstract summary: We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules.
A recent work, named PRover, performs such reasoning by answering a question and also generating a proof graph that explains the answer.
In our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.
- Score: 73.09791959325204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We focus on a type of linguistic formal reasoning where the goal is to reason
over explicit knowledge in the form of natural language facts and rules (Clark
et al., 2020). A recent work, named PRover (Saha et al., 2020), performs such
reasoning by answering a question and also generating a proof graph that
explains the answer. However, compositional reasoning is not always unique and
there may be multiple ways of reaching the correct answer. Thus, in our work,
we address a new and challenging problem of generating multiple proof graphs
for reasoning over natural language rule-bases. Each proof provides a different
rationale for the answer, thereby improving the interpretability of such
reasoning systems. In order to jointly learn from all proof graphs and exploit
the correlations between multiple proofs for a question, we pose this task as a
set generation problem over structured output spaces where each proof is
represented as a directed graph. We propose two variants of a proof-set
generation model, multiPRover. Our first model, Multilabel-multiPRover,
generates a set of proofs via multi-label classification and implicit
conditioning between the proofs; while the second model, Iterative-multiPRover,
generates proofs iteratively by explicitly conditioning on the previously
generated proofs. Experiments on multiple synthetic, zero-shot, and
human-paraphrased datasets reveal that both multiPRover models significantly
outperform PRover on datasets containing multiple gold proofs.
Iterative-multiPRover obtains state-of-the-art proof F1 in zero-shot scenarios
where all examples have single correct proofs. It also generalizes better to
questions requiring higher depths of reasoning where multiple proofs are more
frequent. Our code and models are publicly available at
https://github.com/swarnaHub/multiPRover
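
To make the two variants concrete, below is a minimal, self-contained Python sketch of the distinction the abstract draws: generating the proof set in one shot with only implicit conditioning between proofs (Multilabel-multiPRover style) versus emitting proofs iteratively while explicitly conditioning on those already produced (Iterative-multiPRover style). This is a toy illustration only, not the authors' implementation; `RuleBase`, `toy_score`, and `candidate_proofs` are hypothetical placeholders, and the actual models predict proof-graph nodes and edges with a transformer encoder rather than enumerating candidate graphs.

```python
from dataclasses import dataclass

# A proof graph is a set of directed edges between fact/rule identifiers,
# e.g. ("F1", "R2") meaning fact F1 feeds into rule R2.
Proof = frozenset


@dataclass
class RuleBase:
    facts: dict       # e.g. {"F1": "Bob is big."}
    rules: dict       # e.g. {"R2": "If someone is big or heavy then they are strong."}
    question: str     # e.g. "Bob is strong?"


def toy_score(rb: RuleBase, proof: Proof, previous: tuple) -> float:
    """Stand-in for a learned scorer; it simply down-weights proofs
    that have already been generated."""
    return 0.0 if proof in previous else 0.9


def candidate_proofs(rb: RuleBase) -> list:
    """Placeholder candidate pool; the real models score nodes and edges
    instead of enumerating whole graphs."""
    return [
        Proof({("F1", "R2"), ("R2", "Q")}),   # one rationale for the answer
        Proof({("F3", "R2"), ("R2", "Q")}),   # an alternative rationale
    ]


def multilabel_generate(rb: RuleBase, threshold: float = 0.5) -> set:
    """Multilabel style: score every candidate once and keep everything above
    the threshold; conditioning between proofs is only implicit."""
    return {p for p in candidate_proofs(rb)
            if toy_score(rb, p, previous=()) > threshold}


def iterative_generate(rb: RuleBase, max_proofs: int = 3,
                       threshold: float = 0.5) -> list:
    """Iterative style: emit proofs one at a time, explicitly conditioning the
    scorer on the proofs generated so far."""
    emitted = []
    for _ in range(max_proofs):
        remaining = [p for p in candidate_proofs(rb) if p not in emitted]
        if not remaining:
            break
        best = max(remaining,
                   key=lambda p: toy_score(rb, p, previous=tuple(emitted)))
        if toy_score(rb, best, previous=tuple(emitted)) <= threshold:
            break
        emitted.append(best)
    return emitted


rb = RuleBase(facts={"F1": "Bob is big.", "F3": "Bob is heavy."},
              rules={"R2": "If someone is big or heavy then they are strong."},
              question="Bob is strong?")
print(multilabel_generate(rb))   # proof set generated in one shot
print(iterative_generate(rb))    # proofs generated one after another
```

The key difference is the `previous` argument: in the iterative variant each new proof is scored with the previously emitted proofs in context, which is what allows explicit conditioning across the proof set rather than relying on a single joint prediction.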
Related papers
- MUSTARD: Mastering Uniform Synthesis of Theorem and Proof Data [85.50740598523818]
MUSTARD is a framework that masters uniform synthesis of theorem and proof data of high quality and diversity.
We present a theorem-and-proof benchmark MUSTARDSAUCE with 5,866 valid data points.
We perform extensive analysis and demonstrate that MUSTARD generates validated high-quality step-by-step data.
arXiv Detail & Related papers (2024-02-14T05:57:58Z)
- Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples [36.63316546586304]
Large language models (LLMs) possess some abstract deductive reasoning ability given chain-of-thought prompts.
We test on a broad set of deduction rules and measure their ability to generalize to more complex proofs from simpler demonstrations.
Experiments on four LLMs of various sizes and training objectives show that they are able to generalize to compositional proofs.
arXiv Detail & Related papers (2023-05-24T15:55:51Z)
- STREET: A Multi-Task Structured Reasoning and Explanation Benchmark [56.555662318619135]
We introduce a unified multi-task and multi-domain natural language reasoning and explanation benchmark.
We expect models to not only answer questions, but also produce step-by-step structured explanations describing how premises in the question are used to produce intermediate conclusions that can prove the correctness of a certain answer.
arXiv Detail & Related papers (2023-02-13T22:34:02Z)
- NaturalProver: Grounded Mathematical Proof Generation with Language Models [84.2064569475095]
Theorem proving in natural mathematical language plays a central role in mathematical advances and education.
We develop NaturalProver, a language model that generates proofs by conditioning on background references.
NaturalProver is capable of proving some theorems that require short (2-6 step) proofs, and providing next-step suggestions that are rated as correct and useful over 40% of the time.
arXiv Detail & Related papers (2022-05-25T17:01:18Z)
- Generating Natural Language Proofs with Verifier-Guided Search [74.9614610172561]
We present a novel stepwise method, NLProofS (Natural Language Proof Search).
NLProofS learns to generate relevant proof steps conditioned on the hypothesis.
It achieves state-of-the-art performance on EntailmentBank and RuleTaker.
arXiv Detail & Related papers (2022-05-25T02:22:30Z)
- Graph-based Approximate Message Passing Iterations [0.0]
Approximate message passing (AMP) algorithms have become an important element of high-dimensional statistical inference.
We show that AMP instances can be generically indexed by an oriented graph.
This enables a unified interpretation of these iterations, independent of the problem they solve, and a way of composing them arbitrarily.
arXiv Detail & Related papers (2021-09-24T11:56:59Z)
- ProoFVer: Natural Logic Theorem Proving for Fact Verification [24.61301908217728]
We propose ProoFVer, a proof system for fact verification using natural logic.
The generation of proofs makes ProoFVer an explainable system.
We find that humans correctly simulate ProoFVer's decisions more often when given its proofs.
arXiv Detail & Related papers (2021-08-25T17:23:04Z)
- ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language [19.917022148887273]
Transformers have been shown to emulate logical deduction over natural language theories.
We show that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proof(s) that support them.
arXiv Detail & Related papers (2020-12-24T00:55:46Z)
- PRover: Proof Generation for Interpretable Reasoning over Rules [81.40404921232192]
We propose a transformer-based model that answers binary questions over rule-bases and generates the corresponding proofs.
Our model learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
We conduct experiments on synthetic, hand-authored, and human-paraphrased rule-bases to show promising results for QA and proof generation.
arXiv Detail & Related papers (2020-10-06T15:47:53Z)
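
Since multiPRover builds directly on PRover's node-and-edge formulation of proof graphs, a small sketch of the structural constraints such a formulation implies may help: edges should only connect predicted nodes, and the resulting graph should be connected. These are generic sanity checks written for illustration only; they are not PRover's actual constrained training or inference procedure, which the paper formulates differently.

```python
from collections import deque

def is_structurally_valid(nodes: set, edges: set) -> bool:
    """Generic sanity checks on a predicted proof graph: every edge connects
    two predicted nodes, and the undirected graph over those nodes is connected.
    Illustrative only; not PRover's actual constrained inference."""
    if not nodes:
        return False
    # Edges may only connect nodes that were predicted as part of the proof.
    if any(u not in nodes or v not in nodes for u, v in edges):
        return False
    # Breadth-first search to verify connectivity, ignoring edge direction.
    adjacency = {n: set() for n in nodes}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in adjacency[queue.popleft()] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return seen == nodes

# A fact feeding a rule that proves the question forms a connected proof graph.
print(is_structurally_valid({"F1", "R2", "Q"}, {("F1", "R2"), ("R2", "Q")}))  # True
print(is_structurally_valid({"F1", "R2"}, {("F1", "Q")}))  # False: edge uses an unpredicted node
```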