Interpretable Proof Generation via Iterative Backward Reasoning
- URL: http://arxiv.org/abs/2205.10714v2
- Date: Tue, 24 May 2022 08:58:36 GMT
- Title: Interpretable Proof Generation via Iterative Backward Reasoning
- Authors: Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, Ruifeng Xu
- Abstract summary: We present IBR, an Iterative Backward Reasoning model for proof generation tasks on rule-based Question Answering (QA).
We address the limitations of existing works in two ways: 1) enhancing the interpretability of reasoning procedures with detailed tracking, by predicting nodes and edges in the proof path iteratively backward from the question; 2) promoting efficiency and accuracy by reasoning over elaborate representations of nodes and history paths, without any intermediate texts that may introduce external noise during proof generation.
- Score: 37.03964644070573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present IBR, an Iterative Backward Reasoning model to solve the proof
generation tasks on rule-based Question Answering (QA), where models are
required to reason over a series of textual rules and facts to find out the
related proof path and derive the final answer. We address the limitations of
existing works in two ways: 1) enhancing the interpretability of reasoning
procedures with detailed tracking, by predicting nodes and edges in the proof
path iteratively backward from the question; 2) promoting efficiency and
accuracy by reasoning over elaborate representations of nodes and history
paths, without any intermediate texts that may introduce external noise during
proof generation. IBR has three main modules: QA and proof strategy
prediction to obtain the answer and offer guidance for the following procedure;
parent node prediction to determine a node in the existing proof that a new
child node will link to; child node prediction to find out which new node will
be added to the proof. Experiments on both synthetic and paraphrased datasets
demonstrate that IBR has better in-domain performance as well as cross-domain
transferability than several strong baselines. Our code and models are
available at https://github.com/find-knowledge/IBR .
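The three modules above can be illustrated with a minimal backward-chaining sketch. This is not the authors' implementation: IBR uses learned transformer predictors for each decision, while here the "parent node" and "child node" predictions are replaced by a hypothetical rule-matching heuristic over a toy rule-base, just to show the iterative backward growth of a proof graph from the question.

```python
# Toy sketch of iterative backward proof generation (not IBR's actual model).
# Each iteration picks a parent node in the partial proof and attaches the
# child nodes (premises) that support it, starting from the question.
from dataclasses import dataclass


@dataclass
class RuleBase:
    facts: set    # ground statements, e.g. "bob is round"
    rules: dict   # maps a conclusion to the list of premises that entail it


def prove(question: str, kb: RuleBase, max_steps: int = 20):
    """Grow a proof graph backward from the question.

    Returns (answer, edges), where edges are (parent, child) pairs:
    each child node is a premise supporting its parent node.
    """
    edges = []
    frontier = [question]              # nodes not yet grounded in facts
    steps = 0
    while frontier and steps < max_steps:
        steps += 1
        parent = frontier.pop()        # stand-in for parent node prediction
        if parent in kb.facts:
            continue                   # grounded: nothing more to attach
        if parent not in kb.rules:
            return False, []           # no rule concludes this node
        for premise in kb.rules[parent]:   # stand-in for child node prediction
            edges.append((parent, premise))
            frontier.append(premise)
    return not frontier, edges


kb = RuleBase(
    facts={"bob is round", "bob is red"},
    rules={"bob is kind": ["bob is round", "bob is red"]},
)
answer, proof = prove("bob is kind", kb)
```

In IBR, the pop-and-attach choices made heuristically here are instead predicted from learned representations of the nodes and the history path, which is what allows it to skip intermediate text generation.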
Related papers
- Self-Explainable Graph Neural Networks for Link Prediction [30.41648521030615]
Graph Neural Networks (GNNs) have achieved state-of-the-art performance for link prediction.
GNNs suffer from poor interpretability, which limits their adoption in critical scenarios.
We propose a new framework and it can find various $K$ important neighbors of one node to learn pair-specific representations for links from this node to other nodes.
arXiv Detail & Related papers (2023-05-21T21:57:32Z)
- Open-domain Question Answering via Chain of Reasoning over Heterogeneous Knowledge [82.5582220249183]
We propose a novel open-domain question answering (ODQA) framework for answering single/multi-hop questions across heterogeneous knowledge sources.
Unlike previous methods that solely rely on the retriever for gathering all evidence in isolation, our intermediary performs a chain of reasoning over the retrieved set.
Our system achieves competitive performance on two ODQA datasets, OTT-QA and NQ, against tables and passages from Wikipedia.
arXiv Detail & Related papers (2022-10-22T03:21:32Z)
- Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner [56.08919422452905]
We propose an architecture called the Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
arXiv Detail & Related papers (2022-05-18T21:52:11Z)
- multiPRover: Generating Multiple Proofs for Improved Interpretability in Rule Reasoning [73.09791959325204]
We focus on a type of linguistic formal reasoning where the goal is to reason over explicit knowledge in the form of natural language facts and rules.
A recent work, named PRover, performs such reasoning by answering a question and also generating a proof graph that explains the answer.
In our work, we address a new and challenging problem of generating multiple proof graphs for reasoning over natural language rule-bases.
arXiv Detail & Related papers (2021-06-02T17:58:35Z)
- FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation [19.73842483996047]
We develop FiD-Ex, which addresses shortcomings of seq2seq models by introducing sentence markers to eliminate explanation fabrication.
FiD-Ex significantly improves over prior work in terms of explanation metrics and task accuracy, on multiple tasks from the ERASER explainability benchmark.
arXiv Detail & Related papers (2020-12-31T07:22:15Z)
- PRover: Proof Generation for Interpretable Reasoning over Rules [81.40404921232192]
We propose a transformer-based model that answers binary questions over rule-bases and generates the corresponding proofs.
Our model learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.
We conduct experiments on synthetic, hand-authored, and human-paraphrased rule-bases to show promising results for QA and proof generation.
arXiv Detail & Related papers (2020-10-06T15:47:53Z)
- BSN++: Complementary Boundary Regressor with Scale-Balanced Relation Modeling for Temporal Action Proposal Generation [85.13713217986738]
We present BSN++, a new framework which exploits complementary boundary regressor and relation modeling for temporal proposal generation.
Not surprisingly, the proposed BSN++ ranked 1st place in the CVPR19 - ActivityNet challenge leaderboard on temporal action localization task.
arXiv Detail & Related papers (2020-09-15T07:08:59Z)
- Inductive Link Prediction for Nodes Having Only Attribute Information [21.714834749122137]
In attributed graphs, both the structure and attribute information can be utilized for link prediction.
We propose a model called DEAL, which consists of three components: two node embedding encoders and one alignment mechanism.
Our proposed model significantly outperforms existing inductive link prediction methods, and also outperforms the state-of-the-art methods on transductive link prediction.
arXiv Detail & Related papers (2020-07-16T00:51:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.