Explaining Answers with Entailment Trees
- URL: http://arxiv.org/abs/2104.08661v1
- Date: Sat, 17 Apr 2021 23:13:56 GMT
- Title: Explaining Answers with Entailment Trees
- Authors: Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan Xie, Hannah
Smith, Leighanna Pipatanangkura, Peter Clark
- Abstract summary: We aim to explain answers by showing how evidence leads to the answer in a systematic way.
Our approach is to generate explanations in the form of entailment trees, namely a tree of entailment steps from facts that are known, through intermediate conclusions, to the final answer.
To train a model with this skill, we created ENTAILMENTBANK, the first dataset to contain multistep entailment trees.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our goal, in the context of open-domain textual question-answering (QA), is
to explain answers by not just listing supporting textual evidence
("rationales"), but also showing how such evidence leads to the answer in a
systematic way. If this could be done, new opportunities for understanding and
debugging the system's reasoning would become possible. Our approach is to
generate explanations in the form of entailment trees, namely a tree of
entailment steps from facts that are known, through intermediate conclusions,
to the final answer. To train a model with this skill, we created
ENTAILMENTBANK, the first dataset to contain multistep entailment trees. At
each node in the tree (typically) two or more facts compose together to produce
a new conclusion. Given a hypothesis (question + answer), we define three
increasingly difficult explanation tasks: generate a valid entailment tree
given (a) all relevant sentences (the leaves of the gold entailment tree), (b)
all relevant and some irrelevant sentences, or (c) a corpus. We show that a
strong language model only partially solves these tasks, and identify several
new directions to improve performance. This work is significant as it provides
a new type of dataset (multistep entailments) and baselines, offering a new
avenue for the community to generate richer, more systematic explanations.
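The abstract describes an entailment tree as a tree whose leaves are known facts, whose internal nodes are intermediate conclusions composed from two or more premises, and whose root is the hypothesis (question + answer). A minimal sketch of how such a tree could be represented is below; this is an illustrative data structure, not the ENTAILMENTBANK dataset's actual format, and all sentences and names are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node in an entailment tree: a sentence plus the premises it was derived from.
    A leaf (no premises) is a known fact; an internal node is a conclusion
    entailed by composing its premises; the root is the hypothesis."""
    sentence: str
    premises: List["Node"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        return not self.premises

    def leaves(self) -> List["Node"]:
        """Collect the leaf facts (the known evidence) under this node."""
        if self.is_leaf():
            return [self]
        out: List["Node"] = []
        for p in self.premises:
            out.extend(p.leaves())
        return out

# Two known facts compose into an intermediate conclusion, which then
# composes with a third fact to entail the hypothesis (illustrative sentences).
f1 = Node("an eclipse occurs when one body blocks light from another")
f2 = Node("the moon can block sunlight from reaching the earth")
i1 = Node("a solar eclipse occurs when the moon blocks sunlight",
          premises=[f1, f2])
f3 = Node("blocking sunlight makes the sky darker")
hypothesis = Node("the sky darkens during a solar eclipse",
                  premises=[i1, f3])

# The leaves of the gold tree are exactly the relevant sentences of task (a).
assert len(hypothesis.leaves()) == 3
```

Under this representation, the paper's three tasks differ only in what the model is given as candidate leaves: exactly `hypothesis.leaves()` in task (a), those leaves plus distractors in task (b), or a full corpus in task (c).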
Related papers
- Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation [7.5496857647335585]
We propose an architecture that integrates the hierarchical semantics of sentences under the Controller-Generator framework (HiSCG) to explain answers.
The proposed method achieves comparable performance on all three settings of the EntailmentBank dataset.
arXiv Detail & Related papers (2024-09-26T11:46:58Z)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of the RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semi-dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner [56.08919422452905]
We propose an architecture called the Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
arXiv Detail & Related papers (2022-05-18T21:52:11Z)
- METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation [59.33241627273023]
We propose METGEN, a Module-based Entailment Tree GENeration framework with multiple modules and a reasoning controller.
Given a question, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller.
Experiment results show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
arXiv Detail & Related papers (2022-05-05T12:06:02Z)
- Fact-Tree Reasoning for N-ary Question Answering over Knowledge Graphs [21.87251293779023]
We propose a novel fact-tree reasoning framework that transforms the question into a fact tree and performs iterative fact reasoning on it to predict the correct answer.
We demonstrate that the proposed fact-tree reasoning framework has the desired advantage of high answer prediction accuracy.
arXiv Detail & Related papers (2021-08-17T13:27:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.