Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
- URL: http://arxiv.org/abs/2205.09224v1
- Date: Wed, 18 May 2022 21:52:11 GMT
- Title: Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
- Authors: Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry
Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
- Abstract summary: We propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing baselines on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
- Score: 56.08919422452905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have achieved high performance on various question
answering (QA) benchmarks, but the explainability of their output remains
elusive. Structured explanations, called entailment trees, were recently
suggested as a way to explain and inspect a QA system's answer. In order to
better generate such entailment trees, we propose an architecture called
Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a
given hypothesis by systematically generating a step-by-step explanation from
textual premises. The IRGR model iteratively searches for suitable premises,
constructing a single entailment step at a time. Contrary to previous
approaches, our method combines generation steps and retrieval of premises,
allowing the model to leverage intermediate conclusions, and mitigating the
input size limit of baseline encoder-decoder models. We conduct experiments
using the EntailmentBank dataset, where we outperform existing benchmarks on
premise retrieval and entailment tree generation, with around 300% gain in
overall correctness.
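To make the iterative procedure concrete, the following is a minimal sketch of a retrieve-then-generate loop in the spirit of the abstract; the function names, stopping rule, and data flow are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of an iterative retrieval-generation loop in the spirit of IRGR.
# `retrieve_premises` and `generate_step` are hypothetical callables (e.g., a dense
# retriever and a seq2seq model); the stopping rule is an illustrative assumption.

def build_entailment_tree(hypothesis, corpus, retrieve_premises, generate_step, max_steps=10):
    """Grow an entailment tree one step at a time, re-retrieving after each step."""
    available = []   # retrieved premises plus intermediate conclusions generated so far
    steps = []       # each step: (premises_used, conclusion)
    for _ in range(max_steps):
        # Retrieval is conditioned on the hypothesis and on intermediate conclusions,
        # so later searches can exploit what has already been derived.
        available.extend(retrieve_premises(hypothesis, available, corpus))
        premises_used, conclusion = generate_step(hypothesis, available)
        steps.append((premises_used, conclusion))
        if conclusion == hypothesis:      # the hypothesis itself has been entailed
            break
        available.append(conclusion)      # intermediate conclusion feeds the next step
    return steps
```

Because only the retrieved premises and intermediate conclusions are encoded at each step, the full premise corpus never has to fit into the encoder input, which is how the approach mitigates the input-size limit mentioned in the abstract.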
Related papers
- Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation [7.5496857647335585]
We propose an architecture that integrates the hierarchical semantics of sentences under the Controller-Generator framework (HiSCG) to explain answers.
The proposed method achieves comparable performance on all three settings of the EntailmentBank dataset.
arXiv Detail & Related papers (2024-09-26T11:46:58Z)
- A Logical Pattern Memory Pre-trained Model for Entailment Tree Generation [23.375260036179252]
Generating coherent and credible explanations remains a significant challenge in the field of AI.
We propose the logical pattern memory pre-trained model (LMPM).
Our model produces more coherent and reasonable conclusions that closely align with the underlying premises.
arXiv Detail & Related papers (2024-03-11T03:45:09Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
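As a rough illustration of the end-to-end setup compared in the QAG paper above, a single seq2seq LM can be prompted with a context and asked to emit question-answer pairs in one pass. The checkpoint name, prompt prefix, and output format below are placeholder assumptions, not artifacts released with the paper.

```python
# Sketch of end-to-end question-answer generation (QAG) with a seq2seq LM.
# The checkpoint, prompt prefix, and expected output format are illustrative
# assumptions; in practice a model fine-tuned for QAG would be used.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = "The Nile is generally regarded as the longest river in Africa."
inputs = tokenizer("generate question and answer: " + context, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```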
- Hierarchical clustering with dot products recovers hidden tree structure [53.68551192799585]
In this paper we offer a new perspective on the well-established agglomerative clustering algorithm, focusing on recovery of hierarchical structure.
We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance.
We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model.
arXiv Detail & Related papers (2023-05-24T11:05:12Z)
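A compact sketch of the merge rule recommended in the clustering paper above, where at each step the pair of clusters with the largest average pairwise dot product is merged, might look like this (an illustrative re-implementation, not the authors' code):

```python
# Sketch of agglomerative clustering that merges the pair of clusters with the
# largest average pairwise dot product (instead of smallest distance).
import numpy as np

def agglomerate_by_dot_product(X):
    """X: (n, d) array. Returns the merge history as pairs of index lists."""
    clusters = [[i] for i in range(len(X))]
    history = []
    while len(clusters) > 1:
        best, best_score = None, -np.inf
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average dot product over all cross-cluster pairs of points
                score = np.mean(X[clusters[a]] @ X[clusters[b]].T)
                if score > best_score:
                    best, best_score = (a, b), score
        a, b = best
        history.append((clusters[a][:], clusters[b][:]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return history
```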
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of using RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
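The RLET entry above frames each reasoning step as an action; a minimal sketch of one such episode, with the sentence-selection and deduction modules as hypothetical callables and a simple terminal reward, is shown below (the reward design is a simplification, not the paper's exact formulation).

```python
# Toy sketch of an RL-style episode for entailment tree generation, loosely in the
# spirit of RLET. `select_sentences` and `generate_deduction` are hypothetical
# callables, and the binary terminal reward is an illustrative simplification.

def run_episode(hypothesis, sentences, select_sentences, generate_deduction, max_steps=8):
    """Roll out one episode of step-wise reasoning; return its trajectory and reward."""
    pool = list(sentences)
    trajectory = []
    for _ in range(max_steps):
        chosen = select_sentences(hypothesis, pool)      # action part 1: pick premises
        conclusion = generate_deduction(chosen)          # action part 2: deduce a new fact
        trajectory.append((chosen, conclusion))
        pool = [s for s in pool if s not in chosen] + [conclusion]
        if conclusion == hypothesis:
            break
    reward = 1.0 if trajectory and trajectory[-1][1] == hypothesis else 0.0
    return trajectory, reward   # the trajectory-level reward would drive a policy update
```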
- METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation [59.33241627273023]
We propose METGEN, a Module-based Entailment Tree GENeration framework that has multiple modules and a reasoning controller.
Given a question, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller.
Experiment results show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
arXiv Detail & Related papers (2022-05-05T12:06:02Z)
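The controller-plus-modules design in the METGEN entry above can be caricatured as routing each step to one of several small entailment modules; the module names and scoring function below are hypothetical, not METGEN's actual components. Here `modules` might map names like "substitution" or "conjunction" to small models, each handling one entailment pattern.

```python
# Toy sketch of a controller choosing among specialised entailment modules,
# loosely in the spirit of METGEN. Module names and the controller's scoring
# function are illustrative assumptions.
from itertools import combinations

def single_entailment_step(premises, modules, controller_score):
    """Let the controller pick a (module, premise pair) and apply it once."""
    candidates = [
        (name, pair)
        for name in modules
        for pair in combinations(premises, 2)
    ]
    name, pair = max(candidates, key=lambda cand: controller_score(*cand))
    return modules[name](*pair)   # the chosen module emits an intermediate conclusion
```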
- Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
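A minimal sketch of the identifier idea from the autoregressive-retrieval entry above: treat every ngram of a passage as a valid identifier and match a generated string against those sets. Real systems constrain decoding itself (e.g., with an FM-index) rather than post-filtering like this; the functions below are illustrative only.

```python
# Sketch of using all ngrams of a passage as its retrieval identifiers.
# Production systems constrain the decoder to emit only corpus ngrams; here we
# simply check a generated identifier against each passage's ngram set.

def passage_ngrams(passage, max_n=3):
    """All word ngrams of length 1..max_n occurring in the passage."""
    tokens = passage.lower().split()
    return {
        " ".join(tokens[i:i + n])
        for n in range(1, max_n + 1)
        for i in range(len(tokens) - n + 1)
    }

def retrieve(generated_identifier, passages, max_n=3):
    """Return the passages whose ngram set contains the generated identifier."""
    ident = generated_identifier.lower().strip()
    return [p for p in passages if ident in passage_ngrams(p, max_n)]
```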
- Optimal Counterfactual Explanations in Tree Ensembles [3.8073142980733]
We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches.
We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score.
arXiv Detail & Related papers (2021-06-11T22:44:27Z)
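As a much-simplified illustration of the optimization view in the last entry: for a single decision tree one can enumerate the leaves of the desired class and solve a small linear program per leaf, keeping the closest feasible point. The paper itself formulates one mixed-integer program over the whole ensemble (with isolation-forest plausibility terms); the toy model below, with a hypothetical two-feature tree, only shows the flavour of encoding split thresholds as constraints.

```python
# Toy counterfactual search for ONE decision tree: enumerate leaves of the target
# class and solve a tiny LP (L1 distance to the original point) per leaf.
# The real approach is a single mixed-integer program over a whole tree ensemble;
# this is only an illustrative simplification. Requires the `pulp` package.
import pulp

# Each leaf of the target class is a list of (feature, sense, threshold) conditions,
# where sense is "<=" or ">". Hypothetical toy tree over two features in [0, 1].
TARGET_LEAVES = [
    [("income", ">", 0.6), ("debt", "<=", 0.3)],
    [("income", ">", 0.8)],
]
EPS = 1e-4  # small margin to emulate strict inequalities

def counterfactual(x0, leaves=TARGET_LEAVES):
    best, best_cost = None, float("inf")
    for conditions in leaves:
        prob = pulp.LpProblem("cf", pulp.LpMinimize)
        x = {f: pulp.LpVariable(f, lowBound=0.0, upBound=1.0) for f in x0}
        d = {f: pulp.LpVariable("d_" + f, lowBound=0.0) for f in x0}
        for f in x0:                      # d[f] >= |x[f] - x0[f]|
            prob += d[f] >= x[f] - x0[f]
            prob += d[f] >= x0[f] - x[f]
        for f, sense, thr in conditions:  # route the point into this leaf
            prob += (x[f] <= thr) if sense == "<=" else (x[f] >= thr + EPS)
        prob += pulp.lpSum(d.values())    # objective: L1 distance to x0
        prob.solve(pulp.PULP_CBC_CMD(msg=0))
        cost = pulp.value(prob.objective)
        if prob.status == pulp.LpStatusOptimal and cost < best_cost:
            best_cost, best = cost, {f: x[f].value() for f in x0}
    return best, best_cost

print(counterfactual({"income": 0.4, "debt": 0.5}))
```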