Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
- URL: http://arxiv.org/abs/2205.09224v1
- Date: Wed, 18 May 2022 21:52:11 GMT
- Title: Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
- Authors: Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry
Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
- Abstract summary: We propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR)
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
Our model outperforms existing baselines on premise retrieval and entailment tree generation, with around a 300% gain in overall correctness.
- Score: 56.08919422452905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have achieved high performance on various question
answering (QA) benchmarks, but the explainability of their output remains
elusive. Structured explanations, called entailment trees, were recently
suggested as a way to explain and inspect a QA system's answer. In order to
better generate such entailment trees, we propose an architecture called
Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a
given hypothesis by systematically generating a step-by-step explanation from
textual premises. The IRGR model iteratively searches for suitable premises,
constructing a single entailment step at a time. In contrast to previous
approaches, our method interleaves generation steps with retrieval of premises,
allowing the model to leverage intermediate conclusions and mitigating the
input-size limit of baseline encoder-decoder models. We conduct experiments
on the EntailmentBank dataset, where we outperform existing baselines on
premise retrieval and entailment tree generation, with around a 300% gain in
overall correctness.
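To make the iterative loop concrete, here is a minimal Python sketch of an IRGR-style procedure. The helpers `retrieve_premises` and `generate_step` are hypothetical stand-ins for the paper's retriever and encoder-decoder generator, and the stopping condition is simplified; this is not the authors' implementation.

```python
# Minimal sketch of an IRGR-style loop (hypothetical helpers, not the
# authors' code). Each iteration retrieves premises conditioned on the
# hypothesis and the partial tree, then generates one entailment step.

def build_entailment_tree(hypothesis, corpus, retrieve_premises, generate_step,
                          max_steps=10):
    steps = []          # list of (premises, intermediate_conclusion) pairs
    intermediates = []  # earlier conclusions, reusable as premises later
    for _ in range(max_steps):
        # Retrieval sees the intermediate conclusions, so each query
        # reflects the current state of the partially built tree.
        premises = retrieve_premises(hypothesis, intermediates, corpus)
        conclusion = generate_step(hypothesis, premises, intermediates)
        steps.append((premises, conclusion))
        intermediates.append(conclusion)
        if conclusion == hypothesis:  # root derived: the tree is complete
            break
    return steps
```

Because only the retrieved premises and accumulated intermediate conclusions enter each generation call, the input stays small regardless of corpus size, which is the motivation for interleaving retrieval with generation.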
Related papers
- Chain-of-Retrieval Augmented Generation [72.06205327186069]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer.
Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
arXiv Detail & Related papers (2025-01-24T09:12:52Z)
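As a rough illustration of the chain-of-retrieval loop summarized above (inference only; `search`, `reformulate`, and `answer` are hypothetical helpers, and the paper's training procedure is not reproduced here):

```python
# Sketch of a chain-of-retrieval loop: the query is rewritten after each hop
# based on what has been retrieved so far, and the final answer is generated
# from the whole retrieval chain. Helper names are assumptions.

def chain_of_retrieval(question, search, reformulate, answer, max_hops=4):
    chain = []        # (sub_query, retrieved_docs) per hop
    query = question
    for _ in range(max_hops):
        docs = search(query)
        chain.append((query, docs))
        query = reformulate(question, chain)  # query evolves with the state
        if query is None:                     # model signals evidence suffices
            break
    return answer(question, chain)
```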
- Think&Cite: Improving Attributed Text Generation with Self-Guided Tree Search and Progress Reward Modeling [63.98194996746229]
Large language models (LLMs) are prone to hallucinating and producing factually incorrect information.
We propose a novel framework, called Think&Cite, and formulate attributed text generation as a multi-step reasoning problem integrated with search.
arXiv Detail & Related papers (2024-12-19T13:55:48Z)
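The summary above frames attributed generation as search guided by a progress reward. A toy best-first-search sketch, with hypothetical `expand`, `progress_reward`, and `is_complete` helpers standing in for the paper's learned components:

```python
# Toy best-first search over partial generations with a progress reward
# (not the Think&Cite implementation). Each state is a partial answer with
# citations; the progress reward ranks partial states, not only finished ones.

import heapq

def tree_search_generate(question, expand, progress_reward, is_complete,
                         beam_size=4, max_expansions=50):
    frontier = [(0.0, 0, "")]   # (negated score, tiebreak counter, state)
    counter = 1
    for _ in range(max_expansions):
        if not frontier:
            break
        _, _, state = heapq.heappop(frontier)
        if is_complete(state):
            return state
        for nxt in expand(question, state)[:beam_size]:
            heapq.heappush(frontier,
                           (-progress_reward(question, nxt), counter, nxt))
            counter += 1
    return None
```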
- Integrating Hierarchical Semantic into Iterative Generation Model for Entailment Tree Explanation [7.5496857647335585]
We propose an architecture that integrates the Hierarchical Semantics of sentences under the Controller-Generator framework (HiSCG) to explain answers.
The proposed method achieves comparable performance on all three settings of the EntailmentBank dataset.
arXiv Detail & Related papers (2024-09-26T11:46:58Z)
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of the RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
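A hedged sketch of the single-step loop described above, with hypothetical `select_pair` (the RL policy), `deduce` (the deduction generation module), and `reward_fn`; the actual RLET training details are not reproduced here:

```python
# Sketch of an RLET-style episode: a policy repeatedly selects a pair of
# sentences, a deduction module generates their conclusion, and a sparse
# episode-level reward on the final tree would drive policy-gradient updates.

def rollout_episode(hypothesis, sentences, select_pair, deduce, reward_fn,
                    max_steps=8):
    pool = list(sentences)  # candidate premises; grows with new deductions
    trajectory = []         # (chosen_pair, conclusion) per step
    for _ in range(max_steps):
        pair = select_pair(hypothesis, pool)  # policy action
        conclusion = deduce(pair)             # deduction generation module
        trajectory.append((pair, conclusion))
        pool = [s for s in pool if s not in pair] + [conclusion]
        if conclusion == hypothesis:
            break
    return trajectory, reward_fn(hypothesis, trajectory)
```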
- METGEN: A Module-Based Entailment Tree Generation Framework for Answer Explanation [59.33241627273023]
We propose METGEN, a Module-based Entailment Tree GENeration framework with multiple modules and a reasoning controller.
Given a question, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller.
Experiment results show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
arXiv Detail & Related papers (2022-05-05T12:06:02Z)
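The controller-plus-modules design described above can be pictured roughly as follows; the module names and helper signatures are assumptions for illustration, not METGEN's actual interface:

```python
# Sketch of a METGEN-style loop: a controller picks which single-step
# entailment module to apply and to which premises; each module's output is
# added back to the premise pool.

def metgen_loop(hypothesis, premises, controller, modules, max_steps=8):
    """modules: dict mapping names (e.g. 'conjunction', 'substitution')
    to single-step entailment functions; controller returns a (module
    name, selected premises) pair for the current state."""
    pool = list(premises)
    tree_steps = []
    for _ in range(max_steps):
        name, chosen = controller(hypothesis, pool)
        conclusion = modules[name](chosen)  # one entailment step
        tree_steps.append((name, chosen, conclusion))
        pool.append(conclusion)
        if conclusion == hypothesis:
            break
    return tree_steps
```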
- Autoregressive Search Engines: Generating Substrings as Document Identifiers [53.0729058170278]
Autoregressive language models are emerging as the de facto standard for generating answers.
Previous work has explored ways to partition the search space into hierarchical structures.
In this work we propose an alternative that does not force any structure on the search space: using all n-grams in a passage as its possible identifiers.
arXiv Detail & Related papers (2022-04-22T10:45:01Z)
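To make the "any n-gram is an identifier" idea concrete, here is a toy matcher. The paper itself constrains decoding with an FM-index rather than scanning passages, so this is only a conceptual stand-in:

```python
# Toy illustration of n-gram-based passage identification (not the paper's
# FM-index implementation): generated n-grams are matched against all
# passages, which are ranked by how many generated n-grams they contain.

def ngrams(text, n=3):
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def retrieve_by_ngrams(generated_ngrams, passages, n=3):
    """passages: dict mapping passage id -> passage text."""
    scored = []
    for pid, passage in passages.items():
        overlap = len(set(generated_ngrams) & ngrams(passage, n))
        if overlap:
            scored.append((overlap, pid))
    return [pid for _, pid in sorted(scored, reverse=True)]
```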
- Optimal Counterfactual Explanations in Tree Ensembles [3.8073142980733]
We advocate for a model-based search aiming at "optimal" explanations and propose efficient mixed-integer programming approaches.
We show that isolation forests can be modeled within our framework to focus the search on plausible explanations with a low outlier score.
arXiv Detail & Related papers (2021-06-11T22:44:27Z)
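The optimization this summary refers to finds the closest input that changes the ensemble's prediction. The brute-force sketch below only illustrates that objective over a discretized search space; the paper solves it exactly with mixed-integer programming, which this code does not implement:

```python
# Conceptual stand-in for the counterfactual objective: the nearest point to
# x0 (here in L1 distance) whose predicted class differs from x0's. The
# enumerated grid stands in for the MIP's feasible region.

import itertools

def nearest_counterfactual(x0, predict, candidate_values):
    """candidate_values: per-feature lists of allowed values."""
    base = predict(x0)
    best, best_dist = None, float("inf")
    for x in itertools.product(*candidate_values):
        if predict(list(x)) != base:  # prediction flipped
            dist = sum(abs(a - b) for a, b in zip(x, x0))
            if dist < best_dist:
                best, best_dist = list(x), dist
    return best, best_dist
```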