METGEN: A Module-Based Entailment Tree Generation Framework for Answer
Explanation
- URL: http://arxiv.org/abs/2205.02593v1
- Date: Thu, 5 May 2022 12:06:02 GMT
- Authors: Ruixin Hong, Hongming Zhang, Xintong Yu, Changshui Zhang
- Abstract summary: We propose METGEN, a Module-based Entailment Tree GENeration framework that has multiple modules and a reasoning controller.
Given a question, METGEN can iteratively generate the entailment tree by conducting single-step entailment with separate modules and selecting the reasoning flow with the controller.
Experiment results show that METGEN can outperform previous state-of-the-art models with only 9% of the parameters.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowing the reasoning chains from knowledge to the predicted answers can help
construct an explainable question answering (QA) system. Recent advances in QA
explanation propose to explain the answers with entailment trees composed of
multiple entailment steps. While current work proposes to generate entailment
trees with end-to-end generative models, the steps in the generated trees are
not constrained and could be unreliable. In this paper, we propose METGEN, a
Module-based Entailment Tree GENeration framework that has multiple modules and
a reasoning controller. Given a question and several pieces of supporting knowledge,
METGEN can iteratively generate the entailment tree by conducting single-step
entailment with separate modules and selecting the reasoning flow with the
controller. As each module is guided to perform a specific type of entailment
reasoning, the steps generated by METGEN are more reliable and valid.
Experiment results on the standard benchmark show that METGEN can outperform
previous state-of-the-art models with only 9% of the parameters.
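The abstract's iterative loop (single-step entailment modules plus a controller that selects the reasoning flow) can be sketched as follows. This is a minimal illustration only: the module set, the controller policy, and the exact-match stopping test are placeholder assumptions, not METGEN's actual trained components.

```python
# Hedged sketch of a module-based entailment-tree loop in the spirit of METGEN.
# `modules` and `controller` are caller-supplied stand-ins for the paper's models.

def build_tree(hypothesis, facts, modules, controller, max_steps=10):
    tree = []
    pool = list(facts)
    for _ in range(max_steps):
        # Enumerate candidate single steps: a module applied to a premise pair.
        candidates = [(m, (a, b))
                      for m in modules
                      for i, a in enumerate(pool)
                      for b in pool[i + 1:]]
        if not candidates:
            break
        # The controller selects the most promising reasoning flow.
        module, premises = controller(hypothesis, candidates)
        conclusion = module(premises)          # single-step entailment
        tree.append((premises, conclusion))
        # Replace the used premises with the new intermediate conclusion.
        pool = [s for s in pool if s not in premises] + [conclusion]
        if conclusion == hypothesis:           # a real system would score entailment
            break
    return tree
```

With a toy conjunction module and a controller that greedily takes the first candidate, `build_tree("x and y", ["x", "y"], ...)` produces a one-step tree combining the two facts.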
Related papers
- An efficient solution to Hidden Markov Models on trees with coupled branches
We extend the framework of Hidden Markov Models (HMMs) on trees to address scenarios where the tree-like structure of the data includes coupled branches.
We develop a dynamic programming algorithm that efficiently solves the likelihood, decoding, and parameter learning problems for tree-based HMMs with coupled branches.
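For context, the standard (uncoupled) version of the likelihood problem this paper generalizes is solved by an upward dynamic program over the tree. The sketch below shows that classic recursion under simplifying assumptions (discrete states, shared transition matrix); it is illustrative, not the paper's coupled-branch algorithm.

```python
# Hedged sketch: upward dynamic program for the likelihood of an HMM on a
# tree with independent (uncoupled) branches.
#   children[v]: list of child node ids of v
#   obs[v]:      observation emitted at node v
#   emit[s][o]:  P(observation o | state s)
#   trans[s][t]: P(child state t | parent state s)
#   prior[s]:    P(root state s)

def tree_likelihood(children, obs, emit, trans, prior, root=0):
    n_states = len(prior)

    def up(v):
        # beta[s] = P(observations in subtree of v | state of v is s)
        beta = [emit[s][obs[v]] for s in range(n_states)]
        for c in children[v]:
            child_beta = up(c)
            for s in range(n_states):
                # Marginalize over the child's state for each parent state.
                beta[s] *= sum(trans[s][t] * child_beta[t]
                               for t in range(n_states))
        return beta

    root_beta = up(root)
    return sum(prior[s] * root_beta[s] for s in range(n_states))
```

The coupled-branch setting changes the inner marginalization (child states are no longer conditionally independent given the parent), which is what makes an efficient solution nontrivial.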
arXiv Detail & Related papers (2024-06-03T18:00:00Z)
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Faithful Question Answering with Monte-Carlo Planning
We propose FAME (FAithful question answering with MontE-carlo planning) to answer questions based on faithful reasoning steps.
We formulate the task as a discrete decision-making problem and solve it through the interaction of a reasoning environment and a controller.
FAME achieves state-of-the-art performance on the standard benchmark.
arXiv Detail & Related papers (2023-05-04T05:21:36Z)
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of the RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
- Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner
We propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
arXiv Detail & Related papers (2022-05-18T21:52:11Z)
- Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models
We propose a framework for building interpretable systems that learn to solve complex tasks by decomposing them into simpler ones solvable by existing models.
We use this framework to build ModularQA, a system that can answer multi-hop reasoning questions by decomposing them into sub-questions answerable by a neural factoid single-span QA model and a symbolic calculator.
arXiv Detail & Related papers (2020-09-01T23:45:42Z)
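The decomposition idea behind ModularQA (route sub-questions to existing solvers such as a factoid QA model and a symbolic calculator) can be sketched as follows. The plan format, the `#1`/`#2` placeholder convention, and the toy solvers are illustrative assumptions, not the paper's learned decomposer.

```python
# Hedged sketch of multi-hop decomposition with existing sub-models.
# plan: list of (solver_name, template); "#1", "#2", ... in a template
# refer to the answers of earlier steps and are substituted before solving.

def answer(question, plan, solvers):
    answers = []
    for name, template in plan:
        sub_q = template
        for i, prev in enumerate(answers, 1):
            sub_q = sub_q.replace(f"#{i}", str(prev))
        answers.append(solvers[name](sub_q))
    return answers[-1]  # the final step answers the original question
```

For example, with a lookup-table "QA model" and an arithmetic "calculator", a question like "How many years after the Titanic sank did Apollo 11 land?" decomposes into two factoid lookups and one subtraction.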
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.