Reasoning over Hierarchical Question Decomposition Tree for Explainable
Question Answering
- URL: http://arxiv.org/abs/2305.15056v1
- Date: Wed, 24 May 2023 11:45:59 GMT
- Title: Reasoning over Hierarchical Question Decomposition Tree for Explainable
Question Answering
- Authors: Jiajie Zhang, Shulin Cao, Tingjia Zhang, Xin Lv, Jiaxin Shi, Qi Tian,
Juanzi Li, Lei Hou
- Abstract summary: We propose to leverage question decomposition for heterogeneous knowledge integration.
We propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT).
Experiments on complex QA datasets KQA Pro and MuSiQue show that our framework outperforms SOTA methods significantly.
- Score: 83.74210749046551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable question answering (XQA) aims to answer a given question and
provide an explanation why the answer is selected. Existing XQA methods focus
on reasoning on a single knowledge source, e.g., structured knowledge bases,
unstructured corpora, etc. However, integrating information from heterogeneous
knowledge sources is essential to answer complex questions. In this paper, we
propose to leverage question decomposition for heterogeneous knowledge
integration, by breaking down a complex question into simpler ones, and
selecting the appropriate knowledge source for each sub-question. To facilitate
reasoning, we propose a novel two-stage XQA framework, Reasoning over
Hierarchical Question Decomposition Tree (RoHT). First, we build the
Hierarchical Question Decomposition Tree (HQDT) to understand the semantics of
a complex question; then, we conduct probabilistic reasoning over HQDT from
root to leaves recursively, to aggregate heterogeneous knowledge at different
tree levels and search for the best solution considering the decomposition and
answering probabilities. Experiments on the complex QA datasets KQA Pro and
MuSiQue show that our framework outperforms SOTA methods significantly,
demonstrating the effectiveness of leveraging question decomposition for
knowledge integration and of our RoHT framework.
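To make the two-stage idea above more concrete, below is a minimal, hypothetical Python sketch of recursive probabilistic reasoning over a question decomposition tree. The node structure, the probability bookkeeping, and the answer_directly callback are illustrative assumptions, not the authors' implementation: each node is answered either directly from some knowledge source or by composing its children's answers, and candidates are ranked by the product of decomposing and answering probabilities.

```python
# Hypothetical sketch of reasoning over a question decomposition tree (not the
# authors' RoHT code): answers obtained directly from a knowledge source are
# mixed with answers derived through sub-questions, and the highest-probability
# candidates are kept at every node from the leaves up to the root.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Answer = Tuple[str, float]  # (answer text, probability)


@dataclass
class QNode:
    """One node of a question decomposition tree (an assumed structure)."""
    question: str
    children: List["QNode"] = field(default_factory=list)
    decompose_prob: float = 1.0  # confidence that this decomposition is valid


def solve(node: QNode,
          answer_directly: Callable[[str], List[Answer]],
          top_k: int = 3) -> List[Answer]:
    """Return up to top_k (answer, probability) candidates for the node."""
    # Try to answer the question directly (e.g. from a KB or a text corpus).
    candidates: List[Answer] = list(answer_directly(node.question))

    # If the question decomposes, also derive an answer via the sub-questions.
    if node.children:
        child_results = [solve(c, answer_directly, top_k) for c in node.children]
        if all(child_results):
            prob = node.decompose_prob
            for answers in child_results:
                prob *= answers[0][1]  # best answer of each child
            # Simplification: treat the last sub-question's best answer as the
            # composed answer for the parent question.
            candidates.append((child_results[-1][0][0], prob))

    return sorted(candidates, key=lambda a: a[1], reverse=True)[:top_k]
```

A toy answer_directly that queries a structured KB for some sub-questions and a text retriever for others would then play the role of the heterogeneous-source selection the abstract describes; the actual framework is richer, e.g. it substitutes child answers back into parent sub-questions before re-answering them.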
Related papers
- Probabilistic Tree-of-thought Reasoning for Answering Knowledge-intensive Complex Questions [93.40614719648386]
Large language models (LLMs) are capable of answering knowledge-intensive complex questions with chain-of-thought (CoT) reasoning.
Recent works turn to retrieving external knowledge to augment CoT reasoning.
We propose a novel approach: Probabilistic Tree-of-thought Reasoning (ProbTree).
arXiv Detail & Related papers (2023-11-23T12:52:37Z)
- Tree of Clarifications: Answering Ambiguous Questions with Retrieval-Augmented Large Language Models [30.186503757127188]
Tree of Clarifications (ToC) is a framework for generating long-form answers to ambiguous questions.
ToC outperforms existing baselines on ASQA in a few-shot setup across the metrics.
arXiv Detail & Related papers (2023-10-23T08:42:49Z)
- Open-Set Knowledge-Based Visual Question Answering with Inference Paths [79.55742631375063]
The purpose of Knowledge-Based Visual Question Answering (KB-VQA) is to provide a correct answer to the question with the aid of external knowledge bases.
We propose a new retriever-ranker paradigm for KB-VQA, Graph pATH rankER (GATHER for brevity).
Specifically, it contains graph constructing, pruning, and path-level ranking, which not only retrieves accurate answers but also provides inference paths that explain the reasoning process.
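As a rough illustration of the path-level ranking step mentioned above (a hedged sketch under assumed data structures, not GATHER's actual code), candidate inference paths already retrieved and pruned from a knowledge graph can be ranked so that the top path supplies both the answer entity and the explanation:

```python
# Hypothetical path-level ranking: score candidate knowledge-graph paths for a
# question and return the best path (the explanation) plus its terminal entity
# (the answer). The Path representation and scorer signature are assumptions.
from typing import Callable, List, Sequence, Tuple

Path = Sequence[str]  # e.g. ("Eiffel Tower", "located_in", "Paris")


def rank_paths(question: str,
               paths: List[Path],
               score: Callable[[str, Path], float]) -> Tuple[Path, str]:
    """Return the best-scoring path and its last node as the answer."""
    best = max(paths, key=lambda p: score(question, p))
    return best, best[-1]
```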
arXiv Detail & Related papers (2023-10-12T09:12:50Z)
- Question Decomposition Tree for Answering Complex Questions over Knowledge Bases [9.723321745919186]
We propose Question Decomposition Tree (QDT) to represent the structure of complex questions.
Inspired by recent advances in natural language generation (NLG), we present a two-staged method called Clue-Decipher to generate QDT.
To verify that QDT can enhance the KBQA task, we design a decomposition-based KBQA system called QDTQA.
arXiv Detail & Related papers (2023-06-13T07:44:29Z)
- ReasonChainQA: Text-based Complex Question Answering with Explainable Evidence Chains [15.837457557803507]
We present a benchmark, ReasonChainQA, with explanatory and explicit evidence chains.
ReasonChainQA consists of two subtasks, answer generation and evidence chain extraction, and contains greater diversity of multi-hop questions.
Additional experiments on supervised and unsupervised retrieval further indicate the significance of ReasonChainQA.
arXiv Detail & Related papers (2022-10-17T06:07:39Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work, we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges [71.4531144086568]
Question Answering (QA) over Knowledge Base (KB) aims to automatically answer natural language questions.
Researchers have shifted their attention from simple questions to complex questions, which require more KB triples and constraint inference.
arXiv Detail & Related papers (2020-07-26T07:13:32Z)
- Unsupervised Question Decomposition for Question Answering [102.56966847404287]
We propose an algorithm for One-to-N Unsupervised Sequence transduction (ONUS) that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)
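For illustration only (this example is hypothetical and not taken from the ONUS paper), one-to-N decomposition maps a single multi-hop question to simpler single-hop sub-questions whose answers can be composed into the final answer:

```python
# Hypothetical example of one-to-N question decomposition (illustrative data,
# not from the ONUS paper): one multi-hop question mapped to single-hop
# sub-questions that can each be answered independently and then composed.
multi_hop_question = "Who directed the film that starred the lead actor of Titanic?"
sub_questions = [
    "Who was the lead actor of Titanic?",
    "Which film starred that actor?",   # uses the first sub-question's answer
    "Who directed that film?",
]
```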
This list is automatically generated from the titles and abstracts of the papers on this site.