ReasonChainQA: Text-based Complex Question Answering with Explainable
Evidence Chains
- URL: http://arxiv.org/abs/2210.08763v1
- Date: Mon, 17 Oct 2022 06:07:39 GMT
- Title: ReasonChainQA: Text-based Complex Question Answering with Explainable
Evidence Chains
- Authors: Minjun Zhu, Yixuan Weng, Shizhu He, Kang Liu, Jun Zhao
- Abstract summary: We present a benchmark ReasonChainQA with explanatory and explicit evidence chains.
ReasonChainQA consists of two subtasks, answer generation and evidence chain extraction, and offers higher diversity of multi-hop questions.
Additional experiments on supervised and unsupervised retrieval fully indicate the significance of ReasonChainQA.
- Score: 15.837457557803507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to reason over evidence has received increasing attention in
question answering (QA). Recently, natural language databases (NLDB) have supported
complex QA over knowledge bases with textual evidence rather than structured
representations; this task has attracted much attention because of the
flexibility and richness of textual evidence. However, existing text-based
complex question answering datasets fail to provide an explicit reasoning process,
which is important for retrieval effectiveness and reasoning
interpretability. Therefore, we present ReasonChainQA, a benchmark with
explanatory and explicit evidence chains. ReasonChainQA consists of two
subtasks, answer generation and evidence chain extraction, and contains
highly diverse multi-hop questions with varying depths, 12 reasoning
types, and 78 relations. To obtain high-quality textual evidence for answering
complex questions, we conduct additional experiments on supervised and unsupervised
retrieval, which fully indicate the significance of ReasonChainQA. The dataset and code
will be made publicly available upon acceptance.
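To make the notion of an explicit evidence chain concrete, the sketch below shows one plausible way such a record could be laid out; the field names and the two-hop example are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceFact:
    """One textual fact (a single hop) in an evidence chain."""
    subject: str
    relation: str  # e.g., one of the dataset's 78 relation types
    text: str      # the natural-language evidence sentence

@dataclass
class ReasonChainExample:
    """Hypothetical record layout for a ReasonChainQA-style instance."""
    question: str
    answer: str                 # target of the answer-generation subtask
    evidence_chain: list = field(default_factory=list)  # target of chain extraction

# A 2-hop question: the second fact builds on the entity found by the first.
example = ReasonChainExample(
    question="Who directed the film that won Best Picture in 1998?",
    answer="James Cameron",
    evidence_chain=[
        EvidenceFact("Best Picture 1998", "winner",
                     "Titanic won the Academy Award for Best Picture in 1998."),
        EvidenceFact("Titanic", "director",
                     "Titanic was directed by James Cameron."),
    ],
)
```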
Related papers
- GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
arXiv Detail & Related papers (2024-11-01T05:14:03Z)
- Leveraging Structured Information for Explainable Multi-hop Question Answering and Reasoning [14.219239732584368]
In this work, we investigate constructing and leveraging extracted semantic structures (graphs) for multi-hop question answering.
Empirical results and human evaluations show that our framework generates more faithful reasoning chains and substantially improves QA performance on two benchmark datasets.
arXiv Detail & Related papers (2023-11-07T05:32:39Z)
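As a rough illustration of the kind of structure such a framework might extract, the sketch below indexes (subject, relation, object) triples into a graph a model can traverse hop by hop; the triple format and the example facts are assumptions for illustration, not the paper's pipeline.

```python
from collections import defaultdict

def build_semantic_graph(triples):
    """Index extracted (subject, relation, object) triples as an adjacency map.

    Assumes an upstream extractor has already pulled the triples out of the
    retrieved passages; each entity maps to its outgoing (relation, entity)
    edges, so a reader can follow them hop by hop and cite the source facts.
    """
    graph = defaultdict(list)
    for subj, rel, obj in triples:
        graph[subj].append((rel, obj))
    return graph

# Two extracted facts that together support a 2-hop question:
graph = build_semantic_graph([
    ("Best Picture 1998", "winner", "Titanic"),
    ("Titanic", "director", "James Cameron"),
])
assert ("director", "James Cameron") in graph["Titanic"]
```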
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text [73.68051228972024]
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Reasoning over Hierarchical Question Decomposition Tree for Explainable Question Answering [83.74210749046551]
We propose to leverage question decomposition for heterogeneous knowledge integration.
We propose a novel two-stage XQA framework, Reasoning over Hierarchical Question Decomposition Tree (RoHT).
Experiments on the complex QA datasets KQA Pro and MuSiQue show that our framework significantly outperforms SOTA methods.
arXiv Detail & Related papers (2023-05-24T11:45:59Z)
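The general idea of answering over a hierarchical question decomposition tree can be sketched as a bottom-up recursion; the node layout and the '#i' placeholder convention below are illustrative assumptions, not the RoHT implementation.

```python
from dataclasses import dataclass, field

@dataclass
class QNode:
    """A node in a question decomposition tree: a (sub-)question plus children."""
    question: str
    children: list = field(default_factory=list)

def solve(node, answer_atomic):
    """Answer a decomposition tree bottom-up.

    Leaves are atomic questions handed directly to `answer_atomic` (e.g., a
    retriever or a KB lookup); an internal node first solves its children,
    substitutes their answers into '#i' placeholders in its own question,
    and only then answers the fully grounded question.
    """
    if not node.children:
        return answer_atomic(node.question)
    question = node.question
    for i, child in enumerate(node.children, start=1):
        question = question.replace(f"#{i}", solve(child, answer_atomic))
    return answer_atomic(question)
```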
- HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z)
- Grow-and-Clip: Informative-yet-Concise Evidence Distillation for Answer Explanation [22.20733260041759]
We argue that the evidence for an answer is critical to enhancing the interpretability of QA models.
We are the first to explicitly define the concept of evidence as the supporting facts in a context that are informative, concise, and readable.
We propose the Grow-and-Clip Evidence Distillation (GCED) algorithm to extract evidence from contexts by trading off informativeness, conciseness, and readability.
arXiv Detail & Related papers (2022-01-13T17:18:17Z)
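In its simplest form, the informativeness/conciseness/readability trade-off could be a weighted ranking like the sketch below; the weights, candidate format, and greedy selection are all illustrative assumptions, and the actual grow-and-clip procedure is more involved.

```python
def select_evidence(candidates, k=3, weights=(0.5, 0.3, 0.2)):
    """Rank candidate sentences by a weighted trade-off of three criteria.

    Each candidate is assumed to be a dict carrying precomputed
    'informativeness', 'conciseness', and 'readability' scores in [0, 1];
    GCED itself grows and clips evidence spans incrementally rather than
    greedily ranking whole sentences as done here.
    """
    w_i, w_c, w_r = weights

    def score(c):
        return (w_i * c["informativeness"]
                + w_c * c["conciseness"]
                + w_r * c["readability"])

    return [c["sentence"] for c in sorted(candidates, key=score, reverse=True)[:k]]
```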
- Discourse Comprehension: A Question Answering Framework to Represent Sentence Connections [35.005593397252746]
A key challenge in building and evaluating models for discourse comprehension is the lack of annotated data.
This paper presents a novel paradigm that enables scalable data collection targeting the comprehension of news documents.
The resulting corpus, DCQA, consists of 22,430 question-answer pairs across 607 English documents.
arXiv Detail & Related papers (2021-11-01T04:50:26Z)
- Exploiting Reasoning Chains for Multi-hop Science Question Answering [51.86289192292466]
Our framework is capable of performing explainable reasoning without the need for any corpus-specific annotations.
A Chain-aware loss, concerning both local and global chain information, is also designed to enable the generated chains to serve as distant supervision signals.
arXiv Detail & Related papers (2021-09-07T07:22:07Z)
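A loss that mixes local and global chain information might, at its simplest, combine a per-hop and a whole-chain negative log-likelihood as sketched below; the inputs and the alpha weighting are illustrative assumptions, not the paper's formulation.

```python
import math

def chain_aware_loss(hop_probs, chain_prob, alpha=0.5):
    """Blend a local (per-hop) and a global (whole-chain) objective.

    hop_probs: model probabilities assigned to each individual hop.
    chain_prob: model probability assigned to the chain as a whole.
    alpha balances the two negative log-likelihood terms; the paper's
    actual loss over distantly supervised chains is not reproduced here.
    """
    local = -sum(math.log(p) for p in hop_probs) / len(hop_probs)
    global_term = -math.log(chain_prob)
    return alpha * local + (1 - alpha) * global_term
```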
- Open Question Answering over Tables and Text [55.8412170633547]
In open question answering (QA), the answer to a question is produced by retrieving and then analyzing documents that might contain answers to the question.
Most open QA systems have considered only retrieving information from unstructured text.
We present a new large-scale dataset Open Table-and-Text Question Answering (OTT-QA) to evaluate performance on this task.
arXiv Detail & Related papers (2020-10-20T16:48:14Z)
- QED: A Framework and Dataset for Explanations in Question Answering [27.85923397716627]
We release an expert-annotated dataset of QED explanations built upon a subset of the Google Natural Questions dataset.
A promising result suggests that training on a relatively small amount of QED data can improve question answering.
arXiv Detail & Related papers (2020-09-08T23:34:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.