Decomposing Complex Questions Makes Multi-Hop QA Easier and More
Interpretable
- URL: http://arxiv.org/abs/2110.13472v1
- Date: Tue, 26 Oct 2021 08:10:35 GMT
- Title: Decomposing Complex Questions Makes Multi-Hop QA Easier and More
Interpretable
- Authors: Ruiliu Fu, Han Wang, Xuejun Zhang, Jun Zhou and Yonghong Yan
- Abstract summary: Multi-hop QA requires the machine to answer complex questions through finding multiple clues and reasoning.
We propose Relation Extractor-Reader and Comparator (RERC), a three-stage framework based on complex question decomposition.
On the 2WikiMultiHopQA dataset, our RERC model achieves state-of-the-art performance, with a winning joint F1 score of 53.58 on the leaderboard.
- Score: 25.676852169835833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-hop QA requires the machine to answer complex questions through finding
multiple clues and reasoning, and provide explanatory evidence to demonstrate
the machine reasoning process. We propose Relation Extractor-Reader and
Comparator (RERC), a three-stage framework based on complex question
decomposition; to our knowledge, this is the first work to propose and apply
such a model to the multi-hop QA challenge. The Relation Extractor decomposes
the complex question, the Reader then answers the resulting sub-questions in
turn, and the Comparator finally performs numerical comparison and summarizes
the sub-answers to produce the final answer, so that the entire process itself
constitutes a complete reasoning evidence path. On the 2WikiMultiHopQA dataset,
our RERC model achieves state-of-the-art performance, with a winning joint
F1 score of 53.58 on the leaderboard. All metrics of our RERC are close to
human performance, only 1.95 points behind the human level on the
supporting-fact F1 score. At the same time, the evidence path provided by our
RERC framework offers excellent readability and faithfulness.
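The three-stage flow described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: the stage functions below are rule-based toy stand-ins for the paper's neural Relation Extractor, Reader, and Comparator, and the comparison question and knowledge store are invented examples.

```python
# Toy sketch of the RERC pipeline: decompose, read, compare.
# Each stage here is a rule-based stand-in, not the paper's model.

def relation_extractor(question):
    # Stage 1: decompose a comparison question into per-entity
    # sub-questions. Only handles the toy pattern
    # "Who was born earlier, A or B?".
    _, entities = question.split(",", 1)
    a, b = [e.strip(" ?") for e in entities.split(" or ")]
    return [f"When was {a} born?", f"When was {b} born?"], [a, b]

def reader(sub_question, knowledge):
    # Stage 2: answer each sub-question against a toy knowledge store.
    for entity, year in knowledge.items():
        if entity in sub_question:
            return year
    return None

def comparator(entities, answers):
    # Stage 3: numerical comparison over the sub-answers.
    return entities[0] if answers[0] < answers[1] else entities[1]

def rerc(question, knowledge):
    sub_questions, entities = relation_extractor(question)
    answers = [reader(q, knowledge) for q in sub_questions]
    final = comparator(entities, answers)
    # The (sub-question, sub-answer) pairs form the evidence path.
    evidence = list(zip(sub_questions, answers))
    return final, evidence

kb = {"Ada Lovelace": 1815, "Alan Turing": 1912}
answer, path = rerc("Who was born earlier, Ada Lovelace or Alan Turing?", kb)
print(answer)  # Ada Lovelace
```

The key point is that the intermediate sub-questions and sub-answers are retained, so the evidence path falls out of the pipeline for free rather than being reconstructed afterwards.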
Related papers
- Measuring Retrieval Complexity in Question Answering Systems [64.74106622822424]
Retrieval complexity (RC) is a novel metric conditioned on the completeness of retrieved documents.
We propose an unsupervised pipeline to measure RC given an arbitrary retrieval system.
Our system can have a major impact on retrieval-based systems.
arXiv Detail & Related papers (2024-06-05T19:30:52Z)
- End-to-End Beam Retrieval for Multi-Hop Question Answering [37.13580394608824]
Multi-hop question answering involves finding multiple relevant passages and step-by-step reasoning to answer complex questions.
Previous retrievers were customized for two-hop questions, and most of them were trained separately across different hops.
We introduce Beam Retrieval, an end-to-end beam retrieval framework for multi-hop QA.
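The core idea, keeping the top-k partial passage chains at every hop and scoring chains jointly instead of hop-by-hop, can be sketched as below. This is a hypothetical illustration: the lexical-overlap scorer is a crude stand-in for the paper's trained encoder, and the hop count and beam size are arbitrary.

```python
# Toy sketch of beam retrieval over passage chains for multi-hop QA.
from itertools import product

def score_chain(question, chain):
    # Stand-in scorer: count question words appearing in each passage.
    # A real system would use a trained cross-encoder here.
    words = set(question.lower().split())
    return sum(len(words & set(p.lower().split())) for p in chain)

def beam_retrieve(question, passages, hops=2, beam_size=2):
    beams = [()]  # each beam is a tuple of passages chosen so far
    for _ in range(hops):
        # Extend every surviving chain by one unused passage.
        candidates = [
            beam + (p,)
            for beam, p in product(beams, passages)
            if p not in beam
        ]
        # Rank whole chains jointly and keep the top beam_size.
        candidates.sort(key=lambda c: score_chain(question, c), reverse=True)
        beams = candidates[:beam_size]
    return beams[0]

passages = [
    "paris is the capital of france",
    "the eiffel tower is in paris",
    "berlin is in germany",
]
chain = beam_retrieve("eiffel tower capital of france", passages)
```

Because chains are scored as a whole, a passage that is weak on its own can still survive if it completes a strong chain, which is the motivation for end-to-end rather than per-hop training.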
arXiv Detail & Related papers (2023-08-17T13:24:14Z)
- Logical Message Passing Networks with One-hop Inference on Atomic Formulas [57.47174363091452]
We propose a framework for complex query answering that decouples the Knowledge Graph embeddings from neural set operators.
On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN) that connects the local one-hop inferences on atomic formulas to the global logical reasoning.
Our approach yields the new state-of-the-art neural CQA model.
arXiv Detail & Related papers (2023-01-21T02:34:06Z)
- Successive Prompting for Decomposing Complex Questions [50.00659445976735]
Recent works leverage the capabilities of large language models (LMs) to perform complex question answering in a few-shot setting.
We introduce "Successive Prompting", where we iteratively break a complex task down into a simpler one, solve it, and repeat the process until we reach the final solution.
Our best model (with successive prompting) achieves an improvement of 5% absolute F1 on a few-shot version of the DROP dataset.
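The iterate-decompose-answer loop can be sketched as follows. This is a toy illustration: `decompose` and `answer_simple` are hard-coded stand-ins for the paper's few-shot LM prompts, and the DROP-style question is an invented example.

```python
# Toy sketch of the Successive Prompting loop: ask for the next
# sub-question, answer it, append the pair to the context, and repeat
# until the decomposer signals that the answer is complete.

def decompose(question, history):
    # Stand-in for the decomposition prompt: emit the next
    # sub-question, or None once enough facts have been gathered.
    if len(history) == 0:
        return "How many touchdowns did each team score?"
    if len(history) == 1:
        return "What is the difference between the two counts?"
    return None  # ready to stop

def answer_simple(sub_question, history):
    # Stand-in for the single-hop QA prompt.
    if "each team" in sub_question:
        return "4 and 1"
    return "3"

def successive_prompting(question):
    history = []  # accumulated (sub-question, sub-answer) pairs
    while True:
        sub_q = decompose(question, history)
        if sub_q is None:
            # The last sub-answer serves as the final answer.
            return history[-1][1], history
        history.append((sub_q, answer_simple(sub_q, history)))

final, trace = successive_prompting(
    "How many more touchdowns did team A score than team B?"
)
print(final)  # 3
```

Separating the decomposition step from the answering step is what allows each to be supervised (or prompted) independently, which is the design choice the paper emphasizes.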
arXiv Detail & Related papers (2022-12-08T06:03:38Z)
- Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering [12.35571328854374]
We propose a Question Decomposition method based on Abstract Meaning Representation (QDAMR) for multi-hop QA.
We decompose a multi-hop question into simpler sub-questions and answer them in order.
Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning.
arXiv Detail & Related papers (2022-06-16T23:46:33Z)
- From Easy to Hard: Two-stage Selector and Reader for Multi-hop Question Answering [12.072618400000763]
Multi-hop question answering (QA) is a challenging task requiring QA systems to perform complex reasoning over multiple documents.
We propose a novel framework, From Easy to Hard (FE2H), to remove distracting information and obtain better contextual representations.
FE2H divides both the document selector and reader into two stages following an easy-to-hard manner.
arXiv Detail & Related papers (2022-05-24T02:33:58Z)
- Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z)
- Answering Any-hop Open-domain Questions with Iterative Document Reranking [62.76025579681472]
We propose a unified QA framework to answer any-hop open-domain questions.
Our method consistently achieves performance comparable to or better than the state-of-the-art on both single-hop and multi-hop open-domain QA datasets.
arXiv Detail & Related papers (2020-09-16T04:31:38Z)
- Retrospective Reader for Machine Reading Comprehension [90.6069071495214]
Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage.
When unanswerable questions are involved in the MRC task, an additional verification module, called a verifier, is required alongside the encoder.
This paper devotes itself to exploring better verifier design for the MRC task with unanswerable questions.
arXiv Detail & Related papers (2020-01-27T11:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.