Unsupervised Multi-hop Question Answering by Question Generation
- URL: http://arxiv.org/abs/2010.12623v2
- Date: Mon, 12 Apr 2021 01:48:29 GMT
- Title: Unsupervised Multi-hop Question Answering by Question Generation
- Authors: Liangming Pan, Wenhu Chen, Wenhan Xiong, Min-Yen Kan, William Yang Wang
- Abstract summary: MQA-QG is an unsupervised framework that can generate human-like multi-hop training data.
Using only generated training data, we can train a competent multi-hop QA model which achieves 61% and 83% of the supervised learning performance.
- Score: 108.61653629883753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Obtaining training data for multi-hop question answering (QA) is
time-consuming and resource-intensive. We explore the possibility of training a
well-performing multi-hop QA model without referencing any human-labeled
multi-hop question-answer pairs, i.e., unsupervised multi-hop QA. We propose
MQA-QG, an unsupervised framework that can generate human-like multi-hop
training data from both homogeneous and heterogeneous data sources. MQA-QG
generates questions by first selecting or generating relevant information from
each data source and then integrating the pieces of information to form a
multi-hop question. Using only generated training data, we can train a
competent multi-hop QA model that achieves 61% and 83% of the supervised
learning performance on the HybridQA and HotpotQA datasets, respectively. We
also show that pretraining the QA system with the generated data greatly
reduces the demand for human-annotated training data. Our code is publicly
available at https://github.com/teacherpeterpan/Unsupervised-Multi-hop-QA.
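The two-step recipe the abstract describes (select relevant information from each source, then integrate it into one multi-hop question) can be sketched as follows. This is a minimal, illustrative toy: the operator names and heuristics here are hypothetical and much simpler than MQA-QG's actual operators.

```python
# Toy sketch of unsupervised multi-hop question generation:
# step 1 selects a "bridge" entity shared by two sources, step 2 integrates
# a description from the second source into a single-hop question template.
# All helpers below are illustrative assumptions, not the MQA-QG API.

def select_bridge_entity(source_a: str, source_b: str):
    """Step 1 (toy heuristic): pick a capitalized token shared by both sources."""
    tokens_a = {t.strip(".,;") for t in source_a.split()}
    tokens_b = {t.strip(".,;") for t in source_b.split()}
    for token in sorted(tokens_a & tokens_b):
        if token and token[0].isupper():
            return token
    return None


def compose_multihop_question(single_hop: str, bridge: str, descriptor: str) -> str:
    """Step 2: replace the bridge entity with a descriptor drawn from the
    second source, so answering now requires reasoning over both sources."""
    return single_hop.replace(bridge, descriptor)


source_a = "Paris is the capital of France."
source_b = "The Louvre is located in Paris."

bridge = select_bridge_entity(source_a, source_b)            # "Paris"
single_hop = f"Which country has {bridge} as its capital?"   # answerable from source_a alone
descriptor = "the city where the Louvre is located"          # paraphrased from source_b
multi_hop = compose_multihop_question(single_hop, bridge, descriptor)
print(multi_hop)
```

The generated question no longer names the bridge entity directly, so a QA model must first resolve the descriptor against one source and then answer against the other; such question-answer pairs can serve as synthetic multi-hop training data.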
Related papers
- How Well Do Multi-hop Reading Comprehension Models Understand Date Information? [31.243088887839257]
The ability of multi-hop models to perform step-by-step reasoning when finding an answer to a comparison question remains unclear.
It is also unclear whether questions about the internal reasoning process are useful for training and evaluating question-answering (QA) systems.
arXiv Detail & Related papers (2022-10-11T07:24:07Z)
- Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering [85.79940770146557]
We decompose multi-hop questions into multiple corresponding single-hop questions.
We find marked inconsistency in QA models' answers on these pairs of ostensibly identical question chains.
When trained only on single-hop questions, models generalize poorly to multi-hop questions.
arXiv Detail & Related papers (2022-10-09T11:48:07Z)
- Prompt-based Conservation Learning for Multi-hop Question Answering [11.516763652013005]
Multi-hop question answering requires reasoning over multiple documents to answer a complex question.
Most existing multi-hop QA methods fail to answer a large fraction of sub-questions.
We propose the Prompt-based Conservation Learning framework for multi-hop QA.
arXiv Detail & Related papers (2022-09-14T20:50:46Z)
- Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z)
- QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation [54.136509061542775]
Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage.
We propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG.
Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models.
arXiv Detail & Related papers (2022-02-14T08:16:47Z)
- Multi-hop Question Generation with Graph Convolutional Network [58.31752179830959]
Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple pieces of scattered evidence from different paragraphs.
We propose MulQG, a multi-hop fusion network for question generation that performs context encoding in multiple hops.
Our proposed model is able to generate fluent questions with high completeness and outperforms the strongest baseline by 20.8% in the multi-hop evaluation.
arXiv Detail & Related papers (2020-10-19T06:15:36Z)
- Reinforced Multi-task Approach for Multi-hop Question Generation [47.15108724294234]
We take up Multi-hop question generation, which aims at generating relevant questions based on supporting facts in the context.
We employ multitask learning with the auxiliary task of answer-aware supporting fact prediction to guide the question generator.
We demonstrate the effectiveness of our approach through experiments on the multi-hop question answering dataset, HotPotQA.
arXiv Detail & Related papers (2020-04-05T10:16:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.