QA4QG: Using Question Answering to Constrain Multi-Hop Question
Generation
- URL: http://arxiv.org/abs/2202.06538v1
- Date: Mon, 14 Feb 2022 08:16:47 GMT
- Title: QA4QG: Using Question Answering to Constrain Multi-Hop Question
Generation
- Authors: Dan Su, Peng Xu, Pascale Fung
- Abstract summary: Multi-hop question generation (MQG) aims to generate complex questions that require reasoning over multiple pieces of information from the input passage.
We propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG.
Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-hop question generation (MQG) aims to generate complex questions
that require reasoning over multiple pieces of information from the input passage.
Most existing work on MQG has focused on exploring graph-based networks to
equip the traditional Sequence-to-sequence framework with reasoning ability.
However, these models do not take full advantage of the constraint between
questions and answers. Furthermore, studies on multi-hop question answering
(QA) suggest that Transformers can replace the graph structure for multi-hop
reasoning. Therefore, in this work, we propose a novel framework, QA4QG, a
QA-augmented BART-based framework for MQG. It augments the standard BART model
with an additional multi-hop QA module to further constrain the generated
question. Our results on the HotpotQA dataset show that QA4QG outperforms all
state-of-the-art models, with an increase of 8 BLEU-4 and 8 ROUGE points
compared to the best previously reported results. Our work suggests the
advantage of introducing pre-trained language models and a QA module for the
MQG task.
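The core idea of the abstract, constraining the generator with a QA module, amounts to a joint training objective. The sketch below is illustrative only, not the authors' code; the function and weight names are hypothetical:

```python
# Illustrative sketch of a QA-constrained question-generation objective,
# as described in the abstract: a seq2seq QG loss (e.g. from BART) is
# combined with a multi-hop QA loss that checks the generated question is
# still answerable by the target answer. All names here are hypothetical.

def qa4qg_loss(qg_loss: float, qa_loss: float, qa_weight: float = 0.5) -> float:
    """Joint objective: standard generation loss plus a weighted QA
    constraint term. Lower is better for both terms."""
    return qg_loss + qa_weight * qa_loss

# With a QG cross-entropy of 2.0 and a QA loss of 1.0 at weight 0.5,
# the joint loss is 2.5.
print(qa4qg_loss(2.0, 1.0, 0.5))
```

In practice both terms would be computed per batch by the BART decoder and the QA module respectively; the weight trades off fluency against answerability.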
Related papers
- GenDec: A robust generative Question-decomposition method for Multi-hop
reasoning [32.12904215053187]
Multi-hop QA involves step-by-step reasoning to answer complex questions.
The reasoning ability of existing large language models (LLMs) in multi-hop question answering remains under-explored.
It is unclear whether LLMs follow a desired reasoning chain to reach the right final answer.
arXiv Detail & Related papers (2024-02-17T02:21:44Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
- Understanding and Improving Zero-shot Multi-hop Reasoning in Generative Question Answering [85.79940770146557]
We decompose multi-hop questions into multiple corresponding single-hop questions.
We find marked inconsistency in QA models' answers on these pairs of ostensibly identical question chains.
When trained only on single-hop questions, models generalize poorly to multi-hop questions.
arXiv Detail & Related papers (2022-10-09T11:48:07Z)
- Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z)
- Ask to Understand: Question Generation for Multi-hop Question Answering [11.626390908264872]
Multi-hop Question Answering (QA) requires the machine to answer complex questions by finding scattered clues and reasoning over multiple documents.
We propose a novel method to perform multi-hop QA from the perspective of Question Generation (QG).
arXiv Detail & Related papers (2022-03-17T04:02:29Z)
- Unified Question Generation with Continual Lifelong Learning [41.81627903996791]
Existing QG methods mainly focus on building or training models for specific QG datasets.
We propose a model named Unified-QG based on lifelong learning techniques, which can continually learn QG tasks.
In addition, we leverage a single trained Unified-QG model to improve the performance of 8 Question Answering (QA) systems.
arXiv Detail & Related papers (2022-01-24T14:05:18Z)
- Multi-hop Question Generation with Graph Convolutional Network [58.31752179830959]
Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple pieces of scattered evidence from different paragraphs.
We propose the Multi-hop Encoding Fusion Network for Question Generation (MulQG), which does context encoding in multiple hops.
Our proposed model is able to generate fluent questions with high completeness and outperforms the strongest baseline by 20.8% in the multi-hop evaluation.
arXiv Detail & Related papers (2020-10-19T06:15:36Z)