Improving Question Generation with Multi-level Content Planning
- URL: http://arxiv.org/abs/2310.13512v2
- Date: Mon, 23 Oct 2023 03:37:22 GMT
- Title: Improving Question Generation with Multi-level Content Planning
- Authors: Zehua Xia, Qi Gou, Bowen Yu, Haiyang Yu, Fei Huang, Yongbin Li, Cam-Tu
Nguyen
- Abstract summary: This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: FA-model, which simultaneously selects key phrases and generates full answers, and Q-model, which takes the generated full answer as an additional input to generate questions.
- Score: 70.37285816596527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the problem of generating questions from a given context
and an answer, specifically focusing on questions that require multi-hop
reasoning across an extended context. Previous studies have suggested that key
phrase selection is essential for question generation (QG), yet it is still
challenging to connect such disjointed phrases into meaningful questions,
particularly for long contexts. To mitigate this issue, we propose MultiFactor,
a novel QG framework based on multi-level content planning. Specifically,
MultiFactor includes two components: FA-model, which simultaneously selects key
phrases and generates full answers, and Q-model, which takes the generated full
answer as an additional input to generate questions. Here, full answer
generation is introduced to connect the short answer with the selected key
phrases, thus forming an answer-aware summary to facilitate QG. Both FA-model
and Q-model are formalized as simple-yet-effective Phrase-Enhanced
Transformers, our joint model for phrase selection and text generation.
Experimental results show that our method outperforms strong baselines on two
popular QG datasets. Our code is available at
https://github.com/zeaver/MultiFactor.
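The repository above holds the authors' implementation. As a rough, hedged illustration only, the sketch below mimics the two-stage inference flow described in the abstract, with off-the-shelf T5 checkpoints standing in for the FA-model and Q-model; the checkpoint names, prompt formats, and the plain seq2seq setup are assumptions, not the paper's Phrase-Enhanced Transformer.

```python
# Minimal sketch of a "full answer first, question second" pipeline.
# NOTE: "t5-base" and the "context:/answer:/full answer:" prompt formats are
# placeholders, not the MultiFactor models or inputs from the repository above.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
fa_model = T5ForConditionalGeneration.from_pretrained("t5-base")  # stand-in for the FA-model
q_model = T5ForConditionalGeneration.from_pretrained("t5-base")   # stand-in for the Q-model

def generate(model, prompt: str, max_new_tokens: int = 64) -> str:
    """Run one beam-search seq2seq pass and decode the top hypothesis."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

context = "..."       # extended (possibly multi-hop) context
short_answer = "..."  # the given short answer span

# Stage 1 (FA-model): produce a full answer, i.e. an answer-aware summary that
# connects the short answer with key phrases selected from the context.
full_answer = generate(fa_model, f"context: {context} answer: {short_answer}")

# Stage 2 (Q-model): generate the question conditioned on the context, the
# short answer, and the generated full answer.
question = generate(
    q_model, f"context: {context} answer: {short_answer} full answer: {full_answer}"
)
print(question)
```

In the actual framework, both stages are Phrase-Enhanced Transformers trained jointly for phrase selection and generation; the sketch only shows how the generated full answer is fed to the question model as an additional input.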
Related papers
- SEMQA: Semi-Extractive Multi-Source Question Answering [94.04430035121136]
We introduce a new QA task for answering multi-answer questions by summarizing multiple diverse sources in a semi-extractive fashion.
We create the first dataset of this kind, QuoteSum, with human-written semi-extractive answers to natural and generated questions.
arXiv Detail & Related papers (2023-11-08T18:46:32Z)
- Diversity Enhanced Narrative Question Generation for Storybooks [4.043005183192124]
We introduce a multi-question generation model (mQG) capable of generating multiple, diverse, and answerable questions.
To validate the answerability of the generated questions, we employ a SQuAD2.0 fine-tuned question answering model (a hedged sketch of this kind of answerability filter appears after this list).
mQG shows promising results across various evaluation metrics, among strong baselines.
arXiv Detail & Related papers (2023-10-25T08:10:04Z)
- Modeling What-to-ask and How-to-ask for Answer-unaware Conversational Question Generation [30.086071993793823]
What-to-ask and how-to-ask are the two main challenges in the answer-unaware setting.
We present SG-CQG, a two-stage CQG framework.
arXiv Detail & Related papers (2023-05-04T18:06:48Z)
- Modeling Multi-hop Question Answering as Single Sequence Prediction [88.72621430714985]
We propose a simple generative approach (PathFid) that extends the task beyond just answer generation.
PathFid explicitly models the reasoning process to resolve the answer for multi-hop questions.
Our experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets.
arXiv Detail & Related papers (2022-05-18T21:57:59Z)
- Ask to Understand: Question Generation for Multi-hop Question Answering [11.626390908264872]
Multi-hop Question Answering (QA) requires the machine to answer complex questions by finding scattered clues and reasoning over multiple documents.
We propose a novel method to complete multi-hop QA from the perspective of Question Generation (QG).
arXiv Detail & Related papers (2022-03-17T04:02:29Z)
- MultiModalQA: Complex Question Answering over Text, Tables and Images [52.25399438133274]
We present MultiModalQA: a dataset that requires joint reasoning over text, tables and images.
We create MMQA using a new framework for generating complex multi-modal questions at scale.
We then define a formal language that allows us to take questions that can be answered from a single modality, and combine them to generate cross-modal questions.
arXiv Detail & Related papers (2021-04-13T09:14:28Z)
- Multi-hop Question Generation with Graph Convolutional Network [58.31752179830959]
Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple scattered evidence from different paragraphs.
We propose MulQG, a multi-hop question generation model that performs context encoding in multiple hops with a Graph Convolutional Network.
Our proposed model is able to generate fluent questions with high completeness and outperforms the strongest baseline by 20.8% in the multi-hop evaluation.
arXiv Detail & Related papers (2020-10-19T06:15:36Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Simplifying Paragraph-level Question Generation via Transformer Language Models [0.0]
Question generation (QG) is a natural language generation task where a model is trained to ask questions corresponding to some input text.
A single Transformer-based unidirectional language model leveraging transfer learning can be used to produce high quality questions.
Our QG model, finetuned from GPT-2 Small, outperforms several paragraph-level QG baselines on the SQuAD dataset by 0.95 METEOR points.
arXiv Detail & Related papers (2020-05-03T14:57:24Z)
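As referenced in the mQG entry above, one common way to check whether a generated question is answerable is to run it through a SQuAD2.0-style extractive QA model and keep it only if a confident, non-empty answer span is returned. The sketch below illustrates that idea; the checkpoint name and confidence threshold are assumptions, not values taken from the mQG paper.

```python
# Hedged sketch: answerability filtering with a SQuAD2.0-style QA model.
# The checkpoint and threshold are illustrative choices, not the paper's setup.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def answerable(question: str, context: str, threshold: float = 0.5) -> bool:
    """Keep a question if the QA model finds a non-empty span with enough
    confidence; SQuAD2.0-style models can predict an empty answer for
    unanswerable questions when handle_impossible_answer=True."""
    result = qa(question=question, context=context, handle_impossible_answer=True)
    return bool(result["answer"].strip()) and result["score"] >= threshold

story = "In the storybook, a kind farmer rescued the lost puppy during a storm."
candidates = [
    "Who rescued the lost puppy?",
    "What color was the dragon's castle?",
]
kept = [q for q in candidates if answerable(q, story)]
print(kept)  # the second question should be filtered out as unanswerable
```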