Asking Questions Like Educational Experts: Automatically Generating
Question-Answer Pairs on Real-World Examination Data
- URL: http://arxiv.org/abs/2109.05179v1
- Date: Sat, 11 Sep 2021 04:10:57 GMT
- Authors: Fanyi Qu, Xin Jia, Yunfang Wu
- Abstract summary: This paper addresses the question-answer pair generation task on real-world examination data, and proposes a new unified framework on RACE.
We propose a multi-agent communication model to iteratively generate and optimize the question and keyphrases, and then apply the generated question and keyphrases to guide the generation of answers.
Experimental results show that our model achieves substantial improvements on the question-answer pair generation task.
- Score: 10.353009081072992
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating high-quality question-answer pairs is a hard but meaningful task.
Although previous work has achieved strong results on answer-aware question
generation, such methods are difficult to apply in practical educational
settings. This paper is the first to address the question-answer pair
generation task on real-world examination data, and it proposes a new unified
framework on RACE. To capture the important information of the input passage,
we first automatically generate (rather than extract) keyphrases, reducing the
task to joint generation of keyphrase-question-answer triplets. Accordingly,
we propose a multi-agent communication model that iteratively generates and
optimizes the question and keyphrases, and then applies the generated question
and keyphrases to guide the generation of answers. To establish a solid
benchmark, we build our model on a strong generative pre-trained model.
Experimental results show that our model achieves substantial improvements on
the question-answer pair generation task. Moreover, we conduct a comprehensive
analysis of our model, suggesting new directions for this challenging task.
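As a rough illustration of the framework the abstract describes, the sketch below shows the iterative multi-agent loop in Python. Every generate_* function is a hypothetical stand-in for one of the paper's fine-tuned seq2seq components, and the number of communication rounds is an assumption.

```python
# Hypothetical sketch of keyphrase-question-answer triplet joint generation.
# None of these function names come from the paper.

def generate_keyphrases(passage, question=None):
    # Stand-in: decode keyphrases from the passage, optionally conditioned
    # on the current question draft to refine them.
    return ["placeholder keyphrase"]

def generate_question(passage, keyphrases):
    # Stand-in: decode a question conditioned on passage and keyphrases.
    return "What is the passage mainly about?"

def generate_answer(passage, question, keyphrases):
    # Stand-in: decode an answer guided by the generated question/keyphrases.
    return "placeholder answer"

def generate_qa_pair(passage, rounds=2):
    """Iteratively refine keyphrases and question, then generate the answer."""
    keyphrases = generate_keyphrases(passage)
    question = generate_question(passage, keyphrases)
    for _ in range(rounds - 1):  # communication rounds (count assumed)
        keyphrases = generate_keyphrases(passage, question)
        question = generate_question(passage, keyphrases)
    answer = generate_answer(passage, question, keyphrases)
    return keyphrases, question, answer
```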
Related papers
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
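A minimal sketch of the two-stage MultiFactor pipeline summarized above; both models are stand-ins and the function signatures are assumptions, not the paper's API.

```python
# Illustrative only: the FA-model plans content and writes a full answer;
# the Q-model consumes that full answer to generate the question.

def fa_model(context, short_answer):
    # Stand-in: select key phrases and expand the short answer into a
    # full declarative sentence grounded in the context.
    key_phrases = ["selected phrase"]
    full_answer = f"{short_answer}, as stated in the context."
    return key_phrases, full_answer

def q_model(context, key_phrases, full_answer):
    # Stand-in: generate a question from the context, the content plan,
    # and the generated full answer.
    return "Which fact does the full answer assert?"

def multifactor(context, short_answer):
    key_phrases, full_answer = fa_model(context, short_answer)
    return q_model(context, key_phrases, full_answer)
```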
- QASnowball: An Iterative Bootstrapping Framework for High-Quality Question-Answering Data Generation [67.27999343730224]
We introduce QASnowball, an iterative bootstrapping framework for QA data augmentation.
QASnowball can iteratively generate large-scale, high-quality QA data from a seed set of supervised examples.
We conduct experiments in the high-resource English scenario and the medium-resource Chinese scenario, and the results show that the data generated by QASnowball can improve QA models.
arXiv Detail & Related papers (2023-09-19T05:20:36Z)
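A schematic of the bootstrapping loop described above, with hypothetical trainer and filter functions standing in for QASnowball's actual components.

```python
# Illustrative snowball loop: train on the current pool, generate candidate
# QA pairs on unlabeled text, keep the high-quality ones, and repeat.

def fit_generator(pool):
    # Stand-in for fine-tuning a QA-pair generator on the current pool.
    def generate(text):
        return [("What does the text describe?", text[:50])]
    return generate

def quality_ok(pair):
    # Stand-in for the quality filter (e.g., a scorer plus a threshold).
    question, answer = pair
    return bool(question) and bool(answer)

def snowball(seed_pairs, unlabeled_texts, rounds=3):
    pool = list(seed_pairs)
    for _ in range(rounds):
        generate = fit_generator(pool)
        candidates = [p for text in unlabeled_texts for p in generate(text)]
        pool.extend(p for p in candidates if quality_ok(p))
    return pool
```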
- Weakly Supervised Visual Question Answer Generation [2.7605547688813172]
We present a weakly supervised method that synthetically generates question-answer pairs procedurally from visual information and captions.
We perform an extensive experimental analysis on the VQA dataset and find that our model significantly outperforms SOTA methods on BLEU scores.
arXiv Detail & Related papers (2023-06-11T08:46:42Z)
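One simple way to generate QA pairs procedurally from captions, in the spirit of the entry above; this toy template is an assumption, not the paper's procedure.

```python
# Toy template: query one caption word to form a fill-in question whose
# answer is the masked word.

def caption_to_qa(caption, target_word):
    words = caption.split()
    assert target_word in words, "target must appear in the caption"
    question = " ".join("what" if w == target_word else w for w in words)
    return question.capitalize() + "?", target_word

print(caption_to_qa("a dog sits on a red sofa", "dog"))
# -> ('A what sits on a red sofa?', 'dog')
```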
- An Empirical Comparison of LM-based Question and Answer Generation Methods [79.31199020420827]
Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context.
In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning.
Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches.
arXiv Detail & Related papers (2023-05-26T14:59:53Z)
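In the end-to-end setting that the entry above finds most robust, generation reduces to a single seq2seq call. The sketch below assumes a QAG-fine-tuned checkpoint on the Hugging Face hub (the name is an assumption, unverified here) together with the standard transformers pipeline.

```python
# End-to-end QAG: one seq2seq model maps a context directly to
# question-answer pairs. Swap in any QAG-tuned checkpoint.

from transformers import pipeline

qag = pipeline("text2text-generation", model="lmqg/t5-base-squad-qag")
context = ("William Turner was an English painter who specialised in "
           "watercolour landscapes.")
print(qag(context, max_length=128)[0]["generated_text"])
# expected form: "question: ..., answer: ..." in the decoded string
```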
- Consecutive Question Generation via Dynamic Multitask Learning [17.264399861776187]
We propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage.
We first examine the four key elements of CQG, and propose a novel dynamic multitask framework with one main task generating a question-answer pair, and four auxiliary tasks generating other elements.
We show that our strategy can improve question generation significantly and benefit multiple related NLP tasks.
arXiv Detail & Related papers (2022-11-16T11:50:36Z)
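The dynamic multitask idea above can be pictured as a weighted sum of one main loss and four auxiliary losses whose weights may shift during training; the task names and weights below are placeholders, not the paper's.

```python
# Placeholder multitask objective: main QA-pair loss plus four auxiliary
# losses, combined with weights that a schedule may adjust per step.

def multitask_loss(losses, weights):
    return sum(weights[task] * value for task, value in losses.items())

losses  = {"qa_pair": 1.20, "aux_1": 0.70, "aux_2": 0.40,
           "aux_3": 0.55, "aux_4": 0.35}          # per-step scalar losses
weights = {"qa_pair": 1.00, "aux_1": 0.30, "aux_2": 0.20,
           "aux_3": 0.20, "aux_4": 0.30}          # dynamically scheduled
print(multitask_loss(losses, weights))
```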
- Read before Generate! Faithful Long Form Question Answering with Machine Reading [77.17898499652306]
Long-form question answering (LFQA) aims to generate a paragraph-length answer for a given question.
We propose a new end-to-end framework that jointly models answer generation and machine reading.
arXiv Detail & Related papers (2022-03-01T10:41:17Z)
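A crude illustration of "reading before generating": score evidence sentences against the question and condition generation on the top ones. The lexical-overlap scorer and the final join are stand-ins for the paper's learned reader and generator.

```python
# Stand-in reader: overlap scores rank evidence sentences; a real system
# feeds the top sentences to a learned long-form answer generator.

def read_scores(question, sentences):
    q_tokens = set(question.lower().split())
    return [len(q_tokens & set(s.lower().split())) for s in sentences]

def long_form_answer(question, documents, k=3):
    sentences = [s for doc in documents for s in doc.split(". ") if s]
    ranked = sorted(zip(read_scores(question, sentences), sentences),
                    key=lambda pair: pair[0], reverse=True)
    evidence = [s for _, s in ranked[:k]]
    return " ".join(evidence)  # placeholder for conditioned generation
```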
- Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning [7.2745835227138045]
We propose a model for generating question-answer pairs (QA pairs) with self-contained, summary-centric questions and length-constrained, article-summarizing answers.
An accompanying dataset is used to learn a QA-pair generation model that produces article-summarizing answers balancing brevity with sufficiency, jointly with their corresponding questions.
arXiv Detail & Related papers (2021-09-10T06:34:55Z)
- Generating Answer Candidates for Quizzes and Answer-Aware Question Generators [16.44011627249311]
We propose a model that can generate a specified number of answer candidates for a given passage of text.
Our experiments show that our proposed answer candidate generation model outperforms several baselines.
arXiv Detail & Related papers (2021-08-29T19:33:51Z)
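The entry above generates a specified number of answer candidates per passage; as a toy stand-in for the learned model, the sketch below ranks capitalized tokens by frequency.

```python
# Toy candidate generator: a frequency heuristic over capitalized tokens
# stands in for the paper's learned answer-candidate model.

from collections import Counter

def answer_candidates(passage, n=3):
    tokens = [w.strip(".,") for w in passage.split() if w[:1].isupper()]
    return [w for w, _ in Counter(tokens).most_common(n)]

print(answer_candidates(
    "Marie Curie won the Nobel Prize in Physics. Curie was born in Warsaw."))
# -> ['Curie', 'Marie', 'Nobel']
```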
- Cooperative Learning of Zero-Shot Machine Reading Comprehension [9.868221447090855]
We propose a cooperative, self-play learning model for question generation and answering.
We can train question generation and answering models on any textual corpus without annotation.
Our model outperforms the state-of-the-art pretrained language models on standard question answering benchmarks.
arXiv Detail & Related papers (2021-03-12T18:22:28Z)
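A schematic of one cooperative self-play round as described above: a question generator proposes QA pairs over raw text, a QA model answers independently, and agreement supplies the training signal. Both models are passed in as stand-ins.

```python
# One self-play round: keep only examples where the QA model's prediction
# agrees with the generator's proposed answer; these feed the next update.

def self_play_round(texts, question_generator, qa_model):
    kept = []
    for text in texts:
        question, proposed = question_generator(text)
        predicted = qa_model(text, question)
        if predicted == proposed:   # agreement acts as the reward signal
            kept.append((text, question, proposed))
    return kept
```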
- Understanding Unnatural Questions Improves Reasoning over Text [54.235828149899625]
Complex question answering (CQA) over raw text is a challenging task.
Learning an effective CQA model requires large amounts of human-annotated data.
We address the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions.
arXiv Detail & Related papers (2020-10-19T10:22:16Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
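Since the entry above evaluates GPT-2-based question generation, a minimal generation call of that kind is sketched below; the stock checkpoint and prompt format are assumptions, not the paper's fine-tuned model or input encoding.

```python
# Minimal GPT-2 text generation; a fine-tuned checkpoint and the paper's
# actual input format would replace the stock model and toy prompt.

from transformers import pipeline

lm = pipeline("text-generation", model="gpt2")
prompt = ("A tornado tore through the city on Friday evening. "
          "A curious reader might ask:")
print(lm(prompt, max_new_tokens=20)[0]["generated_text"])
```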