Improving Controllability of Educational Question Generation by Keyword
Provision
- URL: http://arxiv.org/abs/2112.01012v1
- Date: Thu, 2 Dec 2021 06:54:44 GMT
- Title: Improving Controllability of Educational Question Generation by Keyword
Provision
- Authors: Ying-Hong Chan, Ho-Lam Chung, Yao-Chung Fan
- Abstract summary: We report a state-of-the-art exam-like QG model, advancing the current best model from 11.96 to 20.19 BLEU-4.
We propose to investigate a variant of QG setting by allowing users to provide keywords for guiding QG direction.
Experiments demonstrate the feasibility and potential of improving QG diversity and controllability.
- Score: 2.305378099875569
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Question Generation (QG) has received increasing research attention in
the NLP community. One motivation for QG is that it significantly facilitates
the preparation of educational reading practice and assessments. While
significant advances in QG techniques have been reported, current QG results
are not yet adequate for educational reading practice assessment in terms of
\textit{controllability} and \textit{question difficulty}. This paper reports
our results on these two issues. First, we report a state-of-the-art exam-like
QG model, advancing the current best model from 11.96 to 20.19 (in terms of
BLEU-4 score). Second, we investigate a variant of the QG setting in which
users provide keywords to guide the QG direction. We also present a simple but
effective model for the QG controllability task. Experiments demonstrate the
feasibility and potential of improving QG diversity and controllability with
the proposed keyword-provision QG model.
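The abstract does not specify how user-supplied keywords are fed to the model. A common way to realize keyword-provision QG with a seq2seq model is to serialize the keywords, answer, and passage into a single source string so the encoder is conditioned on the guidance signal. A minimal sketch follows; the field labels `keywords:`, `answer:`, and `context:` are illustrative assumptions, not the authors' actual input format:

```python
def build_qg_input(context: str, answer: str, keywords: list[str]) -> str:
    """Serialize a keyword-guided QG example into one seq2seq source string.

    Keywords come first so the encoder sees the guidance signal before the
    (possibly long) passage; an empty keyword list falls back to "none".
    """
    kw = ", ".join(keywords) if keywords else "none"
    return f"keywords: {kw} answer: {answer} context: {context}"

# Example: steer generation toward a question about energy and sunlight.
src = build_qg_input(
    context="Plants convert sunlight into chemical energy through photosynthesis.",
    answer="photosynthesis",
    keywords=["sunlight", "energy"],
)
```

The resulting string would then be passed to a fine-tuned encoder-decoder model (e.g., T5 or BART) whose target side is the reference question.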
Related papers
- Advancing Question Generation with Joint Narrative and Difficulty Control [0.0]
We propose a strategy for Joint Narrative and Difficulty Control, enabling simultaneous control over these two attributes in the generation of reading comprehension questions.
Our evaluation provides preliminary evidence that this approach is feasible, though it is not effective across all instances.
arXiv Detail & Related papers (2025-06-07T14:26:11Z)
- Davidsonian Scene Graph: Improving Reliability in Fine-grained Evaluation for Text-to-Image Generation [64.64849950642619]
We develop an evaluation framework inspired by formal semantics for evaluating text-to-image models.
We show that Davidsonian Scene Graph (DSG) produces atomic and unique questions organized in dependency graphs.
We also present DSG-1k, an open-sourced evaluation benchmark that includes 1,060 prompts.
arXiv Detail & Related papers (2023-10-27T16:20:10Z)
- Towards Enriched Controllability for Educational Question Generation [0.0]
Question Generation (QG) is a task within Natural Language Processing (NLP).
Recent work on QG aims to control the type of generated questions so that they meet educational needs.
This study aims to enrich controllability in QG by introducing a new guidance attribute: question explicitness.
arXiv Detail & Related papers (2023-06-21T11:21:08Z)
- Towards Diverse and Effective Question-Answer Pair Generation from Children Storybooks [3.850557558248366]
We propose a framework that enhances QA type diversity by producing different interrogative sentences and implicit/explicit answers.
Our framework comprises a QFS-based answer generator, an iterative QA generator, and a relevancy-aware ranker.
arXiv Detail & Related papers (2023-06-11T06:55:59Z)
- Learning Answer Generation using Supervision from Automatic Question Answering Evaluators [98.9267570170737]
We propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA).
We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art.
arXiv Detail & Related papers (2023-05-24T16:57:04Z)
- Closed-book Question Generation via Contrastive Learning [20.644215991166895]
We propose a new QG model empowered by a contrastive learning module and an answer reconstruction module.
We show how to leverage the proposed model to improve existing closed-book QA systems.
arXiv Detail & Related papers (2022-10-13T06:45:46Z)
- Generative Language Models for Paragraph-Level Question Generation [79.31199020420827]
Powerful generative models have led to recent progress in question generation (QG).
It is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches.
We introduce QG-Bench, a benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting.
arXiv Detail & Related papers (2022-10-08T10:24:39Z)
- Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation [87.34509878569916]
This paper focuses on the use case of helping teachers automate the generation of reading comprehension quizzes.
In our study, teachers building a quiz receive question suggestions, which they can either accept or refuse with a reason.
arXiv Detail & Related papers (2022-05-03T18:59:03Z)
- QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation [54.136509061542775]
Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage.
We propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG.
Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models.
arXiv Detail & Related papers (2022-02-14T08:16:47Z)
- Unified Question Generation with Continual Lifelong Learning [41.81627903996791]
Existing QG methods mainly focus on building or training models for specific QG datasets.
We propose a model named UnifiedQG based on lifelong learning techniques, which can continually learn QG tasks.
In addition, we show that a single trained Unified-QG model can improve the performance of 8 Question Answering (QA) systems.
arXiv Detail & Related papers (2022-01-24T14:05:18Z)
- QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering [122.84513233992422]
We propose a new model, QA-GNN, which addresses the problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs).
We show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning.
arXiv Detail & Related papers (2021-04-13T17:32:51Z)
- EQG-RACE: Examination-Type Question Generation [21.17100754955864]
We propose an innovative Examination-type Question Generation approach (EQG-RACE) to generate exam-like questions based on a dataset extracted from RACE.
Two main strategies are employed in EQG-RACE for dealing with discrete answer information and reasoning among long contexts.
Experimental results show state-of-the-art performance of EQG-RACE, clearly superior to the baselines.
arXiv Detail & Related papers (2020-12-11T03:52:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.