Question Generation for Reading Comprehension Assessment by Modeling How
and What to Ask
- URL: http://arxiv.org/abs/2204.02908v1
- Date: Wed, 6 Apr 2022 15:52:24 GMT
- Title: Question Generation for Reading Comprehension Assessment by Modeling How
and What to Ask
- Authors: Bilal Ghanem, Lauren Lutz Coleman, Julia Rivard Dexter, Spencer
McIntosh von der Ohe, Alona Fyshe
- Abstract summary: We study Question Generation (QG) for reading comprehension where inferential questions are critical.
We propose a two-step model (HTA-WTA) that takes advantage of previous datasets.
We show that the HTA-WTA model tests for strong SBRCS by asking deep inferential questions.
- Score: 3.470121495099
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Reading is integral to everyday life, and yet learning to read is a struggle
for many young learners. During lessons, teachers can use comprehension
questions to increase engagement, test reading skills, and improve retention.
Historically such questions were written by skilled teachers, but recently
language models have been used to generate comprehension questions. However,
many existing Question Generation (QG) systems focus on generating literal
questions from the text, and have no way to control the type of the generated
question. In this paper, we study QG for reading comprehension where
inferential questions are critical and extractive techniques cannot be used. We
propose a two-step model (HTA-WTA) that takes advantage of previous datasets,
and can generate questions for a specific targeted comprehension skill. We
propose a new reading comprehension dataset that contains questions annotated
with story-based reading comprehension skills (SBRCS), allowing for a more
complete reader assessment. Across several experiments, our results show that
HTA-WTA outperforms multiple strong baselines on this new dataset. We show that
the HTA-WTA model tests for strong SBRCS by asking deep inferential questions.
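The abstract describes HTA-WTA only at a high level. Below is a minimal sketch of how a two-step, skill-conditioned QG pipeline of this shape could be wired up with an off-the-shelf seq2seq model; the skill inventory, prompt format, and t5-base checkpoint are illustrative assumptions, not the authors' released system.

```python
# Hypothetical two-step, skill-conditioned QG sketch in the spirit of
# HTA-WTA; the model, prompt format, and skill tags are assumptions,
# not the authors' implementation.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed story-based skill inventory (illustrative only).
SKILLS = ["character", "setting", "feeling", "causal relationship",
          "outcome resolution", "prediction"]

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def generate_question(story: str, skill: str) -> str:
    """'What to ask': generate a question conditioned on the target
    skill, passed to the model as a textual control prefix."""
    prompt = f"generate question: skill: {skill} context: {story}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, num_beams=4, max_new_tokens=48)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def ask(story: str) -> list[str]:
    """'How to ask': decide which skills to target for this story.
    A trained model would predict suitable skills; here we naively
    target every skill in the inventory."""
    return [generate_question(story, skill) for skill in SKILLS]

story = "Maya hid her brother's kite because he had broken her toy boat."
for question in ask(story):
    print(question)
```

An off-the-shelf checkpoint would need fine-tuning on skill-annotated question data before the skill prefix carries any meaning; the sketch only shows the conditioning interface.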
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- ChatPRCS: A Personalized Support System for English Reading Comprehension based on ChatGPT [3.847982502219679]
This paper presents a novel personalized support system for reading comprehension, referred to as ChatPRCS.
ChatPRCS employs methods including reading comprehension proficiency prediction, question generation, and automatic evaluation.
arXiv Detail & Related papers (2023-09-22T11:46:44Z)
- Analyzing Multiple-Choice Reading and Listening Comprehension Tests [0.0]
This work investigates how much of a contextual passage needs to be read in multiple-choice reading comprehension tests, and in listening comprehension tests based on conversation transcriptions, to work out the correct answer.
We find that automated reading comprehension systems can perform significantly better than random with partial or even no access to the context passage.
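Purely as an illustration of this kind of truncation analysis (the paper's models, datasets, and scoring method are not specified here; plain GPT-2 log-likelihood is a stand-in for a real comprehension system), one could measure accuracy as a function of how much of the passage is visible:

```python
# Illustrative harness (not the paper's code) for measuring how accuracy on
# multiple-choice comprehension questions changes as the passage is truncated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def option_loglik(context: str, question: str, option: str) -> float:
    """Average log-likelihood of an answer option given passage + question."""
    prompt = f"{context}\nQuestion: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + option, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # Logits at position i predict the token at position i + 1, so the option
    # tokens are scored by the logits one step to their left.
    logprobs = logits[0, prompt_len - 1 : -1].log_softmax(-1)
    targets = full_ids[0, prompt_len:]
    return logprobs[torch.arange(len(targets)), targets].mean().item()

def accuracy_at_fraction(examples, fraction: float) -> float:
    """Answer each question seeing only the first `fraction` of its passage.
    Each example is a dict with keys: passage, question, options, label."""
    correct = 0
    for ex in examples:
        words = ex["passage"].split()
        partial = " ".join(words[: max(1, int(len(words) * fraction))])
        scores = [option_loglik(partial, ex["question"], opt)
                  for opt in ex["options"]]
        correct += int(scores.index(max(scores)) == ex["label"])
    return correct / len(examples)
```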
arXiv Detail & Related papers (2023-07-03T14:55:02Z)
- Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank [3.854023945160742]
Automated answer-aware reading comprehension question generation has significant potential to scale up learner support in educational activities.
One key technical challenge in this setting is that there can be multiple questions, sometimes very different from each other, with the same answer.
We propose 1) a data augmentation method that enriches the training dataset with diverse questions given the same context and answer and 2) an overgenerate-and-rank method to select the best question from a pool of candidates.
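A minimal sketch of the overgenerate-and-rank idea follows, assuming a T5-style generator and substituting a trivial heuristic for the learned ranker the paper describes:

```python
# Overgenerate-and-rank sketch (an assumption-level illustration, not the
# paper's implementation): sample several candidate questions for one
# (context, answer) pair, then keep the one a scorer ranks highest.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def overgenerate(context: str, answer: str, n: int = 8) -> list[str]:
    """Overgenerate: nucleus-sample n diverse candidate questions."""
    prompt = f"generate question: answer: {answer} context: {context}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, do_sample=True, top_p=0.95,
                             num_return_sequences=n, max_new_tokens=48)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def rank(candidates: list[str], answer: str) -> str:
    """Rank: the paper learns a ranker; this stand-in simply prefers
    well-formed candidates that do not leak the answer."""
    def score(q: str) -> int:
        return int(q.endswith("?")) - int(answer.lower() in q.lower())
    return max(candidates, key=score)

context = "The harbor froze early, so the last ferry left in November."
print(rank(overgenerate(context, answer="in November"), "in November"))
```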
arXiv Detail & Related papers (2023-06-15T04:23:25Z)
- SkillQG: Learning to Generate Question for Reading Comprehension Assessment [54.48031346496593]
We present a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
We first frame the comprehension type of questions based on a hierarchical skill-based schema, then formulate SkillQG as a skill-conditioned question generator.
Empirical results demonstrate that SkillQG outperforms baselines in terms of quality, relevance, and skill-controllability.
arXiv Detail & Related papers (2023-05-08T14:40:48Z)
- What should I Ask: A Knowledge-driven Approach for Follow-up Questions Generation in Conversational Surveys [63.51903260461746]
We propose a novel task for knowledge-driven follow-up question generation in conversational surveys.
We construct a new dataset of human-written follow-up questions, annotated with dialogue history and labeled knowledge.
We then propose a two-staged knowledge-driven model for the task, which generates informative and coherent follow-up questions.
arXiv Detail & Related papers (2022-05-23T00:57:33Z)
- Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension [136.82507046638784]
We introduce FairytaleQA, a dataset focusing on narrative comprehension of kindergarten to eighth-grade students.
FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 child-friendly stories.
arXiv Detail & Related papers (2022-03-26T00:20:05Z)
- Open-Retrieval Conversational Machine Reading [80.13988353794586]
In conversational machine reading, systems need to interpret natural language rules, answer high-level questions, and ask follow-up clarification questions.
Existing works assume the rule text is provided for each user question, which neglects the essential retrieval step in real scenarios.
In this work, we propose and investigate an open-retrieval setting of conversational machine reading.
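The retrieval step this setting adds could look roughly like the TF-IDF sketch below; the corpus and retriever here are toy stand-ins, not the paper's actual setup.

```python
# Toy sketch of the retrieval step an open-retrieval setting adds: instead
# of being handed the rule text, the system first retrieves candidate rule
# texts for each user question (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rule_texts = [
    "You qualify for the grant if your annual income is below the threshold.",
    "Refunds are issued within 30 days of a written request.",
    "Applicants must renew their permit every two years.",
]  # stand-in rule corpus

vectorizer = TfidfVectorizer()
corpus_matrix = vectorizer.fit_transform(rule_texts)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k rule texts most similar to the user question."""
    sims = cosine_similarity(vectorizer.transform([question]), corpus_matrix)[0]
    top = sims.argsort()[::-1][:k]
    return [rule_texts[i] for i in top]

print(retrieve("Can I get my money back?"))
```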
arXiv Detail & Related papers (2021-02-17T08:55:01Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
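As a loose illustration of prompting a GPT-2-style model for reader questions (the prompt wording and sampling settings are assumptions, not the fine-tuned setup evaluated in the paper):

```python
# Prompt an off-the-shelf GPT-2 to continue a passage with a curiosity-driven
# question; illustrative only, not the INQUISITIVE authors' model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

passage = ("A new law requires delivery drones to broadcast their "
           "location at all times.")
prompt = passage + "\nA curious reader might ask:"
out = generator(prompt, max_new_tokens=30, do_sample=True, top_p=0.9,
                num_return_sequences=1)
print(out[0]["generated_text"][len(prompt):].strip())
```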
This list is automatically generated from the titles and abstracts of the papers on this site.