I Do Not Understand What I Cannot Define: Automatic Question Generation
With Pedagogically-Driven Content Selection
- URL: http://arxiv.org/abs/2110.04123v1
- Date: Fri, 8 Oct 2021 13:29:13 GMT
- Title: I Do Not Understand What I Cannot Define: Automatic Question Generation
With Pedagogically-Driven Content Selection
- Authors: Tim Steuer, Anna Filighera, Tobias Meuser and Christoph Rensing
- Abstract summary: Posing questions about what learners have read is a well-established way of fostering their text comprehension.
Many textbooks lack self-assessment questions because authoring them is time-consuming and expensive.
This paper introduces a novel pedagogically meaningful content selection mechanism to find question-worthy sentences and answers in arbitrary textbook contents.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most learners fail to develop deep text comprehension when reading textbooks
passively. Posing questions about what learners have read is a well-established
way of fostering their text comprehension. However, many textbooks lack
self-assessment questions because authoring them is time-consuming and
expensive. Automatic question generators may alleviate this scarcity by
generating sound pedagogical questions. However, generating questions
automatically poses linguistic and pedagogical challenges. What should we ask?
And, how do we phrase the question automatically? We address those challenges
with an automatic question generator grounded in learning theory. The paper
introduces a novel pedagogically meaningful content selection mechanism to find
question-worthy sentences and answers in arbitrary textbook contents. We
conducted an empirical evaluation study with educational experts, annotating
150 generated questions in six different domains. Results indicate a high
linguistic quality of the generated questions. Furthermore, the evaluation
results imply that the majority of the generated questions ask about central
information related to the given text and may foster text comprehension in
specific learning scenarios.
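The abstract does not spell out the selection mechanism itself, but the paper's title suggests that definitional content plays a central role. The sketch below is a hypothetical illustration, not the authors' method: it flags definition-like sentences as question-worthy, takes the defined term as the answer, and phrases a template question. All names (DEFINITION_PATTERNS, select_content, phrase_question) and the patterns themselves are assumptions made for illustration.

```python
import re

# Hypothetical content selection sketch. The paper's actual mechanism is not
# described in this abstract; here, sentences that define a concept are
# treated as question-worthy and the defined term becomes the answer.

# Simple definitional patterns ("X is a/an/the Y ...", "X refers to ...").
DEFINITION_PATTERNS = [
    re.compile(r"^(?P<term>[A-Z][\w\s-]{0,40}?)\s+is\s+(a|an|the)\s+"),
    re.compile(r"^(?P<term>[A-Z][\w\s-]{0,40}?)\s+refers\s+to\s+"),
]

def select_content(sentences):
    """Return (sentence, answer_term) pairs judged question-worthy."""
    selected = []
    for sent in sentences:
        for pattern in DEFINITION_PATTERNS:
            match = pattern.match(sent)
            if match:
                selected.append((sent, match.group("term").strip()))
                break
    return selected

def phrase_question(term):
    # Placeholder surface realization; the paper's pipeline implies a
    # trained question generator would produce the actual phrasing.
    return f"What is {term}?"

if __name__ == "__main__":
    text = [
        "Osmosis is a process in which solvent molecules pass a membrane.",
        "The experiment was repeated three times.",
    ]
    for sentence, answer in select_content(text):
        print(phrase_question(answer), "->", answer)
```

A real system would replace the regular expressions with a learned classifier and the template with a neural question generator; the sketch only shows the two-stage shape (select content, then phrase the question) that the abstract describes.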
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implication of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Auto FAQ Generation
We propose a system for generating FAQ documents that extract the salient questions and their corresponding answers from sizeable text documents.
We use existing text summarization, sentence ranking via the TextRank algorithm, and question-generation tools to create an initial set of questions and answers (a minimal TextRank sketch follows this list).
arXiv Detail & Related papers (2024-05-13T03:30:27Z)
- Which questions should I answer? Salience Prediction of Inquisitive Questions
We show that highly salient questions are empirically more likely to be answered in the same article.
We further validate our findings by showing that answering salient questions is an indicator of summarization quality in news.
arXiv Detail & Related papers (2024-04-16T21:33:05Z)
- Answering Ambiguous Questions with a Database of Questions, Answers, and Revisions
We present a new state-of-the-art for answering ambiguous questions that exploits a database of unambiguous questions generated from Wikipedia.
Our method improves performance by 15% on recall measures and 10% on measures which evaluate disambiguating questions from predicted outputs.
arXiv Detail & Related papers (2023-08-16T20:23:16Z)
- SkillQG: Learning to Generate Question for Reading Comprehension Assessment
We present a question generation framework with controllable comprehension types for assessing and improving machine reading comprehension models.
We first frame the comprehension type of questions based on a hierarchical skill-based schema, then formulate SkillQG as a skill-conditioned question generator.
Empirical results demonstrate that SkillQG outperforms baselines in terms of quality, relevance, and skill-controllability.
arXiv Detail & Related papers (2023-05-08T14:40:48Z)
- Automatic question generation based on sentence structure analysis using machine learning approach
This article introduces our framework for generating factual questions from unstructured text in the English language.
It uses a combination of traditional linguistic approaches based on sentence patterns with several machine learning methods.
The framework also includes a question evaluation module which estimates the quality of generated questions.
arXiv Detail & Related papers (2022-05-25T14:35:29Z)
- Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask
We study Question Generation (QG) for reading comprehension where inferential questions are critical.
We propose a two-step model (HTA-WTA) that takes advantage of previous datasets.
We show that the HTA-WTA model tests for strong SCRS by asking deep inferential questions.
arXiv Detail & Related papers (2022-04-06T15:52:24Z)
- Educational Question Generation of Children Storybooks via Question Type Distribution Learning and Event-Centric Summarization
We propose a novel question generation method that first learns the question type distribution of an input story paragraph.
We finetune a pre-trained transformer-based sequence-to-sequence model using silver samples composed of educational question-answer pairs.
Our work indicates the necessity of decomposing question type distribution learning and event-centric summary generation for educational question generation.
arXiv Detail & Related papers (2022-03-27T02:21:19Z)
- Inquisitive Question Generation for High Level Text Comprehension
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
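The Auto FAQ Generation entry above ranks sentences with the TextRank algorithm. The following is a minimal sketch of generic TextRank sentence ranking (Mihalcea & Tarau, 2004) using networkx; it is not that paper's implementation, and the word-overlap similarity function and helper names are illustrative assumptions.

```python
import math
from itertools import combinations

import networkx as nx

def similarity(s1, s2):
    # Word-overlap similarity normalized by log sentence lengths, following
    # TextRank's original formulation (the +1 guards against log(1) = 0 for
    # one-word sentences).
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(w1) + 1) + math.log(len(w2) + 1))

def rank_sentences(sentences):
    # Build a weighted graph over sentences and score nodes with PageRank;
    # higher-scoring sentences are better candidates for question generation.
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i, j in combinations(range(len(sentences)), 2):
        w = similarity(sentences[i], sentences[j])
        if w > 0:
            graph.add_edge(i, j, weight=w)
    scores = nx.pagerank(graph, weight="weight")
    order = sorted(scores, key=scores.get, reverse=True)
    return [sentences[i] for i in order]

if __name__ == "__main__":
    text = [
        "Question generation can support reading comprehension.",
        "Generated questions let readers assess their comprehension.",
        "Unrelated filler sentence about the weather.",
    ]
    for sent in rank_sentences(text):
        print(sent)
```

In the example, the two mutually similar sentences outrank the unrelated one, which is the behavior an extractive salience ranker relies on before questions are generated for the top sentences.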
This list is automatically generated from the titles and abstracts of the papers in this site.