Procedural Generation of STEM Quizzes
- URL: http://arxiv.org/abs/2009.03868v1
- Date: Tue, 8 Sep 2020 17:15:16 GMT
- Title: Procedural Generation of STEM Quizzes
- Authors: Carlos Andujar
- Abstract summary: We argue that procedural question generation greatly facilitates the task of creating varied, formative, up-to-date, adaptive question banks for STEM quizzes.
We present and evaluate a proof-of-concept Python API for script-based question generation.
A side advantage of our system is that the question bank is actually embedded in Python code, making collaboration, version control, and maintenance tasks very easy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Electronic quizzes are used extensively for summative and formative
assessment. Current Learning Management Systems (LMS) allow instructors to
create quizzes through a Graphical User Interface. Despite having a smooth
learning curve, the question generation/editing process with such interfaces is
often slow, and the creation of question variants is mostly limited to random
parameters. In this paper we argue that procedural question generation greatly
facilitates the task of creating varied, formative, up-to-date, adaptive
question banks for STEM quizzes. We present and evaluate a proof-of-concept
Python API for script-based question generation, and propose different question
design patterns that greatly facilitate question authoring. The API supports
questions including mathematical formulas, dynamically generated images and
videos, as well as interactive content such as 3D model viewers. Output
questions can be imported in major LMS. For basic usage, the required
programming skills are minimal. More advanced uses do require some programming
knowledge, but at a level that is common among STEM instructors. A side advantage
of our system is that the question bank is actually embedded in Python code,
making collaboration, version control, and maintenance tasks very easy. We
demonstrate the benefits of script-based generation over traditional GUI-based
approaches, in terms of question richness, authoring speed and content
re-usability.
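As a rough illustration of the script-based approach described in the abstract, the sketch below shows what procedural generation of question variants can look like in plain Python. The function and field names (ohms_law_variant, build_bank, "stem", "answer", "tolerance") are invented for this example and are not the paper's actual API; the point is only that variants are produced by code from random parameters under a fixed seed, so the bank stays reproducible.

```python
# Hypothetical sketch -- not the paper's API. Each call produces one variant of
# a parameterized numeric question; a fixed seed keeps the bank reproducible.
import random

def ohms_law_variant(rng):
    """Generate one Ohm's-law question variant with random parameters."""
    voltage = rng.choice([6, 9, 12, 24])       # supply voltage in volts
    resistance = rng.choice([2, 3, 4, 6, 8])   # resistance in ohms
    current = voltage / resistance             # expected numeric answer in amperes
    stem = (f"A {resistance} ohm resistor is connected to a {voltage} V source. "
            f"What current flows through it, in amperes?")
    return {"stem": stem, "answer": round(current, 3), "tolerance": 0.01}

def build_bank(n_variants, seed=0):
    """Build a list of question variants from a seeded random generator."""
    rng = random.Random(seed)
    return [ohms_law_variant(rng) for _ in range(n_variants)]

if __name__ == "__main__":
    for question in build_bank(3):
        print(question["stem"], "->", question["answer"], "A")
```

Because the bank is ordinary code, the same pattern extends to formulas, generated images, or distractor logic, and the whole bank can be diffed and version-controlled like any other source file, which is the collaboration advantage the abstract points out.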
Related papers
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker from distant supervision provides consistent improvements.
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- Self-Improvement Programming for Temporal Knowledge Graph Question Answering [31.33908040172437]
Temporal Knowledge Graph Question Answering (TKGQA) aims to answer questions with temporal intent over Temporal Knowledge Graphs (TKGs).
Existing end-to-end methods implicitly model the time constraints by learning time-aware embeddings of questions and candidate answers.
We introduce a novel self-improvement programming method for TKGQA (Prog-TQA).
arXiv Detail & Related papers (2024-04-02T08:14:27Z)
- Improving Question Generation with Multi-level Content Planning [70.37285816596527]
This paper addresses the problem of generating questions from a given context and an answer, specifically focusing on questions that require multi-hop reasoning across an extended context.
We propose MultiFactor, a novel QG framework based on multi-level content planning. Specifically, MultiFactor includes two components: an FA-model, which simultaneously selects key phrases and generates full answers, and a Q-model, which takes the generated full answer as an additional input to generate questions.
arXiv Detail & Related papers (2023-10-20T13:57:01Z)
- Automatic Short Math Answer Grading via In-context Meta-learning [2.0263791972068628]
We study the problem of automatic short answer grading for students' responses to math questions.
We use MathBERT, a variant of the popular language model BERT adapted to mathematical content, as our base model.
In addition, we use an in-context learning approach that provides scoring examples as input to the language model.
arXiv Detail & Related papers (2022-05-30T16:26:02Z)
- Automatic question generation based on sentence structure analysis using machine learning approach [0.0]
This article introduces our framework for generating factual questions from unstructured text in the English language.
It uses a combination of traditional linguistic approaches based on sentence patterns with several machine learning methods.
The framework also includes a question evaluation module which estimates the quality of generated questions.
arXiv Detail & Related papers (2022-05-25T14:35:29Z)
- Leaf: Multiple-Choice Question Generation [19.910992586616477]
We present Leaf, a system for generating multiple-choice questions from factual text.
In addition to being very well suited for the classroom, Leaf could also be used in an industrial setting.
arXiv Detail & Related papers (2022-01-22T10:17:53Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Retrieve, Program, Repeat: Complex Knowledge Base Question Answering via Alternate Meta-learning [56.771557756836906]
We present a novel method that automatically learns a retrieval model alternately with the programmer from weak supervision.
Our system leads to state-of-the-art performance on a large-scale task for complex question answering over knowledge bases.
arXiv Detail & Related papers (2020-10-29T18:28:16Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Semantic Graphs for Generating Deep Questions [98.5161888878238]
We propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).
On the HotpotQA deep-question-centric dataset, our model greatly improves performance on questions requiring reasoning over multiple facts, leading to state-of-the-art performance.
arXiv Detail & Related papers (2020-04-27T10:52:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.