Automated Distractor and Feedback Generation for Math Multiple-choice
Questions via In-context Learning
- URL: http://arxiv.org/abs/2308.03234v2
- Date: Thu, 11 Jan 2024 18:59:58 GMT
- Title: Automated Distractor and Feedback Generation for Math Multiple-choice
Questions via In-context Learning
- Authors: Hunter McNichols, Wanyong Feng, Jaewook Lee, Alexander Scarlatos,
Digory Smith, Simon Woodhead, Andrew Lan
- Abstract summary: Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer, easy to grade, and a reliable form of assessment.
To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers.
We propose a simple, in-context learning-based solution for automated distractor and corresponding feedback message generation.
- Score: 43.83422798569986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple-choice questions (MCQs) are ubiquitous at almost all levels of
education since they are easy to administer, easy to grade, and a reliable form of
assessment. An important aspect of MCQs is the distractors, i.e., incorrect
options that are designed to target specific misconceptions or insufficient
knowledge among students. To date, the task of crafting high-quality
distractors has largely remained a labor-intensive process for teachers and
learning content designers, which has limited scalability. In this work, we
explore the task of automated distractor and corresponding feedback message
generation in math MCQs using large language models. We establish a formulation
of these two tasks and propose a simple, in-context learning-based solution.
Moreover, we propose generative AI-based metrics for evaluating the quality of
the feedback messages. We conduct extensive experiments on these tasks using a
real-world MCQ dataset. Our findings suggest that there is a lot of room for
improvement in automated distractor and feedback generation; based on these
findings, we outline several directions for future work.
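The abstract describes an in-context learning approach: the model is shown a few worked examples of question/distractor/feedback triples and asked to continue the pattern for a new question. The following sketch illustrates how such a few-shot prompt might be assembled; the prompt wording, field layout, and example data are illustrative assumptions, not the authors' actual prompts or dataset.

```python
# Hypothetical sketch of few-shot prompt construction for joint distractor
# and feedback generation. The example triple and prompt format below are
# illustrative assumptions only.

FEW_SHOT_EXAMPLES = [
    {
        "question": "What is 3/4 + 1/4?",
        "answer": "1",
        "distractor": "4/8",
        "feedback": "It looks like you added both the numerators and the "
                    "denominators. When the denominators are the same, "
                    "only the numerators are added.",
    },
]

def build_prompt(target_question: str, target_answer: str) -> str:
    """Assemble a few-shot prompt from worked MCQ examples."""
    parts = [
        "Generate one plausible incorrect option (distractor) for each "
        "math question, plus feedback explaining the misconception.\n"
    ]
    for ex in FEW_SHOT_EXAMPLES:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Correct answer: {ex['answer']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}\n"
        )
    # The new question is appended with the same field layout; the model
    # is expected to continue with the "Distractor:" and "Feedback:" lines.
    parts.append(
        f"Question: {target_question}\n"
        f"Correct answer: {target_answer}\n"
        f"Distractor:"
    )
    return "\n".join(parts)

prompt = build_prompt("What is 1/2 + 1/3?", "5/6")
print(prompt)
```

The resulting string would be sent to an LLM completion endpoint; the few-shot examples implicitly constrain both the format of the output and the kind of misconception each distractor should target.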
Related papers
- Math Multiple Choice Question Generation via Human-Large Language Model Collaboration [5.081508251092439]
Multiple choice questions (MCQs) are a popular method for evaluating students' knowledge.
Recent advances in large language models (LLMs) have sparked interest in automating MCQ creation.
This paper introduces a prototype tool designed to facilitate collaboration between LLMs and educators.
arXiv Detail & Related papers (2024-05-01T20:53:13Z)
- Improving Automated Distractor Generation for Math Multiple-choice Questions with Overgenerate-and-rank [44.04217284677347]
We propose a novel method to enhance the quality of generated distractors through overgenerate-and-rank.
Our ranking model increases alignment with human-authored distractors, although human-authored ones are still preferred over generated ones.
arXiv Detail & Related papers (2024-04-19T00:25:44Z)
- Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models [40.50115385623107]
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education since they are easy to administer, easy to grade, and a reliable format for assessment and practice.
One of the most important aspects of MCQs is the distractors, i.e., incorrect options that are designed to target common errors or misconceptions among real students.
To date, the task of crafting high-quality distractors largely remains a labor- and time-intensive process for teachers and learning content designers, which has limited scalability.
arXiv Detail & Related papers (2024-04-02T17:31:58Z)
- Automating question generation from educational text [1.9325905076281444]
The use of question-based activities (QBAs) is widespread in education, forming an integral part of the learning and assessment process.
We design and evaluate an automated question generation tool for formative and summative assessment in schools.
arXiv Detail & Related papers (2023-09-26T15:18:44Z)
- Rethinking Label Smoothing on Multi-hop Question Answering [87.68071401870283]
Multi-Hop Question Answering (MHQA) is a significant area in question answering.
In this work, we analyze the primary factors limiting the performance of multi-hop reasoning.
We propose a novel label smoothing technique, F1 Smoothing, which incorporates uncertainty into the learning process.
arXiv Detail & Related papers (2022-12-19T14:48:08Z)
- Learning to Reuse Distractors to support Multiple Choice Question Generation in Education [19.408786425460498]
This paper studies how a large existing set of manually created answers and distractors can be leveraged to help teachers create new multiple choice questions (MCQs).
We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models.
Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach.
arXiv Detail & Related papers (2022-10-25T12:48:56Z)
- MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding [131.8797942031366]
We present a new QA evaluation benchmark with 1,384 questions over news articles that require cross-media grounding of objects in images onto text.
Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object being referred to and then predicting a span from the news body text to answer the question.
We introduce a novel multimedia data augmentation framework, based on cross-media knowledge extraction and synthetic question-answer generation, to automatically augment data that can provide weak supervision for this task.
arXiv Detail & Related papers (2021-12-20T18:23:30Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to provide feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.