Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge
- URL: http://arxiv.org/abs/2104.04034v1
- Date: Thu, 8 Apr 2021 20:09:58 GMT
- Title: Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge
- Authors: Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan
Zaykov, Jose Miguel Hernandez-Lobato, Richard E. Turner, Richard G. Baraniuk,
Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang
- Abstract summary: This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs) whose distractors embody misconceptions.
We seek to answer the question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios?
We report on our NeurIPS competition in which nearly 400 teams submitted approximately 4000 submissions, with encouragingly diverse and effective approaches to each of our tasks.
- Score: 40.96530220202453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This competition concerns educational diagnostic questions, which are
pedagogically effective, multiple-choice questions (MCQs) whose distractors
embody misconceptions. With a large and ever-increasing number of such
questions, it becomes overwhelming for teachers to know which questions are the
best ones to use for their students. We thus seek to answer the following
question: how can we use data on hundreds of millions of answers to MCQs to
drive automatic personalized learning in large-scale learning scenarios where
manual personalization is infeasible? Success in using MCQ data at scale helps
build more intelligent, personalized learning platforms that ultimately improve
the quality of education en masse. To this end, we introduce a new,
large-scale, real-world dataset and formulate 4 data mining tasks on MCQs that
mimic real learning scenarios and target various aspects of the above question
in a competition setting at NeurIPS 2020. We report on our NeurIPS competition
in which nearly 400 teams submitted approximately 4000 submissions, with
encouragingly diverse and effective approaches to each of our tasks.
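One of the four tasks asks participants to predict whether a student will answer a given question correctly from past answer records. As a rough illustration of that setting, the sketch below trains a logistic matrix-factorization model in NumPy; the toy data, dimensions, and hyperparameters are invented for illustration, and this is not the organizers' baseline.

```python
# Minimal sketch: predict P(correct) for (student, question) pairs with
# logistic matrix factorization. All data and shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_students, n_questions, dim = 1000, 500, 16
# Toy interaction log: rows of (student_id, question_id, is_correct).
triples = np.stack([
    rng.integers(0, n_students, 20000),
    rng.integers(0, n_questions, 20000),
    rng.integers(0, 2, 20000),
], axis=1)

S = 0.01 * rng.standard_normal((n_students, dim))   # student ability factors
Q = 0.01 * rng.standard_normal((n_questions, dim))  # question factors
b_s = np.zeros(n_students)                          # per-student bias
b_q = np.zeros(n_questions)                         # per-question bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, reg = 0.05, 1e-4
for _ in range(5):                       # a few SGD epochs over the log
    rng.shuffle(triples)
    for s, q, y in triples:
        p = sigmoid(S[s] @ Q[q] + b_s[s] + b_q[q])
        g = p - y                        # d(log loss)/d(logit)
        S[s], Q[q] = (S[s] - lr * (g * Q[q] + reg * S[s]),
                      Q[q] - lr * (g * S[s] + reg * Q[q]))
        b_s[s] -= lr * g
        b_q[q] -= lr * g

# Predicted probability that student 3 answers question 7 correctly.
print(sigmoid(S[3] @ Q[7] + b_s[3] + b_q[7]))
```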
Related papers
- LOVA3: Learning to Visual Question Answering, Asking and Assessment [61.51687164769517]
Question answering, asking, and assessment are three innate human traits crucial for understanding the world and acquiring knowledge.
Current Multimodal Large Language Models (MLLMs) primarily focus on question answering, often neglecting the full potential of questioning and assessment skills.
We introduce LOVA3, an innovative framework named "Learning tO Visual question Answering, Asking and Assessment".
arXiv Detail & Related papers (2024-05-23T18:21:59Z)
- AI-TA: Towards an Intelligent Question-Answer Teaching Assistant using Open-Source LLMs [2.6513660158945727]
We introduce an innovative solution that leverages open-source Large Language Models (LLMs) to ensure data privacy.
Our approach combines augmentation techniques such as retrieval-augmented generation (RAG), supervised fine-tuning (SFT), and learning from human preference data.
This work paves the way for the development of AI-TA, an intelligent QA assistant customizable for courses with an online QA platform.
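As a rough illustration of the retrieval step such a pipeline depends on, the sketch below performs TF-IDF retrieval over a toy course corpus and assembles a grounded prompt; the corpus, query, and downstream call to an open-source LLM are placeholders, not AI-TA's implementation.

```python
# Minimal RAG sketch: retrieve relevant course material with TF-IDF, then
# build a context-grounded prompt for a locally hosted open-source LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

course_docs = [  # stand-in for the course's QA-platform archive
    "A binary heap supports insert and extract-min in O(log n) time.",
    "Dijkstra's algorithm does not handle negative edge weights.",
    "Merge sort runs in O(n log n) time and is stable.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(course_docs)

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar documents and prepend them as context."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix).ravel()
    top = scores.argsort()[::-1][:k]
    context = "\n".join(course_docs[i] for i in top)
    return f"Context:\n{context}\n\nStudent question: {question}\nAnswer:"

prompt = build_prompt("Why can't I use Dijkstra with negative weights?")
print(prompt)  # this prompt would be passed to the fine-tuned open LLM
```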
arXiv Detail & Related papers (2023-11-05T21:43:02Z)
- Automated Distractor and Feedback Generation for Math Multiple-choice Questions via In-context Learning [43.83422798569986]
Multiple-choice questions (MCQs) are ubiquitous at almost all levels of education because they are easy to administer and grade, and they provide a reliable form of assessment.
To date, the task of crafting high-quality distractors has largely remained a labor-intensive process for teachers and learning content designers.
We propose a simple, in-context learning-based solution for automatically generating distractors and corresponding feedback messages.
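A minimal sketch of what such an in-context learning setup can look like: a few worked (question, distractor, feedback) examples are placed in the prompt so the model imitates the pattern on a new question. The examples below are invented, and the paper's actual prompts and model interface will differ.

```python
# Few-shot prompt assembly for distractor + feedback generation.
# The worked examples are illustrative, not taken from the paper.
EXAMPLES = [
    {
        "question": "What is 3/4 + 1/4?",
        "distractor": "4/8",
        "feedback": "Adding numerators and denominators separately is a "
                    "common error; only the numerators should be added.",
    },
    {
        "question": "Solve 2x + 3 = 11.",
        "distractor": "x = 7",
        "feedback": "Subtracting 3 from only one side leads to x = 7; "
                    "apply the same operation to both sides.",
    },
]

def distractor_prompt(new_question: str) -> str:
    """Format the worked examples, then leave the new question open-ended."""
    parts = []
    for ex in EXAMPLES:
        parts.append(
            f"Question: {ex['question']}\n"
            f"Distractor: {ex['distractor']}\n"
            f"Feedback: {ex['feedback']}\n"
        )
    parts.append(f"Question: {new_question}\nDistractor:")
    return "\n".join(parts)

print(distractor_prompt("What is 25% of 80?"))  # send to any chat LLM
```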
arXiv Detail & Related papers (2023-08-07T01:03:04Z)
- Learning to Reuse Distractors to support Multiple Choice Question Generation in Education [19.408786425460498]
This paper studies how a large existing set of manually created answers and distractors can be leveraged to help teachers in creating new multiple choice questions (MCQs).
We built several data-driven models based on context-aware question and distractor representations, and compared them with static feature-based models.
Both automatic and human evaluations indicate that context-aware models consistently outperform a static feature-based approach.
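A minimal sketch of distractor reuse framed as nearest-neighbour retrieval: candidate distractors are pooled from the existing questions most similar to a new stem. The paper learns context-aware neural representations; plain TF-IDF stands in here so the sketch stays self-contained, and the question bank is invented.

```python
# Distractor reuse as retrieval: take candidates from the questions in an
# existing bank that are closest to the new stem. TF-IDF is a stand-in for
# the paper's learned context-aware representations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

question_bank = {  # stem -> its existing distractors (illustrative)
    "What is 1/2 + 1/3?": ["2/5", "1/5", "2/6"],
    "What is 2/3 - 1/6?": ["1/3", "1/6", "3/6"],
    "Solve x + 4 = 9.": ["x = 13", "x = -5", "x = 4"],
}

stems = list(question_bank)
vec = TfidfVectorizer().fit(stems)
nn = NearestNeighbors(n_neighbors=2).fit(vec.transform(stems))

def candidate_distractors(new_stem: str) -> list[str]:
    """Pool the distractors of the nearest existing questions."""
    _, idx = nn.kneighbors(vec.transform([new_stem]))
    return [d for i in idx.ravel() for d in question_bank[stems[i]]]

print(candidate_distractors("What is 1/4 + 1/3?"))
```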
arXiv Detail & Related papers (2022-10-25T12:48:56Z)
- Few-Shot Complex Knowledge Base Question Answering via Meta Reinforcement Learning [55.08037694027792]
Complex question-answering (CQA) involves answering complex natural-language questions on a knowledge base (KB).
The conventional neural program induction (NPI) approach exhibits uneven performance when the questions have different types.
This paper proposes a meta-reinforcement learning approach to program induction in CQA to tackle the potential distributional bias in questions.
arXiv Detail & Related papers (2020-10-29T18:34:55Z)
- Unsupervised Multiple Choices Question Answering: Start Learning from Basic Knowledge [75.7135212362517]
We study the possibility of almost unsupervised Multiple Choices Question Answering (MCQA).
The proposed method is shown to outperform the baseline approaches on RACE and to be comparable with some supervised learning approaches on MC500.
arXiv Detail & Related papers (2020-10-21T13:44:35Z)
- Instructions and Guide for Diagnostic Questions: The NeurIPS 2020 Education Challenge [40.96530220202453]
In this competition, participants will focus on the students' answer records to multiple-choice diagnostic questions.
We provide over 20 million examples of students' answers to mathematics questions from Eedi.
arXiv Detail & Related papers (2020-07-23T15:17:36Z)
- Educational Question Mining At Scale: Prediction, Analysis and Personalization [35.42197158180065]
We propose a framework for mining insights from educational questions at scale.
We utilize a state-of-the-art Bayesian deep learning method, in particular the partial variational auto-encoder (p-VAE).
We apply our proposed framework to a real-world dataset with tens of thousands of questions and tens of millions of answers from an online education platform.
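A minimal sketch of the idea behind a VAE over partially observed student-response vectors, assuming PyTorch. Note that the actual p-VAE uses a permutation-invariant set encoder over only the observed entries; this simplified stand-in zero-fills missing answers and concatenates the observation mask, and all shapes and data are invented.

```python
# Simplified stand-in for a partial VAE over student-response vectors:
# missing answers are zero-filled and the observation mask is appended,
# and the reconstruction loss counts only observed entries.
import torch
import torch.nn as nn

N_QUESTIONS, LATENT = 50, 8

class MaskedVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * N_QUESTIONS, 64), nn.ReLU())
        self.mu = nn.Linear(64, LATENT)
        self.logvar = nn.Linear(64, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                                 nn.Linear(64, N_QUESTIONS))

    def forward(self, x, mask):
        h = self.enc(torch.cat([x * mask, mask], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
        return self.dec(z), mu, logvar

def loss_fn(logits, x, mask, mu, logvar):
    # Reconstruction loss only on observed answers, plus the KL term.
    bce = nn.functional.binary_cross_entropy_with_logits(
        logits, x, weight=mask, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kl

# Toy batch: binary correctness with ~40% of entries observed.
x = torch.randint(0, 2, (32, N_QUESTIONS)).float()
mask = (torch.rand(32, N_QUESTIONS) < 0.4).float()
model = MaskedVAE()
logits, mu, logvar = model(x, mask)
print(loss_fn(logits, x, mask, mu, logvar))  # imputation = sigmoid(logits)
```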
arXiv Detail & Related papers (2020-03-12T19:07:49Z)