Instructions and Guide for Diagnostic Questions: The NeurIPS 2020
Education Challenge
- URL: http://arxiv.org/abs/2007.12061v3
- Date: Mon, 12 Apr 2021 22:45:06 GMT
- Title: Instructions and Guide for Diagnostic Questions: The NeurIPS 2020
Education Challenge
- Authors: Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan
Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G.
Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang
- Abstract summary: In this competition, participants will focus on the students' answer records to multiple-choice diagnostic questions.
We provide over 20 million examples of students' answers to mathematics questions from Eedi.
- Score: 40.96530220202453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Digital technologies are becoming increasingly prevalent in education,
enabling personalized, high-quality education resources to be accessible to
students across the world. Importantly, among these resources are diagnostic
questions: the answers that the students give to these questions reveal key
information about the specific nature of misconceptions that the students may
hold. Analyzing the massive quantities of data stemming from students'
interactions with these diagnostic questions can help us more accurately
understand the students' learning status and thus allow us to automate learning
curriculum recommendations. In this competition, participants will focus on the
students' answer records to these multiple-choice diagnostic questions, with
the aim of 1) accurately predicting which answers the students provide; 2)
accurately predicting which questions have high quality; and 3) determining a
personalized sequence of questions for each student that best predicts the
student's answers. These tasks closely mimic the goals of a real-world
educational platform and are highly representative of the educational
challenges faced today. We provide over 20 million examples of students'
answers to mathematics questions from Eedi, a leading educational platform
with which thousands of students interact daily around the globe. Participants
in this competition have a chance to make a lasting, real-world impact on the
quality of personalized education for millions of students across the world.
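All three tasks can be framed as prediction problems over the student-question answer records. As a purely illustrative aid, the sketch below shows a naive baseline for Task 1 (predicting whether a student answers a question correctly) using per-question correctness rates; the column names (UserId, QuestionId, IsCorrect) and the toy data are assumptions for illustration and do not reflect the competition's actual data files.

```python
# Minimal sketch of a Task-1 style baseline: predict whether a student answers a
# diagnostic question correctly. Column names and toy data are hypothetical.
import numpy as np
import pandas as pd

def fit_question_means(train: pd.DataFrame) -> tuple[pd.Series, float]:
    """Estimate per-question correctness rates plus a global fallback rate."""
    per_question = train.groupby("QuestionId")["IsCorrect"].mean()
    global_rate = train["IsCorrect"].mean()
    return per_question, global_rate

def predict(test: pd.DataFrame, per_question: pd.Series, global_rate: float) -> np.ndarray:
    """Predict 1 (correct) when the question's historical correctness rate >= 0.5."""
    probs = test["QuestionId"].map(per_question).fillna(global_rate)
    return (probs >= 0.5).astype(int).to_numpy()

if __name__ == "__main__":
    # Toy interaction log standing in for the real answer records.
    train = pd.DataFrame({
        "UserId":     [1, 1, 2, 2, 3, 3],
        "QuestionId": [10, 11, 10, 12, 11, 12],
        "IsCorrect":  [1, 0, 1, 1, 0, 0],
    })
    test = pd.DataFrame({"UserId": [4, 4], "QuestionId": [10, 13]})
    per_q, g = fit_question_means(train)
    print(predict(test, per_q, g))  # hard 0/1 predictions, [1 1] for this toy data
```

Stronger approaches (matrix factorization, item response theory, deep knowledge tracing) model student-question interactions rather than question difficulty alone; the baseline above only illustrates the shape of the prediction problem.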
Related papers
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the implications of such questions for reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Enhancing Students' Learning Process Through Self-Generated Tests [0.0]
This paper describes an educational experiment aimed at the promotion of students' autonomous learning.
The main idea is to make the student feel part of the evaluation process by including students' questions in the evaluation exams.
Questions uploaded by students are visible to every enrolled student as well as to each involved teacher.
arXiv Detail & Related papers (2024-03-21T09:49:33Z)
- Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer the possibility of resolving this issue by comprehending individual requests.
This paper reviews recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- Towards Mitigating ChatGPT's Negative Impact on Education: Optimizing Question Design through Bloom's Taxonomy [0.0]
This paper introduces an evolutionary approach that aims to identify the best set of Bloom's taxonomy keywords to generate questions that these tools have low confidence in answering.
The effectiveness of this approach is evaluated through a case study that uses questions from a Data Structures and Representation course being taught at the University of New South Wales in Canberra, Australia.
arXiv Detail & Related papers (2023-03-31T00:01:59Z)
- Question Personalization in an Intelligent Tutoring System [5.644357169513361]
We show that generating versions of the questions suitable for students at different levels of subject proficiency improves student learning gains.
This insight demonstrates that the linguistic realization of questions in an ITS affects the learning outcomes for students.
arXiv Detail & Related papers (2022-05-25T15:23:51Z)
- Continuous Examination by Automatic Quiz Assessment Using Spiral Codes and Image Processing [69.35569554213679]
Paper quizzes are affordable and within reach of campus education in classrooms.
However, correcting the quizzes is a considerable obstacle.
We suggest mitigating the issue with a novel image-processing technique.
arXiv Detail & Related papers (2022-01-26T22:58:15Z)
- Results and Insights from Diagnostic Questions: The NeurIPS 2020 Education Challenge [40.96530220202453]
This competition concerns educational diagnostic questions, which are pedagogically effective, multiple-choice questions (MCQs).
We seek to answer the question: how can we use data on hundreds of millions of answers to MCQs to drive automatic personalized learning in large-scale learning scenarios?
We report on our NeurIPS competition in which nearly 400 teams submitted approximately 4000 submissions, with encouragingly diverse and effective approaches to each of our tasks.
arXiv Detail & Related papers (2021-04-08T20:09:58Z)
- Educational Question Mining At Scale: Prediction, Analysis and Personalization [35.42197158180065]
We propose a framework for mining insights from educational questions at scale.
We utilize a state-of-the-art Bayesian deep learning method, in particular partial variational auto-encoders (p-VAE); a minimal illustrative sketch follows this list.
We apply our proposed framework to a real-world dataset with tens of thousands of questions and tens of millions of answers from an online education platform.
arXiv Detail & Related papers (2020-03-12T19:07:49Z)
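As referenced in the last entry above, here is a rough sketch of the partial-VAE (p-VAE) idea: encode only the observed (question, answer) pairs with a permutation-invariant set encoder and decode a correctness probability for every question, so unanswered questions can be predicted. This assumes PyTorch; all layer sizes, names, and the toy batch are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal, hypothetical p-VAE-style sketch for partially observed answer records.
import torch
import torch.nn as nn

class PartialVAE(nn.Module):
    def __init__(self, n_questions: int, embed_dim: int = 16, latent_dim: int = 8):
        super().__init__()
        self.question_embed = nn.Embedding(n_questions, embed_dim)
        # Maps one (question embedding, observed answer) pair to a feature vector.
        self.point_net = nn.Sequential(nn.Linear(embed_dim + 1, 32), nn.ReLU())
        self.to_mu = nn.Linear(32, latent_dim)
        self.to_logvar = nn.Linear(32, latent_dim)
        # Decoder outputs a logit of "answered correctly" for every question.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_questions)
        )

    def encode(self, answers: torch.Tensor, mask: torch.Tensor):
        """answers, mask: (batch, n_questions); mask marks which answers are observed."""
        q_emb = self.question_embed.weight.unsqueeze(0).expand(answers.size(0), -1, -1)
        pair = torch.cat([q_emb, answers.unsqueeze(-1)], dim=-1)
        feats = self.point_net(pair) * mask.unsqueeze(-1)  # zero out unobserved pairs
        summary = feats.sum(dim=1)                         # permutation-invariant sum
        return self.to_mu(summary), self.to_logvar(summary)

    def forward(self, answers, mask):
        mu, logvar = self.encode(answers, mask)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

if __name__ == "__main__":
    model = PartialVAE(n_questions=5)
    answers = torch.tensor([[1., 0., 0., 0., 1.]])  # 0s may be unobserved or wrong...
    mask = torch.tensor([[1., 1., 0., 0., 1.]])     # ...so the mask marks observed entries
    logits, mu, logvar = model(answers, mask)
    # Training loss (negative ELBO): Bernoulli reconstruction on observed answers + KL term.
    recon = nn.functional.binary_cross_entropy_with_logits(
        logits, answers, weight=mask, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + kl
    print(loss.item())
```

Because the encoder sums features over only the observed pairs, the same model handles students who answered different subsets of questions, which is the setting the answer-prediction task above describes.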