Assessing the Knowledge State of Online Students -- New Data, New
Approaches, Improved Accuracy
- URL: http://arxiv.org/abs/2109.01753v1
- Date: Sat, 4 Sep 2021 00:08:59 GMT
- Title: Assessing the Knowledge State of Online Students -- New Data, New
Approaches, Improved Accuracy
- Authors: Robin Schmucker, Jingbo Wang, Shijia Hu, Tom M. Mitchell
- Abstract summary: Student performance (SP) modeling is a critical step for building adaptive online teaching systems.
This study is the first to use four very large datasets made available recently from four distinct intelligent tutoring systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of assessing the changing knowledge state of
individual students as they go through online courses. This student performance
(SP) modeling problem, also known as knowledge tracing, is a critical step for
building adaptive online teaching systems. Specifically, we conduct a study of
how to utilize various types and large amounts of student log data to train
accurate machine learning models that predict the knowledge state of future
students. This study is the first to use four very large datasets made
available recently from four distinct intelligent tutoring systems. Our results
include a new machine learning approach that defines a new state of the art for
SP modeling, improving over earlier methods in several ways: First, we achieve
improved accuracy by introducing new features that can be easily computed from
conventional question-response logs (e.g., the pattern in the student's most
recent answers). Second, we take advantage of features of the student history
that go beyond question-response pairs (e.g., which video segments the student
watched, or skipped) as well as information about prerequisite structure in the
curriculum. Third, we train multiple specialized models for different
aspects of the curriculum (e.g., specializing in early versus later segments of
the student history), then combine these specialized models to create a group
prediction of student knowledge. Taken together, these innovations yield an
average AUC score across these four datasets of 0.807 compared to the previous
best logistic regression approach score of 0.766, while also outperforming
state-of-the-art deep neural net approaches. Importantly, we observe consistent
improvements from each of our three methodological innovations, in each
dataset, suggesting that our methods are of general utility and likely to
produce improvements for other online tutoring systems as well.
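The first innovation, features computed from conventional question-response logs, can be sketched concretely. The following is a hedged illustration of the general idea only: the function name and the exact encoding are assumptions of mine, not the authors' published feature set.

```python
# Hypothetical sketch of a "recent answer pattern" feature: encode a student's
# k most recent responses from a chronological question-response log.
def recent_pattern_features(responses, k=3):
    """responses: list of 0/1 correctness flags in chronological order.

    Returns (pattern_id, recent_accuracy) over the last k responses.
    pattern_id encodes the ordered outcome of the last k answers as an
    integer in [0, 2**k), so a model can distinguish e.g. wrong-wrong-right
    from right-wrong-wrong; recent_accuracy is the fraction correct.
    Shorter histories are left-padded with 0 (treated as incorrect)."""
    window = ([0] * k + list(responses))[-k:]
    pattern_id = sum(bit << i for i, bit in enumerate(reversed(window)))
    recent_accuracy = sum(window) / k
    return pattern_id, recent_accuracy
```

Such features are cheap to compute from any tutoring system's logs, which is one reason they transfer across the four datasets.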
Related papers
- Multi-Stage Knowledge Integration of Vision-Language Models for Continual Learning [79.46570165281084]
We propose a Multi-Stage Knowledge Integration network (MulKI) to emulate the human learning process in distillation methods.
MulKI achieves this through four stages, including Eliciting Ideas, Adding New Ideas, Distinguishing Ideas, and Making Connections.
Our method demonstrates significant improvements in maintaining zero-shot capabilities while supporting continual learning across diverse downstream tasks.
arXiv Detail & Related papers (2024-11-11T07:36:19Z)
- Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of former knowledge when new knowledge is acquired.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Predicting student performance using sequence classification with time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models.
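As a rough sketch of what time-based windowing of behavioral data can look like (my reading of the general idea, not this paper's exact methodology; names and parameters are hypothetical):

```python
# Bucket each student's timestamped events into fixed-length time windows and
# count events per window, yielding a fixed-size feature vector for a classifier.
def windowed_counts(events, window_seconds, n_windows):
    """events: list of (timestamp_seconds, action) pairs, timestamps relative
    to course start. Returns a length-n_windows vector of event counts;
    events beyond the last window are folded into the final window."""
    counts = [0] * n_windows
    for t, _action in events:
        idx = min(int(t // window_seconds), n_windows - 1)
        counts[idx] += 1
    return counts
```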
arXiv Detail & Related papers (2022-08-16T13:46:39Z)
- Transferable Student Performance Modeling for Intelligent Tutoring Systems [24.118429574890055]
We consider transfer learning techniques as a way to provide accurate performance predictions for new courses by leveraging log data from existing courses.
We evaluate the proposed techniques using student interaction sequence data from 5 different mathematics courses containing data from over 47,000 students in a real world large-scale ITS.
arXiv Detail & Related papers (2022-02-08T16:36:27Z)
- Ex-Model: Continual Learning from a Stream of Trained Models [12.27992745065497]
We argue that continual learning systems should exploit the availability of compressed information in the form of trained models.
We introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data.
arXiv Detail & Related papers (2021-12-13T09:46:16Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gain in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Do we need to go Deep? Knowledge Tracing with Big Data [5.218882272051637]
We use EdNet, the largest student interaction dataset publicly available in the education domain, to understand how accurately both deep and traditional models predict future student performances.
Through extensive experimentation, we observe that logistic regression models with carefully engineered features outperform deep models.
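For intuition, a feature-based logistic regression of this kind reduces to a sigmoid over a weighted sum of engineered features. A minimal sketch follows; the weights are placeholders rather than fitted values, and the feature names are illustrative, not EdNet-specific.

```python
import math

def predict_correct(features, weights, bias=0.0):
    """Probability the student answers the next question correctly, as a
    logistic function of a dot product over engineered features (e.g.,
    counts of prior correct/incorrect attempts per skill)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

The appeal of such models is interpretability and cheap training; the paper above finds they can match or beat deep models when the features are chosen well.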
arXiv Detail & Related papers (2021-01-20T22:40:38Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.