A Deep Learning Approach to Behavior-Based Learner Modeling
- URL: http://arxiv.org/abs/2001.08328v1
- Date: Thu, 23 Jan 2020 01:26:52 GMT
- Title: A Deep Learning Approach to Behavior-Based Learner Modeling
- Authors: Yuwei Tu, Weiyu Chen, Christopher G. Brinton
- Abstract summary: We study learner outcome predictions, i.e., predictions of how learners will perform at the end of a course.
We propose a novel Two Branch Decision Network for performance prediction that incorporates two important factors: how learners progress through the course and how the content progresses through the course.
Our proposed algorithm achieves 95.7% accuracy and a 0.958 AUC score, outperforming all other models.
- Score: 11.899303239960412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing popularity of e-learning has created demand for improving
online education through techniques such as predictive analytics and content
recommendations. In this paper, we study learner outcome predictions, i.e.,
predictions of how they will perform at the end of a course. We propose a novel
Two Branch Decision Network for performance prediction that incorporates two
important factors: how learners progress through the course and how the content
progresses through the course. We combine clickstream features which log every
action the learner takes while learning, and textual features which are
generated through pre-trained GloVe word embeddings. To assess the performance
of our proposed network, we collect data from a short online course designed
for corporate training and evaluate both neural network and non-neural network
based algorithms on it. Our proposed algorithm achieves 95.7% accuracy and a
0.958 AUC score, outperforming all other models. The results also indicate
that the combination of behavior and text features is more predictive than
behavior features alone, and that neural network models are powerful in
capturing the joint relationship between user behavior and course content.
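The two-branch idea described above can be sketched as follows. This is a minimal illustrative forward pass, not the paper's actual architecture: the layer sizes, the ReLU/sigmoid choices, and the use of simple feature averaging for the GloVe text branch are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoBranchNet:
    """Toy two-branch model: one branch for clickstream (behavior)
    features, one for text features (e.g. averaged GloVe vectors).
    The branch outputs are concatenated and a sigmoid head scores
    the learner's predicted pass/fail outcome."""

    def __init__(self, d_click, d_text, hidden=16):
        self.Wc = rng.normal(0.0, 0.1, (d_click, hidden))  # behavior branch
        self.Wt = rng.normal(0.0, 0.1, (d_text, hidden))   # text branch
        self.Wo = rng.normal(0.0, 0.1, (2 * hidden, 1))    # joint head

    def forward(self, x_click, x_text):
        hc = relu(x_click @ self.Wc)                # behavior representation
        ht = relu(x_text @ self.Wt)                 # content representation
        h = np.concatenate([hc, ht], axis=-1)       # joint representation
        return sigmoid(h @ self.Wo).squeeze(-1)     # P(pass) per learner

net = TwoBranchNet(d_click=8, d_text=50)  # 50-d GloVe is a common size
x_click = rng.normal(size=(4, 8))   # 4 learners, 8 clickstream features
x_text = rng.normal(size=(4, 50))   # 4 learners, averaged word embeddings
probs = net.forward(x_click, x_text)
print(probs.shape)  # (4,)
```

Concatenating the two branch outputs before the final head is what lets the model learn the joint behavior-content relationship the abstract refers to, rather than scoring each feature family independently.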
Related papers
- Artificial Neural Network and Deep Learning: Fundamentals and Theory [0.0]
This book lays a solid groundwork for understanding data and probability distributions.
The book delves into multilayer feed-forward neural networks, explaining their architecture, training processes, and the backpropagation algorithm.
The text covers various learning rate schedules and adaptive algorithms, providing strategies to optimize the training process.
arXiv Detail & Related papers (2024-08-12T21:06:59Z) - Towards Diverse Evaluation of Class Incremental Learning: A Representation Learning Perspective [67.45111837188685]
Class incremental learning (CIL) algorithms aim to continually learn new object classes from incrementally arriving data.
We experimentally analyze neural network models trained by CIL algorithms using various evaluation protocols in representation learning.
arXiv Detail & Related papers (2022-06-16T11:44:11Z) - Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z) - Click-Based Student Performance Prediction: A Clustering Guided
Meta-Learning Approach [10.962724342736042]
We study the problem of predicting student knowledge acquisition in online courses from clickstream behavior.
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
arXiv Detail & Related papers (2021-10-28T14:03:29Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
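The feature-extrapolation step in the second module can be illustrated with a single round of mean-aggregation message passing over a feature-data bipartite graph. This is a hedged sketch of the idea only; the paper's actual GNN, its aggregation functions, and the embedding dimensions here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# 3 observations x 3 known features; an observation is linked to each
# feature it has a nonzero value for.
X = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 0.0]])

feat_emb = rng.normal(size=(3, 4))  # embeddings of the 3 known features

# Message pass features -> data: each observation averages the
# embeddings of its active features.
adj = (X != 0).astype(float)                          # incidence matrix
data_emb = adj @ feat_emb / adj.sum(1, keepdims=True)

# A new, previously unseen feature observed on rows 0 and 2 gets an
# embedding via the reverse pass (data -> feature), with no retraining
# of the lower backbone model.
new_col = np.array([1.0, 0.0, 5.0])
mask = (new_col != 0).astype(float)
new_feat_emb = mask @ data_emb / mask.sum()
print(new_feat_emb.shape)  # (4,)
```

The point of the two-module split is visible here: the backbone consumes feature embeddings, while the graph module can mint embeddings for new features at test time by passing messages over the observed data.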
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Graph-based Exercise- and Knowledge-Aware Learning Network for Student
Performance Prediction [8.21303828329009]
We propose a Graph-based Exercise- and Knowledge-Aware Learning Network for accurate student score prediction.
We learn students' mastery of exercises and knowledge concepts respectively to model the two-fold effects of exercises and knowledge concepts.
arXiv Detail & Related papers (2021-06-01T06:53:17Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - A framework for predicting, interpreting, and improving Learning
Outcomes [0.0]
We develop an Embibe Score Quotient model (ESQ) to predict test scores based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as offer personalized learning nudges.
arXiv Detail & Related papers (2020-10-06T11:22:27Z) - Tighter risk certificates for neural networks [10.462889461373226]
We present two training objectives, used here for the first time in connection with training neural networks.
We also re-implement a previously used training objective based on a classical PAC-Bayes bound.
We compute risk certificates for the learnt predictors, based on part of the data used to learn the predictors.
arXiv Detail & Related papers (2020-07-25T11:02:16Z) - EPARS: Early Prediction of At-risk Students with Online and Offline
Learning Behaviors [55.33024245762306]
Early prediction of students at risk (STAR) is an effective and significant means to provide timely intervention for dropout and suicide.
Existing works mostly rely on either online or offline learning behaviors which are not comprehensive enough to capture the whole learning processes.
We propose a novel algorithm (EPARS) that could early predict STAR in a semester by modeling online and offline learning behaviors.
arXiv Detail & Related papers (2020-06-06T12:56:26Z) - The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.