Click-Based Student Performance Prediction: A Clustering Guided
Meta-Learning Approach
- URL: http://arxiv.org/abs/2111.00901v1
- Date: Thu, 28 Oct 2021 14:03:29 GMT
- Title: Click-Based Student Performance Prediction: A Clustering Guided
Meta-Learning Approach
- Authors: Yun-Wei Chu, Elizabeth Tenorio, Laura Cruz, Kerrie Douglas, Andrew S.
Lan, Christopher G. Brinton
- Abstract summary: We study the problem of predicting student knowledge acquisition in online courses from clickstream behavior.
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
- Score: 10.962724342736042
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of predicting student knowledge acquisition in online
courses from clickstream behavior. Motivated by the proliferation of eLearning
lecture delivery, we specifically focus on student in-video activity in
lecture videos, which consist of content and in-video quizzes. Our methodology
for predicting in-video quiz performance is based on three key ideas we
develop. First, we model students' clicking behavior via time-series learning
architectures operating on raw event data, rather than defining hand-crafted
features as in existing approaches that may lose important information embedded
within the click sequences. Second, we develop a self-supervised clickstream
pre-training to learn informative representations of clickstream events that
can initialize the prediction model effectively. Third, we propose a clustering
guided meta-learning-based training that optimizes the prediction model to
exploit clusters of frequent patterns in student clickstream sequences. Through
experiments on three real-world datasets, we demonstrate that our method
obtains substantial improvements over two baseline models in predicting
students' in-video quiz performance. Further, we validate the importance of the
pre-training and meta-learning components of our framework through ablation
studies. Finally, we show how our methodology reveals insights on
video-watching behavior associated with knowledge acquisition for useful
learning analytics.
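To make the three ideas above concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of a recurrent model over raw click events with a self-supervised next-event pre-training head, a downstream quiz-correctness head, and a simple clustering of sequence embeddings standing in for the cluster-guided step. The event vocabulary, layer sizes, pre-training objective, and clustering choice are all assumptions made for illustration.

```python
# Hypothetical sketch, not the paper's code: sequence model over raw click events
# with (a) a self-supervised next-event head, (b) a quiz-correctness head, and
# (c) a clustering of sequence embeddings as a stand-in for the cluster-guided step.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

NUM_EVENT_TYPES = 8  # assumed event vocabulary: play, pause, seek, rate change, ...

class ClickstreamModel(nn.Module):
    def __init__(self, num_events=NUM_EVENT_TYPES, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_events, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.next_event_head = nn.Linear(hidden_dim, num_events)  # pre-training head
        self.quiz_head = nn.Linear(hidden_dim, 1)                 # downstream head

    def forward(self, event_ids):
        # event_ids: (batch, seq_len) integer-coded raw click events
        h, _ = self.encoder(self.embed(event_ids))
        return self.next_event_head(h), torch.sigmoid(self.quiz_head(h[:, -1]))

model = ClickstreamModel()
events = torch.randint(0, NUM_EVENT_TYPES, (4, 20))  # toy batch of click sequences

# Self-supervised pre-training: predict each next click event from the prefix so far.
next_logits, quiz_prob = model(events)
pretrain_loss = nn.functional.cross_entropy(
    next_logits[:, :-1].reshape(-1, NUM_EVENT_TYPES), events[:, 1:].reshape(-1))

# Cluster-guided step (stand-in): group students by their sequence embeddings;
# clusters of frequent click patterns could then define meta-learning tasks.
with torch.no_grad():
    emb = model.encoder(model.embed(events))[0][:, -1]
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(emb.numpy())
```

In the paper's framework, such clusters of frequent clickstream patterns guide how the meta-learning procedure optimizes the prediction model; the sketch above only indicates how the pieces could fit together, not the actual training loop.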
Related papers
- Learning from One Continuous Video Stream [70.30084026960819]
We introduce a framework for online learning from a single continuous video stream.
This poses great challenges given the high correlation between consecutive video frames.
We employ pixel-to-pixel modelling as a practical and flexible way to switch between pre-training and single-stream evaluation.
arXiv Detail & Related papers (2023-12-01T14:03:30Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - Predicting student performance using sequence classification with
time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models.
arXiv Detail & Related papers (2022-08-16T13:46:39Z) - Reinforcement Learning with Action-Free Pre-Training from Videos [95.25074614579646]
We introduce a framework that learns representations useful for understanding the dynamics via generative pre-training on videos.
Our framework significantly improves both final performances and sample-efficiency of vision-based reinforcement learning.
arXiv Detail & Related papers (2022-03-25T19:44:09Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Learning by Distillation: A Self-Supervised Learning Framework for
Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
arXiv Detail & Related papers (2021-06-08T09:13:34Z) - Learning Actor-centered Representations for Action Localization in
Streaming Videos using Predictive Learning [18.757368441841123]
Event perception tasks such as recognizing and localizing actions in streaming videos are essential for tackling visual understanding tasks.
We tackle the problem of learning actor-centered representations through the notion of continual hierarchical predictive learning.
Inspired by cognitive theories of event perception, we propose a novel, self-supervised framework.
arXiv Detail & Related papers (2021-04-29T06:06:58Z) - Dropout Prediction over Weeks in MOOCs by Learning Representations of
Clicks and Videos [6.030785848148107]
We develop a method to learn representations for videos and the correlation between videos and clicks.
The results indicate that modeling videos and their correlation with clicks bring statistically significant improvements in predicting dropout.
arXiv Detail & Related papers (2020-02-05T19:10:01Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z) - A Deep Learning Approach to Behavior-Based Learner Modeling [11.899303239960412]
We study learner outcome predictions, i.e., predictions of how learners will perform at the end of a course.
We propose a novel Two Branch Decision Network for performance prediction that incorporates two important factors: how learners progress through the course and how the content progresses through the course.
Our proposed algorithm achieves 95.7% accuracy and 0.958 AUC score, which outperforms all other models.
arXiv Detail & Related papers (2020-01-23T01:26:52Z)
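As a rough illustration of the two-branch idea summarized in the last entry above, the following hypothetical sketch (not the authors' architecture) encodes a learner-behavior sequence and a content sequence in separate branches and fuses their summaries for an outcome prediction; the branch types, dimensions, and fusion step are assumptions for illustration.

```python
# Hypothetical two-branch outcome predictor: one branch encodes the learner's
# behavior sequence, the other the content sequence; summaries are fused for a
# final performance prediction. Names and sizes are assumed, not from the paper.
import torch
import torch.nn as nn

class TwoBranchPredictor(nn.Module):
    def __init__(self, behavior_dim=16, content_dim=16, hidden_dim=32):
        super().__init__()
        self.behavior_branch = nn.GRU(behavior_dim, hidden_dim, batch_first=True)
        self.content_branch = nn.GRU(content_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)  # fused decision layer

    def forward(self, behavior_seq, content_seq):
        _, hb = self.behavior_branch(behavior_seq)  # learner progression summary
        _, hc = self.content_branch(content_seq)    # content progression summary
        fused = torch.cat([hb[-1], hc[-1]], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # predicted outcome probability

model = TwoBranchPredictor()
p = model(torch.randn(4, 10, 16), torch.randn(4, 10, 16))  # toy sequences
```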
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.