Understanding Self-Regulated Learning Behavior Among High and Low Dropout Risk Students During CS1: Combining Trace Logs, Dropout Prediction and Self-Reports
- URL: http://arxiv.org/abs/2506.09178v1
- Date: Tue, 10 Jun 2025 18:46:45 GMT
- Authors: Denis Zhidkikh, Ville Isomöttönen, Toni Taipalus
- Abstract summary: This study explores the behavioral patterns of Computer Science students at varying dropout risks. Using learning analytics, we analyzed trace logs and task performance data from a virtual learning environment. The findings reveal distinct weekly learning strategy types and categorize course behavior.
- Score: 8.138288420049127
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The introductory programming course (CS1) at the university level is often perceived as particularly challenging, contributing to high dropout rates among Computer Science students. Identifying when and how students encounter difficulties in this course is critical for providing targeted support. This study explores the behavioral patterns of CS1 students at varying dropout risks using self-regulated learning (SRL) as the theoretical framework. Using learning analytics, we analyzed trace logs and task performance data from a virtual learning environment to map resource usage patterns and used student dropout prediction to distinguish between low and high dropout risk behaviors. Data from 47 consenting students were used to carry out the analysis. Additionally, self-report questionnaires from 29 participants enriched the interpretation of observed patterns. The findings reveal distinct weekly learning strategy types and categorize course behavior. Among low dropout risk students, three learning strategies were identified that differed in how students prioritized completing tasks and reading course materials. High dropout risk students exhibited nine different strategies, some representing temporary unsuccessful strategies that can be recovered from, while others indicated behaviors of students on the verge of dropping out. This study highlights the value of combining student behavior profiling with predictive learning analytics to explain dropout predictions and devise targeted interventions. Practical findings of the study can, in turn, help teachers, teaching assistants, and other practitioners better recognize and address students on the verge of dropping out.
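The abstract's first analysis step is turning raw virtual-learning-environment trace logs into per-student weekly resource-usage features. A minimal sketch of that preprocessing is shown below; the event names, fields, and feature set are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical sketch: aggregate raw VLE trace-log events into weekly
# per-student features such as task submissions and material views.
from collections import defaultdict

def weekly_features(events):
    """events: iterable of (student_id, week, event_type) tuples.
    Returns {(student_id, week): {feature_name: count}}."""
    feats = defaultdict(lambda: {"task_submissions": 0, "material_views": 0})
    for student, week, etype in events:
        key = (student, week)
        if etype == "submit":
            feats[key]["task_submissions"] += 1
        elif etype == "view_material":
            feats[key]["material_views"] += 1
    return dict(feats)

log = [
    ("s1", 1, "view_material"), ("s1", 1, "submit"),
    ("s1", 1, "submit"), ("s2", 1, "view_material"),
]
print(weekly_features(log)[("s1", 1)])
# → {'task_submissions': 2, 'material_views': 1}
```

Feature tables of this shape can then be clustered week by week to surface the distinct strategy types the study reports.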
Related papers
- Beyond classical and contemporary models: a transformative AI framework for student dropout prediction in distance learning using RAG, Prompt engineering, and Cross-modal fusion [0.4369550829556578]
This paper introduces a transformative AI framework that redefines dropout prediction. The framework achieves 89% accuracy and an F1-score of 0.88, outperforming conventional models by 7% and reducing false negatives by 21%.
arXiv Detail & Related papers (2025-07-04T21:41:43Z) - Predicting Student Dropout Risk With A Dual-Modal Abrupt Behavioral Changes Approach [11.034576265432168]
The Dual-Modal Multiscale Sliding Window (DMSW) Model integrates academic performance and behavioral data to capture behavior patterns using minimal data. The DMSW model improves prediction accuracy by 15% compared to traditional methods, enabling educators to identify high-risk students earlier. These findings bridge the gap between theory and practice in dropout prediction, giving educators an innovative tool to enhance student retention and outcomes.
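A hedged sketch of the sliding-window idea behind this kind of model: flag a student when mean activity in the most recent window drops sharply relative to the preceding window. The window size and threshold below are illustrative assumptions, not the DMSW model's actual parameters.

```python
# Illustrative sliding-window check for an abrupt drop in weekly activity.
def abrupt_drop(activity, window=3, ratio=0.5):
    """activity: per-week event counts. Returns indices of weeks where
    the current window's mean falls below `ratio` times the mean of the
    window immediately before it."""
    flags = []
    for t in range(2 * window, len(activity) + 1):
        prev = activity[t - 2 * window : t - window]
        curr = activity[t - window : t]
        prev_mean = sum(prev) / window
        curr_mean = sum(curr) / window
        if prev_mean > 0 and curr_mean < ratio * prev_mean:
            flags.append(t - 1)  # index of the last week in the window
    return flags

weeks = [10, 12, 11, 10, 2, 1, 1]
print(abrupt_drop(weeks))  # → [5, 6]
```

A multiscale variant would run the same check with several window sizes and combine the flags, so both sudden and gradual disengagement are caught.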
arXiv Detail & Related papers (2025-05-16T11:02:55Z) - Leveraging Knowledge Graphs and Large Language Models to Track and Analyze Learning Trajectories [0.0]
The study proposes a knowledge graph construction method based on large language models (LLMs). It transforms learning materials into structured data and generates personalized learning trajectory graphs by analyzing students' test data.
arXiv Detail & Related papers (2025-04-13T16:27:15Z) - DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts students' future performance from their historical learning data, and understanding students' affective states can enhance the effectiveness of KT. We propose Affect Dynamic Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states. Our research highlights a promising avenue for future studies, focusing on achieving high interpretability and accuracy.
arXiv Detail & Related papers (2025-01-18T10:02:10Z) - Early Detection of At-Risk Students Using Machine Learning [0.0]
We aim to tackle the persistent challenges of higher education retention and student dropout rates by screening for at-risk students. This work considers several machine learning models, including Support Vector Machines (SVM), Naive Bayes, K-nearest neighbors (KNN), Decision Trees, Logistic Regression, and Random Forest. Our analysis indicates that all algorithms generate an acceptable outcome for at-risk student predictions, while Naive Bayes performs best overall.
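Since the abstract singles out Naive Bayes, here is a minimal Gaussian Naive Bayes classifier written from scratch on toy at-risk features. The feature names, data, and threshold of "risk" are invented for illustration; a real study would use an established implementation such as scikit-learn's `GaussianNB`.

```python
# Minimal Gaussian Naive Bayes on hypothetical [gpa, attendance] features.
import math
from collections import defaultdict

def fit(X, y):
    """Estimate per-class priors, feature means, and variances."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    stats, n = {}, len(X)
    for c, rows in by_class.items():
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        stats[c] = (len(rows) / n, means, variances)
    return stats

def predict(stats, x):
    """Pick the class with the highest log posterior for sample x."""
    def log_post(c):
        prior, means, variances = stats[c]
        ll = math.log(prior)
        for v, m, var in zip(x, means, variances):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return ll
    return max(stats, key=log_post)

# Toy training data: label 1 = at risk.
X = [[3.5, 0.90], [3.8, 0.95], [2.0, 0.50], [1.8, 0.40]]
y = [0, 0, 1, 1]
model = fit(X, y)
print(predict(model, [2.1, 0.45]))  # → 1 (flagged as at risk)
```

The same `fit`/`predict` interface could be swapped for any of the other listed classifiers when benchmarking them against each other.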
arXiv Detail & Related papers (2024-12-12T17:33:06Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning comprised of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
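The temporal moving average update described above can be sketched in a few lines: each teacher parameter is nudged toward the corresponding student parameter by an exponential moving average. The decay value and parameter names are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of an EMA (temporal moving average) teacher update,
# applied independently to each parameter (or model fragment).
def ema_update(teacher, student, decay=0.99):
    """Return new teacher parameters: t_new = decay * t + (1 - decay) * s."""
    return {name: decay * t + (1 - decay) * student[name]
            for name, t in teacher.items()}

teacher = {"w": 1.0, "b": 0.0}
student = {"w": 0.0, "b": 1.0}
print(ema_update(teacher, student))  # → {'w': 0.99, 'b': 0.01}
```

Because the teacher moves slowly, per-step noise in the student's predictions is smoothed out, which is what makes the teacher's pseudo-labels more consistent.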
arXiv Detail & Related papers (2022-12-13T12:14:09Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Student-centric Model of Learning Management System Activity and Academic Performance: from Correlation to Causation [2.169383034643496]
In recent years, there has been considerable interest in modeling students' digital traces in Learning Management Systems (LMS) to understand their learning behavior patterns.
This paper explores a student-centric analytical framework for LMS activity data that can provide not only correlational but causal insights mined from observational data.
We envision that those insights will provide convincing evidence for college student support groups to launch student-centered and targeted interventions.
arXiv Detail & Related papers (2022-10-27T14:08:25Z) - A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z) - Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labelled data required to train a Deep Learning model.
Most active learning strategies are based on uncertain sample selection, and are often even restricted to samples lying close to the decision boundary.
Here we propose to take into consideration common domain-knowledge and enable non-expert users to train a model with fewer samples.
arXiv Detail & Related papers (2021-10-15T06:11:53Z) - On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [57.957466608543676]
We analyze the influence of adversarial training on the loss landscape of machine learning models.
We show that the adversarial loss landscape is less favorable to optimization, due to increased curvature and more scattered gradients.
arXiv Detail & Related papers (2020-06-15T13:50:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.