Integrating behavior analysis with machine learning to predict online learning performance: A scientometric review and empirical study
- URL: http://arxiv.org/abs/2406.11847v1
- Date: Thu, 28 Mar 2024 03:29:02 GMT
- Title: Integrating behavior analysis with machine learning to predict online learning performance: A scientometric review and empirical study
- Authors: Jin Yuan, Xuelan Qiu, Jinran Wu, Jiesi Guo, Weide Li, You-Gan Wang
- Abstract summary: This study proposes an integration framework that blends learning behavior analysis with ML algorithms to enhance the prediction accuracy of students' online learning performance.
Results show that the framework yields nearly perfect prediction performance for autonomous students and satisfactory performance for motivated students.
- Score: 6.133369217932887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interest in predicting online learning performance using ML algorithms has been steadily increasing. We first conducted a scientometric analysis to provide a systematic review of research in this area. The findings show that most existing studies apply ML methods without considering learning behavior patterns, which may compromise the prediction accuracy and precision of those methods. This study proposes an integration framework that blends learning behavior analysis with ML algorithms to enhance the prediction accuracy of students' online learning performance. Specifically, the framework identifies distinct learning patterns among students by employing clustering analysis and implements various ML algorithms to predict performance within each pattern. For demonstration, the integration framework is applied to a real dataset from edX and distinguishes two learning patterns: low-autonomy students and motivated students. The results show that the framework yields nearly perfect prediction performance for autonomous students and satisfactory performance for motivated students. Additionally, this study compares the prediction performance of the integration framework to that of directly applying ML methods without learning behavior analysis, using comprehensive evaluation metrics. The results consistently demonstrate the superiority of the integration framework over the direct approach, particularly when integrated with the best-performing XGBoost method. Moreover, the framework significantly improves prediction accuracy for the motivated students and for the worst-performing random forest method. This study also evaluates the importance of various learning behaviors within each pattern using LightGBM with SHAP values. The implications of the integration framework and the results for online education practice and future research are discussed.
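The two-stage design maps naturally onto standard tooling. The sketch below illustrates the idea on synthetic data: cluster students by behavior features, then fit one predictor per cluster. The feature names, data, and the random-forest placeholder are assumptions for illustration; the paper's pipeline works on real edX logs, chooses the number of patterns from the data, and uses XGBoost/LightGBM among the per-pattern learners.

```python
# Minimal sketch of the two-stage framework: cluster students by learning
# behavior, then fit a separate predictor per behavior pattern.
# All features and labels below are synthetic placeholders, not edX variables.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder behavior features (e.g., video views, forum posts, quiz attempts).
X = rng.random((500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(500) > 0.8).astype(int)

# Stage 1: identify learning patterns via clustering (k=2, mirroring the
# paper's low-autonomy vs. motivated split; k would normally come from the data).
patterns = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Stage 2: train one classifier per pattern instead of one global model.
for p in np.unique(patterns):
    Xp, yp = X[patterns == p], y[patterns == p]
    Xtr, Xte, ytr, yte = train_test_split(Xp, yp, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    print(f"pattern {p}: accuracy = {accuracy_score(yte, clf.predict(Xte)):.3f}")
```

On top of each per-pattern model, the importance of individual behaviors could then be inspected with SHAP values, as the paper does with LightGBM.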
Related papers
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
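For reference, a minimal NumPy sketch of the standard InfoNCE contrastive loss analyzed here; the batch size, embedding dimension, and temperature are illustrative assumptions.

```python
# Standard InfoNCE loss over a batch of positive pairs (one sketch of many
# common variants; shapes and temperature are illustrative assumptions).
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (n, d) L2-normalized embeddings of n positive pairs."""
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pair i sits on the diagonal; minimize its negative log-likelihood.
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16))
b = a + 0.05 * rng.standard_normal((8, 16))      # slightly perturbed second view
a /= np.linalg.norm(a, axis=1, keepdims=True)
b /= np.linalg.norm(b, axis=1, keepdims=True)
print(f"{info_nce(a, b):.3f}")                   # near-identical views -> small loss
```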
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning [97.2995389188179]
Recent research has begun to approach unlearning in large language models (LLMs) via gradient ascent (GA).
Despite the simplicity and efficiency of GA-based methods, we suggest that they are prone to excessive unlearning.
We propose several controlling methods that can regulate the extent of excessive unlearning.
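A hedged sketch of the generic GA idea with one simple control on the extent of unlearning (a loss cap). The paper's actual controlling methods are not specified in this summary, so the cap is an assumption, and the toy linear model stands in for an LLM.

```python
# Gradient-ascent unlearning on a forget set, with a simple regulation:
# stop ascending once the forget-set loss passes a cap (an assumed control,
# illustrating the general idea rather than the paper's exact methods).
import torch
import torch.nn.functional as F

def unlearn_step(model, forget_batch, optimizer, loss_cap=5.0):
    x, y = forget_batch
    loss = F.cross_entropy(model(x), y)
    if loss.item() >= loss_cap:          # control: don't over-unlearn
        return loss.item()
    optimizer.zero_grad()
    (-loss).backward()                   # ascend: maximize loss on forget data
    optimizer.step()
    return loss.item()

# Usage on a toy classifier standing in for an LLM.
model = torch.nn.Linear(10, 3)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
xb, yb = torch.randn(16, 10), torch.randint(0, 3, (16,))
for _ in range(10):
    print(f"forget loss: {unlearn_step(model, (xb, yb), opt):.3f}")
```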
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Machine Unlearning of Pre-trained Large Language Models [17.40601262379265]
This study investigates the concept of the 'right to be forgotten' in the context of large language models (LLMs).
We explore machine unlearning as a pivotal solution, with a focus on pre-trained models.
arXiv Detail & Related papers (2024-02-23T07:43:26Z)
- Analyzing the Capabilities of Nature-inspired Feature Selection Algorithms in Predicting Student Performance [0.0]
In this paper, an analysis was conducted to determine the relative performance of a suite of nature-inspired algorithms in the feature-selection portion of ensemble algorithms used to predict student performance.
It was found that leveraging an ensemble approach using nature-inspired algorithms for feature selection and traditional ML algorithms for classification significantly increased predictive accuracy while also reducing feature set size by up to 65 percent.
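A minimal genetic-algorithm wrapper in the spirit of these nature-inspired selectors; the population size, mutation rate, and logistic-regression scorer are illustrative assumptions, not the paper's configuration.

```python
# Genetic-algorithm feature selection: evolve boolean feature masks whose
# fitness is cross-validated accuracy of a downstream classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5                # random feature masks
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[-6:]]                # selection
    pairs = rng.integers(0, 6, size=(6, 2))
    keep = rng.random((6, X.shape[1])) < 0.5            # uniform crossover
    children = np.where(keep, elite[pairs[:, 0]], elite[pairs[:, 1]])
    children ^= rng.random(children.shape) < 0.05       # bit-flip mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print(f"selected {best.sum()} of {X.shape[1]} features")
```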
arXiv Detail & Related papers (2023-08-15T21:18:52Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Mixed Semi-Supervised Generalized-Linear-Regression with applications to Deep-Learning and Interpolators [6.537685198688539]
We present a methodology for using unlabeled data to design semi-supervised learning (SSL) methods.
Each method includes a mixing parameter $\alpha$ controlling the weight given to the unlabeled data.
We demonstrate the effectiveness of our methodology in delivering substantial improvement compared to the standard supervised models.
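One simple way to instantiate such a mixing parameter in linear regression is to weight the unlabeled second-moment matrix by $\alpha$ in the normal equations, so that $\alpha = 0$ recovers ordinary least squares on the labeled data alone. The sketch below is an illustrative instantiation, not the paper's exact estimator family.

```python
# Mixed semi-supervised linear regression: blend labeled and (rescaled)
# unlabeled second-moment matrices, weighted by a mixing parameter alpha.
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])
X_lab = rng.standard_normal((30, 3))
y_lab = X_lab @ beta_true + 0.5 * rng.standard_normal(30)
X_unl = rng.standard_normal((300, 3))        # unlabeled covariates only

def mixed_ssl_fit(X_l, y, X_u, alpha):
    n_l, n_u = len(X_l), len(X_u)
    # alpha = 0 gives OLS; alpha > 0 shrinks using the unlabeled covariance.
    gram = X_l.T @ X_l + alpha * (n_l / n_u) * (X_u.T @ X_u)
    return np.linalg.solve(gram, X_l.T @ y)

for alpha in (0.0, 0.5, 1.0):
    beta = mixed_ssl_fit(X_lab, y_lab, X_unl, alpha)
    print(alpha, np.round(beta, 3))
```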
arXiv Detail & Related papers (2023-02-19T09:55:18Z)
- From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models [55.137869702763375]
This paper explores a novel PLM reuse paradigm, Knowledge Integration (KI).
KI aims to merge the knowledge from different teacher-PLMs, each of which specializes in a different classification problem, into a versatile student model.
We then design a Model Uncertainty-aware Knowledge Integration (MUKI) framework to recover the golden supervision for the student.
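A toy sketch of the underlying idea: merge teacher predictions over the full label set, weighting each teacher by an entropy-based confidence. MUKI's actual objective and uncertainty estimation are more involved, so treat everything below as an assumption-laden illustration.

```python
# Uncertainty-weighted integration of teacher predictions into soft targets
# for a student model (a toy stand-in for the MUKI framework's supervision).
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def integrate(teacher_probs):
    """teacher_probs: list of (n, k) soft predictions over the full label set."""
    stacked = np.stack(teacher_probs)            # (t, n, k)
    conf = np.exp(-entropy(stacked))             # low entropy -> high weight
    w = conf / conf.sum(axis=0, keepdims=True)   # normalize over teachers
    return (w[..., None] * stacked).sum(axis=0)  # merged soft targets

rng = np.random.default_rng(0)
p1 = rng.dirichlet(np.ones(4), size=5)  # teacher 1's predictions
p2 = rng.dirichlet(np.ones(4), size=5)  # teacher 2's predictions
targets = integrate([p1, p2])           # distill the student toward these
print(np.round(targets, 3))
```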
arXiv Detail & Related papers (2022-10-11T07:59:08Z)
- Predicting student performance using sequence classification with time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models.
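A minimal sketch of time-based windowing on a behavioral event log: count events per activity type within fixed-length windows, then classify on the flattened counts. The week-sized window, event types, and toy labels are assumptions, not the paper's setup.

```python
# Time-based windowing of an event log into per-student count features.
import pandas as pd
from sklearn.linear_model import LogisticRegression

log = pd.DataFrame({
    "student": [1, 1, 1, 2, 2, 2],
    "day":     [0, 3, 9, 1, 8, 13],
    "event":   ["video", "quiz", "video", "video", "forum", "quiz"],
})
log["window"] = log["day"] // 7                      # week-sized time windows

# One row per student: event counts per (window, event-type) pair.
feats = pd.crosstab(log["student"], [log["window"], log["event"]])
feats.columns = [f"w{w}_{e}" for w, e in feats.columns]

passed = pd.Series([1, 0], index=feats.index)        # toy outcome labels
clf = LogisticRegression().fit(feats, passed)
print(clf.predict(feats))
```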
arXiv Detail & Related papers (2022-08-16T13:46:39Z)
- Graph-based Ensemble Machine Learning for Student Performance Prediction [0.7874708385247353]
We propose a graph-based ensemble machine learning method to improve the stability of single machine learning methods.
Our model outperforms the best traditional machine learning algorithms by up to 14.8% in prediction accuracy.
arXiv Detail & Related papers (2021-12-15T05:19:46Z)
- MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms.
We provide a new perspective on the working mechanism of MAML and discover that MAML is analogous to a meta-learner using a supervised contrastive objective function, in which the initialization of the output layer introduces interference.
We propose a simple but effective technique, the zeroing trick, to alleviate such interference.
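A first-order toy sketch of the zeroing trick: reset the linear head to zero before each task's inner-loop adaptation, so stale head weights do not inject the interfering term. Real MAML uses second-order updates over many tasks and richer encoders; the architecture and step sizes here are assumptions.

```python
# MAML-style inner/outer loop with the head zeroed at the start of each task.
import torch
import torch.nn.functional as F

encoder = torch.nn.Linear(8, 16)
meta_opt = torch.optim.SGD(encoder.parameters(), lr=0.01)

for step in range(5):
    x_sup, y_sup = torch.randn(10, 8), torch.randint(0, 2, (10,))
    x_qry, y_qry = torch.randn(10, 8), torch.randint(0, 2, (10,))

    # Zeroing trick: start every task from a zero head.
    w = torch.zeros(2, 16, requires_grad=True)
    b = torch.zeros(2, requires_grad=True)

    # Inner loop: one adaptation step of the head on the support set.
    inner = F.cross_entropy(F.linear(encoder(x_sup), w, b), y_sup)
    gw, gb = torch.autograd.grad(inner, (w, b), create_graph=True)
    w2, b2 = w - 0.5 * gw, b - 0.5 * gb

    # Outer loop: query loss through the adapted head updates the encoder.
    outer = F.cross_entropy(F.linear(encoder(x_qry), w2, b2), y_qry)
    meta_opt.zero_grad()
    outer.backward()
    meta_opt.step()
    print(f"step {step}: query loss {outer.item():.3f}")
```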
arXiv Detail & Related papers (2021-06-29T12:52:26Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.