A Deep Learning Approach Towards Student Performance Prediction in
Online Courses: Challenges Based on a Global Perspective
- URL: http://arxiv.org/abs/2402.01655v1
- Date: Wed, 10 Jan 2024 19:13:19 GMT
- Title: A Deep Learning Approach Towards Student Performance Prediction in
Online Courses: Challenges Based on a Global Perspective
- Authors: Abdallah Moubayed, MohammadNoor Injadat, Nouh Alhindawi, Ghassan
Samara, Sara Abuasal, Raed Alazaidah
- Abstract summary: This work proposes the use of deep learning techniques (CNN and RNN-LSTM) to predict the students' performance at the midpoint stage of the online course delivery.
Experimental results show that deep learning models have promising performance as they outperform other optimized traditional ML models.
- Score: 0.6058427379240696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Analyzing and evaluating students' progress in any learning environment is
stressful and time-consuming if done using traditional analysis methods. This
is further exacerbated by the growing number of students as education shifts
toward integrating Internet technologies and academic institutions move toward
e-Learning, blended, or online learning models. As a result, the topic of student performance prediction has
learning models. As a result, the topic of student performance prediction has
become a vibrant research area in recent years. To address this, machine
learning and data mining techniques have emerged as a viable solution. To that
end, this work proposes the use of deep learning techniques (CNN and RNN-LSTM)
to predict the students' performance at the midpoint stage of the online course
delivery using three distinct datasets collected from three different regions
of the world. Experimental results show that deep learning models have
promising performance as they outperform other optimized traditional ML models
in two of the three considered datasets while also having comparable
performance for the third dataset.
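The abstract names the two deep model families (CNN and RNN-LSTM) but gives no architectural details. The sketch below is a minimal, hypothetical Keras rendering of what such midpoint-performance classifiers could look like, assuming each student is represented as a short sequence of weekly activity feature vectors up to the course midpoint with a binary pass/fail target; the feature layout, dimensions, and layer sizes are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of the two model families named in the abstract (CNN and
# RNN-LSTM), not the authors' exact architectures. Feature layout is an
# assumption: each student is a sequence of weekly activity vectors up to
# the course midpoint, and the target is a binary pass/fail label.
import tensorflow as tf
from tensorflow.keras import layers

N_WEEKS, N_FEATURES = 6, 10  # hypothetical midpoint window and feature count

def build_cnn():
    # 1-D convolution over the weekly activity sequence.
    return tf.keras.Sequential([
        layers.Input(shape=(N_WEEKS, N_FEATURES)),
        layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
        layers.MaxPooling1D(pool_size=2),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

def build_rnn_lstm():
    # LSTM over the same sequence; the final hidden state feeds the classifier.
    return tf.keras.Sequential([
        layers.Input(shape=(N_WEEKS, N_FEATURES)),
        layers.LSTM(32),
        layers.Dense(1, activation="sigmoid"),
    ])

for model in (build_cnn(), build_rnn_lstm()):
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20)
```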
Related papers
- Research on Education Big Data for Students Academic Performance Analysis based on Machine Learning [8.556825982336807]
In this work, a machine learning model based on a Long Short-Term Memory (LSTM) network was used to conduct an in-depth analysis of educational big data.
The LSTM model efficiently processes time series data, allowing us to capture time-dependent and long-term trends in students' learning activities.
This approach is particularly useful for analyzing student progress, engagement, and other behavioral patterns to support personalized education.
arXiv Detail & Related papers (2024-06-25T01:19:22Z) - Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - Predicting Infant Brain Connectivity with Federated Multi-Trajectory
GNNs using Scarce Data [54.55126643084341]
Existing deep learning solutions suffer from three major limitations.
We introduce FedGmTE-Net++, a federated graph-based multi-trajectory evolution network.
Using the power of federation, we aggregate what is learned locally across diverse hospitals with limited datasets.
arXiv Detail & Related papers (2024-01-01T10:20:01Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - A Survey on Few-Shot Class-Incremental Learning [11.68962265057818]
Few-shot class-incremental learning (FSCIL) poses a significant challenge for deep neural networks to learn new tasks.
This paper provides a comprehensive survey on FSCIL.
FSCIL has achieved impressive results in various fields of computer vision.
arXiv Detail & Related papers (2023-04-17T10:15:08Z) - Click-Based Student Performance Prediction: A Clustering Guided
Meta-Learning Approach [10.962724342736042]
We study the problem of predicting student knowledge acquisition in online courses from clickstream behavior.
Our methodology for predicting in-video quiz performance is based on three key ideas we develop.
arXiv Detail & Related papers (2021-10-28T14:03:29Z) - Learning Deep Representation with Energy-Based Self-Expressiveness for
Subspace Clustering [24.311754971064303]
We propose a new deep subspace clustering framework, motivated by the energy-based models.
Given the powerful representation ability of recently popular self-supervised learning, we leverage self-supervised representation learning to learn the dictionary.
arXiv Detail & Related papers (2021-10-28T11:51:08Z) - Assessing the Knowledge State of Online Students -- New Data, New
Approaches, Improved Accuracy [28.719009375724028]
Student performance (SP) modeling is a critical step for building adaptive online teaching systems.
This study is the first to use four very large datasets made available recently from four distinct intelligent tutoring systems.
arXiv Detail & Related papers (2021-09-04T00:08:59Z) - Quasi-Global Momentum: Accelerating Decentralized Deep Learning on
Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z) - Peer-inspired Student Performance Prediction in Interactive Online
Question Pools with Graph Neural Network [56.62345811216183]
We propose a novel approach using Graph Neural Networks (GNNs) to achieve better student performance prediction in interactive online question pools.
Specifically, we model the relationship between students and questions using student interactions to construct the student-interaction-question network.
We evaluate the effectiveness of our approach on a real-world dataset consisting of 104,113 mouse trajectories generated in the problem-solving process of over 4000 students on 1631 questions.
arXiv Detail & Related papers (2020-08-04T14:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.