Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from
the First Week's Activities
- URL: http://arxiv.org/abs/2008.05849v1
- Date: Wed, 12 Aug 2020 10:44:49 GMT
- Title: Predicting MOOCs Dropout Using Only Two Easily Obtainable Features from
the First Week's Activities
- Authors: Ahmed Alamri, Mohammad Alshehri, Alexandra I. Cristea, Filipe D.
Pereira, Elaine Oliveira, Lei Shi, Craig Stewart
- Abstract summary: Several features are considered to contribute towards learner attrition or lack of interest, which may lead to disengagement or total dropout.
This study aims to predict dropout early-on, from the first week, by comparing several machine-learning approaches.
- Score: 56.1344233010643
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While Massive Open Online Course (MOOC) platforms provide knowledge in a new
and unique way, the very high number of dropouts is a significant drawback.
Several features are considered to contribute towards learner attrition or lack
of interest, which may lead to disengagement or total dropout. The jury is
still out on which factors are the most appropriate predictors. However, the
literature agrees that early prediction is vital to allow for a timely
intervention. Whilst feature-rich predictors may have the best chance for high
accuracy, they may be unwieldy. This study aims to predict learner dropout
early-on, from the first week, by comparing several machine-learning
approaches, including Random Forest, Adaptive Boost, XGBoost and GradientBoost
Classifiers. The results show promising accuracies (82%-94%) using as few as
two features. We show that the accuracies obtained outperform state-of-the-art
approaches, even when the latter deploy several features.
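To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of comparison the abstract describes: the four named classifiers evaluated on just two first-week activity features. The synthetic data, the choice of features (access count and minutes of activity), and the hyperparameters are placeholder assumptions; the xgboost package is required.

```python
# Hedged sketch of the classifier comparison described above; feature names,
# data, and hyperparameters are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, AdaBoostClassifier,
                              GradientBoostingClassifier)
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier  # requires the xgboost package

rng = np.random.default_rng(0)
n = 1000
# Two hypothetical week-1 features: number of accesses and minutes of activity.
accesses = rng.poisson(20, n)
minutes = rng.gamma(2.0, 30.0, n)
X = np.column_stack([accesses, minutes])
# Synthetic dropout label: less first-week activity -> higher dropout chance.
p_dropout = 1 / (1 + np.exp(0.15 * accesses + 0.01 * minutes - 3.5))
y = (rng.random(n) < p_dropout).astype(int)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "GradientBoost": GradientBoostingClassifier(random_state=0),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss"),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```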
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Tracking Changing Probabilities via Dynamic Learners [0.18648070031379424]
We develop sparse multiclass moving average techniques to respond to non-stationarities in a timely manner.
One technique is based on the exponentiated moving average (EMA) and another is based on queuing a few count snapshots.
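As a rough illustration of the EMA idea mentioned here (a sketch under our own assumptions, not the paper's algorithm), an exponentiated moving average can track a drifting class distribution by decaying old estimates and boosting the most recently observed class:

```python
# Minimal sketch: EMA-style tracking of a drifting multiclass distribution.
# Each observation decays all estimates and boosts the observed class, so the
# estimates (which converge toward summing to 1) follow recent frequencies.
from collections import defaultdict

class EMAProbabilityTracker:
    def __init__(self, learning_rate=0.05):
        self.lr = learning_rate
        self.probs = defaultdict(float)   # class -> current probability estimate

    def update(self, observed_class):
        for c in list(self.probs):        # decay every tracked class
            self.probs[c] *= (1 - self.lr)
        self.probs[observed_class] += self.lr

    def estimates(self):
        return dict(self.probs)

tracker = EMAProbabilityTracker(learning_rate=0.1)
for label in ["a"] * 30 + ["b"] * 30:     # the dominant class changes midway
    tracker.update(label)
print(tracker.estimates())                # "b" now dominates the estimate
```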
arXiv Detail & Related papers (2024-02-15T17:48:58Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
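The summary does not spell out how the delayed response is used; as a purely illustrative sketch (our assumption, not the FPS method), one simple reading is to pair the target at time t with features observed several steps earlier, so a forecaster can learn the lagged effect:

```python
# Heavily hedged sketch: align features observed `delay` steps ago with the
# current target, so a simple model can pick up a delayed-response effect.
import numpy as np
from sklearn.linear_model import Ridge

def make_delayed_pairs(features, target, delay):
    """Pair features observed `delay` steps ago with the current target."""
    return features[:-delay], target[delay:]

rng = np.random.default_rng(1)
T, delay = 300, 7
signals = rng.normal(size=(T, 3))            # toy exogenous signals (e.g., mobility)
response = np.zeros(T)                        # target responds with a 7-step lag
response[delay:] = 0.7 * signals[:-delay, 0] + rng.normal(scale=0.1, size=T - delay)

X_lag, y_lag = make_delayed_pairs(signals, response, delay)
model = Ridge().fit(X_lag, y_lag)
print("in-sample R^2 with delayed features:", round(model.score(X_lag, y_lag), 3))
```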
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
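As a generic illustration of the two ingredients combined here (not the ASPEST algorithm itself), selective prediction can be sketched as abstaining below a confidence threshold, and active learning as querying the smallest-margin, i.e. most informative, samples; the threshold, margin measure, and budget below are assumptions:

```python
# Generic sketch of abstention (selective prediction) plus query selection
# (active learning); threshold and budget values are illustrative assumptions.
import numpy as np

def predict_or_abstain(probs, threshold=0.8):
    """Return the predicted class, or None (abstain) when confidence is low."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return [p if c >= threshold else None for p, c in zip(preds, conf)]

def select_queries(probs, budget=10):
    """Pick the `budget` smallest-margin (most informative) samples to label."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margin)[:budget]

rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))            # toy predictions on a target domain
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(predict_or_abstain(probs)[:5])
print(select_queries(probs, budget=3))
```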
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- How many Observations are Enough? Knowledge Distillation for Trajectory Forecasting [31.57539055861249]
Current state-of-the-art models usually rely on a "history" of past tracked locations to predict a plausible sequence of future locations.
We conceive a novel distillation strategy that allows a knowledge transfer from a teacher network to a student one.
We show that a properly defined teacher supervision allows a student network to perform comparably to state-of-the-art approaches.
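A generic sketch of the teacher-to-student transfer described here, assuming a standard distillation loss (a ground-truth term plus a term matching the teacher's forecast); the toy networks, the 8-vs-2 observation lengths, and the 0.5 weighting are illustrative assumptions, and PyTorch is assumed available:

```python
# Hedged sketch: a student that sees only a short history is trained to match
# both the ground-truth future and a (pre-trained) long-history teacher.
import torch
import torch.nn as nn

obs_long, obs_short, horizon = 8, 2, 12      # timesteps of (x, y) coordinates

teacher = nn.Sequential(nn.Linear(obs_long * 2, 64), nn.ReLU(), nn.Linear(64, horizon * 2))
student = nn.Sequential(nn.Linear(obs_short * 2, 64), nn.ReLU(), nn.Linear(64, horizon * 2))

history = torch.randn(32, obs_long, 2)       # toy batch of past tracks
future = torch.randn(32, horizon, 2)         # toy ground-truth futures

with torch.no_grad():                        # teacher is assumed already trained
    teacher_pred = teacher(history.flatten(1))

student_pred = student(history[:, -obs_short:].flatten(1))
loss = nn.functional.mse_loss(student_pred, future.flatten(1)) \
     + 0.5 * nn.functional.mse_loss(student_pred, teacher_pred)
loss.backward()                              # one illustrative training step
print(float(loss))
```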
arXiv Detail & Related papers (2022-03-09T15:05:39Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
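For context, here is a minimal sketch of ordinary split conformal prediction for classification, which is what "valid prediction sets" refers to; the paper's learnable, differentiable generalization is not reproduced here, and the nonconformity score and alpha below are standard textbook choices:

```python
# Sketch of split conformal prediction: calibrate a score threshold on held-out
# data, then include every class whose score falls below it.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with roughly (1 - alpha) marginal coverage."""
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(4), size=200)   # toy calibration predictions
cal_labels = rng.integers(0, 4, size=200)
test_probs = rng.dirichlet(np.ones(4), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```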
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Taming Overconfident Prediction on Unlabeled Data from Hindsight [50.9088560433925]
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning.
This paper proposes a dual mechanism, named ADaptive Sharpening (ADS), which first applies a soft-threshold to adaptively mask out determinate and negligible predictions.
Used as a plug-in, ADS significantly improves state-of-the-art SSL methods.
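As a rough illustration of the general mask-then-sharpen pattern for pseudo-labels (a generic sketch, not the ADS soft-threshold mechanism itself), low-confidence predictions are dropped and the rest are temperature-sharpened; the threshold and temperature values are assumptions:

```python
# Generic sketch: mask out uncertain pseudo-labels, sharpen the confident ones.
import numpy as np

def sharpen_and_mask(probs, temperature=0.5, threshold=0.95):
    """Sharpen confident predictions, drop (mask) the rest."""
    keep = probs.max(axis=1) >= threshold           # mask out uncertain rows
    sharpened = probs[keep] ** (1.0 / temperature)  # amplify confident classes
    sharpened /= sharpened.sum(axis=1, keepdims=True)
    return sharpened, keep

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10) * 0.3, size=8)    # toy unlabeled-batch predictions
pseudo, keep = sharpen_and_mask(probs)
print(keep, pseudo.round(2))
```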
arXiv Detail & Related papers (2021-12-15T15:17:02Z)
- Efficient Action Recognition Using Confidence Distillation [9.028144245738247]
We propose a confidence distillation framework to teach a representation of uncertainty of the teacher to the student sampler.
We conduct extensive experiments on three action recognition datasets and demonstrate that our framework achieves significant improvements in action recognition accuracy (up to 20%) and computational efficiency (more than 40%).
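One heavily hedged reading of how a confidence-aware "student sampler" can save computation (an illustration, not the paper's framework): a cheap student scores clips, and only the clips it is least confident about are routed to the expensive teacher model:

```python
# Hedged sketch: route only the student's least-confident clips to the teacher.
import numpy as np

def route_clips(student_conf, budget=0.3):
    """Send only the least-confident fraction of clips to the teacher."""
    k = max(1, int(len(student_conf) * budget))
    return set(np.argsort(student_conf)[:k].tolist())  # lowest confidence first

rng = np.random.default_rng(0)
student_conf = rng.random(20)                          # toy per-clip confidences
teacher_needed = route_clips(student_conf)
print(f"{len(teacher_needed)}/20 clips routed to the teacher")
```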
arXiv Detail & Related papers (2021-09-05T18:25:49Z)
- A framework for predicting, interpreting, and improving Learning Outcomes [0.0]
We develop an Embibe Score Quotient model (ESQ) to predict test scores based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as offer personalized learning nudges.
arXiv Detail & Related papers (2020-10-06T11:22:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.