A framework for predicting, interpreting, and improving Learning Outcomes
- URL: http://arxiv.org/abs/2010.02629v2
- Date: Mon, 12 Oct 2020 04:54:56 GMT
- Title: A framework for predicting, interpreting, and improving Learning Outcomes
- Authors: Chintan Donda, Sayan Dasgupta, Soma S Dhavala, Keyur Faldu, Aditi Avasthi
- Abstract summary: We develop an Embibe Score Quotient model (ESQ) to predict test scores based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as offer personalized learning nudges.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It has long been recognized that academic success is a result of both
cognitive and non-cognitive dimensions acting together. Consequently, any
intelligent learning platform designed to improve learning outcomes (LOs) must
provide actionable inputs to the learner in these dimensions. However,
operationalizing such inputs in a production setting that is scalable is not
trivial. We develop an Embibe Score Quotient model (ESQ) to predict test scores
based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as
offer personalized learning nudges, both critical to improving LOs. Multiple
machine learning models are evaluated for the prediction task. To provide
meaningful feedback to the learner, individualized Shapley attributions are
computed for each feature. Prediction intervals are obtained
by applying non-parametric quantile regression, in an attempt to quantify the
uncertainty in the predictions. We apply the above modelling strategy on a
dataset consisting of more than a hundred million learner interactions on the
Embibe learning platform. We observe that the Median Absolute Error between the
observed and predicted scores is 4.58% across several user segments, and the
correlation between predicted and observed responses is 0.93. Game-like what-if
scenarios are played out on counterfactual examples to see the changes in LOs.
We briefly discuss how a rational agent can then apply an optimal policy to
affect the learning outcomes by treating the above model like an Oracle.
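One component of the modelling strategy above, prediction intervals via non-parametric quantile regression, can be sketched with gradient-boosted quantile models. This is a minimal illustration on synthetic data: the features, noise model, and hyperparameters are placeholders, not the paper's actual ESQ pipeline or the Embibe dataset.

```python
# Sketch: a 90% prediction interval from three quantile regressors.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
# Hypothetical academic/behavioural features in [0, 1]
X = rng.uniform(0, 1, size=(n, 3))
# Hypothetical test score: linear signal plus heteroscedastic noise,
# so the interval width should vary with the third feature
y = 60 * X[:, 0] + 20 * X[:, 1] + rng.normal(0, 5 + 10 * X[:, 2], size=n)

X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

# One gradient-boosted model per quantile (5th, 50th, 95th percentiles)
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                 n_estimators=200).fit(X_train, y_train)
    for q in (0.05, 0.5, 0.95)
}

lower = models[0.05].predict(X_test)
median = models[0.5].predict(X_test)
upper = models[0.95].predict(X_test)

# Empirical coverage: fraction of held-out scores inside the interval
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"empirical coverage of 90% interval: {coverage:.2f}")
```

The median model plays the role of the point prediction, while the outer quantiles quantify uncertainty per student without assuming a parametric error distribution.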
Related papers
- Beyond Text: Leveraging Multi-Task Learning and Cognitive Appraisal Theory for Post-Purchase Intention Analysis
This study evaluates multi-task learning frameworks grounded in Cognitive Appraisal Theory to predict user behavior.
Our experiments show that users' language and traits improve predictions above and beyond models predicting only from text.
arXiv Detail & Related papers (2024-07-11T04:57:52Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- The Untold Impact of Learning Approaches on Software Fault-Proneness Predictions
This paper explores the effects of two learning approaches, useAllPredictAll and usePrePredictPost, on the performance of software fault-proneness prediction.
Using useAllPredictAll leads to significantly better performance than using usePrePredictPost, both within-release and across-releases.
arXiv Detail & Related papers (2022-07-12T17:31:55Z)
- Prediction of Dilatory Behavior in eLearning: A Comparison of Multiple Machine Learning Models
Procrastination, the irrational delay of tasks, is a common occurrence in online learning.
Research focusing on such predictions is scarce.
Studies involving different types of predictors and comparisons between the predictive performance of various methods are virtually non-existent.
arXiv Detail & Related papers (2022-06-30T07:24:08Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Can Active Learning Preemptively Mitigate Fairness Issues?
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Double Robust Representation Learning for Counterfactual Prediction
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Meta-Learned Confidence for Few-shot Learning
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Value-driven Hindsight Modelling
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
- A Deep Learning Approach to Behavior-Based Learner Modeling
We study learner outcome predictions, i.e., predictions of how they will perform at the end of a course.
We propose a novel Two Branch Decision Network for performance prediction that incorporates two important factors: how learners progress through the course and how the content progresses through the course.
Our proposed algorithm achieves 95.7% accuracy and 0.958 AUC score, which outperforms all other models.
arXiv Detail & Related papers (2020-01-23T01:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.