Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums
- URL: http://arxiv.org/abs/2104.12643v1
- Date: Mon, 26 Apr 2021 15:12:13 GMT
- Title: Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums
- Authors: Jialin Yu, Laila Alrajhi, Anoushka Harit, Zhongtian Sun, Alexandra I. Cristea, Lei Shi
- Abstract summary: Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
- Score: 58.221459787471254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Massive Open Online Courses (MOOCs) have become a popular choice for
e-learning thanks to their great flexibility. However, due to large numbers of
learners and their diverse backgrounds, it is taxing to offer real-time
support. Learners may post their feelings of confusion and struggle in the
respective MOOC forums, but with the large volume of posts and high workloads
for MOOC instructors, it is unlikely that the instructors can identify all
learners requiring intervention. This problem has been studied as a Natural
Language Processing (NLP) problem recently, and is known to be challenging, due
to the imbalance of the data and the complex nature of the task. In this paper,
we explore for the first time Bayesian deep learning on learner-based text
posts with two methods: Monte Carlo Dropout and Variational Inference, as a new
solution to assessing the need of instructor interventions for a learner's
post. We compare models built with our proposed probabilistic methods to their
non-Bayesian baselines under similar conditions, across different prediction
scenarios. The results suggest that Bayesian deep
learning offers a critical uncertainty measure that is not supplied by
traditional neural networks. This adds more explainability, trust and
robustness to AI, which is crucial in education-based applications.
Additionally, it can achieve similar or better performance than
non-probabilistic neural networks, while also exhibiting lower variance.
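The page carries no code, but as a rough illustration of the Monte Carlo Dropout method named in the abstract, the following PyTorch sketch keeps dropout active at prediction time and treats the spread of repeated stochastic forward passes as the uncertainty measure for a post's urgency. The classifier architecture, names, and sample count are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UrgencyClassifier(nn.Module):
    """Toy bag-of-embeddings text classifier; a stand-in for the paper's
    models, which are not specified on this page (assumption)."""
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2, p_drop=0.5):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.dropout = nn.Dropout(p_drop)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        h = self.dropout(self.embed(token_ids, offsets))
        return self.fc(h)

@torch.no_grad()
def mc_dropout_predict(model, token_ids, offsets, n_samples=50):
    """Monte Carlo Dropout: keep dropout stochastic at prediction time,
    average the softmax outputs of repeated forward passes, and use the
    per-class standard deviation as the uncertainty estimate."""
    model.train()  # train mode keeps dropout active; no_grad skips gradients
    probs = torch.stack([
        torch.softmax(model(token_ids, offsets), dim=-1)
        for _ in range(n_samples)
    ])
    return probs.mean(dim=0), probs.std(dim=0)

# Hypothetical usage on a single 12-token post
model = UrgencyClassifier()
token_ids = torch.randint(0, 10000, (12,))  # flattened token ids
offsets = torch.tensor([0])                 # one-post "batch"
mean_probs, uncertainty = mc_dropout_predict(model, token_ids, offsets)
```

A post whose mean probability favours the urgent class but whose standard deviation is high can be routed to a human instructor rather than auto-triaged, which is the kind of trust and explainability benefit the abstract describes. Variational Inference, the paper's second method, instead learns a distribution over the network weights and samples from it at prediction time.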
Related papers
- A Question-centric Multi-experts Contrastive Learning Framework for Improving the Accuracy and Interpretability of Deep Sequential Knowledge Tracing Models [26.294808618068146]
Knowledge tracing (KT) plays a crucial role in predicting students' future performance.
Deep neural networks (DNNs) have shown great potential in solving the KT problem.
However, there still exist some important challenges when applying deep learning techniques to model the KT process.
arXiv Detail & Related papers (2024-03-12T05:15:42Z)
- A Conceptual Model for End-to-End Causal Discovery in Knowledge Tracing [8.049552839071918]
We take a preliminary step towards solving the problem of causal discovery in knowledge tracing.
Our solution placed among the top entries in Task 3 of the NeurIPS 2022 Challenge on Causal Insights for Learning Paths in Education.
arXiv Detail & Related papers (2023-05-11T21:20:29Z)
- Online Deep Learning from Doubly-Streaming Data [17.119725174036653]
This paper investigates a new online learning problem with doubly-streaming data, where the data streams are described by feature spaces that constantly evolve.
A plausible idea to overcome the challenges is to establish a relationship between the pre- and post-evolution feature spaces.
We propose a novel OLD3S paradigm, where a shared latent subspace is discovered to summarize information from the old and new feature spaces.
arXiv Detail & Related papers (2022-04-25T17:06:39Z)
- Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labelled data required to train a Deep Learning model.
Most active strategies are based on uncertain sample selection, often restricted to samples lying close to the decision boundary.
Here we propose to take common domain knowledge into consideration and enable non-expert users to train a model with fewer samples (see the uncertainty-sampling sketch after this list).
arXiv Detail & Related papers (2021-10-15T06:11:53Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Distill on the Go: Online knowledge distillation in self-supervised learning [1.1470070927586016]
Recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models.
We propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm using single-stage online knowledge distillation.
Our results show significant performance gains in the presence of noisy and limited labels.
arXiv Detail & Related papers (2021-04-20T09:59:23Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to the individual models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method is able to achieve superior performances compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
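Several entries above (Knowledge-driven Active Learning and Gone Fishing, as noted there) revolve around uncertainty-based sample selection, the same quantity the main paper obtains via Bayesian deep learning. The sketch below is a minimal, generic version of uncertainty sampling, not any one paper's method: it scores unlabelled examples by predictive entropy and returns the highest-entropy ones for annotation. The function name, loader format, and batch size are assumptions made for illustration.

```python
import torch

@torch.no_grad()
def select_most_uncertain(model, unlabeled_loader, k=32):
    """Generic uncertainty sampling: score each unlabelled example by the
    entropy of the model's softmax output and return the dataset indices
    of the k highest-entropy examples as the next labelling batch."""
    model.eval()
    all_entropy, all_idx = [], []
    for idx, x in unlabeled_loader:  # assumed to yield (indices, inputs)
        probs = torch.softmax(model(x), dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        all_entropy.append(entropy)
        all_idx.append(idx)
    entropy = torch.cat(all_entropy)
    idx = torch.cat(all_idx)
    top = entropy.topk(min(k, entropy.numel())).indices
    return idx[top]  # send these examples to the annotator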
This list is automatically generated from the titles and abstracts of the papers on this site.