Detecting Disengagement in Virtual Learning as an Anomaly
- URL: http://arxiv.org/abs/2211.06870v1
- Date: Sun, 13 Nov 2022 10:29:25 GMT
- Title: Detecting Disengagement in Virtual Learning as an Anomaly
- Authors: Ali Abedi and Shehroz S. Khan
- Abstract summary: Student engagement is an important factor in meeting the goals of virtual learning programs.
In this paper, we formulate detecting disengagement in virtual learning as an anomaly detection problem.
We design various autoencoders, including temporal convolutional network, long short-term memory, and feedforward autoencoders.
- Score: 4.706263507340607
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Student engagement is an important factor in meeting the goals of virtual
learning programs. Automatic measurement of student engagement provides helpful
information for instructors to meet learning program objectives and
individualize program delivery. Many existing approaches solve video-based
engagement measurement using the traditional frameworks of binary
classification (classifying video snippets into engaged or disengaged classes),
multi-class classification (classifying video snippets into multiple classes
corresponding to different levels of engagement), or regression (estimating a
continuous value corresponding to the level of engagement). However, we observe
that while the engagement behaviour is mostly well-defined (e.g., focused, not
distracted), disengagement can be expressed in various ways. In addition, in
some cases, the data for disengaged classes may not be sufficient to train
generalizable binary or multi-class classifiers. To handle this situation, in
this paper, for the first time, we formulate detecting disengagement in virtual
learning as an anomaly detection problem. We design various autoencoders,
including temporal convolutional network, long short-term memory, and
feedforward autoencoders, using different behavioral and affect
features for video-based student disengagement detection. The results of our
experiments on two publicly available student engagement datasets, DAiSEE and
EmotiW, show the superiority of the proposed approach for detecting
disengagement as an anomaly compared to binary classifiers that classify videos
into engaged versus disengaged classes (with an average improvement of 9% in
the area under the receiver operating characteristic curve (AUC-ROC) and 22%
in the area under the precision-recall curve (AUC-PR)).
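As a concrete illustration of the anomaly-detection formulation above, the following is a minimal sketch of a long short-term memory autoencoder trained only on engaged clips, with disengagement scored by reconstruction error. The feature dimension, sequence length, model sizes, and training settings are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: LSTM autoencoder trained only on "engaged" feature
# sequences; disengagement is flagged by high reconstruction error.
# Shapes and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score, average_precision_score

FEAT_DIM, HIDDEN = 32, 16  # assumed per-frame feature size and latent size

class LSTMAutoencoder(nn.Module):
    def __init__(self, feat_dim=FEAT_DIM, hidden=HIDDEN):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, feat_dim)

    def forward(self, x):                    # x: (batch, time, features)
        _, (h, _) = self.encoder(x)          # h: (1, batch, hidden)
        # Repeat the final latent state across time steps for decoding.
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)                 # reconstruction: (batch, time, features)

def train(model, engaged_loader, epochs=10):
    # Trained on engaged-only sequences, per the anomaly formulation.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in engaged_loader:             # x: (batch, time, features)
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

@torch.no_grad()
def anomaly_scores(model, x):
    # Per-clip mean squared reconstruction error as the anomaly score.
    return ((model(x) - x) ** 2).mean(dim=(1, 2))

# Evaluation against binary labels (1 = disengaged/anomalous):
# scores = anomaly_scores(model, test_x)
# print(roc_auc_score(test_y, scores), average_precision_score(test_y, scores))
```

Because the anomaly score is continuous, the threshold-free AUC-ROC and AUC-PR metrics the abstract reports can be computed directly, as in the last lines; a deployment would additionally pick an operating threshold on a validation set.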
Related papers
- Prior Knowledge Guided Network for Video Anomaly Detection [1.389970629097429]
Video Anomaly Detection (VAD) involves detecting anomalous events in videos.
We propose a Prior Knowledge Guided Network (PKG-Net) for the VAD task.
arXiv Detail & Related papers (2023-09-04T15:57:07Z)
- Self-Regulated Learning for Egocentric Video Activity Anticipation [147.9783215348252]
Self-Regulated Learning (SRL) aims to regulate the intermediate representations consecutively so as to produce representations that emphasize the novel information in the frame at the current time-stamp.
SRL sharply outperforms the existing state of the art in most cases on two egocentric video datasets and two third-person video datasets.
arXiv Detail & Related papers (2021-11-23T03:29:18Z)
- Joint Inductive and Transductive Learning for Video Object Segmentation [107.32760625159301]
Semi-supervised video object segmentation is the task of segmenting the target object in a video sequence given only its mask in the first frame.
Most previous best-performing methods adopt matching-based transductive reasoning or online inductive learning.
We propose to integrate transductive and inductive learning into a unified framework to exploit the complementarity between them for accurate and robust video object segmentation.
arXiv Detail & Related papers (2021-08-08T16:25:48Z)
- Affect-driven Engagement Measurement from Videos [0.8545305424564517]
We present a novel approach for video-based engagement measurement in virtual learning programs.
Deep-learning-based temporal models and traditional machine-learning-based non-temporal models are trained and validated.
Our experiments show a state-of-the-art engagement level classification accuracy of 63.3% and correct classification of disengagement videos.
arXiv Detail & Related papers (2021-06-21T06:49:17Z)
- ASCNet: Self-supervised Video Representation Learning with Appearance-Speed Consistency [62.38914747727636]
We study self-supervised video representation learning, which is a challenging task due to 1) a lack of labels for explicit supervision and 2) unstructured and noisy visual information.
Existing methods mainly use a contrastive loss with video clips as the instances and learn visual representations by discriminating instances from each other (a sketch of this instance-discrimination loss appears after this list).
In this paper, we observe that the consistency between positive samples is the key to learning robust video representations.
arXiv Detail & Related papers (2021-06-04T08:44:50Z)
- CoCon: Cooperative-Contrastive Learning [52.342936645996765]
Self-supervised visual representation learning is key for efficient video analysis.
Recent success in learning image representations suggests contrastive learning is a promising framework to tackle this challenge.
We introduce a cooperative variant of contrastive learning to utilize complementary information across views.
arXiv Detail & Related papers (2021-04-30T05:46:02Z)
- Incremental Learning from Low-labelled Stream Data in Open-Set Video Face Recognition [0.0]
We propose a novel incremental learning approach which combines a deep feature encoder with Open-Set Dynamic Ensembles of SVMs.
Our method can use unsupervised operational data to enhance recognition.
Results show an F1-score increase of up to 15% with respect to non-adaptive state-of-the-art methods.
arXiv Detail & Related papers (2020-12-17T13:28:13Z)
- Memory-augmented Dense Predictive Coding for Video Representation Learning [103.69904379356413]
We propose a new architecture and learning framework, Memory-augmented Dense Predictive Coding (MemDPC), for the task.
We investigate visual-only self-supervised video representation learning from RGB frames, from unsupervised optical flow, or from both.
In all cases, we demonstrate state-of-the-art or comparable performance over other approaches with orders of magnitude less training data.
arXiv Detail & Related papers (2020-08-03T17:57:01Z)
- Self-trained Deep Ordinal Regression for End-to-End Video Anomaly Detection [114.9714355807607]
We show that applying self-trained deep ordinal regression to video anomaly detection overcomes two key limitations of existing methods.
We devise an end-to-end trainable video anomaly detection approach that enables joint representation learning and anomaly scoring without manually labeled normal/abnormal data.
arXiv Detail & Related papers (2020-03-15T08:44:55Z)
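The ASCNet entry above describes the standard instance-discrimination setup (a contrastive loss with video clips as instances). As referenced there, here is a minimal, hedged sketch of an InfoNCE-style loss; the embedding size, batch size, and temperature are illustrative assumptions, and this is not the exact ASCNet or CoCon objective.

```python
# Minimal InfoNCE-style instance-discrimination sketch, as commonly used
# in self-supervised video representation learning. Shapes and the
# temperature value are assumptions for illustration.
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """anchor, positive: (B, D) embeddings of two views of the same
    instances (e.g., two clips from the same video). Each anchor's
    positive is its paired row; all other rows act as negatives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature       # (B, B) cosine-similarity matrix
    targets = torch.arange(a.size(0))      # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

# Usage with random stand-in embeddings:
# loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```

Minimizing this loss pulls paired clip embeddings together and pushes apart embeddings of different instances, which is the discrimination behavior the ASCNet summary refers to.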
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.