IMPROVE: Impact of Mobile Phones on Remote Online Virtual Education
- URL: http://arxiv.org/abs/2412.14195v1
- Date: Fri, 13 Dec 2024 11:29:05 GMT
- Title: IMPROVE: Impact of Mobile Phones on Remote Online Virtual Education
- Authors: Roberto Daza, Alvaro Becerra, Ruth Cobos, Julian Fierrez, Aythami Morales,
- Abstract summary: This work presents the IMPROVE dataset, designed to evaluate the effects of mobile phone usage on learners during online education.
The dataset not only assesses academic performance and subjective learner feedback but also captures biometric, behavioral, and physiological signals.
- Score: 13.616038134322435
- Abstract: This work presents the IMPROVE dataset, designed to evaluate the effects of mobile phone usage on learners during online education. The dataset not only assesses academic performance and subjective learner feedback but also captures biometric, behavioral, and physiological signals, providing a comprehensive analysis of the impact of mobile phone use on learning. Multimodal data were collected from 120 learners in three groups with different phone interaction levels. A setup involving 16 sensors was implemented to collect data that have proven to be effective indicators for understanding learner behavior and cognition, including electroencephalography waves, videos, eye tracker, etc. The dataset includes metadata from the processed videos like face bounding boxes, facial landmarks, and Euler angles for head pose estimation. In addition, learner performance data and self-reported forms are included. Phone usage events were labeled, covering both supervisor-triggered and uncontrolled events. A semi-manual re-labeling system, using head pose and eye tracker data, is proposed to improve labeling accuracy. Technical validation confirmed signal quality, with statistical analyses revealing biometric changes during phone use.
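The abstract describes a semi-manual re-labeling system that uses head pose and eye tracker data to improve phone-usage labels. A minimal sketch of that idea follows; the thresholds, field names, and function are illustrative assumptions, not taken from the IMPROVE paper:

```python
# Hedged sketch: flag candidate phone-use frames by combining head-pose
# pitch (Euler angle, degrees) with an on-screen gaze indicator.
# Threshold values and the minimum-run heuristic are illustrative only.

def flag_phone_use(pitch_deg, gaze_on_screen, pitch_thresh=-25.0, min_run=5):
    """Return per-frame booleans: head pitched down past `pitch_thresh`
    and gaze off-screen for at least `min_run` consecutive frames."""
    raw = [p < pitch_thresh and not g
           for p, g in zip(pitch_deg, gaze_on_screen)]
    flags = [False] * len(raw)
    run_start = None
    for i, r in enumerate(raw + [False]):  # sentinel closes a final run
        if r and run_start is None:
            run_start = i
        elif not r and run_start is not None:
            if i - run_start >= min_run:
                for j in range(run_start, i):
                    flags[j] = True
            run_start = None
    return flags
```

The minimum-run filter suppresses brief downward glances, so only sustained head-down, gaze-off-screen episodes become candidate events for manual review.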
Related papers
- Representation Learning for Wearable-Based Applications in the Case of
Missing Data [20.37256375888501]
Learning representations from multimodal sensor data in real-world environments is still challenging due to low data quality and limited data annotations.
We investigate representation learning for imputing missing wearable data and compare it with state-of-the-art statistical approaches.
Our study provides insights for the design and development of masking-based self-supervised learning tasks.
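The masking-based self-supervised task mentioned above can be sketched as follows; the functions and the masking scheme are an illustrative assumption about such a setup, not code from the paper:

```python
# Hedged sketch of a masked-reconstruction pretext task for 1-D wearable
# signals: hide random timesteps, then score a model's reconstruction
# only on the hidden positions. All names here are illustrative.
import random

def mask_signal(x, mask_ratio=0.3, mask_value=0.0, seed=0):
    """Randomly mask a fraction of timesteps; return the masked copy
    and the masked indices."""
    rng = random.Random(seed)
    n_mask = max(1, int(len(x) * mask_ratio))
    idx = sorted(rng.sample(range(len(x)), n_mask))
    masked = list(x)
    for i in idx:
        masked[i] = mask_value
    return masked, idx

def reconstruction_loss(pred, target, idx):
    """MSE computed only on the masked positions, as is typical for
    masked self-supervised pretraining."""
    return sum((pred[i] - target[i]) ** 2 for i in idx) / len(idx)
```

A model trained to minimize this loss learns to impute missing wearable data from surrounding context, which is the representation-learning angle the paper compares against statistical imputation.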
arXiv Detail & Related papers (2024-01-08T08:21:37Z) - Incremental Semi-supervised Federated Learning for Health Inference via
Mobile Sensing [5.434366992553875]
We propose FedMobile, an incremental semi-supervised federated learning algorithm.
We evaluate FedMobile using a real-world mobile sensing dataset for influenza-like symptom recognition.
arXiv Detail & Related papers (2023-12-19T23:39:33Z) - What Makes Pre-Trained Visual Representations Successful for Robust
Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z) - Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z) - Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
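The core aggregation step of federated learning, training across edge devices without exchanging raw data, can be sketched as federated averaging; this is a generic textbook-style sketch, not an implementation from the article:

```python
# Hedged sketch of FedAvg: the server combines locally trained model
# parameters (flat lists of floats) weighted by each client's dataset
# size. Raw client data never leaves the device.

def federated_average(client_weights, client_sizes):
    """Return the size-weighted mean of the clients' parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(n_params)
    ]
```

Each communication round, clients train locally, upload parameters, and receive this average back, which is the pattern the signal-processing framework in the article builds on.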
arXiv Detail & Related papers (2021-03-31T15:14:39Z) - EaZy Learning: An Adaptive Variant of Ensemble Learning for Fingerprint
Liveness Detection [14.99677459192122]
Fingerprint liveness detection mechanisms perform well under the within-dataset environment but fail miserably under cross-sensor and cross-dataset settings.
To enhance the generalization abilities, robustness and the interoperability of the fingerprint spoof detectors, the learning models need to be adaptive towards the data.
We propose a generic model, EaZy learning which can be considered as an adaptive midway between eager and lazy learning.
arXiv Detail & Related papers (2021-03-03T06:40:19Z) - TapNet: The Design, Training, Implementation, and Applications of a
Multi-Task Learning CNN for Off-Screen Mobile Input [75.05709030478073]
We present the design, training, implementation and applications of TapNet, a multi-task network that detects tapping on the smartphone.
TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location.
arXiv Detail & Related papers (2021-02-18T00:45:41Z) - CLRGaze: Contrastive Learning of Representations for Eye Movement
Signals [0.0]
We learn feature vectors of eye movements in a self-supervised manner.
We adopt a contrastive learning approach and propose a set of data transformations that encourage a deep neural network to discern salient and granular gaze patterns.
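The contrastive recipe above, transform a gaze sequence two ways and pull the resulting embeddings together, can be sketched minimally; the jitter transform and the similarity measure are illustrative assumptions, not CLRGaze's actual transformations:

```python
# Hedged sketch of two contrastive-learning ingredients for gaze data:
# a data transformation (positional jitter on (x, y) gaze coordinates)
# and the embedding similarity that the contrastive loss maximizes for
# augmented views of the same scanpath.
import math
import random

def augment_gaze(xy, jitter=0.01, seed=0):
    """One illustrative transformation: small positional jitter."""
    rng = random.Random(seed)
    return [(x + rng.uniform(-jitter, jitter),
             y + rng.uniform(-jitter, jitter)) for x, y in xy]

def cosine_similarity(a, b):
    """Cosine similarity between two flat embedding vectors."""
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    return dot / (na * nb)
```

In a full pipeline, a deep network embeds both augmented views and a contrastive loss (e.g. NT-Xent) pushes same-scanpath similarities up and different-scanpath similarities down.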
arXiv Detail & Related papers (2020-10-25T06:12:06Z) - Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision
Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in vision-sensor modality (videos)
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
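The teacher-student transfer described above rests on knowledge distillation; a minimal generic sketch of the distillation objective follows (temperature-softened cross-entropy), which is the standard formulation rather than SAKDN's full adaptive loss:

```python
# Hedged sketch of the core knowledge-distillation loss: cross-entropy
# between temperature-softened teacher and student distributions.
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a flat list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student distribution against the softened
    teacher distribution; minimized when the student matches the teacher."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))
```

Here the wearable-sensor networks would play the teacher role and the RGB-video network the student, with this loss transferring the teachers' soft predictions.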
arXiv Detail & Related papers (2020-09-01T03:38:31Z) - A Framework for Behavioral Biometric Authentication using Deep Metric
Learning on Mobile Devices [17.905483523678964]
We present a new framework to incorporate training on battery-powered mobile devices, so private data never leaves the device and training can be flexibly scheduled to adapt the behavioral patterns at runtime.
Experiments demonstrate authentication accuracy over 95% on three public datasets, a 15% gain over multi-class classification with less data, and robustness against brute-force and side-channel attacks with 99% and 90% success, respectively.
Our results indicate that training consumes lower energy than watching videos and slightly higher energy than playing games.
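Deep metric learning, as used in this on-device framework, is commonly trained with a triplet objective; the following is a generic sketch of that loss, not the paper's specific architecture:

```python
# Hedged sketch of the triplet loss used in deep metric learning:
# embeddings of the same user (anchor, positive) should be closer than
# embeddings of different users (anchor, negative) by at least `margin`.

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Squared-Euclidean triplet loss over flat embedding vectors."""
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    return max(0.0, d_pos - d_neg + margin)
```

Because the objective compares distances rather than predicting user classes, new users can be enrolled without retraining a classification head, which suits on-device adaptation of behavioral patterns.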
arXiv Detail & Related papers (2020-05-26T17:56:20Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
Since training on the resulting enlarged dataset is costly, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.