A multimodal dataset for understanding the impact of mobile phones on remote online virtual education
- URL: http://arxiv.org/abs/2412.14195v2
- Date: Thu, 19 Jun 2025 15:11:05 GMT
- Title: A multimodal dataset for understanding the impact of mobile phones on remote online virtual education
- Authors: Roberto Daza, Alvaro Becerra, Ruth Cobos, Julian Fierrez, Aythami Morales,
- Abstract summary: The IMPROVE dataset is a multimodal resource designed to evaluate the effects of mobile phone usage on learners during online education. It includes behavioral, biometric, physiological, and academic performance data collected from 120 learners. The dataset is publicly available for research through GitHub and Science Data Bank.
- Score: 13.616038134322435
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents the IMPROVE dataset, a multimodal resource designed to evaluate the effects of mobile phone usage on learners during online education. It includes behavioral, biometric, physiological, and academic performance data collected from 120 learners divided into three groups with different levels of phone interaction, enabling the analysis of the impact of mobile phone usage and related phenomena such as nomophobia. A setup involving 16 synchronized sensors -- including EEG, eye tracking, video cameras, smartwatches, and keystroke dynamics -- was used to monitor learner activity during 30-minute sessions involving educational videos, document reading, and multiple-choice tests. Mobile phone usage events, including both controlled interventions and uncontrolled interactions, were labeled by supervisors and refined through a semi-supervised re-labeling process. Technical validation confirmed signal quality, and statistical analyses revealed biometric changes associated with phone usage. The dataset is publicly available for research through GitHub and Science Data Bank, with synchronized recordings from three platforms (edBB, edX, and LOGGE), provided in standard formats (.csv, .mp4, .wav, and .tsv), and accompanied by a detailed guide.
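Because the recordings are distributed in plain formats, working with the dataset is largely a matter of loading and time-aligning the streams. Below is a minimal loading sketch, assuming hypothetical file names and a shared Unix-time `timestamp` column in each stream; the actual directory layout and column names are documented in the guide that accompanies the release.

```python
import pandas as pd

# Hypothetical file names; consult the dataset guide shipped with the
# GitHub / Science Data Bank release for the real layout.
eeg = pd.read_csv("learner_001/eeg.csv")              # EEG channels
gaze = pd.read_csv("learner_001/gaze.tsv", sep="\t")  # eye-tracking samples

# Assuming each stream carries a shared Unix-time column "timestamp",
# the modalities can be aligned with a nearest-neighbor time join.
eeg = eeg.sort_values("timestamp")
gaze = gaze.sort_values("timestamp")
merged = pd.merge_asof(
    gaze, eeg, on="timestamp",
    direction="nearest",
    tolerance=0.1,  # only accept matches within 100 ms
)
print(merged.head())
```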
Related papers
- CADDI: An in-Class Activity Detection Dataset using IMU data from low-cost sensors [3.3860149185538613]
We present a novel dataset for in-class activity detection using affordable IMU sensors.
The dataset comprises 19 diverse activities, both instantaneous and continuous, performed by 12 participants in typical classroom scenarios.
It includes accelerometer, gyroscope, rotation vector data, and synchronized stereo images, offering a comprehensive resource for developing multimodal algorithms using sensor and visual data.
arXiv Detail & Related papers (2025-03-04T18:29:57Z)
- Representation Learning for Wearable-Based Applications in the Case of Missing Data [20.37256375888501]
Learning representations from multimodal sensor data in real-world environments remains challenging due to low data quality and limited data annotations.
We investigate representation learning for imputing missing wearable data and compare it with state-of-the-art statistical approaches.
Our study provides insights for the design and development of masking-based self-supervised learning tasks.
arXiv Detail & Related papers (2024-01-08T08:21:37Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- ContextLabeler Dataset: physical and virtual sensors data collected from smartphone usage in-the-wild [7.310043452300736]
This paper describes a data collection campaign and the resulting dataset derived from smartphone sensors.
The collected dataset represents a useful source of real data to both define and evaluate a broad set of novel context-aware solutions.
arXiv Detail & Related papers (2023-07-07T13:28:29Z)
- Your Identity is Your Behavior -- Continuous User Authentication based on Machine Learning and Touch Dynamics [0.0]
This research used a dataset of touch dynamics collected from 40 subjects using the LG V30+.
The participants played several mobile games, including Diep.io, Slither, and Minecraft, for 10 minutes each.
The results showed that all three machine learning algorithms evaluated were able to effectively classify users based on their individual touch dynamics.
arXiv Detail & Related papers (2023-04-24T13:45:25Z)
- Digital Fingerprinting of Microstructures [44.139970905896504]
Finding efficient means of fingerprinting microstructural information is a critical step towards harnessing data-centric machine learning approaches.
Here, we consider microstructure classification and utilise the resulting features over a range of related machine learning tasks.
In particular, methods that leverage transfer learning with convolutional neural networks (CNNs), pretrained on the ImageNet dataset, are generally shown to outperform other methods.
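The exact pipeline is not reproduced here, but the pattern the summary describes (reusing ImageNet-pretrained CNN activations as microstructure fingerprints) looks roughly like the sketch below; the choice of ResNet-18 and the input file name are assumptions for illustration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load an ImageNet-pretrained backbone and drop its classification head,
# so the forward pass returns the 512-d penultimate features.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()
model.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("micrograph.png").convert("RGB")  # hypothetical input
with torch.no_grad():
    fingerprint = model(preprocess(img).unsqueeze(0)).squeeze(0)
# `fingerprint` can now feed a downstream classifier or similarity search.
```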
arXiv Detail & Related papers (2022-03-25T15:40:44Z)
- Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke dynamics.
arXiv Detail & Related papers (2022-03-14T17:05:59Z)
- 2021 BEETL Competition: Advancing Transfer Learning for Subject Independence & Heterogenous EEG Data Sets [89.84774119537087]
We design two transfer learning challenges around diagnostics and Brain-Computer Interfacing (BCI).
Task 1 is centred on medical diagnostics, addressing automatic sleep stage annotation across subjects.
Task 2 is centred on BCI, addressing motor imagery decoding across both subjects and data sets.
arXiv Detail & Related papers (2022-02-14T12:12:20Z)
- Motivating Learners in Multi-Orchestrator Mobile Edge Learning: A Stackelberg Game Approach [54.28419430315478]
Mobile Edge Learning (MEL) enables distributed training of machine learning models over heterogeneous edge devices.
In MEL, training performance deteriorates without sufficient training data or computing resources.
We propose an incentive mechanism, where we formulate the orchestrators-learners interactions as a 2-round Stackelberg game.
arXiv Detail & Related papers (2021-09-25T17:27:48Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- EaZy Learning: An Adaptive Variant of Ensemble Learning for Fingerprint Liveness Detection [14.99677459192122]
Fingerprint liveness detection mechanisms perform well in within-dataset settings but fail miserably under cross-sensor and cross-dataset conditions.
To enhance the generalization ability, robustness, and interoperability of fingerprint spoof detectors, the learning models need to adapt to the data.
We propose a generic model, EaZy learning, which can be considered an adaptive midway between eager and lazy learning.
arXiv Detail & Related papers (2021-03-03T06:40:19Z)
- TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input [75.05709030478073]
We present the design, training, implementation and applications of TapNet, a multi-task network that detects tapping on the smartphone.
TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location.
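As an illustration of the multi-task idea only (not the published TapNet architecture), a shared convolutional trunk over an IMU window can feed separate heads for tap direction and tap location; all shapes and layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskTapNet(nn.Module):
    """Shared 1-D CNN trunk with a classification head (tap direction)
    and a regression head (x, y tap location)."""
    def __init__(self, channels=6, num_directions=4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.direction_head = nn.Linear(64, num_directions)
        self.location_head = nn.Linear(64, 2)

    def forward(self, x):  # x: (batch, channels, time)
        z = self.trunk(x)
        return self.direction_head(z), self.location_head(z)

model = MultiTaskTapNet()
logits, xy = model(torch.randn(8, 6, 128))  # dummy batch of IMU windows
```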
arXiv Detail & Related papers (2021-02-18T00:45:41Z)
- Anomaly Detection in Video via Self-Supervised and Multi-Task Learning [113.81927544121625]
Anomaly detection in video is a challenging computer vision problem.
In this paper, we approach anomalous event detection in video through self-supervised and multi-task learning at the object level.
arXiv Detail & Related papers (2020-11-15T10:21:28Z)
- Resource-Constrained Federated Learning with Heterogeneous Labels and Models [1.4824891788575418]
We propose a framework with simple $\alpha$-weighted federated aggregation of scores, which leverages overlapping information gain across labels.
We also demonstrate the on-device capabilities of our proposed framework by experimenting with federated learning and inference across different iterations on a Raspberry Pi 2.
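As a rough illustration of score-level aggregation across devices whose label sets only partially overlap, the sketch below applies per-device $\alpha$ weights; the specific weighting rule is an assumption, not the paper's exact formulation.

```python
# Minimal alpha-weighted aggregation of per-device class scores.
def aggregate_scores(device_scores, alphas):
    """device_scores: one {label: score} dict per device;
    alphas: per-device weights (assumed positive)."""
    labels = set().union(*(s.keys() for s in device_scores))
    agg = {}
    for label in labels:
        # Only devices that actually hold this label contribute.
        holders = [(a, s[label]) for a, s in zip(alphas, device_scores)
                   if label in s]
        total = sum(a for a, _ in holders)
        agg[label] = sum(a * v for a, v in holders) / total
    return agg

scores = aggregate_scores(
    [{"walk": 0.9, "run": 0.1}, {"walk": 0.6, "sit": 0.4}],
    alphas=[0.5, 0.5],
)  # {'walk': 0.75, 'run': 0.1, 'sit': 0.4}
```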
arXiv Detail & Related papers (2020-11-06T06:23:47Z)
- CLRGaze: Contrastive Learning of Representations for Eye Movement Signals [0.0]
We learn feature vectors of eye movements in a self-supervised manner.
We adopt a contrastive learning approach and propose a set of data transformations that encourage a deep neural network to discern salient and granular gaze patterns.
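The summary points to signal-level transformations that produce two stochastic views of the same recording as a positive pair for a contrastive loss. A hypothetical sketch for 1-D gaze signals follows; the paper's actual transformation set is not reproduced here.

```python
import numpy as np

def jitter(x, sigma=0.05):
    """Add small Gaussian noise to the signal."""
    return x + np.random.normal(0.0, sigma, x.shape)

def random_crop(x, length):
    """Take a random temporal window of the given length."""
    start = np.random.randint(0, x.shape[-1] - length + 1)
    return x[..., start:start + length]

gaze = np.random.randn(2, 1000)          # dummy (x, y) gaze velocities
view_a = jitter(random_crop(gaze, 800))  # two augmented views of the same
view_b = jitter(random_crop(gaze, 800))  # recording form a positive pair
```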
arXiv Detail & Related papers (2020-10-25T06:12:06Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- A Framework for Behavioral Biometric Authentication using Deep Metric Learning on Mobile Devices [17.905483523678964]
We present a new framework that incorporates training on battery-powered mobile devices, so private data never leaves the device and training can be flexibly scheduled to adapt to behavioral patterns at runtime.
Experiments demonstrate authentication accuracy over 95% on three public datasets, a 15% gain over multi-class classification with less data, and robustness against brute-force and side-channel attacks with 99% and 90% success rates, respectively.
Our results indicate that training consumes less energy than watching videos and slightly more than playing games.
arXiv Detail & Related papers (2020-05-26T17:56:20Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
Since training with the enlarged dataset is computationally expensive, we propose a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.