Biometrics and Behavior Analysis for Detecting Distractions in e-Learning
- URL: http://arxiv.org/abs/2405.15434v3
- Date: Mon, 2 Sep 2024 07:18:16 GMT
- Title: Biometrics and Behavior Analysis for Detecting Distractions in e-Learning
- Authors: Álvaro Becerra, Javier Irigoyen, Roberto Daza, Ruth Cobos, Aythami Morales, Julian Fierrez, Mutlu Cukurova
- Abstract summary: This article explores computer vision approaches to detect abnormal head pose during e-learning sessions.
We propose an approach designed to detect deviations in head posture from the average observed during a learner's session.
- Score: 12.49745170391342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we explore computer vision approaches to detect abnormal head pose during e-learning sessions, and we introduce a study on the effects of mobile phone usage during these sessions. We utilize behavioral data collected from 120 learners monitored while participating in MOOC learning sessions. Our study focuses on the influence of phone-usage events on behavior and physiological responses, specifically attention, heart rate, and meditation, before, during, and after phone usage. Additionally, we propose an approach for estimating head pose events using images taken by the webcam during the MOOC learning sessions to detect phone-usage events. Our hypothesis is that head posture undergoes significant changes when learners interact with a mobile phone, contrasting with the typical behavior seen when learners face a computer during e-learning sessions. We propose an approach designed to detect deviations in head posture from the average observed during a learner's session, operating as a semi-supervised method. This system flags events indicating alterations in head posture for subsequent human review and selection of mobile phone usage occurrences, with a sensitivity of over 90%.
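The semi-supervised detector described above can be sketched in a few lines: compute each session's average head pose, flag frames that deviate from it by more than a threshold, and group consecutive flagged frames into candidate events for human review. The function below is a minimal illustration under assumed details (z-score thresholding on yaw/pitch, a minimum event length); the paper does not specify these parameters, so the names and defaults here are hypothetical.

```python
import numpy as np

def flag_head_pose_events(yaw, pitch, k=2.5, min_len=3):
    """Flag candidate phone-usage events from per-frame head-pose angles.

    yaw, pitch : 1-D arrays of head-pose angles (degrees) estimated from
        webcam frames over one learner's session.
    k : deviation threshold, in session standard deviations (assumed value).
    min_len : minimum run of consecutive anomalous frames to report as an
        event, suppressing single-frame noise (assumed value).
    Returns a list of (start, end) frame-index pairs, end exclusive.
    """
    yaw = np.asarray(yaw, dtype=float)
    pitch = np.asarray(pitch, dtype=float)
    # Per-session baseline: z-score each angle against the session mean/std.
    z_yaw = (yaw - yaw.mean()) / (yaw.std() + 1e-9)
    z_pitch = (pitch - pitch.mean()) / (pitch.std() + 1e-9)
    anomalous = (np.abs(z_yaw) > k) | (np.abs(z_pitch) > k)
    # Group consecutive anomalous frames into candidate events.
    events, start = [], None
    for i, flagged in enumerate(anomalous):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(anomalous) - start >= min_len:
        events.append((start, len(anomalous)))
    return events

# Example: a learner faces the screen (pitch ~ 0) except for a downward
# glance (pitch = -40) over frames 40-49, as when looking at a phone.
pitch = np.zeros(100)
pitch[40:50] = -40.0
yaw = np.zeros(100)
print(flag_head_pose_events(yaw, pitch))  # [(40, 50)]
```

In the paper's pipeline the flagged intervals are not treated as final detections; they are passed to a human reviewer who confirms which ones correspond to actual phone usage, which is what makes the method semi-supervised.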
Related papers
- Screen Matters: Cognitive and Behavioral Divergence Between Smartphone-Native and Computer-Native Youth [0.0]
We analyzed data from a diverse sample of 824 students aged 11-17. Results suggest moderate but statistically significant differences in sustained attention, perceived frustration, and creative output.
arXiv Detail & Related papers (2025-07-20T20:04:44Z) - AI-based Multimodal Biometrics for Detecting Smartphone Distractions: Application to Online Learning [13.124145425838792]
We propose an AI-based approach that leverages physiological signals and head pose data to detect phone use. Our results show that single biometric signals, such as brain waves or heart rate, offer limited accuracy, while head pose alone achieves 87%. A multimodal model combining all signals reaches 91% accuracy, highlighting the benefits of integration.
arXiv Detail & Related papers (2025-06-20T11:37:19Z) - Emergent Active Perception and Dexterity of Simulated Humanoids from Visual Reinforcement Learning [69.71072181304066]
We introduce Perceptive Dexterous Control (PDC), a framework for vision-driven whole-body control with simulated humanoids. PDC operates solely on egocentric vision for task specification, enabling object search, target placement, and skill selection through visual cues. We show that training from scratch with reinforcement learning can produce emergent behaviors such as active search.
arXiv Detail & Related papers (2025-05-18T07:33:31Z) - IMPROVE: Impact of Mobile Phones on Remote Online Virtual Education [13.616038134322435]
This work presents the IMPROVE dataset, designed to evaluate the effects of mobile phone usage on learners during online education.
The dataset not only assesses academic performance and subjective learner feedback but also captures biometric, behavioral, and physiological signals.
arXiv Detail & Related papers (2024-12-13T11:29:05Z) - Auto Detecting Cognitive Events Using Machine Learning on Pupillary Data [0.0]
Pupil size is a valuable indicator of cognitive workload, reflecting changes in attention and arousal governed by the autonomic nervous system.
This study explores the potential of using machine learning to automatically detect cognitive events experienced by individuals.
arXiv Detail & Related papers (2024-10-18T04:54:46Z) - Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Improving automatic detection of driver fatigue and distraction using machine learning [0.0]
Driver fatigue and distracted driving are important factors in traffic accidents.
We present techniques for simultaneously detecting fatigue and distracted driving behaviors using vision-based and machine learning-based approaches.
arXiv Detail & Related papers (2024-01-04T06:33:46Z) - Assessing cognitive function among older adults using machine learning and wearable device data: a feasibility study [3.0872517448897465]
We developed prediction models to differentiate older adults with normal cognition from those with poor cognition.
Activity and sleep parameters were also more strongly associated with processing speed, working memory, and attention than with other cognitive domains.
arXiv Detail & Related papers (2023-08-28T00:07:55Z) - Rare Life Event Detection via Mobile Sensing Using Multi-Task Learning [1.0995444037562332]
Rare life events significantly impact mental health, and their detection in behavioral studies is a crucial step towards health-based interventions.
We envision that mobile sensing data can be used to detect these anomalies.
In this paper, we first investigate Granger causality between life events and human behavior using sensing data.
arXiv Detail & Related papers (2023-05-31T17:29:24Z) - BI AVAN: Brain inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z) - Learning Language and Multimodal Privacy-Preserving Markers of Mood from Mobile Data [74.60507696087966]
Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.
One promising data source to help monitor human behavior is daily smartphone usage.
We study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.
arXiv Detail & Related papers (2021-06-24T17:46:03Z) - Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named the OnFocus Detection In the Wild (OFDIW) dataset.
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z) - What Can You Learn from Your Muscles? Learning Visual Representation from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms a visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.