Deep Learning-Based Visual Fatigue Detection Using Eye Gaze Patterns in VR
- URL: http://arxiv.org/abs/2510.12994v1
- Date: Tue, 14 Oct 2025 21:13:10 GMT
- Title: Deep Learning-Based Visual Fatigue Detection Using Eye Gaze Patterns in VR
- Authors: Numan Zafar, Johnathan Locke, Shafique Ahmad Chaudhry
- Abstract summary: Prolonged exposure to virtual reality (VR) systems leads to visual fatigue, impairing user comfort, performance, and safety. Existing fatigue detection approaches rely on subjective questionnaires or intrusive physiological signals, such as EEG, heart rate, or eye-blink count. This paper introduces a deep learning-based study for detecting visual fatigue using continuous eye-gaze trajectories recorded in VR.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prolonged exposure to virtual reality (VR) systems leads to visual fatigue, impairing user comfort, performance, and safety, particularly in high-stakes or long-duration applications. Existing fatigue detection approaches rely on subjective questionnaires or intrusive physiological signals, such as EEG, heart rate, or eye-blink count, which limit their scalability and real-time applicability. This paper introduces a deep learning-based study for detecting visual fatigue using continuous eye-gaze trajectories recorded in VR. We use the GazeBaseVR dataset comprising binocular eye-tracking data from 407 participants across five immersive tasks, extract cyclopean eye-gaze angles, and evaluate six deep classifiers. Our results demonstrate that EKYT achieves up to 94% accuracy, particularly in tasks demanding high visual attention, such as video viewing and text reading. We further analyze gaze variance and subjective fatigue measures, indicating significant behavioral differences between fatigued and non-fatigued conditions. These findings establish eye-gaze dynamics as a reliable and nonintrusive modality for continuous fatigue detection in immersive VR, offering practical implications for adaptive human-computer interactions.
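The abstract describes extracting cyclopean eye-gaze angles from binocular eye-tracking data. A minimal sketch of one common way to do this, under the assumption that each eye's gaze is given as a unit direction vector in a head-fixed frame (+x right, +y up, +z forward); the function name and coordinate convention are illustrative, not taken from the paper:

```python
import numpy as np

def cyclopean_gaze_angles(left_dir: np.ndarray, right_dir: np.ndarray) -> np.ndarray:
    """Convert binocular gaze direction vectors (N, 3) into cyclopean
    horizontal/vertical gaze angles in degrees (N, 2)."""
    # Average the two unit gaze vectors and renormalize to obtain the
    # cyclopean (mid-eye) gaze direction.
    cyclopean = (left_dir + right_dir) / 2.0
    cyclopean /= np.linalg.norm(cyclopean, axis=1, keepdims=True)
    x, y, z = cyclopean[:, 0], cyclopean[:, 1], cyclopean[:, 2]
    # Horizontal (azimuth) and vertical (elevation) angles, assuming
    # +z points out of the head, +x right, +y up.
    horizontal = np.degrees(np.arctan2(x, z))
    vertical = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
    return np.stack([horizontal, vertical], axis=1)
```

The resulting (horizontal, vertical) angle time series is the kind of continuous gaze trajectory a sequence classifier such as EKYT can consume.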
Related papers
- EyeSeg: An Uncertainty-Aware Eye Segmentation Framework for AR/VR [58.33693755009173]
EyeSeg is an uncertainty-aware eye segmentation framework for augmented reality (AR) and virtual reality (VR). We show that EyeSeg achieves segmentation improvements in MIoU, E1, F1, and ACC, surpassing previous approaches.
arXiv Detail & Related papers (2025-07-13T14:33:10Z) - Non-Contact Health Monitoring During Daily Personal Care Routines [33.93756501373886]
Remote photoplethysmography (rPPG) enables non-contact, continuous monitoring of physiological signals. We present the first long-term rPPG learning dataset containing 240 synchronized RGB and infrared (IR) facial videos from 21 participants. Experiments demonstrate that combining RGB and IR video inputs improves the accuracy and robustness of non-contact physiological monitoring.
arXiv Detail & Related papers (2025-06-11T13:29:21Z) - Imagine, Verify, Execute: Memory-guided Agentic Exploration with Vision-Language Models [81.08295968057453]
We present IVE, an agentic exploration framework inspired by human curiosity. We evaluate IVE in both simulated and real-world tabletop environments.
arXiv Detail & Related papers (2025-05-12T17:59:11Z) - Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence [69.86946427928511]
We investigate the internal mechanisms driving hallucination in large vision-language models (LVLMs). We introduce Vision-aware Head Divergence (VHD), a metric that quantifies the sensitivity of attention head outputs to visual context. We propose Vision-aware Head Reinforcement (VHR), a training-free approach to mitigate hallucination by enhancing the role of vision-aware attention heads.
arXiv Detail & Related papers (2024-12-18T15:29:30Z) - Exploring Eye Tracking to Detect Cognitive Load in Complex Virtual Reality Training [11.83314968015781]
We present an ongoing study to detect users' cognitive load using an eye-tracking-based machine learning approach.
We developed a VR training system for cold spray and tested it with 22 participants.
Preliminary analysis demonstrates the feasibility of using eye-tracking to detect cognitive load in complex VR experiences.
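The cognitive-load study above feeds eye-tracking signals into a machine learning model. The paper does not publish its feature set, so the sketch below uses common hypothetical choices (I-VT fixation detection, fixation counts, pupil diameter) purely to illustrate what such a feature extractor might look like:

```python
import numpy as np

def gaze_features(timestamps, gaze_xy, pupil_diam, velocity_thresh=30.0):
    """Extract simple eye-tracking features via I-VT fixation detection.

    timestamps: (N,) seconds; gaze_xy: (N, 2) degrees; pupil_diam: (N,) mm.
    """
    dt = np.diff(timestamps)
    # Angular gaze velocity in deg/s between consecutive samples.
    vel = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    fixating = vel < velocity_thresh  # I-VT: below threshold = fixation
    # Count contiguous runs of fixation samples.
    edges = np.diff(fixating.astype(int))
    n_fixations = int(fixating[0]) + int((edges == 1).sum())
    return {
        "n_fixations": n_fixations,
        "fixation_ratio": float(fixating.mean()),
        "mean_pupil_diam": float(pupil_diam.mean()),
    }
```

Feature vectors of this kind, computed over sliding windows, are what a downstream classifier would be trained on to predict cognitive load.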
arXiv Detail & Related papers (2024-11-18T16:44:19Z) - Real-Time Drowsiness Detection Using Eye Aspect Ratio and Facial Landmark Detection [0.0]
This study presents a real-time system designed to detect drowsiness using the Eye Aspect Ratio (EAR) and facial landmark detection techniques.
By establishing a threshold for the EAR, the system identifies when eyes are closed, indicating potential drowsiness.
Experiments show that the system reliably detects drowsiness with high accuracy while maintaining low computational demands.
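The Eye Aspect Ratio described above has a standard closed form (Soukupova and Cech, 2016): the mean of the two vertical lid distances divided by the horizontal eye width. A minimal sketch, assuming the six 2-D eye landmarks are ordered p1..p6 with p1/p4 the horizontal corners, p2/p3 the upper lid, and p5/p6 the lower lid; the 0.2 threshold is a commonly used illustrative value, not necessarily the one from this paper:

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six 2-D eye landmarks, shape (6, 2), ordered p1..p6."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # ||p2 - p6||
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # ||p3 - p5||
    horizontal = np.linalg.norm(eye[0] - eye[3])   # ||p1 - p4||
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def is_closed(ear: float, threshold: float = 0.2) -> bool:
    # A frame counts as "eyes closed" when EAR drops below the threshold;
    # sustained low EAR across consecutive frames signals drowsiness.
    return ear < threshold
```

Because EAR involves only three distance computations per eye, it keeps per-frame cost low, which is why such systems run in real time.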
arXiv Detail & Related papers (2024-08-11T17:34:24Z) - Analyzing Participants' Engagement during Online Meetings Using Unsupervised Remote Photoplethysmography with Behavioral Features [50.82725748981231]
Engagement measurement finds applications in healthcare, education, and services.
Physiological and behavioral features are both viable, but traditional physiological measurement is impractical because it requires contact sensors.
We demonstrate the feasibility of unsupervised remote photoplethysmography (rPPG) as an alternative to contact sensors.
arXiv Detail & Related papers (2024-04-05T20:39:16Z) - DeepMetricEye: Metric Depth Estimation in Periocular VR Imagery [4.940128337433944]
We propose a lightweight framework derived from the U-Net 3+ deep learning backbone to estimate measurable periocular depth maps.
Our method reconstructs three-dimensional periocular regions, providing a metric basis for related light stimulus calculation protocols and medical guidelines.
Evaluated on a sample of 36 participants, our method exhibited notable efficacy in both the periocular global precision evaluation and pupil diameter measurement.
arXiv Detail & Related papers (2023-11-13T10:55:05Z) - Virtual-Reality based Vestibular Ocular Motor Screening for Concussion Detection using Machine-Learning [0.0]
Sport-related concussion (SRC) depends on sensory information from visual, vestibular, and somatosensory systems.
Current clinical administration of Vestibular/Ocular Motor Screening (VOMS) is subjective and deviates among administrators.
With the advancement of technology, virtual reality (VR) can be utilized to advance the standardization of the VOMS.
arXiv Detail & Related papers (2022-10-13T02:09:21Z) - A Deep Learning Approach for the Segmentation of Electroencephalography Data in Eye Tracking Applications [56.458448869572294]
We introduce DETRtime, a novel framework for time-series segmentation of EEG data.
Our end-to-end deep learning-based framework brings advances in Computer Vision to the forefront.
Our model generalizes well in the task of EEG sleep stage segmentation.
arXiv Detail & Related papers (2022-06-17T10:17:24Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.