Towards Cybersickness Severity Classification from VR Gameplay Videos Using Transfer Learning and Temporal Modeling
- URL: http://arxiv.org/abs/2510.10422v1
- Date: Sun, 12 Oct 2025 03:10:05 GMT
- Title: Towards Cybersickness Severity Classification from VR Gameplay Videos Using Transfer Learning and Temporal Modeling
- Authors: Jyotirmay Nag Setu, Kevin Desai, John Quarles
- Abstract summary: Cybersickness, marked by symptoms resembling motion sickness, continues to hinder widespread acceptance of virtual reality (VR). In this study, we utilize transfer learning to extract high-level visual features from VR gameplay videos using the InceptionV3 model pretrained on the ImageNet dataset. Our approach effectively leverages the time-series nature of video data, achieving a 68.4% classification accuracy for cybersickness severity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid advancement of virtual reality (VR) technology, its adoption across domains such as healthcare, education, and entertainment has grown significantly. However, the persistent issue of cybersickness, marked by symptoms resembling motion sickness, continues to hinder widespread acceptance of VR. While recent research has explored multimodal deep learning approaches leveraging data from integrated VR sensors like eye and head tracking, there remains limited investigation into the use of video-based features for predicting cybersickness. In this study, we address this gap by utilizing transfer learning to extract high-level visual features from VR gameplay videos using the InceptionV3 model pretrained on the ImageNet dataset. These features are then passed to a Long Short-Term Memory (LSTM) network to capture the temporal dynamics of the VR experience and predict cybersickness severity over time. Our approach effectively leverages the time-series nature of video data, achieving a 68.4% classification accuracy for cybersickness severity. This surpasses the performance of existing models trained solely on video data, providing a practical tool for VR developers to evaluate and mitigate cybersickness in virtual environments. Furthermore, this work lays the foundation for future research on video-based temporal modeling for enhancing user comfort in VR applications.
Related papers
- False Reality: Uncovering Sensor-induced Human-VR Interaction Vulnerability [15.246996684892348]
False Reality is a new attack threat to VR devices that requires no access to or modification of their software. We formalize these threats through an attack pathway framework and validate three representative pathways. Our findings provide valuable insights for enhancing the security and resilience of future VR systems.
arXiv Detail & Related papers (2025-08-11T14:47:23Z) - SeedVR2: One-Step Video Restoration via Diffusion Adversarial Post-Training [82.68200031146299]
We propose a one-step diffusion-based VR model, termed SeedVR2, which performs adversarial VR training against real data. To handle the challenging high-resolution VR within a single step, we introduce several enhancements to both model architecture and training procedures.
arXiv Detail & Related papers (2025-06-05T17:51:05Z) - Towards Consumer-Grade Cybersickness Prediction: Multi-Model Alignment for Real-Time Vision-Only Inference [3.4667973471411853]
Cybersickness is a major obstacle to the widespread adoption of immersive virtual reality (VR). We propose a scalable, deployable framework for personalized cybersickness prediction. Our framework supports real-time applications, making it ideal for integration into consumer-grade VR platforms.
arXiv Detail & Related papers (2025-01-02T11:41:43Z) - Mazed and Confused: A Dataset of Cybersickness, Working Memory, Mental Load, Physical Load, and Attention During a Real Walking Task in VR [11.021668923244803]
The relationship between cognitive activities, physical activities, and the familiar feeling of cybersickness is not well understood.
We collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and self-reported cybersickness severity, physical load, and mental load in VR.
arXiv Detail & Related papers (2024-09-10T22:41:14Z) - Cybersickness Detection through Head Movement Patterns: A Promising Approach [1.1562071835482226]
This research investigates head movement patterns as a novel physiological marker for cybersickness detection.
Head movements provide a continuous, non-invasive measure that can be easily captured through the sensors embedded in all commercial VR headsets.
arXiv Detail & Related papers (2024-02-05T04:49:59Z) - Deep Motion Masking for Secure, Usable, and Scalable Real-Time Anonymization of Virtual Reality Motion Data [49.68609500290361]
Recent studies have demonstrated that the motion tracking "telemetry" data used by nearly all VR applications is as uniquely identifiable as a fingerprint scan.
We present in this paper a state-of-the-art VR identification model that can convincingly bypass known defensive countermeasures.
arXiv Detail & Related papers (2023-11-09T01:34:22Z) - VR.net: A Real-world Dataset for Virtual Reality Motion Sickness Research [33.092692299254814]
We introduce VR.net, a dataset offering approximately 12 hours of gameplay video from ten real-world games spanning ten diverse genres.
For each video frame, a rich set of motion sickness-related labels, such as camera/object movement, depth field, and motion flow, are accurately assigned.
We utilize a tool to automatically and precisely extract ground truth data from 3D engines' rendering pipelines without accessing VR games' source code.
arXiv Detail & Related papers (2023-06-06T03:43:11Z) - Towards a Pipeline for Real-Time Visualization of Faces for VR-based Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle for a realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z) - Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z) - Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z) - Wireless Edge-Empowered Metaverse: A Learning-Based Incentive Mechanism for Virtual Reality [102.4151387131726]
We propose a learning-based Incentive Mechanism framework for VR services in the Metaverse.
First, we propose the quality of perception as the metric for VR users in the virtual world.
Second, for quick trading of VR services between VR users (i.e., buyers) and VR SPs (i.e., sellers), we design a double Dutch auction mechanism.
Third, for auction communication reduction, we design a deep reinforcement learning-based auctioneer to accelerate this auction process.
arXiv Detail & Related papers (2021-11-07T13:02:52Z) - Guidelines for the Development of Immersive Virtual Reality Software for Cognitive Neuroscience and Neuropsychology: The Development of Virtual Reality Everyday Assessment Lab (VR-EAL) [0.0]
This study offers guidelines for the development of VR software in cognitive neuroscience and neuropsychology.
Twenty-five participants aged between 20 and 45 years with 12-16 years of full-time education evaluated various versions of VR-EAL.
The final version of VR-EAL achieved high scores in every sub-score of the VRNQ and exceeded its parsimonious cut-offs.
arXiv Detail & Related papers (2021-01-20T14:55:57Z) - Learning-based Prediction and Uplink Retransmission for Wireless Virtual Reality (VR) Network [29.640073851481066]
In this paper, we use offline and online learning algorithms to predict viewpoint of the VR user using real VR dataset.
Our proposed online learning algorithm for the uplink wireless VR network with the proactive retransmission scheme exhibits only about 5% prediction error.
arXiv Detail & Related papers (2020-12-16T18:31:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.