Cybersickness Detection through Head Movement Patterns: A Promising
Approach
- URL: http://arxiv.org/abs/2402.02725v2
- Date: Mon, 26 Feb 2024 22:49:36 GMT
- Title: Cybersickness Detection through Head Movement Patterns: A Promising
Approach
- Authors: Masoud Salehi, Nikoo Javadpour, Brietta Beisner, Mohammadamin Sanaei,
Stephen B. Gilbert
- Abstract summary: This research investigates head movement patterns as a novel physiological marker for cybersickness detection.
Head movements provide a continuous, non-invasive measure that can be easily captured through the sensors embedded in all commercial VR headsets.
- Score: 1.1562071835482226
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the widespread adoption of Virtual Reality (VR) technology,
cybersickness remains a barrier for some users. This research investigates head
movement patterns as a novel physiological marker for cybersickness detection.
Unlike traditional markers, head movements provide a continuous, non-invasive
measure that can be easily captured through the sensors embedded in all
commercial VR headsets. We used a publicly available dataset from a VR
experiment involving 75 participants and analyzed head movements across six
axes. An extensive feature extraction process was then performed on the head
movement dataset and its derivatives, including velocity, acceleration, and
jerk. Three categories of features were extracted, encompassing statistical,
temporal, and spectral features. Subsequently, we employed the Recursive
Feature Elimination method to select the most important and effective features.
In a series of experiments, we trained a variety of machine learning
algorithms. The results demonstrate 76% accuracy and 83% precision in
predicting cybersickness from participants' head movements. This study's
contribution to the cybersickness literature lies in offering a preliminary
analysis of a new data source and providing insight into the relationship
between head movements and cybersickness.
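As a rough illustration (not the authors' implementation), the sketch below shows how the pipeline described in the abstract could look in Python: numerical differentiation of a head-movement signal to obtain velocity, acceleration, and jerk; extraction of simple statistical, temporal, and spectral features; and feature selection with scikit-learn's Recursive Feature Elimination. The sampling rate, window size, feature choices, and classifier are assumptions made for this example.

```python
# Illustrative sketch only: sampling rate, features, and classifier are
# assumptions, not the authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

FS = 90.0  # assumed headset sampling rate in Hz

def derivatives(signal, fs=FS):
    """Return velocity, acceleration, and jerk of a 1-D head-movement signal."""
    vel = np.gradient(signal) * fs
    acc = np.gradient(vel) * fs
    jerk = np.gradient(acc) * fs
    return vel, acc, jerk

def extract_features(window):
    """Statistical, temporal, and spectral features for one axis window."""
    feats = []
    for series in (window, *derivatives(window)):
        feats += [series.mean(), series.std(), np.ptp(series)]   # statistical
        feats += [np.mean(np.abs(np.diff(series)))]              # temporal
        spectrum = np.abs(np.fft.rfft(series))
        feats += [spectrum.argmax(), spectrum.mean()]            # spectral
    return np.array(feats)

# Placeholder data: windows of head movement on one axis with binary
# cybersickness labels (random here, real signals in the actual study).
rng = np.random.default_rng(0)
X = np.vstack([extract_features(w) for w in rng.normal(size=(200, 256))])
y = rng.integers(0, 2, size=200)

# Recursive Feature Elimination keeps the most informative features.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=8)
scores = cross_val_score(selector, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

In practice the same extraction would be applied to all six movement axes and the selected features passed to several classifiers, as described in the abstract.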
Related papers
- Real-time Cross-modal Cybersickness Prediction in Virtual Reality [2.865152517440773]
Cybersickness remains a significant barrier to the widespread adoption of immersive virtual reality (VR) experiences.
We propose a lightweight model that combines bio-signal features with a PP-TSN network for video feature extraction.
Our model, trained with a lightweight framework, was validated on a public dataset containing eye and head tracking data, physiological data, and VR video, and demonstrated state-of-the-art performance in cybersickness prediction.
arXiv Detail & Related papers (2025-01-02T11:41:43Z) - Mazed and Confused: A Dataset of Cybersickness, Working Memory, Mental Load, Physical Load, and Attention During a Real Walking Task in VR [11.021668923244803]
The relationship between cognitive activities, physical activities, and the familiar feeling of cybersickness is not well understood.
We collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and self-reported cybersickness severity, physical load, and mental load in VR.
arXiv Detail & Related papers (2024-09-10T22:41:14Z) - Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z) - Neural feels with neural fields: Visuo-tactile perception for in-hand
manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z) - VR-LENS: Super Learning-based Cybersickness Detection and Explainable
AI-Guided Deployment in Virtual Reality [1.9642496463491053]
This work presents an explainable artificial intelligence (XAI)-based framework VR-LENS for developing cybersickness detection ML models.
We first develop a novel super learning-based ensemble ML model for cybersickness detection.
Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets.
arXiv Detail & Related papers (2023-02-03T20:15:51Z) - Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action
Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - TruVR: Trustworthy Cybersickness Detection using Explainable Machine
Learning [1.9642496463491053]
Cybersickness can be characterized by nausea, vertigo, headache, eye strain, and other discomforts when using virtual reality (VR) systems.
The previously reported machine learning (ML) and deep learning (DL) algorithms for detecting (classification) and predicting (regression) VR cybersickness use black-box models.
We present three explainable machine learning (xML) models to detect and predict cybersickness.
arXiv Detail & Related papers (2022-09-12T13:55:13Z) - Overcoming the Domain Gap in Neural Action Representations [60.47807856873544]
3D pose data can now be reliably extracted from multi-view video sequences without manual intervention.
We propose to use it to guide the encoding of neural action representations together with a set of neural and behavioral augmentations.
To reduce the domain gap, during training, we swap neural and behavioral data across animals that seem to be performing similar actions.
arXiv Detail & Related papers (2021-12-02T12:45:46Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action
Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Deep Recurrent Encoder: A scalable end-to-end network to model brain
signals [122.1055193683784]
We propose an end-to-end deep learning architecture trained to predict the brain responses of multiple subjects at once.
We successfully test this approach on a large cohort of magnetoencephalography (MEG) recordings acquired during a one-hour reading task.
arXiv Detail & Related papers (2021-03-03T11:39:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.