Mazed and Confused: A Dataset of Cybersickness, Working Memory, Mental Load, Physical Load, and Attention During a Real Walking Task in VR
- URL: http://arxiv.org/abs/2409.06898v1
- Date: Tue, 10 Sep 2024 22:41:14 GMT
- Title: Mazed and Confused: A Dataset of Cybersickness, Working Memory, Mental Load, Physical Load, and Attention During a Real Walking Task in VR
- Authors: Jyotirmay Nag Setu, Joshua M Le, Ripan Kumar Kundu, Barry Giesbrecht, Tobias Höllerer, Khaza Anuarul Hoque, Kevin Desai, John Quarles
- Abstract summary: The relationship between cognitive activities, physical activities, and familiar feelings of cybersickness is not well understood.
We collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and self-reported cybersickness severity, physical load, and mental load in VR.
- Score: 11.021668923244803
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Virtual Reality (VR) is quickly establishing itself in various industries, including training, education, medicine, and entertainment, in which users are frequently required to carry out multiple complex cognitive and physical activities. However, the relationship between cognitive activities, physical activities, and familiar feelings of cybersickness is not well understood and thus can be unpredictable for developers. Researchers have previously provided labeled datasets for predicting cybersickness while users are stationary, but there have been few labeled datasets on cybersickness while users are physically walking. Thus, from 39 participants, we collected head orientation, head position, eye tracking, images, physiological readings from external sensors, and the self-reported cybersickness severity, physical load, and mental load in VR. Throughout the data collection, participants navigated mazes via real walking and performed tasks challenging their attention and working memory. To demonstrate the dataset's utility, we conducted a case study of training classifiers in which we achieved 95% accuracy for cybersickness severity classification. The noteworthy performance of the straightforward classifiers makes this dataset ideal for future researchers to develop cybersickness detection and reduction models. To better understand the features that helped with classification, we performed SHAP (SHapley Additive exPlanations) analysis, highlighting the importance of eye tracking and physiological measures for cybersickness prediction while walking. This open dataset allows future researchers to study the connection between cybersickness and cognitive loads and develop prediction models. This dataset will empower future VR developers to design efficient and effective Virtual Environments by improving cognitive load management and minimizing cybersickness.
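As a rough illustration of the case study the abstract describes, the sketch below trains an off-the-shelf classifier on severity labels and runs a SHAP analysis. The file name, feature columns, label column, and random-forest choice are placeholders and assumptions, not the paper's actual schema or model.

```python
# Hedged sketch of a cybersickness-severity classifier plus SHAP analysis.
# All column names, the CSV path, and the model choice are assumptions.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

features = ["pupil_diameter", "gaze_velocity", "heart_rate", "gsr", "head_yaw_rate"]
df = pd.read_csv("mazed_and_confused.csv")  # placeholder file name

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["severity"], test_size=0.2, random_state=42, stratify=df["severity"]
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# SHAP surfaces which features drive the predictions, echoing the paper's
# finding that eye-tracking and physiological measures matter most while walking.
explainer = shap.TreeExplainer(clf)
shap.summary_plot(explainer.shap_values(X_test), X_test)
```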
Related papers
- Exploring Eye Tracking to Detect Cognitive Load in Complex Virtual Reality Training [11.83314968015781]
We present an ongoing study to detect users' cognitive load using an eye-tracking-based machine learning approach.
We developed a VR training system for cold spray and tested it with 22 participants.
Preliminary analysis demonstrates the feasibility of using eye-tracking to detect cognitive load in complex VR experiences.
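As a hedged sketch of what an eye-tracking feature pipeline for cognitive-load detection can look like (the gaze format, sampling rate, velocity threshold, and feature choices are illustrative assumptions, not this paper's method):

```python
# Sketch: simple gaze-trace features often used as cognitive-load proxies.
# Thresholds and the input format are illustrative assumptions.
import numpy as np

def eye_features(t, gaze_deg, pupil_mm, vel_thresh_deg_s=30.0):
    """Fixation ratio (I-VT style) and pupil statistics for one gaze trace."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) / np.diff(t)  # deg/s
    return {
        "fixation_ratio": float(np.mean(velocity < vel_thresh_deg_s)),
        "mean_pupil_mm": float(np.mean(pupil_mm)),  # dilation tends to rise with load
        "pupil_std_mm": float(np.std(pupil_mm)),
    }

# Usage with synthetic data: 60 s of gaze sampled at 120 Hz.
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / 120)
gaze = np.cumsum(rng.normal(0, 0.05, size=(len(t), 2)), axis=0)  # degrees
pupil = 3.5 + 0.3 * rng.standard_normal(len(t))                  # millimetres
print(eye_features(t, gaze, pupil))
```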
arXiv Detail & Related papers (2024-11-18T16:44:19Z)
- Cybersickness Detection through Head Movement Patterns: A Promising Approach [1.1562071835482226]
This research investigates head movement patterns as a novel physiological marker for cybersickness detection.
Head movements provide a continuous, non-invasive measure that can be easily captured through the sensors embedded in all commercial VR headsets.
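A minimal sketch of head-movement statistics such a marker could build on, assuming a yaw/pitch/roll trace sampled from the headset (the rate and feature set are illustrative, not the paper's):

```python
# Sketch: angular-velocity statistics from headset orientation samples.
# The 100 Hz rate and Euler-angle input are assumptions for illustration.
import numpy as np

def head_movement_features(euler_deg, hz=100.0):
    """Summaries of a (N, 3) yaw/pitch/roll trace in degrees."""
    ang_vel = np.abs(np.diff(euler_deg, axis=0)) * hz  # deg/s per axis
    return {
        "mean_ang_vel_deg_s": ang_vel.mean(axis=0),
        "peak_ang_vel_deg_s": ang_vel.max(axis=0),
        "movement_energy": float((ang_vel ** 2).mean()),
    }

# Synthetic 30 s trace at 100 Hz.
rng = np.random.default_rng(1)
trace = np.cumsum(rng.normal(0, 0.1, size=(3000, 3)), axis=0)
print(head_movement_features(trace))
```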
arXiv Detail & Related papers (2024-02-05T04:49:59Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- TruVR: Trustworthy Cybersickness Detection using Explainable Machine Learning [1.9642496463491053]
Cybersickness can be characterized by nausea, vertigo, headache, eye strain, and other discomforts when using virtual reality (VR) systems.
The previously reported machine learning (ML) and deep learning (DL) algorithms for detecting (classification) and predicting (regression) VR cybersickness use black-box models.
We present three explainable machine learning (xML) models to detect and predict cybersickness.
arXiv Detail & Related papers (2022-09-12T13:55:13Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
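The phrase "modifying the precision of prior and sensory signals" evokes precision-weighted Bayesian integration; the toy computation below shows that weighting for Gaussian signals, as a sketch of the general idea rather than the paper's network:

```python
# Toy precision-weighted fusion of a Gaussian prior and a sensory signal.
# Illustrates the general idea only, not the paper's actual network.
def integrate(mu_prior, pi_prior, mu_sensory, pi_sensory):
    """Posterior mean and precision for a Gaussian prior and likelihood."""
    pi_post = pi_prior + pi_sensory
    mu_post = (pi_prior * mu_prior + pi_sensory * mu_sensory) / pi_post
    return mu_post, pi_post

# Raising sensory precision pulls the estimate toward the observation.
print(integrate(mu_prior=0.0, pi_prior=1.0, mu_sensory=1.0, pi_sensory=4.0))  # (0.8, 5.0)
```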
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Predicting the Future from First Person (Egocentric) Vision: A Survey [18.07516837332113]
This survey summarises the evolution of studies in the context of future prediction from egocentric vision.
It makes an overview of applications, devices, existing problems, commonly used datasets, models and input modalities.
Our analysis highlights that methods for future prediction from egocentric vision can have a significant impact in a range of applications.
arXiv Detail & Related papers (2021-07-28T14:58:13Z)
- Physion: Evaluating Physical Prediction from Vision in Humans and Machines [46.19008633309041]
We present a visual and physical prediction benchmark that precisely measures this capability.
We compare an array of algorithms on their ability to make diverse physical predictions.
We find that graph neural networks with access to the physical state best capture human behavior.
arXiv Detail & Related papers (2021-06-15T16:13:39Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill for building personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Task-relevant Representation Learning for Networked Robotic Perception [74.0215744125845]
This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective.
Our algorithm aggressively compresses robotic sensory data by up to 11x more than competing methods.
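A minimal sketch of that idea, assuming a frozen task network and an illustrative bottleneck size (this is not the paper's algorithm): compress sensory vectors so the pre-trained model's outputs, rather than the raw inputs, are preserved.

```python
# Sketch: bottleneck compression trained against a frozen task model's outputs,
# so only task-relevant structure survives. Sizes and losses are illustrative.
import torch
import torch.nn as nn

task_model = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 10))
for p in task_model.parameters():
    p.requires_grad = False  # stands in for the pre-trained perception model

encoder = nn.Linear(512, 32)  # ~16x compression of the sensory vector
decoder = nn.Linear(32, 512)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.randn(256, 512)  # stand-in for robot sensory data
for _ in range(100):
    opt.zero_grad()
    x_hat = decoder(encoder(x))
    # Match task outputs, not raw inputs: reconstruction quality is judged
    # by what the downstream task model sees.
    loss = nn.functional.mse_loss(task_model(x_hat), task_model(x))
    loss.backward()
    opt.step()
```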
arXiv Detail & Related papers (2020-11-06T07:39:08Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
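A hedged sketch of such a two-stage pipeline, with random stand-ins for the convolutional autoencoder's bottleneck codes and a synthetic continuous label (the dimensions and kernel are assumptions):

```python
# Sketch: continuous emotion regression with an SVR over autoencoder codes.
# The codes here are random stand-ins; only the pipeline shape is the point.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
codes = rng.normal(size=(500, 128))                    # bottleneck codes per face frame
valence = codes[:, 0] * 0.5 + rng.normal(0, 0.1, 500)  # synthetic continuous target

X_tr, X_te, y_tr, y_te = train_test_split(codes, valence, random_state=0)
svr = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("held-out R^2:", svr.score(X_te, y_te))
```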
arXiv Detail & Related papers (2020-01-31T17:47:16Z)