Less Cybersickness, Please: Demystifying and Detecting Stereoscopic Visual Inconsistencies in VR Apps
- URL: http://arxiv.org/abs/2406.09313v1
- Date: Thu, 13 Jun 2024 16:48:48 GMT
- Title: Less Cybersickness, Please: Demystifying and Detecting Stereoscopic Visual Inconsistencies in VR Apps
- Authors: Shuqing Li, Cuiyun Gao, Jianping Zhang, Yujia Zhang, Yepang Liu, Jiazhen Gu, Yun Peng, Michael R. Lyu
- Abstract summary: Stereoscopic visual inconsistency (denoted as "SVI") issues undermine the stereoscopic fusion process in the user's brain.
We propose an unsupervised black-box testing framework named StereoID to identify the stereoscopic visual inconsistencies.
We build a large-scale unlabeled VR stereo screenshot dataset of more than 171K images from 288 real-world VR apps for experiments.
- Score: 46.63489566687515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quality of Virtual Reality (VR) apps is vital, particularly the rendering quality of the VR Graphical User Interface (GUI). Different from traditional 2D apps, VR apps create a 3D digital scene for users, by rendering two distinct 2D images for the user's left and right eyes, respectively. Stereoscopic visual inconsistency (denoted as "SVI") issues, however, undermine the stereoscopic fusion process in the user's brain, leading to user discomfort and even adverse health effects. Such issues commonly exist but remain underexplored. We conduct an empirical analysis on 282 SVI bug reports from 15 VR platforms, summarizing 15 types of manifestations. The empirical analysis reveals that automatically detecting SVI issues is challenging, mainly because: (1) the lack of training data; (2) the manifestations of SVI issues are diverse, complicated, and often application-specific; (3) most accessible VR apps are closed-source commercial software. Existing pattern-based supervised classification approaches may be inapplicable or ineffective in detecting the SVI issues. To counter these challenges, we propose an unsupervised black-box testing framework named StereoID to identify the stereoscopic visual inconsistencies, based only on the rendered GUI states. StereoID generates a synthetic right-eye image based on the actual left-eye image and computes distances between the synthetic right-eye image and the actual right-eye image to detect SVI issues. We propose a depth-aware conditional stereo image translator to power the image generation process, which captures the expected perspective shifts between left-eye and right-eye images. We build a large-scale unlabeled VR stereo screenshot dataset with more than 171K images from 288 real-world VR apps for experiments. Extensive experiments demonstrate that StereoID achieves superior performance for detecting SVI issues in both user reports and in-the-wild VR apps.
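The detection idea described in the abstract can be summarized in a few lines. Below is a minimal sketch, assuming a hypothetical `translator` callable that stands in for StereoID's depth-aware conditional stereo image translator, plus a plain pixel-wise L1 distance and a fixed threshold; the paper's actual model architecture and unsupervised distance computation are not reproduced here.

```python
import numpy as np

def detect_svi(left_img, right_img, translator, threshold=0.1):
    """Flag a stereo GUI screenshot pair as a potential SVI issue.

    left_img, right_img: HxWx3 float arrays in [0, 1], the rendered
        left-eye and right-eye views captured from a VR app.
    translator: callable that synthesizes the expected right-eye view
        from the left-eye view (a stand-in for the paper's depth-aware
        conditional stereo image translator).
    threshold: distance above which the pair is reported as inconsistent;
        0.1 is an arbitrary placeholder, not a value from the paper.
    """
    synthetic_right = translator(left_img)  # expected right-eye view
    # Simple L1 distance as a stand-in for the paper's learned distance.
    distance = float(np.mean(np.abs(synthetic_right - right_img)))
    return distance > threshold, distance
```

The key design choice this reflects is black-box operation: only the two rendered GUI states are needed, so no source code access or labeled SVI examples are required.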
Related papers
- Grounded GUI Understanding for Vision Based Spatial Intelligent Agent: Exemplified by Virtual Reality Apps [41.601579396549404]
We propose the first zero-shot cOntext-sensitive inteRactable GUI ElemeNT dEtection framework for virtual Reality apps, named Orienter.
By imitating human behaviors, Orienter observes and understands the semantic contexts of VR app scenes first, before performing the detection.
arXiv Detail & Related papers (2024-09-17T00:58:00Z)
- Multisensory extended reality applications offer benefits for volumetric biomedical image analysis in research and medicine [2.46537907738351]
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine.
Recent research used extended reality (XR) for perceiving 3D images with visual depth perception and touch but used restrictive haptic devices.
In this study, 24 experts for biomedical images in research and medicine explored 3D medical shapes with 3 applications.
arXiv Detail & Related papers (2023-11-07T13:37:47Z)
- ChromaCorrect: Prescription Correction in Virtual Reality Headsets through Perceptual Guidance [3.365646526465954]
Eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets.
We propose a prescription-aware rendering approach for providing sharper and more immersive VR imagery.
We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
arXiv Detail & Related papers (2022-12-08T13:30:17Z)
- Perceptual Quality Assessment of Omnidirectional Images [81.76416696753947]
We first establish an omnidirectional IQA (OIQA) database, which includes 16 source images and 320 distorted images degraded by 4 commonly encountered distortion types.
Then a subjective quality evaluation study is conducted on the OIQA database in the VR environment.
The original and distorted omnidirectional images, subjective quality ratings, and the head and eye movement data together constitute the OIQA database.
arXiv Detail & Related papers (2022-07-06T13:40:38Z)
- NeuralPassthrough: Learned Real-Time View Synthesis for VR [3.907767419763815]
We propose the first learned passthrough method and assess its performance using a custom VR headset with a stereo pair of RGB cameras.
We demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-07-05T17:39:22Z)
- Perceptual Quality Assessment of Virtual Reality Videos in the Wild [53.94620993606658]
Existing panoramic video databases only consider synthetic distortions, assume fixed viewing conditions, and are limited in size.
We construct the VR Video Quality in the Wild (VRVQW) database, containing $502$ user-generated videos with diverse content and distortion characteristics.
We conduct a formal psychophysical experiment to record the scanpaths and perceived quality scores from $139$ participants under two different viewing conditions.
arXiv Detail & Related papers (2022-06-13T02:22:57Z)
- Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z)
- Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning [0.0]
Convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task.
However, recognizing facial expression while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded.
We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets.
arXiv Detail & Related papers (2020-08-12T20:25:07Z)
- Automatic Recommendation of Strategies for Minimizing Discomfort in Virtual Environments [58.720142291102135]
In this work, we first present a detailed review of possible causes of Cybersickness (CS).
Our system is able to indicate whether the user may be entering an illness situation in the next moments of the application.
The CSPQ (Cybersickness Profile Questionnaire) is also proposed, which is used to identify the player's susceptibility to CS.
arXiv Detail & Related papers (2020-06-27T19:28:48Z)
- Perceptual Quality Assessment of Omnidirectional Images as Moving Camera Videos [49.217528156417906]
Two types of VR viewing conditions are crucial in determining the viewing behaviors of users and the perceived quality of the panorama.
We first transform an omnidirectional image to several video representations using different user viewing behaviors under different viewing conditions.
We then leverage advanced 2D full-reference video quality models to compute the perceived quality.
arXiv Detail & Related papers (2020-05-21T10:03:40Z)
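As a rough illustration of the pipeline in the last entry above (not the paper's implementation), the sketch below turns an equirectangular panorama into a sequence of viewports along a hypothetical scanpath and scores each frame with PSNR as a stand-in for the advanced 2D full-reference video quality models used in the paper; the viewport extraction here is a crude crop around the gaze direction rather than a proper equirectangular-to-perspective projection.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Simple 2D full-reference metric used as a placeholder for the
    advanced video quality models mentioned in the entry."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def viewport(equirect, yaw_deg, pitch_deg, size=(256, 256)):
    """Crude viewport: crop around the gaze direction in the panorama,
    wrapping horizontally. A real implementation would perform a proper
    perspective projection."""
    h, w = equirect.shape[:2]
    cx = int((yaw_deg % 360) / 360.0 * w)
    cy = int((pitch_deg + 90) / 180.0 * h)
    ys = np.clip(np.arange(cy - size[0] // 2, cy + size[0] // 2), 0, h - 1)
    xs = np.arange(cx - size[1] // 2, cx + size[1] // 2) % w
    return equirect[np.ix_(ys, xs)]

def panorama_quality(ref_equirect, dist_equirect, scanpath):
    """Score a distorted panorama as a 'moving camera video' along a
    scanpath of (yaw, pitch) angles, using mean pooling over frames as a
    simple stand-in for the temporal pooling of a full video model."""
    scores = [psnr(viewport(ref_equirect, yaw, pitch),
                   viewport(dist_equirect, yaw, pitch))
              for yaw, pitch in scanpath]
    return float(np.mean(scores))
```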