ChromaCorrect: Prescription Correction in Virtual Reality Headsets
through Perceptual Guidance
- URL: http://arxiv.org/abs/2212.04264v1
- Date: Thu, 8 Dec 2022 13:30:17 GMT
- Title: ChromaCorrect: Prescription Correction in Virtual Reality Headsets
through Perceptual Guidance
- Authors: Ahmet Güzel, Jeanne Beyazian, Praneeth Chakravarthula and Kaan Akşit
- Abstract summary: Eyeglasses cause additional bulk and discomfort when used with augmented and virtual reality headsets.
We propose a prescription-aware rendering approach that provides sharper and more immersive VR imagery.
We evaluate our approach on various displays, including desktops and VR headsets, and show significant quality and contrast improvements for users with vision impairments.
- Score: 3.365646526465954
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A large portion of today's world population suffers from vision
impairments and wears prescription eyeglasses. However, eyeglasses cause
additional bulk and discomfort when used with augmented and virtual reality
headsets, thereby negatively impacting the viewer's visual experience. In this
work, we eliminate the need for prescription eyeglasses in Virtual Reality (VR)
headsets by shifting the optical complexity entirely into software, and propose
a prescription-aware rendering approach that provides sharper and more
immersive VR imagery. To this end, we develop a differentiable display and
visual perception model encapsulating display-specific parameters, the color
and visual acuity characteristics of the human visual system, and user-specific
refractive errors. Using this differentiable visual perception model, we
optimize the rendered imagery on the display using stochastic gradient-descent
solvers. In this way, we provide sharper images, without prescription
eyeglasses, for people with vision impairments. We evaluate our approach on
various displays, including desktops and VR headsets, and show significant
quality and contrast improvements for users with vision impairments.
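To make the optimization loop described in the abstract concrete, here is a minimal PyTorch sketch, assuming a toy perception model: a Gaussian blur whose width grows with a hypothetical refractive-error parameter (diopters). The function names, the blur model, and the solver settings are illustrative assumptions; the paper's actual model also encapsulates display-specific parameters, color, and visual acuity.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: float, size: int = 21) -> torch.Tensor:
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).expand(3, 1, size, size)  # one kernel per RGB channel

def perceived(displayed: torch.Tensor, diopters: float) -> torch.Tensor:
    """Toy stand-in for the differentiable display + visual perception model."""
    sigma = 1.0 + 2.0 * abs(diopters)  # assumption: retinal blur grows with refractive error
    kernel = gaussian_kernel(sigma).to(displayed.device)
    return F.conv2d(displayed, kernel, padding=kernel.shape[-1] // 2, groups=3)

def precorrect(target: torch.Tensor, diopters: float, steps: int = 300) -> torch.Tensor:
    """Optimize the displayed image so its perceived version matches the target."""
    displayed = target.clone().requires_grad_(True)
    opt = torch.optim.Adam([displayed], lr=0.05)  # a gradient-descent-style solver (Adam here)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(perceived(displayed, diopters), target)
        loss.backward()
        opt.step()
        with torch.no_grad():
            displayed.clamp_(0.0, 1.0)  # keep values displayable
    return displayed.detach()

# Example: pre-correct a random test image for a viewer with -2.0 D refractive error.
# corrected = precorrect(torch.rand(1, 3, 256, 256), diopters=-2.0)
```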
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
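As a rough illustration of the alignment idea in the entry above, the sketch below fine-tunes a generic vision backbone with a hinge loss over two-alternative forced-choice triplets (reference, A, B). The triplet format, the margin, and the name `backbone` are assumptions for illustration, not the paper's training recipe.

```python
import torch
import torch.nn.functional as F

def perceptual_alignment_loss(backbone: torch.nn.Module,
                              ref: torch.Tensor, img_a: torch.Tensor, img_b: torch.Tensor,
                              margin: float = 0.05) -> torch.Tensor:
    """Encourage sim(ref, A) > sim(ref, B) + margin when humans judged A closer to ref."""
    emb = lambda x: F.normalize(backbone(x), dim=-1)   # unit-norm embeddings
    sim_a = (emb(ref) * emb(img_a)).sum(dim=-1)        # cosine similarity to the reference
    sim_b = (emb(ref) * emb(img_b)).sum(dim=-1)
    return F.relu(margin - (sim_a - sim_b)).mean()

# Usage: loss = perceptual_alignment_loss(backbone, ref_batch, a_batch, b_batch)
# followed by loss.backward() inside a standard fine-tuning loop.
```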
- Universal Facial Encoding of Codec Avatars from VR Headsets [32.60236093340087]
We present a method that can animate a photorealistic avatar in real time from head-mounted cameras (HMCs) on a consumer VR headset.
We present a lightweight expression calibration mechanism that increases accuracy with minimal additional cost to run-time efficiency.
arXiv Detail & Related papers (2024-07-17T22:08:15Z)
- Less Cybersickness, Please: Demystifying and Detecting Stereoscopic Visual Inconsistencies in VR Apps [46.63489566687515]
Stereoscopic visual inconsistency (denoted as "SVI") issues disrupt how the user's brain fuses the two rendered views into a single percept.
We propose an unsupervised black-box testing framework named StereoID to identify the stereoscopic visual inconsistencies.
We build a large-scale unlabeled VR stereo screenshot dataset with more than 171K images from 288 real-world VR apps for experiments.
arXiv Detail & Related papers (2024-06-13T16:48:48Z)
- Simulation of a Vision Correction Display System [0.0]
This paper focuses on simulating a Vision Correction Display (VCD) to enhance the visual experience of individuals with various visual impairments.
With these simulations we can see potential improvements in visual acuity and comfort.
arXiv Detail & Related papers (2024-04-12T04:45:51Z)
- Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training [66.0036211069513]
Large-scale vision-language pre-trained models are prone to hallucinate non-existent visual objects when generating text.
We show that models achieving better scores on standard metrics could hallucinate objects more frequently.
Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.
arXiv Detail & Related papers (2022-10-14T10:27:22Z)
- NeuralPassthrough: Learned Real-Time View Synthesis for VR [3.907767419763815]
We propose the first learned passthrough method and assess its performance using a custom VR headset with a stereo pair of RGB cameras.
We demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-07-05T17:39:22Z)
- Assessing visual acuity in visual prostheses through a virtual-reality system [7.529227133770206]
Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that, of all conditions tested, a field of view of 20 degrees and a resolution of 1000 phosphenes performed best, yielding a visual acuity of 1.3 logMAR.
arXiv Detail & Related papers (2022-05-20T18:24:15Z)
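A rough NumPy sketch of simulated prosthetic vision in the spirit of the entry above: the image covering the simulated field of view is averaged down to a grid of roughly 1000 phosphenes, each re-rendered as a Gaussian blob. The grid layout, blob shape, and spread are illustrative assumptions rather than the paper's protocol.

```python
import numpy as np

def simulate_phosphenes(gray: np.ndarray, n_phosphenes: int = 1000,
                        out_size: int = 512) -> np.ndarray:
    """gray: 2D array in [0, 1] covering the simulated field of view (at least grid x grid pixels)."""
    grid = int(round(np.sqrt(n_phosphenes)))          # ~32x32 grid for ~1000 phosphenes
    h, w = gray.shape
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    # Average brightness of each grid cell drives the corresponding phosphene.
    levels = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                        for j in range(grid)] for i in range(grid)])

    out = np.zeros((out_size, out_size))
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    cell = out_size / grid
    sigma = cell / 3.0                                # assumed phosphene spread
    for i in range(grid):
        for j in range(grid):
            cy, cx = (i + 0.5) * cell, (j + 0.5) * cell
            out += levels[i, j] * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(out, 0.0, 1.0)

# Example: phosphene_view = simulate_phosphenes(np.random.rand(240, 240))
```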
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera [97.0162841635425]
We present a solution to egocentric 3D body pose estimation from monocular images captured by downward-looking fish-eye cameras installed on the rim of a head-mounted VR device.
This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions.
We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in 2D predictions.
arXiv Detail & Related papers (2020-11-02T16:18:06Z)
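A generic sketch, not the paper's exact architecture, of a multi-branch decoder like the one mentioned in the entry above: shared encoder features feed one branch that predicts 2D joint heatmaps and a second branch that predicts per-joint uncertainty. Channel counts, joint count, and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class TwoBranchDecoder(nn.Module):
    """Shared encoder features feed two heads: 2D joint heatmaps and per-joint uncertainty."""
    def __init__(self, in_channels: int = 256, n_joints: int = 15):
        super().__init__()
        self.heatmap_head = nn.Sequential(                    # branch 1: where each joint is
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, n_joints, 1))
        self.uncertainty_head = nn.Sequential(                # branch 2: how confident the prediction is
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, n_joints), nn.Softplus())  # positive per-joint sigma

    def forward(self, feats: torch.Tensor):
        return self.heatmap_head(feats), self.uncertainty_head(feats)

# Usage: heatmaps, sigmas = TwoBranchDecoder()(encoder_features)
# where encoder_features has shape (B, 256, H/8, W/8) from a fisheye-image encoder.
```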
- Perceptual Quality Assessment of Omnidirectional Images as Moving Camera Videos [49.217528156417906]
Two types of VR viewing conditions are crucial in determining the viewing behaviors of users and the perceived quality of the panorama.
We first transform an omnidirectional image to several video representations using different user viewing behaviors under different viewing conditions.
We then leverage advanced 2D full-reference video quality models to compute the perceived quality.
arXiv Detail & Related papers (2020-05-21T10:03:40Z)
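For the entry above, a minimal sketch of the first step: sampling rectilinear viewports from an equirectangular omnidirectional image along an assumed scanpath, producing a frame sequence that could then be scored by an off-the-shelf 2D video quality model. The scanpath, field of view, and frame size are illustrative assumptions.

```python
import numpy as np

def viewport(equirect: np.ndarray, lon0: float, lat0: float,
             fov_deg: float = 90.0, size: int = 256) -> np.ndarray:
    """Extract a rectilinear viewport centred at (lon0, lat0), both in radians."""
    H, W = equirect.shape[:2]
    half = np.tan(np.radians(fov_deg) / 2)
    u, v = np.meshgrid(np.linspace(-half, half, size), np.linspace(half, -half, size))
    x, y, z = u, v, np.ones_like(u)                                               # rays in the camera frame
    y, z = y * np.cos(lat0) + z * np.sin(lat0), -y * np.sin(lat0) + z * np.cos(lat0)  # pitch
    x, z = x * np.cos(lon0) + z * np.sin(lon0), -x * np.sin(lon0) + z * np.cos(lon0)  # yaw
    norm = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    lon, lat = np.arctan2(x, z), np.arcsin(y / norm)
    px = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)                        # nearest-neighbour lookup
    py = ((0.5 - lat / np.pi) * (H - 1)).astype(int)
    return equirect[np.clip(py, 0, H - 1), np.clip(px, 0, W - 1)]

def scanpath_video(equirect: np.ndarray, n_frames: int = 30) -> np.ndarray:
    """Simulate a viewer panning once around the equator; returns a stack of viewport frames."""
    lons = np.linspace(-np.pi, np.pi, n_frames, endpoint=False)
    return np.stack([viewport(equirect, lon, 0.0) for lon in lons])
```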
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.