Towards Understanding Depth Perception in Foveated Rendering
- URL: http://arxiv.org/abs/2501.18635v1
- Date: Tue, 28 Jan 2025 16:06:29 GMT
- Title: Towards Understanding Depth Perception in Foveated Rendering
- Authors: Sophie Kergaßner, Taimoor Tariq, Piotr Didyk
- Abstract summary: We present the first evaluation exploring the effects of foveated rendering on stereoscopic depth perception.
Our analysis demonstrates that stereoscopic acuity remains unaffected by (or even improves under) high levels of peripheral blur.
The findings indicate that foveated rendering does not impact stereoscopic depth perception, and stereoacuity remains unaffected at up to 2× stronger foveation than is commonly used.
- Score: 8.442383621450247
- Abstract: The true vision for real-time virtual and augmented reality is reproducing our visual reality in its entirety on immersive displays. To this end, foveated rendering leverages the limitations of spatial acuity in human peripheral vision to allocate computational resources to the fovea while reducing quality in the periphery. Such methods are often derived from studies on the spatial resolution of the human visual system and its ability to perceive blur in the periphery, enabling the potential for high spatial quality in real time. However, the effects of blur on other visual cues that depend on luminance contrast, such as depth, remain largely unexplored. It is critical to understand this interplay, as accurate depth representation is a fundamental aspect of visual realism. In this paper, we present the first evaluation exploring the effects of foveated rendering on stereoscopic depth perception. We design a psychovisual experiment to quantitatively study the effects of peripheral blur on depth perception. Our analysis demonstrates that stereoscopic acuity remains unaffected by (or even improves under) high levels of peripheral blur. Based on our studies, we derive a simple perceptual model that determines the amount of foveation that does not affect stereoacuity. Furthermore, we analyze the model in the context of common foveation practices reported in the literature. The findings indicate that foveated rendering does not impact stereoscopic depth perception, and stereoacuity remains unaffected at up to 2× stronger foveation than is commonly used. Finally, we conduct a validation experiment and show that our findings hold for complex natural stimuli.
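To make the foveation idea concrete, here is a minimal sketch of eccentricity-dependent peripheral blur: pixels are blurred with a Gaussian whose width grows with angular distance from the gaze point. The linear falloff, the 5° foveal radius, and all parameter values below are illustrative assumptions, not the calibrated perceptual model derived in the paper.

```python
# Minimal sketch of foveated blur: Gaussian blur strength grows linearly
# with eccentricity beyond an assumed blur-free foveal radius. All
# constants here are illustrative, not the paper's fitted model.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, gaze_xy, px_per_deg=30.0, fovea_deg=5.0, slope=0.15):
    """Blur a grayscale image (H, W) more strongly away from gaze_xy."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / px_per_deg
    sigma = slope * np.clip(ecc_deg - fovea_deg, 0.0, None)

    # Blend between a few pre-blurred levels instead of a per-pixel kernel.
    levels = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
    sigma = np.minimum(sigma, levels[-1])
    blurred = [image.astype(float)] + [
        gaussian_filter(image.astype(float), s) for s in levels[1:]
    ]
    idx = np.clip(np.searchsorted(levels, sigma), 1, len(levels) - 1)
    lo, hi = levels[idx - 1], levels[idx]
    t = (sigma - lo) / (hi - lo)
    out = np.empty_like(image, dtype=float)
    for i in range(1, len(levels)):
        m = idx == i
        out[m] = (1 - t[m]) * blurred[i - 1][m] + t[m] * blurred[i][m]
    return out

# Example: out = foveate(np.random.rand(480, 640), gaze_xy=(320, 240))
```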
Related papers
- Influence of field of view in visual prostheses design: Analysis with a VR system [3.9998518782208783]
We evaluate the influence of field of view with respect to spatial resolution in visual prostheses.
Twenty-four normally sighted participants were asked to find and recognize usual objects.
Results show that the accuracy and response time decrease when the field of view is increased.
arXiv Detail & Related papers (2025-01-28T22:25:22Z)
- Cracking the Code of Hallucination in LVLMs with Vision-aware Head Divergence [69.86946427928511]
We investigate the internal mechanisms driving hallucination in large vision-language models (LVLMs).
We introduce Vision-aware Head Divergence (VHD), a metric that quantifies the sensitivity of attention head outputs to visual context.
We propose Vision-aware Head Reinforcement (VHR), a training-free approach to mitigate hallucination by enhancing the role of vision-aware attention heads (a toy sketch of the divergence idea follows this entry).
arXiv Detail & Related papers (2024-12-18T15:29:30Z)
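The summary above leaves VHD abstract; the toy sketch below shows one plausible way to score how sensitive each attention head is to visual context, by comparing head outputs computed with and without the image. The function name, tensor shapes, and the L2 measure are assumptions for illustration, not the paper's definition of VHD.

```python
# Toy illustration of a "vision-aware head divergence" style score: how
# much each attention head's output changes when the visual context is
# removed. An interpretation of the idea, not the paper's VHD metric.
import torch

def head_divergence(head_out_with_img, head_out_no_img):
    """Per-head L2 divergence; both tensors: (num_heads, seq_len, head_dim)."""
    diff = head_out_with_img - head_out_no_img
    return diff.norm(dim=-1).mean(dim=-1)  # (num_heads,)

# Heads with low divergence barely respond to the image; a VHR-style fix
# would up-weight the high-divergence (vision-aware) heads.
with_img = torch.randn(32, 128, 64)
no_img = torch.randn(32, 128, 64)
print(head_divergence(with_img, no_img).topk(5).indices)
```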
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- Pupil-Adaptive 3D Holography Beyond Coherent Depth-of-Field [42.427021878005405]
We propose a framework that bridges the gap between the coherent depth-of-field of holographic displays and what is seen in the real world due to incoherent light.
We introduce a learning framework that adjusts the receptive fields on the fly, based on the current state of the observer's eye pupil, to produce image effects that are otherwise not possible with current computer-generated holography approaches.
arXiv Detail & Related papers (2024-08-17T11:01:54Z)
- Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation [19.959897524064353]
We propose a learning-based approach for synthesizing novel views of omnidirectional images with altered eye heights.
With the improved omnidirectional-aware layered depth image, our approach synthesizes natural and realistic visuals for eye height adaptation.
arXiv Detail & Related papers (2023-08-24T19:14:28Z)
- Assessing visual acuity in visual prostheses through a virtual-reality system [7.529227133770206]
Current visual implants still provide very low resolution and limited field of view, thus limiting visual acuity in implanted patients.
We take advantage of virtual-reality software paired with a portable head-mounted display to evaluate the performance of normally sighted participants under simulated prosthetic vision.
Our results showed that, of all conditions tested, a field of view of 20° with a resolution of 1000 phosphenes proved best, yielding a visual acuity of 1.3 logMAR (a back-of-envelope check follows this entry).
arXiv Detail & Related papers (2022-05-20T18:24:15Z)
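As a sanity check on the 1.3 logMAR figure, the snippet below estimates the sampling-limited acuity implied by 1000 phosphenes spread over a 20° field of view. The square-grid layout and the spacing-limited-resolution assumption are mine, not the paper's analysis.

```python
# Back-of-envelope estimate (my assumptions, not the paper's analysis):
# assume the 1000 phosphenes form a roughly square ~32x32 grid over a
# 20-degree field of view, and that resolvable detail is on the order
# of the phosphene spacing.
import math

fov_arcmin = 20 * 60                 # 20 deg field of view, in arcmin
grid = math.sqrt(1000)               # ~31.6 phosphenes per side
spacing_arcmin = fov_arcmin / grid   # ~38 arcmin between phosphenes
print(math.log10(spacing_arcmin))    # ~1.58 logMAR sampling estimate
# The measured 1.3 logMAR is somewhat better than this naive estimate,
# so subjects extract more than phosphene spacing alone would suggest.
```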
- On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation [60.780823530087446]
We show that improvements in image synthesis do not necessitate improvement in depth estimation.
We attribute this diverging phenomenon to aleatoric uncertainties, which originate from data.
This observed divergence has not been previously reported or studied in depth.
arXiv Detail & Related papers (2021-09-13T17:57:24Z)
- StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision [54.920605385622274]
We propose StereoPIFu, which integrates the geometric constraints of stereo vision with the implicit function representation of PIFu to recover the 3D shape of clothed humans.
Compared with previous works, our StereoPIFu significantly improves the robustness, completeness, and accuracy of the clothed human reconstruction.
arXiv Detail & Related papers (2021-04-12T08:41:54Z)
- Deep Learning-Based Scene Simplification for Bionic Vision [0.0]
We show that object segmentation may better support scene understanding than models based on visual saliency and monocular depth estimation.
This work has the potential to drastically improve the utility of prosthetic vision for people blinded by retinal degenerative diseases.
arXiv Detail & Related papers (2021-01-30T19:35:33Z)
- SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera [97.0162841635425]
We present a solution to egocentric 3D body pose estimation from monocular images captured from downward-looking fish-eye cameras installed on the rim of a head-mounted VR device.
This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions.
We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in the 2D predictions (a minimal sketch of this pattern follows this entry).
arXiv Detail & Related papers (2020-11-02T16:18:06Z)
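The entry above mentions a multi-branch decoder that accounts for varying uncertainty in the 2D predictions; below is a minimal sketch of that general pattern, with one branch regressing per-joint heatmaps and another predicting per-joint uncertainty. The layer sizes and the specific two-branch split are illustrative assumptions, not the SelfPose architecture.

```python
# Minimal sketch of an encoder with a multi-branch decoder: one branch
# regresses 2D joint heatmaps, another predicts per-joint log-variance.
# Illustrative layer sizes; not the SelfPose architecture.
import torch
import torch.nn as nn

class TwoBranchPose(nn.Module):
    def __init__(self, num_joints=15):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Branch 1: per-joint 2D heatmaps at 1/4 resolution.
        self.heatmap = nn.Conv2d(64, num_joints, 1)
        # Branch 2: per-joint uncertainty, pooled globally.
        self.uncertainty = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_joints)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.heatmap(feats), self.uncertainty(feats)

model = TwoBranchPose()
heatmaps, log_var = model(torch.randn(1, 3, 256, 256))
print(heatmaps.shape, log_var.shape)  # (1, 15, 64, 64), (1, 15)
```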
- VisualEchoes: Spatial Image Representation Learning through Echolocation [97.23789910400387]
Several animal species (e.g., bats, dolphins, and whales) and even visually impaired humans have the remarkable ability to perform echolocation.
We propose a novel interaction-based representation learning framework that learns useful visual features via echolocation.
Our work opens a new path for representation learning for embodied agents, where supervision comes from interacting with the physical world.
arXiv Detail & Related papers (2020-05-04T16:16:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.