Visual Acuity Consistent Foveated Rendering towards Retinal Resolution
- URL: http://arxiv.org/abs/2503.23410v1
- Date: Sun, 30 Mar 2025 12:09:12 GMT
- Title: Visual Acuity Consistent Foveated Rendering towards Retinal Resolution
- Authors: Zhi Zhang, Meng Gai, Sheng Li
- Abstract summary: We present visual acuity-consistent foveated rendering (VaFR), aiming to achieve exceptional rendering performance at retinal-level resolutions. We propose a method with a novel log-polar mapping function derived from the human visual acuity model, which accommodates the natural bandwidth of the visual system. Our approach significantly enhances the rendering performance of binocular 8K path tracing, achieving smooth frame rates.
- Score: 11.230872127138548
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior foveated rendering methods often suffer from a limitation where the shading load escalates with increasing display resolution, leading to decreased efficiency, particularly when dealing with retinal-level resolutions. To tackle this challenge, we begin with the essence of human visual system (HVS) perception and present visual acuity-consistent foveated rendering (VaFR), aiming to achieve exceptional rendering performance at retinal-level resolutions. Specifically, we propose a method with a novel log-polar mapping function derived from the human visual acuity model, which accommodates the natural bandwidth of the visual system. This mapping function and its associated shading rate guarantee a consistent output of rendering information, regardless of variations in the display resolution of the VR HMD. Consequently, our VaFR outperforms alternative methods, improving rendering speed while preserving perceptual visual quality, particularly when operating at retinal resolutions. We validate our approach using both rasterization and ray-casting rendering pipelines, as well as different binocular rendering strategies for HMD devices. In diverse testing scenarios, our approach delivers better perceptual visual quality than prior foveated rendering while achieving an impressive speedup of 6.5$\times$-9.29$\times$ for deferred rendering of 3D scenarios and an even greater speedup of 10.4$\times$-16.4$\times$ for ray-casting at retinal resolution. Additionally, our approach significantly enhances the rendering performance of binocular 8K path tracing, achieving smooth frame rates.
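The abstract describes the mechanism but not its closed form, so the following is a minimal sketch of the idea, assuming a standard linear minimum-angle-of-resolution (MAR) acuity model. The constants `W0` and `E2` and the function names `acuity_radial_map` and `to_foveated_buffer` are illustrative assumptions, not the paper's actual formulation. Integrating the reciprocal of MAR yields a log-shaped radial coordinate in which uniform sampling matches the acuity falloff, and the number of samples needed to cover the field of view depends only on the acuity model, not on the display.

```python
import numpy as np

# Illustrative linear minimum-angle-of-resolution (MAR) acuity model:
#   MAR(e) = W0 * (1 + e / E2), eccentricity e in visual degrees.
# W0 and E2 are placeholder constants, not values from the paper.
W0 = 1.0 / 48.0   # foveal MAR in degrees (roughly 20/20 acuity)
E2 = 2.3          # eccentricity at which MAR doubles

def mar(ecc_deg):
    """Smallest resolvable angle (degrees) at eccentricity ecc_deg."""
    return W0 * (1.0 + ecc_deg / E2)

def acuity_radial_map(ecc_deg):
    """Integrate 1/MAR from 0 to ecc_deg.

    One unit of the resulting coordinate spans one just-resolvable
    step, so sampling it uniformly matches the acuity falloff and
    gives a log-shaped radial compression:
        integral de / (W0 * (1 + e/E2)) = (E2 / W0) * ln(1 + e/E2)
    """
    return (E2 / W0) * np.log1p(ecc_deg / E2)

def to_foveated_buffer(px, py, gaze, deg_per_px, max_ecc_deg):
    """Map screen pixels to normalized (radius, angle) buffer coords."""
    dx, dy = px - gaze[0], py - gaze[1]
    ecc = np.hypot(dx, dy) * deg_per_px            # eccentricity (deg)
    theta = np.arctan2(dy, dx)                     # polar angle
    r = acuity_radial_map(ecc) / acuity_radial_map(max_ecc_deg)
    return np.clip(r, 0.0, 1.0), theta

# The radial buffer size needed to cover max_ecc_deg is fixed by the
# acuity model, not by the display, which is why the shading load can
# stay bounded as display resolution grows toward retinal density.
buffer_width = int(np.ceil(acuity_radial_map(110.0)))  # ~110 deg FOV
```

Under this model, raising the display resolution only changes `deg_per_px` in the pixel-to-buffer mapping; `buffer_width`, and hence the shading work, stays constant, which captures the abstract's claim of a consistent output of rendering information across HMD resolutions.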
Related papers
- Decoupling Appearance Variations with 3D Consistent Features in Gaussian Splatting [50.98884579463359]
We propose DAVIGS, a method that decouples appearance variations in a plug-and-play manner.
By transforming the rendering results at the image level instead of the Gaussian level, our approach can model appearance variations with minimal optimization time and memory overhead.
We validate our method on several appearance-variant scenes, and demonstrate that it achieves state-of-the-art rendering quality with minimal training time and memory usage.
arXiv Detail & Related papers (2025-01-18T14:55:58Z) - FovealNet: Advancing AI-Driven Gaze Tracking Solutions for Optimized Foveated Rendering System Performance in Virtual Reality [23.188267849124706]
This paper introduces FovealNet, an advanced AI-driven gaze tracking framework designed to optimize system performance. FovealNet achieves at least a $1.42\times$ speedup compared to previous methods and a 13% increase in perceptual quality for foveated output.
arXiv Detail & Related papers (2024-12-12T08:03:54Z) - Perceptually Optimized Super Resolution [7.728090438152828]
We propose a perceptually inspired and architecture-agnostic approach for controlling the visual quality and efficiency of super-resolution techniques.
The core is a perceptual model that dynamically guides super-resolution methods according to human sensitivity to image details.
We demonstrate the application of our proposed model in combination with network branching, and network complexity reduction to improve the computational efficiency of super-resolution methods without visible quality loss.
arXiv Detail & Related papers (2024-11-26T15:24:45Z) - Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering and increases quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z) - Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z) - Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
arXiv Detail & Related papers (2023-04-20T04:05:22Z) - Real-time volumetric rendering of dynamic humans [83.08068677139822]
We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos.
Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.
A novel local ray marching rendering allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
arXiv Detail & Related papers (2023-03-21T14:41:25Z) - SteerNeRF: Accelerating NeRF Rendering via Smooth Viewpoint Trajectory [20.798605661240355]
We propose a new way to speed up rendering using 2D neural networks.
A low-resolution feature map is rendered first by volume rendering, then a lightweight 2D neural network is applied to generate the image at the target resolution.
We show that the proposed method can achieve competitive rendering quality while reducing the rendering time with little memory overhead, enabling 30 FPS at 1080p image resolution with a low memory footprint.
arXiv Detail & Related papers (2022-12-15T00:02:36Z) - Deep Learning based Super-Resolution for Medical Volume Visualization with Direct Volume Rendering [0.0]
Recent advances in deep learning-based image and video super-resolution techniques motivate us to investigate such networks for high-fidelity upscaling of frames rendered at a lower resolution to a higher resolution.
We propose a technique in which our system uses color information along with other medical features gathered from the volume to learn efficient upscaling of a low-resolution rendering to a higher-resolution space.
arXiv Detail & Related papers (2022-10-14T19:58:59Z) - Differentiable Rendering with Perturbed Optimizers [85.66675707599782]
Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable formulations and randomly smoothed renderings.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
arXiv Detail & Related papers (2021-10-18T08:56:23Z) - Monocular Real-Time Volumetric Performance Capture [28.481131687883256]
We present the first approach to volumetric performance capture and novel-view rendering at real-time speed from monocular video.
Our system reconstructs a fully textured 3D human from each frame by leveraging Pixel-Aligned Implicit Function (PIFu).
We also introduce an Online Hard Example Mining (OHEM) technique that effectively suppresses failure modes due to the rare occurrence of challenging examples.
arXiv Detail & Related papers (2020-07-28T04:45:13Z)