Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual
Reality
- URL: http://arxiv.org/abs/2103.16365v1
- Date: Tue, 30 Mar 2021 14:05:47 GMT
- Title: Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual
Reality
- Authors: Nianchen Deng and Zhenyi He and Jiannan Ye and Praneeth Chakravarthula
and Xubo Yang and Qi Sun
- Abstract summary: High-quality 3D graphics requires large volumes of fine-detailed scene data for rendering.
Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets.
We present the first gaze-contingent 3D neural representation and view synthesis method.
- Score: 11.969281058344581
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional high-quality 3D graphics requires large volumes of
fine-detailed scene data for rendering. This demand compromises computational
efficiency and local storage resources, and it becomes especially concerning
for future wearable and portable virtual and augmented reality (VR/AR)
displays. Recent approaches to combat this problem include remote
rendering/streaming and neural representations of 3D assets. These approaches
have redefined the traditional local storage-and-rendering pipeline through
distributed computing or compression of large data. However, they typically
suffer from high latency or low quality when visualizing large immersive
virtual scenes, notably under the extra-high resolution and refresh-rate
requirements of VR applications such as gaming and design.
Tailored for future portable, low-storage, and energy-efficient VR platforms,
we present the first gaze-contingent 3D neural representation and view
synthesis method. We incorporate the human psychophysics of visual- and
stereo-acuity into an egocentric neural representation of 3D scenery.
Furthermore, we jointly optimize latency/performance and visual quality, while
mutually bridging human perception and neural scene synthesis, to achieve
perceptually high-quality immersive interaction. Both objective analysis and a
subjective study demonstrate the effectiveness of our approach in significantly
reducing local storage volume and synthesis latency (up to 99% reduction in
both data size and computational time), while simultaneously presenting
high-fidelity rendering, with perceptual quality identical to that of fully
locally stored and rendered high-quality imagery.
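As a rough illustration of the gaze-contingent idea described above (a minimal sketch, not the authors' actual pipeline), one can allocate ray-marching samples according to an acuity falloff with angular eccentricity from the tracked gaze direction; the falloff constants and sample budgets below are illustrative assumptions.

```python
import numpy as np

def acuity_weight(eccentricity_deg, fovea_deg=5.0, floor=0.1):
    # Toy visual-acuity falloff: full weight inside the fovea, smooth
    # exponential decay toward a floor in the periphery.
    # (Illustrative constants, not the paper's calibrated model.)
    w = np.exp(-np.maximum(eccentricity_deg - fovea_deg, 0.0) / 15.0)
    return np.maximum(w, floor)

def samples_per_ray(pixel_dirs, gaze_dir, max_samples=128, min_samples=8):
    # Allocate fewer ray-marching samples to rays far from the gaze direction.
    # pixel_dirs: (N, 3) unit view directions; gaze_dir: (3,) unit vector.
    cosines = np.clip(pixel_dirs @ gaze_dir, -1.0, 1.0)
    ecc = np.degrees(np.arccos(cosines))          # angular eccentricity (deg)
    w = acuity_weight(ecc)                        # in [floor, 1]
    return np.maximum((w * max_samples).astype(int), min_samples)
```

Rays near the gaze direction keep the full sample budget while peripheral rays are sampled sparsely, which is where the latency and storage savings of a foveated neural representation come from.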
Related papers
- VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points [4.962171160815189]
High-performance demands of virtual reality systems present challenges in utilizing fast-to-render scene representations like 3DGS.
We propose foveated rendering as a promising solution to these obstacles.
Our approach introduces a novel foveated rendering method for virtual reality that leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth 3DGS rendering for peripheral vision (a minimal blending sketch appears after this list).
arXiv Detail & Related papers (2024-10-23T14:54:48Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) at virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- VR-NeRF: High-Fidelity Virtualized Walkable Spaces [55.51127858816994]
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
arXiv Detail & Related papers (2023-11-05T02:03:14Z)
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z)
- Real-time volumetric rendering of dynamic humans [83.08068677139822]
We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos.
Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.
A novel local ray-marching renderer allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
arXiv Detail & Related papers (2023-03-21T14:41:25Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas (a minimal decomposition sketch appears after this list).
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
We bridge the traditional mesh with a new class of neural rendering.
In this paper, we present a novel approach for rendering human views from video.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
- Neural Lumigraph Rendering [33.676795978166375]
State-of-the-art (SOTA) neural volume rendering approaches are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions.
We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images.
Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information.
arXiv Detail & Related papers (2021-03-22T03:46:05Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent to the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
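As referenced in the VR-Splatting entry above, its foveal/peripheral fusion can be illustrated by compositing a sharp foveal render over a cheaper peripheral render with a gaze-centered mask; the radii below are illustrative assumptions, not values from that paper.

```python
import numpy as np

def foveated_blend(foveal_img, peripheral_img, gaze_px, inner_r=120.0, outer_r=240.0):
    # Composite a sharp foveal render (e.g., neural points) over a cheaper
    # peripheral render (e.g., 3DGS). Both images are (H, W, 3) arrays;
    # gaze_px is the (x, y) gaze position in pixels; inner_r/outer_r define
    # a smooth transition band (illustrative values).
    h, w = peripheral_img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_px[0], ys - gaze_px[1])
    # alpha = 1 inside the fovea, 0 in the periphery, linear ramp in between
    alpha = np.clip((outer_r - dist) / (outer_r - inner_r), 0.0, 1.0)[..., None]
    return alpha * foveal_img + (1.0 - alpha) * peripheral_img
```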
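Similarly, the NeRFPlayer entry's decomposition of 4D points into static, deforming, and new regions can be sketched as a small field that predicts per-point category probabilities and blends three branches; the `decomposition_mlp` and per-category `fields` below are hypothetical placeholders, not that paper's exact formulation.

```python
import numpy as np

def category_probs(xyzt, decomposition_mlp):
    # Hypothetical decomposition field: maps 4D points (x, y, z, t) to
    # softmax probabilities over {static, deforming, new}.
    logits = decomposition_mlp(xyzt)                        # (N, 3)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def blended_density(xyzt, fields, decomposition_mlp):
    # Blend densities from three per-category branches by the per-point
    # category probabilities (an illustrative composition).
    p = category_probs(xyzt, decomposition_mlp)             # (N, 3)
    sigmas = np.stack([f(xyzt) for f in fields], axis=-1)   # (N, 3)
    return (p * sigmas).sum(axis=-1)                        # (N,)
```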
This list is automatically generated from the titles and abstracts of the papers on this site.