VR-NeRF: High-Fidelity Virtualized Walkable Spaces
- URL: http://arxiv.org/abs/2311.02542v1
- Date: Sun, 5 Nov 2023 02:03:14 GMT
- Title: VR-NeRF: High-Fidelity Virtualized Walkable Spaces
- Authors: Linning Xu, Vasu Agrawal, William Laney, Tony Garcia, Aayush Bansal,
Changil Kim, Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder, Aljaž
Božič, Dahua Lin, Michael Zollhöfer, Christian Richardt
- Abstract summary: We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
- Score: 55.51127858816994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an end-to-end system for the high-fidelity capture, model
reconstruction, and real-time rendering of walkable spaces in virtual reality
using neural radiance fields. To this end, we designed and built a custom
multi-camera rig to densely capture walkable spaces in high fidelity and with
multi-view high dynamic range images in unprecedented quality and density. We
extend instant neural graphics primitives with a novel perceptual color space
for learning accurate HDR appearance, and an efficient mip-mapping mechanism
for level-of-detail rendering with anti-aliasing, while carefully optimizing
the trade-off between quality and speed. Our multi-GPU renderer enables
high-fidelity volume rendering of our neural radiance field model at the full
VR resolution of dual 2K×2K at 36 Hz on our custom demo machine. We
demonstrate the quality of our results on our challenging high-fidelity
datasets, and compare our method and datasets to existing baselines. We release
our dataset on our project website.
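The abstract names two concrete mechanisms: a perceptual color space for learning HDR appearance, and mip-mapping for anti-aliased level-of-detail rendering. The sketch below is a hedged illustration of both, not the authors' released code: the SMPTE ST 2084 (PQ) transfer function is one standard perceptual encoding for HDR (the paper's exact color space may differ), and the mip-level rule is the classic log2-of-footprint heuristic.

```python
# Minimal sketch of (1) a photometric loss computed in a perceptual transfer-
# encoded color space and (2) mip-level selection from a sample footprint.
# PQ is one standard choice of perceptual encoding; it is an assumption here,
# not necessarily the paper's exact transform.
import numpy as np

# PQ constants from SMPTE ST 2084
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(linear_rgb, peak_nits=10000.0):
    """Map linear HDR radiance (in nits) to the perceptually uniform PQ domain."""
    y = np.clip(linear_rgb / peak_nits, 0.0, 1.0)
    ym = y ** M1
    return ((C1 + C2 * ym) / (1.0 + C3 * ym)) ** M2

def hdr_loss(pred_rgb, gt_rgb):
    """L2 loss in PQ space: equal numeric error costs more in dark regions,
    roughly matching human contrast sensitivity."""
    return np.mean((pq_encode(pred_rgb) - pq_encode(gt_rgb)) ** 2)

def mip_level(footprint, base_cell, num_levels):
    """Classic mip selection: each level doubles the cell size, so the level
    is log2 of the sample footprint over the finest grid cell."""
    level = np.log2(np.maximum(footprint / base_cell, 1.0))
    return np.clip(level, 0, num_levels - 1)

# Example: a distant ray sample has a large footprint -> coarser mip level.
print(hdr_loss(np.array([120.0, 80.0, 40.0]), np.array([118.0, 82.0, 41.0])))
print(mip_level(footprint=0.02, base_cell=0.005, num_levels=8))  # -> 2.0
```

Computing the loss in a perceptually encoded space spends model capacity where human contrast sensitivity is highest, which matters for HDR captures spanning several orders of magnitude in luminance.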
Related papers
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
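HybridNeRF's premise, that most geometry is surface-like and only some regions need full volumetric treatment, can be illustrated with the well-known SDF-to-density mapping below (a VolSDF-style Laplace CDF). This is a generic sketch under that assumption, not the paper's actual formulation: the sharpness parameter beta controls how "surface-like" a region renders.

```python
# Generic sketch: convert a signed distance to volume density. Small beta
# concentrates density at the zero level set, so the ray integral collapses
# to a single surface hit; large beta keeps the region genuinely volumetric.
import numpy as np

def sdf_to_density(sdf, alpha=100.0, beta=0.01):
    """Laplace-CDF mapping: density ~ alpha inside, ~0 outside, with a
    transition band of width ~beta around the surface."""
    return alpha * np.where(
        sdf <= 0,
        1.0 - 0.5 * np.exp(sdf / beta),
        0.5 * np.exp(-sdf / beta),
    )

# As beta -> 0 the density becomes a step at the surface, so a renderer can
# treat such regions as surfaces (one sample per ray) and reserve dense
# volumetric sampling for fuzzy geometry.
print(sdf_to_density(np.array([-0.05, 0.0, 0.05]), beta=0.01))
```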
- VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams [56.00479598817949]
VideoRF is the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
We show that the feature image stream can be efficiently compressed by 2D video codecs.
We have developed a real-time interactive player that enables online streaming and rendering of dynamic scenes.
arXiv Detail & Related papers (2023-12-03T14:14:35Z)
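The key enabler for VideoRF's codec-based compression is that per-frame features can be laid out as ordinary 2D images. A minimal sketch of that idea, with illustrative (assumed) value ranges rather than VideoRF's actual format:

```python
# Quantize a float feature "frame" to 8-bit pixels and back. Once features are
# uint8 images, standard 2D video codecs can exploit their spatial and
# temporal redundancy. Ranges and layout are illustrative assumptions.
import numpy as np

def quantize_features(feat, lo, hi):
    """Map float features in [lo, hi] to uint8 'pixels' (and back)."""
    q = np.round((np.clip(feat, lo, hi) - lo) / (hi - lo) * 255).astype(np.uint8)
    deq = q.astype(np.float32) / 255 * (hi - lo) + lo
    return q, deq

feat = np.random.uniform(-1, 1, size=(256, 256, 3)).astype(np.float32)
q, deq = quantize_features(feat, -1.0, 1.0)
print("max quantization error:", np.abs(feat - deq).max())  # ~ (hi-lo)/255/2
```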
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
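A generic sketch of what "baking a NeRF into a mesh-based neural representation" can look like: per-vertex features interpolated by the rasterizer and decoded by a tiny view-dependent MLP. All names and sizes here are illustrative assumptions; the paper's duplex-mesh design is more involved.

```python
# One triangle's worth of "baked" shading: barycentric interpolation of
# learned vertex features, then a tiny (here untrained) MLP that maps
# (feature, view direction) to RGB.
import numpy as np

rng = np.random.default_rng(0)
F = 8                                    # baked feature width (assumed)
vertex_feats = rng.normal(size=(3, F))   # features at the triangle's vertices
W1 = rng.normal(size=(F + 3, 16)) * 0.1  # toy "shader" MLP weights
W2 = rng.normal(size=(16, 3)) * 0.1

def shade(bary, view_dir):
    """Interpolate baked features, then decode view-dependent RGB."""
    feat = bary @ vertex_feats            # barycentric interpolation
    x = np.concatenate([feat, view_dir])
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    return 1 / (1 + np.exp(-(h @ W2)))    # sigmoid -> RGB in [0,1]

print(shade(np.array([0.2, 0.3, 0.5]), np.array([0.0, 0.0, 1.0])))
```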
- NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models [85.20004959780132]
We introduce NeuralField-LDM, a generative model capable of synthesizing complex 3D environments.
We show how NeuralField-LDM can be used for a variety of 3D content creation applications, including conditional scene generation, scene inpainting and scene style manipulation.
arXiv Detail & Related papers (2023-04-19T16:13:21Z)
- Immersive Neural Graphics Primitives [13.48024951446282]
We present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR.
Our approach can yield a frame rate of 30 frames per second with a resolution of 1280×720 pixels per eye.
arXiv Detail & Related papers (2022-11-24T09:33:38Z)
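Taking the stated numbers at face value, and reading "2K" as 2048 pixels, a quick back-of-the-envelope comparison of per-second pixel load against VR-NeRF's dual 2K×2K at 36 Hz:

```python
# Pixels shaded per second across two eyes, from each paper's stated figures.
# The only assumption is 2K = 2048 pixels per side.
immersive_ngp = 2 * 1280 * 720 * 30   # 1280x720 per eye at 30 fps
vr_nerf       = 2 * 2048 * 2048 * 36  # dual 2Kx2K at 36 Hz
print(immersive_ngp, vr_nerf, vr_nerf / immersive_ngp)
# -> 55,296,000 vs ~302,000,000 pixels/s, roughly a 5.5x higher load
```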
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities belonging to three categories: static, deforming, and new areas.
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
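The static/deforming/new decomposition can be pictured as a per-point classification head over 4D space-time: three logits, softmaxed into category probabilities. The toy head below is random and untrained, purely to show the shape of the idea; it is not NeRFPlayer's architecture.

```python
# A toy per-point category head: 4D point (x, y, z, t) -> 3 logits -> softmax.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3)) * 0.5  # illustrative linear "head"

def category_probs(point_4d):
    logits = point_4d @ W
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()                 # P(static), P(deforming), P(new)

print(category_probs(np.array([0.1, -0.3, 0.7, 0.5])))  # sums to 1
```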
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
In this paper, we present a novel approach for rendering views of human performances from video.
We bridge traditional meshes with a new class of neural rendering.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- Foveated Neural Radiance Fields for Real-Time and Egocentric Virtual Reality [11.969281058344581]
High-quality 3D graphics requires large volumes of finely detailed scene data for rendering.
Recent approaches to combat this problem include remote rendering/streaming and neural representations of 3D assets.
We present the first gaze-contingent 3D neural representation and view synthesis method.
arXiv Detail & Related papers (2021-03-30T14:05:47Z)
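Gaze-contingent rendering spends samples where the viewer is actually looking. The sketch below uses an assumed linear falloff of sample count with angular eccentricity from the gaze direction; the actual method's foveation model is more sophisticated.

```python
# Allocate per-ray sample counts by angular distance from the gaze direction,
# echoing the retina's acuity falloff. Falloff model and constants are assumed.
import numpy as np

def samples_per_ray(ray_dir, gaze_dir, max_samples=128, min_samples=8,
                    falloff_deg=40.0):
    """Full quality at the gaze point, tapering to min_samples at the edge."""
    cos_e = np.clip(np.dot(ray_dir, gaze_dir), -1.0, 1.0)
    ecc = np.degrees(np.arccos(cos_e))             # eccentricity in degrees
    t = np.clip(1.0 - ecc / falloff_deg, 0.0, 1.0)
    return int(min_samples + t * (max_samples - min_samples))

gaze = np.array([0.0, 0.0, 1.0])
print(samples_per_ray(gaze, gaze))                         # 128 at the fovea
print(samples_per_ray(np.array([0.5, 0.0, 0.866]), gaze))  # fewer at ~30 deg
```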
- Neural Lumigraph Rendering [33.676795978166375]
State-of-the-art (SOTA) neural volume rendering approaches are slow to train and require minutes of inference (i.e., rendering) time for high image resolutions.
We adopt high-capacity neural scene representations with periodic activations for jointly optimizing an implicit surface and a radiance field of a scene supervised exclusively with posed 2D images.
Our neural rendering pipeline accelerates SOTA neural volume rendering by about two orders of magnitude and our implicit surface representation is unique in allowing us to export a mesh with view-dependent texture information.
arXiv Detail & Related papers (2021-03-22T03:46:05Z)
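"Periodic activations" here refers to SIREN-style layers, y = sin(w0 · (Wx + b)), with an initialization that keeps activations well distributed through depth. A standard SIREN layer sketch (not the paper's full pipeline):

```python
# One SIREN layer: sine activation with frequency scale w0 and the matching
# initialization (uniform in 1/n for the first layer, sqrt(6/n)/w0 after).
import numpy as np

def siren_layer(in_dim, out_dim, w0=30.0, first=False,
                rng=np.random.default_rng(0)):
    bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / w0
    W = rng.uniform(-bound, bound, size=(in_dim, out_dim))
    b = rng.uniform(-bound, bound, size=out_dim)
    return lambda x: np.sin(w0 * (x @ W + b))

layer1 = siren_layer(3, 64, first=True)  # positions -> features
layer2 = siren_layer(64, 64)
feats = layer2(layer1(np.array([0.1, 0.2, 0.3])))
print(feats.shape)  # (64,)
```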
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.