NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed
Neural Radiance Fields
- URL: http://arxiv.org/abs/2210.15947v1
- Date: Fri, 28 Oct 2022 07:11:05 GMT
- Title: NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed
Neural Radiance Fields
- Authors: Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong
Yuan, Yi Xu, Andreas Geiger
- Abstract summary: We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas.
- Score: 99.57774680640581
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Freely exploring a real-world 4D spatiotemporal space in VR has
been a long-term quest. The task is especially appealing when only a few or
even a single RGB camera is used for capturing the dynamic scene. To this end,
we present an efficient framework capable of fast reconstruction, compact
modeling, and streamable rendering. First, we propose to decompose the 4D
spatiotemporal space according to temporal characteristics. Points in the 4D
space are associated with probabilities of belonging to three categories:
static, deforming, and new areas. Each area is represented and regularized by a
separate neural field. Second, we propose a hybrid-representation-based
feature streaming scheme for efficiently modeling the neural fields. Our
approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single
hand-held cameras and multi-camera arrays, achieving rendering quality and
speed comparable or superior to recent state-of-the-art methods, with
reconstruction in 10 seconds per frame and real-time rendering.
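To make the decomposition concrete, the following is a minimal sketch and not the authors' implementation: it assumes a hypothetical decomposition MLP that predicts per-point probabilities for the static, deforming, and new categories and uses them to blend the outputs of three separate fields. Module names, network sizes, and the (RGB, density) output convention are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): per-point probabilities over the
# static / deforming / new categories gate three separate neural fields.
import torch
import torch.nn as nn

class DecomposedDynamicField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Hypothetical decomposition head: maps a 4D point (x, y, z, t)
        # to probabilities for {static, deforming, new}.
        self.decomposition = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3)
        )
        # One field per category; each outputs 4 values (RGB + density).
        self.static_field = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))
        self.deform_field = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 4))
        self.new_field = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, xyz, t):
        xyzt = torch.cat([xyz, t], dim=-1)                       # (N, 4)
        probs = torch.softmax(self.decomposition(xyzt), dim=-1)  # (N, 3)
        outs = torch.stack(
            [
                self.static_field(xyz),   # static content ignores time
                self.deform_field(xyzt),  # deforming content is time-conditioned
                self.new_field(xyzt),     # newly appearing content is time-conditioned
            ],
            dim=-2,
        )                                                         # (N, 3, 4)
        # Probability-weighted blend of the three fields' outputs.
        return (probs.unsqueeze(-1) * outs).sum(dim=-2)           # (N, 4)

# Usage: query 1024 random space-time points.
model = DecomposedDynamicField()
rgb_sigma = model(torch.rand(1024, 3), torch.rand(1024, 1))
```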
Related papers
- VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams [56.00479598817949]
VideoRF is the first approach to enable real-time streaming and rendering of dynamic radiance fields on mobile platforms.
We show that the feature image stream can be efficiently compressed by 2D video codecs (see the sketch after this list).
We have developed a real-time interactive player that enables online streaming and rendering of dynamic scenes.
arXiv Detail & Related papers (2023-12-03T14:14:35Z)
- UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene [52.21184153832739]
We propose a novel neural rendering system called UE4-NeRF, specifically designed for real-time rendering of large-scale scenes.
Our approach combines with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time rendering of large-scale scenes at 4K resolution with a frame rate of up to 43 FPS.
arXiv Detail & Related papers (2023-10-20T04:01:35Z)
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering [4.320393382724066]
We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times.
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
arXiv Detail & Related papers (2023-08-08T06:37:06Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three orders of magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves two orders of magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Neural Radiance Flow for 4D View Synthesis and Video Processing [59.9116932930108]
We present a method to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.
Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene.
arXiv Detail & Related papers (2020-12-17T17:54:32Z)
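The VideoRF entry above notes that a feature image stream can be compressed by 2D video codecs. Below is a minimal sketch of that general packing idea, not VideoRF's actual pipeline: the channels of one per-frame feature map are tiled into a single 8-bit 2D image, and a sequence of such images can then be handed to any off-the-shelf 2D video encoder. The function name and shapes are assumptions for illustration.

```python
# Minimal sketch (assumption, not VideoRF's code): tile the channels of a
# per-frame feature map into one 8-bit 2D image so a standard 2D video codec
# can compress the per-frame images as an ordinary video stream.
import numpy as np

def pack_features_to_image(feat: np.ndarray) -> np.ndarray:
    """feat: (C, H, W) features in [0, 1] -> (rows*H, cols*W) uint8 mosaic."""
    c, h, w = feat.shape
    cols = int(np.ceil(np.sqrt(c)))
    rows = int(np.ceil(c / cols))
    mosaic = np.zeros((rows * h, cols * w), dtype=np.uint8)
    for i in range(c):
        r, col = divmod(i, cols)
        # 8-bit quantization makes the plane codec-friendly.
        mosaic[r * h:(r + 1) * h, col * w:(col + 1) * w] = np.clip(
            feat[i] * 255.0, 0, 255
        ).astype(np.uint8)
    return mosaic

# Usage: a 16-channel 64x64 feature frame becomes a 256x256 grayscale image;
# stacking these over time yields frames for a 2D video encoder.
frame_image = pack_features_to_image(np.random.rand(16, 64, 64))
print(frame_image.shape)  # (256, 256)
```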