Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time
- URL: http://arxiv.org/abs/2202.08614v1
- Date: Thu, 17 Feb 2022 11:57:01 GMT
- Title: Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time
- Authors: Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang,
Yingliang Zhang, Minye Wu, Lan Xu and Jingyi Yu
- Abstract summary: Implicit neural representations such as Neural Radiance Field (NeRF) have focused mainly on modeling static objects captured under multi-view settings.
We present a novel Fourier PlenOctree (FPO) technique to tackle efficient neural modeling and real-time rendering of dynamic scenes captured under the free-view video (FVV) setting.
We show that the proposed method is 3000 times faster than the original NeRF and achieves over an order of magnitude acceleration over SOTA.
- Score: 43.0484840009621
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations such as Neural Radiance Field (NeRF) have
focused mainly on modeling static objects captured under multi-view settings
where real-time rendering can be achieved with smart data structures, e.g.,
PlenOctree. In this paper, we present a novel Fourier PlenOctree (FPO)
technique to tackle efficient neural modeling and real-time rendering of
dynamic scenes captured under the free-view video (FVV) setting. The key idea
in our FPO is a novel combination of generalized NeRF, PlenOctree
representation, volumetric fusion and Fourier transform. To accelerate FPO
construction, we present a novel coarse-to-fine fusion scheme that leverages
the generalizable NeRF technique to generate the tree via spatial blending. To
tackle dynamic scenes, we tailor the implicit network to model the Fourier
coefficients of time-varying density and color attributes. Finally, we construct
the FPO and train the Fourier coefficients directly on the leaves of a union
PlenOctree structure of the dynamic sequence. We show that the resulting FPO
enables a compact memory overhead when handling dynamic objects and supports
efficient fine-tuning. Extensive experiments show that the proposed method is
3000 times faster than the original NeRF and achieves over an order of
magnitude acceleration over SOTA while preserving high visual quality for the
free-viewpoint rendering of unseen dynamic scenes.
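
To make the leaf-level representation concrete, the sketch below shows the core mechanism under assumed conventions: each leaf stores Fourier coefficients of its time-varying density (color is handled analogously, with coefficients per spherical-harmonic component), so a query at frame t reduces to an inverse discrete Fourier transform with no per-frame network inference. The coefficient layout, resolutions, and helper names are illustrative, and a dense grid stands in for the sparse PlenOctree; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

def eval_fourier(coeffs: np.ndarray, t: float, num_frames: int):
    """Reconstruct a time-varying attribute at frame t from its Fourier
    coefficients. Assumed layout (illustrative): coeffs[..., 0] is the DC
    term, followed by interleaved cosine/sine amplitudes of increasing
    frequency. Operates on the trailing axis, so it handles one leaf
    (shape (K,)) or a batch of leaves (shape (..., K)) alike."""
    k = coeffs.shape[-1]
    n_pairs = (k - 1) // 2                          # cos/sin pairs after DC
    phase = 2.0 * np.pi * np.arange(1, n_pairs + 1) * t / num_frames
    val = coeffs[..., 0]
    val = val + coeffs[..., 1:2 * n_pairs:2] @ np.cos(phase)
    val = val + coeffs[..., 2:2 * n_pairs + 1:2] @ np.sin(phase)
    return val

# Toy stand-in for the sparse octree: a dense grid of "leaves", each holding
# K Fourier coefficients for density.
NUM_FRAMES, K, RES = 60, 31, 32
rng = np.random.default_rng(0)
leaf_coeffs = rng.normal(size=(RES, RES, RES, K))

def density_at(x: np.ndarray, t: int) -> float:
    """Locate the leaf containing unit-cube point x, then evaluate its
    time-varying density at frame t via the inverse Fourier transform."""
    i, j, l = np.clip((x * RES).astype(int), 0, RES - 1)
    return float(max(eval_fourier(leaf_coeffs[i, j, l], t, NUM_FRAMES), 0.0))

print(density_at(np.array([0.50, 0.25, 0.75]), t=17))
```

Rendering would then follow the standard PlenOctree ray-marching pipeline, compositing the reconstructed density and SH color along each ray; because the coefficients live directly on the leaves, they can be fine-tuned in place, which is what the abstract means by training the Fourier coefficients directly on the leaves of the union PlenOctree.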
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a *dynamic neural point cloud*, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids (a generic sketch of such a grid appears after this list).
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose PNeRFLoc, a novel visual localization framework based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
arXiv Detail & Related papers (2023-12-17T08:30:00Z)
- Anisotropic Neural Representation Learning for High-Quality Neural Rendering [0.0]
We propose an anisotropic neural representation learning method that utilizes learnable view-dependent features to improve scene representation and reconstruction.
Our method is flexible and can be plugged into NeRF-based frameworks.
arXiv Detail & Related papers (2023-11-30T07:29:30Z)
- FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees [3.5884936187733403]
Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic Neural Radiance Fields (NeRF), but the representation can exhibit artifacts.
In this paper, we perform an in-depth analysis of these artifacts and leverage the resulting insights to propose an improved representation.
arXiv Detail & Related papers (2023-10-31T17:59:58Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up compared to the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three-orders-of-magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features (a generic sketch of this idea appears after this list).
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
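
Two of the entries above reference mechanisms worth sketching. First, the hash-encoded neural feature grids from the D-NPC entry: the sketch below illustrates the general technique in the style popularized by Instant-NGP, not D-NPC's actual encoding. The 8 virtual voxel corners around a query point are hashed into a fixed-size feature table and blended trilinearly; the table size, resolution, primes, and names are assumptions for illustration.

```python
import numpy as np

# Spatial-hash primes in the style popularized by Instant-NGP (illustrative).
PRIMES = (1, 2654435761, 805459861)

def hash_grid_lookup(table: np.ndarray, x: np.ndarray, res: int) -> np.ndarray:
    """Fetch a feature for unit-cube point x from one level of a hash-encoded
    feature grid: hash the 8 surrounding virtual voxel corners into a
    fixed-size table and blend the entries trilinearly."""
    t_size = table.shape[0]
    p = x * res
    lo = np.floor(p).astype(int)
    frac = p - lo
    out = np.zeros(table.shape[-1])
    for corner in range(8):                       # the 8 cell corners
        offs = np.array([(corner >> a) & 1 for a in range(3)])
        idx = lo + offs
        h = 0
        for a in range(3):                        # XOR-of-products hash
            h ^= (int(idx[a]) * PRIMES[a]) & 0xFFFFFFFF
        w = np.prod(np.where(offs == 1, frac, 1.0 - frac))
        out += w * table[h % t_size]
    return out

# Separate tables for geometry and appearance, as the D-NPC summary suggests.
rng = np.random.default_rng(0)
geom_table = rng.normal(size=(2 ** 14, 4))        # learnable feature entries
app_table = rng.normal(size=(2 ** 14, 4))

x = np.array([0.3, 0.6, 0.1])
feat = np.concatenate([hash_grid_lookup(geom_table, x, 128),
                       hash_grid_lookup(app_table, x, 128)])
print(feat.shape)  # (8,) combined geometry + appearance feature
```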
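
Second, the time-aware voxel features from the TiNeuVox entry: the sketch below illustrates the general concept rather than TiNeuVox's actual architecture. A feature grid is trilinearly interpolated at the query point, concatenated with a sinusoidal time embedding, and decoded by a tiny MLP; all shapes, names, and the untrained random weights are assumptions for illustration.

```python
import numpy as np

def trilerp(grid: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Trilinearly interpolate an (R, R, R, C) feature grid at unit-cube x."""
    r = grid.shape[0]
    p = x * (r - 1)
    lo = np.floor(p).astype(int)
    hi = np.minimum(lo + 1, r - 1)
    frac = p - lo
    out = np.zeros(grid.shape[-1])
    for corner in range(8):                       # the 8 surrounding voxels
        pick = np.array([(corner >> a) & 1 for a in range(3)])
        idx = np.where(pick == 1, hi, lo)
        w = np.prod(np.where(pick == 1, frac, 1.0 - frac))
        out += w * grid[idx[0], idx[1], idx[2]]
    return out

def time_embedding(t: float, n_freq: int = 4) -> np.ndarray:
    """Sinusoidal encoding of normalized time t in [0, 1], as commonly used
    in dynamic NeRFs."""
    freqs = 2.0 ** np.arange(n_freq)
    return np.concatenate([np.sin(np.pi * freqs * t),
                           np.cos(np.pi * freqs * t)])

rng = np.random.default_rng(0)
features = rng.normal(size=(16, 16, 16, 8))       # learnable voxel features
w1 = rng.normal(size=(16, 32)) * 0.1              # tiny MLP weights (untrained)
w2 = rng.normal(size=(32, 1)) * 0.1

def density(x: np.ndarray, t: float) -> float:
    """Time-aware query: spatial voxel feature + time embedding -> density."""
    h = np.concatenate([trilerp(features, x), time_embedding(t)])  # 8 + 8 dims
    return float(np.maximum(h @ w1, 0.0) @ w2)    # one-hidden-layer ReLU MLP

print(density(np.array([0.3, 0.6, 0.1]), t=0.25))
```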
This list is automatically generated from the titles and abstracts of the papers on this site.