Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis
- URL: http://arxiv.org/abs/2206.07698v1
- Date: Wed, 15 Jun 2022 17:49:08 GMT
- Title: Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis
- Authors: Xiang Guo, Guanying Chen, Yuchao Dai, Xiaoqing Ye, Jiadai Sun, Xiao Tan and Errui Ding
- Abstract summary: We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
- Score: 63.25919018001152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Neural Radiance Fields (NeRF) has been revolutionizing the task of novel view synthesis (NVS) thanks to its superior performance. However, NeRF and its variants generally require a lengthy per-scene training procedure, where a multi-layer perceptron (MLP) is fitted to the captured images. To address this challenge, the voxel-grid representation has been proposed to significantly speed up the training. However, these existing methods can only deal with static scenes; how to develop an efficient and accurate dynamic view synthesis method remains an open problem. Extending the methods for static scenes to dynamic scenes is not straightforward, as both the scene geometry and appearance change over time. In this paper, building on recent advances in voxel-grid optimization, we propose a fast deformable radiance field method to handle dynamic scenes. Our method consists of two modules. The first module adopts a deformation grid to store 3D dynamic features, together with a light-weight MLP that decodes, from the interpolated features, the deformation mapping a 3D point in observation space to the canonical space. The second module contains a density grid and a color grid to model the geometry and appearance of the scene. Occlusion is explicitly modeled to further improve the rendering quality. Experimental results show that our method achieves performance comparable to D-NeRF while using only 20 minutes for training, which is more than 70x faster than D-NeRF, clearly demonstrating the efficiency of the proposed method.
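To make the two-module design above concrete, the following PyTorch sketch shows one plausible reading of it: a deformation grid plus a light-weight MLP warps observation-space points to canonical space, where density and color grids are queried by trilinear interpolation. The class name, grid resolutions, feature sizes, and the direct time input are illustrative assumptions, not the authors' released architecture; occlusion modeling and the volume renderer are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def interp3d(grid, pts):
    # Trilinearly interpolate a voxel grid of shape (1, C, D, H, W) at
    # n points whose coordinates are normalized to [-1, 1]^3.
    n = pts.shape[0]
    coords = pts.view(1, n, 1, 1, 3)                        # grid_sample layout
    out = F.grid_sample(grid, coords, align_corners=True)   # (1, C, n, 1, 1)
    return out.view(grid.shape[1], n).t()                   # (n, C)

class DeformableVoxelField(nn.Module):
    # Hypothetical sketch of the paper's two-module design, not its code.
    def __init__(self, res=64, feat_dim=8, width=64):
        super().__init__()
        # Module 1: deformation grid storing 3D dynamic features, decoded by
        # a light-weight MLP into an observation-to-canonical offset.
        self.deform_grid = nn.Parameter(torch.zeros(1, feat_dim, res, res, res))
        self.deform_mlp = nn.Sequential(
            nn.Linear(feat_dim + 3 + 1, width), nn.ReLU(),
            nn.Linear(width, 3))
        # Module 2: density and color grids defined in canonical space.
        self.density_grid = nn.Parameter(torch.zeros(1, 1, res, res, res))
        self.color_grid = nn.Parameter(torch.zeros(1, 3, res, res, res))

    def forward(self, pts_obs, t):
        # pts_obs: (n, 3) sample points in observation space; t: scalar time.
        feat = interp3d(self.deform_grid, pts_obs)
        t_col = t.expand(pts_obs.shape[0], 1)
        offset = self.deform_mlp(torch.cat([feat, pts_obs, t_col], dim=-1))
        pts_canon = pts_obs + offset                 # warp to canonical space
        sigma = F.softplus(interp3d(self.density_grid, pts_canon))
        rgb = torch.sigmoid(interp3d(self.color_grid, pts_canon))
        return sigma, rgb                            # per-point density and color

# Example query: 1024 random observation-space points at time t = 0.5.
model = DeformableVoxelField()
pts = torch.rand(1024, 3) * 2.0 - 1.0
sigma, rgb = model(pts, torch.tensor(0.5))

In the actual method the grids and the MLP would be optimized per scene with a photometric loss through differentiable volume rendering; the sketch stops at producing per-point density and color.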
Related papers
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We propose 4DGF, an efficient neural 3D scene representation for novel-view synthesis (NVS) that scales to large-scale, dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- GauFRe: Gaussian Deformation Fields for Real-time Dynamic Novel View Synthesis [17.572987038801475]
We propose a method for dynamic scene reconstruction using deformable 3D Gaussians.
The differentiable pipeline is optimized end-to-end with a self-supervised rendering loss.
Our results are comparable to those of state-of-the-art neural radiance field methods.
arXiv Detail & Related papers (2023-12-18T18:59:03Z)
- EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is thus capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Fast-SNARF: A Fast Deformer for Articulated Neural Fields [92.68788512596254]
We propose a new articulation module for neural fields, Fast-SNARF, which finds accurate correspondences between canonical space and posed space.
Fast-SNARF is a drop-in replacement for our previous work, SNARF, while significantly improving its computational efficiency.
Because learning of deformation maps is a crucial component in many 3D human avatar methods, we believe that this work represents a significant step towards the practical creation of 3D virtual humans.
arXiv Detail & Related papers (2022-11-28T17:55:34Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate the learning of dynamic radiance fields.
Experiments demonstrate that DeVRF achieves a two-orders-of-magnitude speedup with on-par, high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- NeuralMVS: Bridging Multi-View Stereo and Novel View Synthesis [28.83180559337126]
We propose a novel network that can recover 3D scene geometry as a distance function, together with high-resolution color images.
Our method uses only a sparse set of images as input and can generalize well to novel scenes.
arXiv Detail & Related papers (2021-08-09T08:59:24Z)