OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields
- URL: http://arxiv.org/abs/2305.14831v1
- Date: Wed, 24 May 2023 07:36:47 GMT
- Title: OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields
- Authors: Zhiwen Yan, Chen Li, Gim Hee Lee
- Abstract summary: Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF to efficiently train and render dynamic NeRFs on-the-fly, making it capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for training and rendering on synthetic dynamic scenes on-the-fly, and a significant speed-up over the state-of-the-art on real-world dynamic scenes.
- Score: 63.04781030984006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive
results in novel view synthesis on 3D dynamic scenes. However, they often
require complete video sequences for training followed by novel view synthesis,
which is similar to playing back the recording of a dynamic 3D scene. In
contrast, we propose OD-NeRF, which efficiently trains and renders dynamic NeRFs
on-the-fly and is thus capable of streaming the dynamic scene. When
training on-the-fly, the training frames become available sequentially and the
model is trained and rendered frame-by-frame. The key challenge of efficient
on-the-fly training is how to utilize the radiance field estimated from the
previous frames effectively. To tackle this challenge, we propose: 1) a NeRF
model conditioned on the multi-view projected colors to implicitly track
correspondence between the current and previous frames, and 2) a transition and
update algorithm that leverages the occupancy grid from the last frame to
sample efficiently at the current frame. Our algorithm can achieve an
interactive speed of 6 FPS for training and rendering on synthetic dynamic scenes
on-the-fly, and a significant speed-up compared to the state-of-the-art on
real-world dynamic scenes.
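The transition-and-update step in contribution 2 can be made concrete with a small sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the grid resolution, the per-frame decay rate, the occupancy threshold, and the placeholder `query_density` callable (standing in for the current frame's NeRF density) are all hypothetical. It shows the idea that the occupancy grid from the previous frame is carried over with a decay, refreshed from the current-frame model, and then used as a mask so that ray sampling is concentrated in occupied cells.

```python
# Illustrative sketch (not the paper's code): carry an occupancy grid across
# frames, decay it, refresh it from the current model, and use it to gate sampling.
import numpy as np

GRID_RES = 128          # cells per axis (assumed value)
DECAY = 0.95            # per-frame decay of carried-over occupancy (assumed value)
OCC_THRESHOLD = 0.01    # density above which a cell counts as occupied (assumed value)


def transition_occupancy(prev_occ: np.ndarray) -> np.ndarray:
    """Carry the previous frame's occupancy into the current frame, decayed."""
    return prev_occ * DECAY


def update_occupancy(occ: np.ndarray, query_density, n_samples: int = 4096,
                     rng=None) -> np.ndarray:
    """Refresh a random subset of cells from the current-frame density.

    `query_density(xyz)` is a placeholder for evaluating the NeRF's density at
    normalized coordinates in [0, 1]^3.
    """
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.integers(0, GRID_RES, size=(n_samples, 3))
    xyz = (idx + 0.5) / GRID_RES                 # cell centers in [0, 1]^3
    sigma = query_density(xyz)                   # current-frame densities
    current = occ[idx[:, 0], idx[:, 1], idx[:, 2]]
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = np.maximum(current, sigma)
    return occ


def occupied_mask(occ: np.ndarray) -> np.ndarray:
    """Boolean grid of cells worth sampling along rays at the current frame."""
    return occ > OCC_THRESHOLD


if __name__ == "__main__":
    occ = np.zeros((GRID_RES,) * 3, dtype=np.float32)
    # Dummy density: a blob around the scene center, just to exercise the loop.
    dummy_density = lambda xyz: np.exp(-np.sum((xyz - 0.5) ** 2, axis=-1) * 50.0)
    for frame in range(3):
        occ = transition_occupancy(occ)
        occ = update_occupancy(occ, dummy_density)
        print(f"frame {frame}: {int(occupied_mask(occ).sum())} occupied cells")
```

In an on-the-fly pipeline, the refreshed mask would decide which samples along each ray are actually evaluated by the network at the new frame, which is where the claimed sampling efficiency comes from; contribution 1 (conditioning on multi-view projected colors) would sit inside the density and color network itself and is not sketched here.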
Related papers
- NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance field (NeVRF) representation.
NeVRF marries neural radiance fields with image-based rendering to support photo-realistic novel view synthesis of long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z) - DynMF: Neural Motion Factorization for Real-time Dynamic View Synthesis with 3D Gaussian Splatting [35.69069478773709]
We argue that the per-point motions of a dynamic scene can be decomposed into a small set of explicit or learned trajectories.
Our representation is interpretable, efficient, and expressive enough to offer real-time view synthesis of complex dynamic scene motions (a minimal sketch of this factorization appears after this list).
arXiv Detail & Related papers (2023-11-30T18:59:11Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z) - Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a three orders of magnitude compression rate and provide a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z) - Mixed Neural Voxels for Fast Multi-view Video Synthesis [16.25013978657888]
We present a novel method named MixVoxels to better represent dynamic scenes with fast training speed and competitive rendering quality.
The proposed MixVoxels represents the 4D dynamic scenes as a mixture of static and dynamic voxels and processes them with different networks.
With 15 minutes of training for dynamic scenes with inputs of 300-frame videos, MixVoxels achieves better PSNR than previous methods.
arXiv Detail & Related papers (2022-12-01T00:26:45Z) - Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves comparable performance to D-NeRF using only 20 minutes for training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z) - DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves two orders of magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z) - Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
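One recurring idea in the related work above is DynMF's motion factorization, where each point's motion is a weighted combination of a small shared set of trajectories. The sketch below is a minimal, assumed illustration of that factorization only: the number of points, the basis size, and the fixed sinusoidal basis standing in for learned trajectories are all hypothetical, and the per-point weights would normally be optimized rather than random.

```python
# Illustrative sketch (not DynMF's code): per-point motion expressed as a
# weighted sum of a small shared basis of trajectories.
import numpy as np

N_POINTS = 10_000   # points (e.g. Gaussians) in the scene (assumed value)
N_BASIS = 16        # small number of shared basis trajectories (assumed value)

rng = np.random.default_rng(0)
canonical_xyz = rng.uniform(-1.0, 1.0, size=(N_POINTS, 3))         # canonical positions
point_weights = rng.normal(scale=0.01, size=(N_POINTS, N_BASIS))   # learned per-point coefficients in practice


def basis_trajectories(t: float) -> np.ndarray:
    """Shared basis evaluated at time t in [0, 1]; returns (N_BASIS, 3) displacements.

    A fixed sinusoidal basis stands in for the learned trajectories here.
    """
    freqs = np.arange(1, N_BASIS + 1)[:, None]            # (N_BASIS, 1)
    phase = np.array([0.0, np.pi / 3, 2 * np.pi / 3])     # one phase per axis
    return np.sin(2 * np.pi * freqs * t + phase)          # (N_BASIS, 3)


def points_at_time(t: float) -> np.ndarray:
    """Per-point positions: canonical position plus weighted basis displacements."""
    displacement = point_weights @ basis_trajectories(t)   # (N_POINTS, 3)
    return canonical_xyz + displacement


if __name__ == "__main__":
    for t in (0.0, 0.5, 1.0):
        xyz = points_at_time(t)
        print(t, xyz.shape, float(np.abs(xyz - canonical_xyz).max()))
```

Because only N_BASIS shared trajectories need to be evaluated per time step and per-point motion reduces to one small matrix product, this factorization keeps the motion model compact and fast, which matches the interpretability and efficiency claims in the summary above.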