DaReNeRF: Direction-aware Representation for Dynamic Scenes
- URL: http://arxiv.org/abs/2403.02265v1
- Date: Mon, 4 Mar 2024 17:54:33 GMT
- Title: DaReNeRF: Direction-aware Representation for Dynamic Scenes
- Authors: Ange Lou, Benjamin Planche, Zhongpai Gao, Yamin Li, Tianyu Luan, Hao
Ding, Terrence Chen, Jack Noble, Ziyan Wu
- Abstract summary: We present a novel direction-aware representation (DaRe) approach that captures dynamics from six different directions.
DaReNeRF computes features for each space-time point by fusing vectors from these recovered planes.
DaReNeRF maintains a 2x reduction in training time compared to prior art while delivering superior performance.
- Score: 24.752909256748158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Addressing the intricate challenge of modeling and re-rendering dynamic
scenes, most recent approaches have sought to simplify these complexities using
plane-based explicit representations, overcoming the slow training time issues
associated with methods like Neural Radiance Fields (NeRF) and implicit
representations. However, the straightforward decomposition of 4D dynamic
scenes into multiple 2D plane-based representations proves insufficient for
re-rendering high-fidelity scenes with complex motions. In response, we present
a novel direction-aware representation (DaRe) approach that captures scene
dynamics from six different directions. This learned representation undergoes
an inverse dual-tree complex wavelet transformation (DTCWT) to recover
plane-based information. DaReNeRF computes features for each space-time point
by fusing vectors from these recovered planes. Combining DaReNeRF with a tiny
MLP for color regression and leveraging volume rendering during training yields
state-of-the-art performance in novel view synthesis for complex dynamic
scenes. Notably, to address redundancy introduced by the six real and six
imaginary direction-aware wavelet coefficients, we introduce a trainable
masking approach, mitigating storage issues without significant performance
decline. Moreover, DaReNeRF maintains a 2x reduction in training time compared
to prior art while delivering superior performance.
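To make the pipeline above concrete, below is a minimal, illustrative PyTorch sketch: each plane is stored as learnable dual-tree complex wavelet coefficients (six real and six imaginary orientation bands) gated by a trainable mask, reconstructed into a 2D feature plane via the inverse DTCWT, sampled per space-time point, fused across planes, and decoded by a tiny MLP. This is not the authors' implementation: the class names, the sigmoid-gate masking, the multiplicative fusion rule, the plane pairing, and the use of the pytorch_wavelets package are assumptions made for illustration.

```python
# Minimal sketch (not the authors' code) of a DaReNeRF-style representation,
# assuming the pytorch_wavelets package. Class names, the sigmoid-gate masking,
# and the multiplicative fusion are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_wavelets import DTCWTForward, DTCWTInverse


class DirectionAwarePlane(nn.Module):
    """One plane stored as learnable direction-aware DTCWT coefficients."""

    def __init__(self, channels=16, resolution=128, levels=2):
        super().__init__()
        xfm = DTCWTForward(J=levels)   # used only to obtain coefficient shapes
        self.ifm = DTCWTInverse()
        # Initialize coefficients by transforming a random plane so that every
        # shape is consistent with the inverse transform.
        yl, yh = xfm(0.1 * torch.randn(1, channels, resolution, resolution))
        self.lowpass = nn.Parameter(yl)
        # Each level holds (1, C, 6, H', W', 2): six orientations, real + imag.
        self.highpass = nn.ParameterList([nn.Parameter(c) for c in yh])
        # Trainable mask logits per coefficient: a stand-in for the paper's
        # masking scheme that prunes redundant direction-aware coefficients.
        self.mask_logits = nn.ParameterList(
            [nn.Parameter(torch.zeros_like(c)) for c in yh])

    def plane(self):
        gated = [c * torch.sigmoid(m)
                 for c, m in zip(self.highpass, self.mask_logits)]
        return self.ifm((self.lowpass, gated))   # (1, C, R, R) feature plane

    def sample(self, uv):
        # uv: (P, 2) coordinates in [-1, 1]; returns per-point features (P, C).
        grid = uv.view(1, -1, 1, 2)
        feats = F.grid_sample(self.plane(), grid, align_corners=True)
        return feats.view(feats.shape[1], -1).t()


class DaReFieldSketch(nn.Module):
    """Fuses features from the recovered planes and regresses color/density."""

    def __init__(self, channels=16):
        super().__init__()
        # Three space planes (xy, xz, yz) and three space-time planes (xt, yt, zt).
        self.planes = nn.ModuleList(
            [DirectionAwarePlane(channels) for _ in range(6)])
        self.mlp = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(),
                                 nn.Linear(64, 4))  # RGB + density (no view dirs)

    def forward(self, xyzt):
        # xyzt: (P, 4) points with all coordinates normalized to [-1, 1].
        x, y, z, t = xyzt.unbind(-1)
        pairs = [(x, y), (x, z), (y, z), (x, t), (y, t), (z, t)]
        feats = [p.sample(torch.stack(uv, dim=-1))
                 for p, uv in zip(self.planes, pairs)]
        fused = torch.stack(feats, dim=0).prod(dim=0)   # multiplicative fusion
        return self.mlp(fused)
```

Initializing the coefficients through a forward DTCWT is only a convenient way to obtain consistently shaped parameters, and multiplicative fusion is one common choice in plane-based radiance fields; the paper's actual fusion and masking rules may differ.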
Related papers
- DaRePlane: Direction-aware Representations for Dynamic Scene Reconstruction [26.39519157164198]
We present DaRePlane, a novel representation approach that captures dynamics from six different directions.
DaRePlane yields state-of-the-art performance in novel view synthesis for various complex dynamic scenes.
arXiv Detail & Related papers (2024-10-18T04:19:10Z) - Motion2VecSets: 4D Latent Vector Set Diffusion for Non-rigid Shape Reconstruction and Tracking [52.393359791978035]
Motion2VecSets is a 4D diffusion model for dynamic surface reconstruction from point cloud sequences.
We parameterize 4D dynamics with latent sets instead of using global latent codes.
For more temporally-coherent object tracking, we synchronously denoise deformation latent sets and exchange information across multiple frames.
arXiv Detail & Related papers (2024-01-12T15:05:08Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via
Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z) - OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is thus capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z) - Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos [69.22032459870242]
We present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time free-view rendering on long-duration dynamic scenes.
We show such a strategy can handle large motions without sacrificing quality.
Based on ReRF, we design a special FVV codec that achieves a compression rate of three orders of magnitude and provides a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes.
arXiv Detail & Related papers (2023-04-10T08:36:00Z) - Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model [76.64071133839862]
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications.
Our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering.
Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
arXiv Detail & Related papers (2022-06-16T17:59:54Z) - DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves two orders of magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z) - Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)