D-TensoRF: Tensorial Radiance Fields for Dynamic Scenes
- URL: http://arxiv.org/abs/2212.02375v2
- Date: Tue, 6 Dec 2022 04:15:10 GMT
- Title: D-TensoRF: Tensorial Radiance Fields for Dynamic Scenes
- Authors: Hankyu Jang, Daeyoung Kim
- Abstract summary: We present D-TensoRF, a tensorial radiance field for dynamic scenes.
We decompose the grid into either rank-one vector components (CP decomposition) or low-rank matrix components (the newly proposed MM decomposition).
We show that D-TensoRF with either CP or MM decomposition has short training times and a significantly low memory footprint.
- Score: 2.587781533364185
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRF) have attracted attention as a promising
approach to reconstructing 3D scenes. Following NeRF, subsequent studies have
modeled dynamic scenes, which include motion or topological changes. However,
most of them use an additional deformation network, which slows down training
and rendering. The tensorial radiance field (TensoRF) has recently shown its
potential for fast, high-quality reconstruction of static scenes with a compact
model size. In this paper, we present D-TensoRF, a
tensorial radiance field for dynamic scenes, enabling novel view synthesis at a
specific time. We consider the radiance field of a dynamic scene as a 5D
tensor. The 5D tensor represents a 4D grid whose axes correspond to X, Y, Z,
and time, with multi-channel features per grid element. Similar to
TensoRF, we decompose the grid either into rank-one vector components (CP
decomposition) or low-rank matrix components (newly proposed MM decomposition).
We also use smoothing regularization to reflect the relationship between
features at different times (temporal dependency). We conduct extensive
evaluations to analyze our models. We show that D-TensoRF with either CP or MM
decomposition achieves short training times and a significantly low memory
footprint, with rendering results that are quantitatively and qualitatively
competitive with state-of-the-art methods for 3D dynamic scene modeling.
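As a concrete illustration of the decomposition described in the abstract: in standard CP form, each element of the factorized grid is a sum over R rank-one components of products of per-axis factor entries. The sketch below is a minimal PyTorch mock-up of such a CP-factorized 4D grid (X, Y, Z, time, with C feature channels, i.e. a 5D tensor) together with a simple squared-difference smoothing penalty on the time factors. The resolutions, rank, channel count, integer-index querying (a real model would interpolate), and the exact form and weight of the regularizer are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CPFeatureGrid5D(nn.Module):
    """CP-factorized 5D tensor: a 4D grid (X, Y, Z, time) with C-channel
    features per element. Sizes and rank are illustrative assumptions."""

    def __init__(self, nx=128, ny=128, nz=128, nt=30, channels=27, rank=48):
        super().__init__()
        # One vector factor per axis and per rank-one component (CP form).
        init = lambda n: nn.Parameter(0.1 * torch.randn(rank, n))
        self.vx, self.vy, self.vz = init(nx), init(ny), init(nz)
        self.vt, self.vc = init(nt), init(channels)

    def forward(self, ix, iy, iz, it):
        """Query features at integer grid indices (a real model would
        interpolate). ix, iy, iz, it: (N,) LongTensors -> (N, C) features."""
        # Each grid element is a sum over rank components of the products
        # of the corresponding per-axis factor entries.
        w = self.vx[:, ix] * self.vy[:, iy] * self.vz[:, iz] * self.vt[:, it]
        return w.T @ self.vc  # (N, R) @ (R, C) -> (N, C)

    def temporal_smoothness(self):
        """Penalize differences between adjacent entries of the time factors:
        one simple way to encode temporal dependency between frames."""
        return (self.vt[:, 1:] - self.vt[:, :-1]).pow(2).mean()

grid = CPFeatureGrid5D()
ix, iy, iz = (torch.randint(0, 128, (1024,)) for _ in range(3))
it = torch.randint(0, 30, (1024,))
feats = grid(ix, iy, iz, it)  # (1024, 27) feature vectors
# Dummy data term plus the smoothing regularizer; the weight is an assumption.
loss = feats.pow(2).mean() + 0.01 * grid.temporal_smoothness()
loss.backward()
```

The MM decomposition proposed in the paper replaces rank-one vector factors with low-rank matrix components; it is not sketched here.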
Related papers
- SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields [9.606992888590757]
We build on Neural Radiance Fields (NeRF), which uses a multi-layer perceptron to model the density and radiance field of a scene as an implicit function.
We propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model.
Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on NeRF-Synthetic, LLFF, and TanksAndTemples datasets.
arXiv Detail & Related papers (2024-09-06T03:36:12Z) - EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via
Self-Supervision [85.17951804790515]
EmerNeRF is a simple yet powerful approach for learning spatial-temporal representations of dynamic driving scenes.
It simultaneously captures scene geometry, appearance, motion, and semantics via self-bootstrapping.
Our method achieves state-of-the-art performance in sensor simulation.
arXiv Detail & Related papers (2023-11-03T17:59:55Z) - Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis [58.5779956899918]
We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six-degree-of-freedom (6-DOF) tracking of all dense scene elements.
We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians.
We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.
arXiv Detail & Related papers (2023-08-18T17:59:21Z) - SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z) - Template-free Articulated Neural Point Clouds for Reposable View
Synthesis [11.535440791891217]
We present a novel method to jointly learn a Dynamic NeRF and an associated skeletal model even from sparse multi-view video.
Our forward-warping approach achieves state-of-the-art visual fidelity when synthesizing novel views and poses.
arXiv Detail & Related papers (2023-05-30T14:28:08Z) - Detachable Novel Views Synthesis of Dynamic Scenes Using
Distribution-Driven Neural Radiance Fields [19.16403828672949]
Representing and synthesizing novel views in real-world dynamic scenes from casual monocular videos is a long-standing problem.
Our approach $\textbf{D}$etaches the background from the entire $\textbf{D}$ynamic scene, hence the name $\text{D}^4$NeRF.
Our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background.
arXiv Detail & Related papers (2023-01-01T14:39:09Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware
Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing similar or even better rendering performance than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z) - TensoRF: Tensorial Radiance Fields [74.16791688888081]
We present TensoRF, a novel approach to model and reconstruct radiance fields.
We model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features.
We show that TensoRF with CP decomposition achieves fast reconstruction (30 min) with better rendering quality and even a smaller model size (4 MB) compared to NeRF.
arXiv Detail & Related papers (2022-03-17T17:59:59Z)
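For intuition about the memory-footprint claims made by both TensoRF and D-TensoRF above, the back-of-the-envelope comparison below contrasts a dense feature grid with its CP factors. The resolution, channel count, and rank are assumed purely for illustration and are not values reported in either paper.

```python
# Dense 3D voxel grid with per-voxel features vs. CP factors (float32).
# All sizes below are illustrative assumptions, not values from the papers.
nx = ny = nz = 300   # assumed voxel resolution per spatial axis
channels = 27        # assumed feature channels per voxel
rank = 192           # assumed number of CP components

dense_params = nx * ny * nz * channels        # ~7.3e8 parameters
cp_params = rank * (nx + ny + nz + channels)  # ~1.8e5 parameters

bytes_per_param = 4  # float32
print(f"dense grid : {dense_params * bytes_per_param / 1e9:.2f} GB")  # ~2.92 GB
print(f"CP factors : {cp_params * bytes_per_param / 1e6:.2f} MB")     # ~0.71 MB
```

The roughly three-orders-of-magnitude gap between the dense grid and its factors is what makes the megabyte-scale model sizes reported above plausible.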
This list is automatically generated from the titles and abstracts of the papers on this site.