Detachable Novel Views Synthesis of Dynamic Scenes Using
Distribution-Driven Neural Radiance Fields
- URL: http://arxiv.org/abs/2301.00411v1
- Date: Sun, 1 Jan 2023 14:39:09 GMT
- Title: Detachable Novel Views Synthesis of Dynamic Scenes Using
Distribution-Driven Neural Radiance Fields
- Authors: Boyu Zhang, Wenbo Xu, Zheng Zhu, Guan Huang
- Abstract summary: Representing and synthesizing novel views in real-world dynamic scenes from casual monocular videos is a long-standing problem.
Our approach $\textbf{D}$etaches the background from the entire $\textbf{D}$ynamic scene and is called $\text{D}^4$NeRF.
Our approach outperforms previous methods in rendering texture details and motion areas while also producing a clean static background.
- Score: 19.16403828672949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representing and synthesizing novel views in real-world dynamic scenes from
casual monocular videos is a long-standing problem. Existing solutions
typically approach dynamic scenes by applying geometry techniques or utilizing
temporal information between several adjacent frames without considering the
underlying background distribution in the entire scene or the transmittance
over the ray dimension, limiting their performance on static and occlusion
areas. Our approach $\textbf{D}$istribution-$\textbf{D}$riven neural radiance
fields offers high-quality view synthesis and a 3D solution to
$\textbf{D}$etach the background from the entire $\textbf{D}$ynamic scene,
which is called $\text{D}^4$NeRF. Specifically, it employs a neural
representation to capture the scene distribution in the static background and a
6D-input NeRF to represent dynamic objects, respectively. Each ray sample is
given an additional occlusion weight to indicate the transmittance lying in the
static and dynamic components. We evaluate $\text{D}^4$NeRF on public dynamic
scenes and our urban driving scenes acquired from an autonomous-driving
dataset. Extensive experiments demonstrate that our approach outperforms
previous methods in rendering texture details and motion areas while also
producing a clean static background. Our code will be released at
https://github.com/Luciferbobo/D4NeRF.
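The abstract does not spell out how the per-sample occlusion weight enters the rendering equation, so the following is only a minimal numpy sketch of one plausible scheme: a weight blends a static-background branch and a dynamic branch per sample before standard volume rendering. The function name, the blending convention, and the weight semantics are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def composite_ray(sigma_s, rgb_s, sigma_d, rgb_d, w, deltas):
    """Blend static and dynamic radiance along one ray, then volume-render it.

    sigma_s, sigma_d : (N,)   densities from the static / dynamic branches
    rgb_s,   rgb_d   : (N, 3) colours from the static / dynamic branches
    w                : (N,)   assumed occlusion weight in [0, 1]
                       (1 -> sample treated as static, 0 -> dynamic)
    deltas           : (N,)   distances between adjacent samples
    """
    # Per-sample blending of the two branches (one plausible convention).
    sigma = w * sigma_s + (1.0 - w) * sigma_d
    rgb = w[:, None] * rgb_s + (1.0 - w[:, None]) * rgb_d

    # Standard NeRF-style alpha compositing along the ray.
    alpha = 1.0 - np.exp(-sigma * deltas)
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = transmittance * alpha
    return (weights[:, None] * rgb).sum(axis=0)  # rendered pixel colour

# Toy usage with random values for a ray of 64 samples.
rng = np.random.default_rng(0)
n = 64
color = composite_ray(rng.random(n), rng.random((n, 3)),
                      rng.random(n), rng.random((n, 3)),
                      rng.random(n), np.full(n, 0.02))
```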
Related papers
- $\textit{S}^3$Gaussian: Self-Supervised Street Gaussians for Autonomous Driving [82.82048452755394]
Photorealistic 3D reconstruction of street scenes is a critical technique for developing real-world simulators for autonomous driving.
Most existing street 3DGS methods require tracked 3D vehicle bounding boxes to decompose the static and dynamic elements.
We propose a self-supervised street Gaussian ($\textit{S}^3$Gaussian) method to decompose dynamic and static elements from 4D consistency.
arXiv Detail & Related papers (2024-05-30T17:57:08Z)
- Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video [19.0733297053322]
We introduce point-based dynamic radiance fields, where the global geometric information and volume rendering process are trained by neural point clouds and dynamic radiance fields, respectively.
Specifically, we reconstruct neural point clouds directly from geometric proxies and optimize both radiance fields and the geometric proxies using our proposed losses.
We validate the effectiveness of our method with experiments on the NVIDIA Dynamic Scenes dataset and several casually captured monocular video clips.
arXiv Detail & Related papers (2023-10-14T19:27:46Z)
- Local Implicit Ray Function for Generalizable Radiance Field Representation [20.67358742158244]
We propose LIRF (Local Implicit Ray Function), a generalizable neural rendering approach for novel view rendering.
Given 3D positions within conical frustums, LIRF takes 3D coordinates and the features of conical frustums as inputs and predicts a local volumetric radiance field.
Since the coordinates are continuous, LIRF renders high-quality novel views at a continuously-valued scale via volume rendering.
arXiv Detail & Related papers (2023-04-25T11:52:33Z)
- D-TensoRF: Tensorial Radiance Fields for Dynamic Scenes [2.587781533364185]
We present D-TensoRF, a tensorial radiance field for dynamic scenes.
We decompose the grid either into rank-one vector components (CP decomposition) or low-rank matrix components (newly proposed MM decomposition).
We show that D-TensoRF with CP decomposition and MM decomposition both have short training times and significantly low memory footprints.
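For intuition, a CP-style factorisation rebuilds a dense grid as a sum of outer products of per-axis vectors, which is what gives the small memory footprint. The toy numpy sketch below illustrates that idea on a plain 3D grid only; D-TensoRF's actual grids are higher-dimensional space-time grids and the MM decomposition is specific to the paper, so treat this purely as an illustration.

```python
import numpy as np

def cp_reconstruct(vx, vy, vz):
    """Rebuild a dense 3D grid from rank-one vector components (CP form).

    vx, vy, vz : factor matrices of shape (R, X), (R, Y), (R, Z)
    Returns    : (X, Y, Z) grid = sum_r outer(vx[r], vy[r], vz[r])
    """
    return np.einsum('rx,ry,rz->xyz', vx, vy, vz)

# Toy example: a rank-2 factorisation of a 16^3 grid stores
# 2 * (16 + 16 + 16) = 96 values instead of 16^3 = 4096.
rng = np.random.default_rng(0)
vx, vy, vz = (rng.standard_normal((2, 16)) for _ in range(3))
grid = cp_reconstruct(vx, vy, vz)
print(grid.shape)  # (16, 16, 16)
```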
arXiv Detail & Related papers (2022-12-05T15:57:55Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via neural rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z)
- Neural Radiance Flow for 4D View Synthesis and Video Processing [59.9116932930108]
We present a method to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.
Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene.
arXiv Detail & Related papers (2020-12-17T17:54:32Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
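D-NeRF factors a dynamic query into a deformation network, which maps a point and a time to an offset into a canonical space, followed by a canonical radiance field evaluated at the deformed point. The sketch below shows only that composition; the two networks are placeholder stubs standing in for the trained MLPs.

```python
import numpy as np

def deform_net(x, t):
    """Stand-in for the learned deformation network: offset from time t
    into the canonical space (a real model would be a trained MLP)."""
    return np.zeros_like(x)

def canonical_nerf(x_canonical, view_dir):
    """Stand-in for the canonical radiance field: returns (rgb, density)."""
    return np.full(3, 0.5), 1.0

def query_dynamic_point(x, t, view_dir):
    """D-NeRF-style two-stage query: deform to canonical space, then
    evaluate a single static radiance field there."""
    dx = deform_net(x, t)
    return canonical_nerf(x + dx, view_dir)

rgb, sigma = query_dynamic_point(np.array([0.1, 0.2, 0.3]), 0.5,
                                 np.array([0.0, 0.0, 1.0]))
```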
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
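For reference, the standard discretized volume-rendering rule behind this statement, with colours $\mathbf{c}_i$ and densities $\sigma_i$ predicted by the network at samples spaced $\delta_i$ apart along a ray $\mathbf{r}$:

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i\,\bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i, \qquad T_i = \exp\Bigl(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Bigr)$$

Every term is differentiable with respect to the predicted colours and densities, so a photometric loss against posed images is enough to optimize the representation.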
arXiv Detail & Related papers (2020-03-19T17:57:23Z)