Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
- URL: http://arxiv.org/abs/2206.15258v1
- Date: Thu, 30 Jun 2022 13:09:39 GMT
- Title: Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
- Authors: Hongrui Cai, Wanquan Feng, Xuetao Feng, Yan Wang, Juyong Zhang
- Abstract summary: We propose a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera.
Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
- Score: 26.410460029742456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. NDR adopts a neural implicit function for surface representation and rendering, so that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that cycle consistency between any two frames is automatically satisfied. Since the surface topology of a dynamic scene may change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR further refines the camera poses via global optimization. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
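The abstract's invertibility argument can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): if the per-frame deformation to canonical space is composed of exactly invertible coupling layers, then mapping a point frame i → canonical → frame j → canonical → frame i recovers it exactly, so cycle consistency between any two frames holds by construction. The frame codes, layer sizes, and additive-coupling form below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class CouplingLayer:
    """Additive coupling: shift two coordinates by an MLP of the third
    coordinate and a per-frame code. Exactly invertible by construction."""
    def __init__(self, axis):
        self.axis = axis  # the coordinate left unchanged (the conditioner)
        self.W1 = rng.normal(scale=0.1, size=(5, 16))  # 1 coord + 4-D frame code
        self.W2 = rng.normal(scale=0.1, size=(16, 2))

    def _shift(self, p, code):
        h = np.tanh(np.concatenate([p[self.axis:self.axis + 1], code]) @ self.W1)
        return h @ self.W2

    def forward(self, p, code):
        q = p.copy()
        others = [i for i in range(3) if i != self.axis]
        q[others] += self._shift(p, code)  # conditioner coord is untouched,
        return q                            # so the shift can be recomputed

    def inverse(self, p, code):
        q = p.copy()
        others = [i for i in range(3) if i != self.axis]
        q[others] -= self._shift(p, code)  # exact analytic inverse
        return q

class InvertibleDeformation:
    """Observation space <-> canonical space, conditioned on a frame code."""
    def __init__(self, n_layers=4):
        self.layers = [CouplingLayer(axis=k % 3) for k in range(n_layers)]

    def to_canonical(self, p, code):
        for layer in self.layers:
            p = layer.forward(p, code)
        return p

    def from_canonical(self, p, code):
        for layer in reversed(self.layers):
            p = layer.inverse(p, code)
        return p

net = InvertibleDeformation()
code_i, code_j = rng.normal(size=4), rng.normal(size=4)
p_i = np.array([0.3, -0.1, 0.5])

# frame i -> canonical -> frame j -> canonical -> frame i
p_j = net.from_canonical(net.to_canonical(p_i, code_i), code_j)
p_back = net.from_canonical(net.to_canonical(p_j, code_j), code_i)
print(np.allclose(p_i, p_back))  # round trip recovers the point exactly
```

Because invertibility is structural rather than learned, no cycle-consistency loss is needed; in the sketch the round trip is exact up to floating-point error.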
Related papers
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video [14.678582015968916]
We introduce MorpheuS, a framework for dynamic 360° surface reconstruction from a casually captured RGB-D video.
Our approach models the target scene as a canonical field that encodes its geometry and appearance.
We leverage a view-dependent diffusion prior and distill knowledge from it to achieve realistic completion of unobserved regions.
arXiv Detail & Related papers (2023-12-01T18:55:53Z)
- DynamicSurf: Dynamic Neural RGB-D Surface Reconstruction with an Optimizable Feature Grid [7.702806654565181]
DynamicSurf is a model-free neural implicit surface reconstruction method for high-fidelity 3D modelling of non-rigid surfaces from monocular RGB-D video.
We learn a neural deformation field that maps a canonical representation of the surface geometry to the current frame.
We demonstrate that it can optimize sequences of varying frames with a 6× speedup over pure MLP-based approaches.
arXiv Detail & Related papers (2023-11-14T13:39:01Z)
- DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content for initial camera pose estimation and statics-focused ray sampling for fast and accurate novel-view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
arXiv Detail & Related papers (2023-09-16T08:46:59Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model [76.64071133839862]
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications.
Our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering.
Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
arXiv Detail & Related papers (2022-06-16T17:59:54Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but reconstruction quality drops significantly for larger, more complex scenes and for scenes captured from sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction [88.02850205432763]
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs.
Existing neural surface reconstruction approaches, such as DVR and IDR, require a foreground mask as supervision.
We observe that the conventional volume rendering method causes inherent geometric errors for surface reconstruction.
We propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision.
arXiv Detail & Related papers (2021-06-20T12:59:42Z)
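NeuS's key idea can be sketched numerically (a hypothetical toy, not the released code): converting consecutive SDF samples along a ray into alphas via the logistic CDF of the SDF makes the rendering weight peak at the zero-level set, rather than in front of it as with conventional density-based volume rendering. The sphere SDF, sharpness parameter `s`, and sampling scheme below are illustrative assumptions.

```python
import numpy as np

def sdf_sphere(p, r=1.0):
    """Signed distance to a unit sphere at the origin (negative inside)."""
    return np.linalg.norm(p, axis=-1) - r

def neus_weights(sdf_vals, s=64.0):
    """Discrete NeuS-style rendering weights from SDF samples along a ray."""
    phi = 1.0 / (1.0 + np.exp(-s * sdf_vals))            # logistic CDF of the SDF
    alpha = np.clip((phi[:-1] - phi[1:]) / (phi[:-1] + 1e-9), 0.0, 1.0)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])  # transmittance
    return trans * alpha

# Ray from z = -2 along +z: it first crosses the sphere at t = 1.0.
t = np.linspace(0.0, 4.0, 256)
origin = np.array([0.0, 0.0, -2.0])
direction = np.array([0.0, 0.0, 1.0])
pts = origin + t[:, None] * direction

w = neus_weights(sdf_sphere(pts))
t_mid = 0.5 * (t[:-1] + t[1:])
print(t_mid[np.argmax(w)])  # weight peaks near t = 1.0, the first surface crossing
```

In the toy, the weight maximum lands at the ray's first intersection with the sphere; sharpening `s` concentrates the weights further around the surface, which is the sense in which the formulation is unbiased to first order.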
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.