NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects
- URL: http://arxiv.org/abs/2303.14435v1
- Date: Sat, 25 Mar 2023 11:03:53 GMT
- Title: NeRF-DS: Neural Radiance Fields for Dynamic Specular Objects
- Authors: Zhiwen Yan, Chen Li, Gim Hee Lee
- Abstract summary: Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of rendering photo-realistic novel view images from a monocular RGB video of a dynamic scene, but it does not model the change of reflected color when warping moving points to a common canonical space.
We address this limitation by reformulating the neural radiance field function to be conditioned on surface position and orientation in the observation space.
We evaluate novel view synthesis quality on a self-collected dataset of moving specular objects in realistic environments.
- Score: 63.04781030984006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Neural Radiance Field (NeRF) is a powerful algorithm capable of
rendering photo-realistic novel view images from a monocular RGB video of a
dynamic scene. Although it warps moving points across frames from the
observation spaces to a common canonical space for rendering, dynamic NeRF does
not model the change of the reflected color during the warping. As a result,
this approach often fails drastically on challenging specular objects in
motion. We address this limitation by reformulating the neural radiance field
function to be conditioned on surface position and orientation in the
observation space. This allows the specular surface at different poses to keep
the different reflected colors when mapped to the common canonical space.
Additionally, we add the mask of moving objects to guide the deformation field.
As the specular surface changes color during motion, the mask mitigates the
problem of failure to find temporal correspondences with only RGB supervision.
We evaluate our model based on the novel view synthesis quality with a
self-collected dataset of different moving specular objects in realistic
environments. The experimental results demonstrate that our method
significantly improves the reconstruction quality of moving specular objects
from monocular RGB videos compared to the existing NeRF models. Our code and
data are available at the project website https://github.com/JokerYan/NeRF-DS.
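To make this reformulation concrete, the following is a minimal PyTorch-style sketch of a dynamic NeRF whose color branch is conditioned on the observation-space surface position and normal in addition to the warped canonical coordinate. All module names, layer sizes, and the per-frame time code are illustrative assumptions, not the authors' implementation; the mask-guided deformation supervision is only noted in a comment.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out, depth=4):
    # Plain ReLU MLP helper.
    layers, d = [], d_in
    for _ in range(depth - 1):
        layers += [nn.Linear(d, d_hidden), nn.ReLU()]
        d = d_hidden
    return nn.Sequential(*layers, nn.Linear(d, d_out))

class DynamicSpecularNeRF(nn.Module):
    """Hypothetical sketch of the NeRF-DS idea: a deformation field warps
    observation-space points into a canonical space, while the color branch
    is additionally conditioned on the observation-space position x_obs and
    surface normal n_obs, so a specular surface can keep pose-dependent
    reflected colors after being mapped to the canonical space."""
    def __init__(self, d_time=8, d_hidden=256):
        super().__init__()
        # Deformation: (x_obs, per-frame time code) -> offset into canonical space.
        # A moving-object mask would additionally supervise this field in training.
        self.deform = mlp(3 + d_time, d_hidden, 3)
        # Canonical field: canonical point -> (density, feature).
        self.density = mlp(3, d_hidden, 1 + d_hidden)
        # Color branch: canonical feature + view direction + x_obs + n_obs -> RGB.
        self.color = mlp(d_hidden + 3 + 3 + 3, d_hidden, 3, depth=2)

    def forward(self, x_obs, n_obs, view_dir, t_code):
        x_canon = x_obs + self.deform(torch.cat([x_obs, t_code], dim=-1))
        out = self.density(x_canon)
        sigma, feat = out[..., :1], out[..., 1:]
        rgb = torch.sigmoid(
            self.color(torch.cat([feat, view_dir, x_obs, n_obs], dim=-1)))
        return sigma, rgb
```

In practice the positional inputs would be frequency-encoded and the surface normal derived from the density gradient; those standard NeRF details are omitted here for brevity.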
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
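A rough sketch of the ray-casting idea in NeRF-Casting above, assuming a generic `(density, feature)` field `query_feat` rather than the paper's actual interface: reflect the view ray about the surface normal and accumulate features along the reflected ray with standard volume-rendering weights.

```python
import torch

def reflect(d, n):
    # Mirror the view direction d about the surface normal n (both unit vectors).
    return d - 2.0 * (d * n).sum(-1, keepdim=True) * n

def trace_reflection_feature(query_feat, x, d, n, n_samples=32, far=4.0):
    """Hypothetical sketch: instead of querying a network for outgoing radiance
    at x directly, cast the reflected ray from x and accumulate NeRF features
    along it. query_feat(points) -> (sigma, feature) is an assumed field."""
    r = reflect(d, n)
    ts = torch.linspace(0.05, far, n_samples)
    pts = x.unsqueeze(-2) + ts.unsqueeze(-1) * r.unsqueeze(-2)   # [..., S, 3]
    sigma, feat = query_feat(pts)                                # [..., S, 1], [..., S, F]
    # Standard volume-rendering weights along the reflected ray.
    delta = far / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)          # [..., S]
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[..., :1]),
                                     1.0 - alpha[..., :-1]], dim=-1), dim=-1)
    weights = alpha * trans                                      # [..., S]
    return (weights.unsqueeze(-1) * feat).sum(-2)                # [..., F]
```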
- MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video [14.678582015968916]
We introduce MorpheuS, a framework for dynamic 360° surface reconstruction from a casually captured RGB-D video.
Our approach models the target scene as a canonical field that encodes its geometry and appearance.
We leverage a view-dependent diffusion prior and distill knowledge from it to achieve realistic completion of unobserved regions.
arXiv Detail & Related papers (2023-12-01T18:55:53Z)
- Animating NeRFs from Texture Space: A Framework for Pose-Dependent Rendering of Human Performances [11.604386285817302]
We introduce a novel NeRF-based framework for pose-dependent rendering of human performances.
Our approach results in high-quality renderings for novel-view and novel-pose synthesis.
arXiv Detail & Related papers (2023-11-06T14:34:36Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via neural feature rendering.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
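A minimal sketch of the pose "inversion" in iNeRF above: with the radiance field frozen, a 6-DoF pose perturbation is optimized by gradient descent on photometric error. The differentiable `render_fn` interface and the first-order SE(3) update are assumptions for illustration, not iNeRF's actual API.

```python
import torch

def se3_update(pose, twist):
    # First-order SE(3) update: R <- (I + [w]_x) R, t <- t + v.
    w, v = twist[:3], twist[3:]
    zero = torch.zeros((), dtype=twist.dtype)
    skew = torch.stack([torch.stack([zero, -w[2],  w[1]]),
                        torch.stack([w[2],  zero, -w[0]]),
                        torch.stack([-w[1], w[0],  zero])])
    R = (torch.eye(3, dtype=pose.dtype) + skew) @ pose[:3, :3]
    t = pose[:3, 3] + v
    top = torch.cat([R, t.unsqueeze(1)], dim=1)                  # 3x4
    bottom = torch.tensor([[0., 0., 0., 1.]], dtype=pose.dtype)  # 1x4
    return torch.cat([top, bottom], dim=0)                       # 4x4

def estimate_pose(render_fn, target_rgb, pose_init, iters=300, lr=1e-2):
    """Freeze the NeRF; optimize only a pose perturbation (iNeRF-style).
    render_fn(pose) is assumed to differentiably render an RGB image from
    a 4x4 camera-to-world matrix."""
    twist = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([twist], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = (render_fn(se3_update(pose_init, twist)) - target_rgb).pow(2).mean()
        loss.backward()
        opt.step()
    return se3_update(pose_init, twist.detach())
```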
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)