Dynamic Neural Radiance Field From Defocused Monocular Video
- URL: http://arxiv.org/abs/2407.05586v2
- Date: Wed, 31 Jul 2024 06:04:25 GMT
- Title: Dynamic Neural Radiance Field From Defocused Monocular Video
- Authors: Xianrui Luo, Huiqiang Sun, Juewen Peng, Zhiguo Cao,
- Abstract summary: We propose D2RF, the first dynamic NeRF method designed to restore sharp novel views from defocused monocular videos.
We introduce layered Depth-of-Field (DoF) volume rendering to model the defocus blur and reconstruct a sharp NeRF supervised by defocused views.
Our method outperforms existing approaches in synthesizing all-in-focus novel views from defocus blur while maintaining spatial-temporal consistency in the scene.
- Score: 15.789775912053507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Neural Radiance Field (NeRF) from monocular videos has recently been explored for space-time novel view synthesis and has achieved excellent results. However, defocus blur caused by depth variation often occurs during video capture, compromising the quality of dynamic reconstruction because the lack of sharp details interferes with modeling temporal consistency between input views. To tackle this issue, we propose D2RF, the first dynamic NeRF method designed to restore sharp novel views from defocused monocular videos. We introduce layered Depth-of-Field (DoF) volume rendering to model the defocus blur and to reconstruct a sharp NeRF supervised by defocused views. The blur model is inspired by the connection between DoF rendering and volume rendering: the opacity in volume rendering aligns with the layer visibility in DoF rendering. To execute the blurring, we adapt the layered blur kernel into a ray-based kernel and employ an optimized sparse kernel to gather the input rays efficiently, rendering them with our layered DoF volume rendering. We synthesize a dataset of defocused dynamic scenes for our task, and extensive experiments on this dataset show that our method outperforms existing approaches in synthesizing all-in-focus novel views from defocused inputs while maintaining spatio-temporal consistency in the scene.
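The rendering idea described above can be sketched as follows: compute a sharp (all-in-focus) color per ray with standard volume rendering, then blend neighboring rays with a sparse defocus kernel so the blurred output can be supervised by the defocused frames. This is a minimal sketch under my own assumptions; the function names, tensor shapes, and the way the sparse kernel is supplied (`neighbor_idx`, `kernel_weights`) are illustrative and not taken from the paper's implementation.

```python
import torch

def volume_render_weights(sigma, deltas):
    # Standard NeRF compositing weights: w_i = T_i * (1 - exp(-sigma_i * delta_i)).
    # The transmittance T_i plays the role of the "layer visibility" mentioned in the abstract.
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # [N_rays, N_samples]
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1), dim=-1
    )[:, :-1]
    return trans * alpha

def layered_dof_render(rgb, sigma, deltas, neighbor_idx, kernel_weights):
    """Hypothetical layered DoF volume rendering sketch (not the authors' code).
    rgb:            [N_rays, N_samples, 3] per-sample colors
    sigma:          [N_rays, N_samples]    per-sample densities
    deltas:         [N_rays, N_samples]    sample spacings
    neighbor_idx:   [N_rays, K]            rays gathered by a sparse blur kernel
    kernel_weights: [N_rays, K]            normalized defocus-kernel weights
    """
    w = volume_render_weights(sigma, deltas)                        # [N_rays, N_samples]
    sharp = (w[..., None] * rgb).sum(dim=1)                         # all-in-focus color per ray
    gathered = sharp[neighbor_idx]                                  # [N_rays, K, 3]
    blurred = (kernel_weights[..., None] * gathered).sum(dim=1)     # defocused color
    return sharp, blurred
```

In such a scheme, `blurred` would be compared against the defocused input pixels during training, while `sharp` is the all-in-focus prediction used at test time.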
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
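A minimal sketch of that reflection-tracing idea, assuming a generic `nerf` callable that returns a density and a feature vector per 3D point (the reflection about a surface normal and the uniform secondary-ray march below are simplifications for illustration, not the paper's exact procedure):

```python
import torch

def reflect(view_dir, normal):
    # Mirror the viewing direction about the surface normal: d' = d - 2 (d . n) n
    return view_dir - 2.0 * (view_dir * normal).sum(-1, keepdim=True) * normal

def trace_reflection_features(nerf, points, view_dirs, normals, n_samples=32, t_max=4.0):
    """Cast secondary rays from surface points and composite NeRF features along them."""
    refl = reflect(view_dirs, normals)                                   # [N, 3]
    t = torch.linspace(0.05, t_max, n_samples, device=points.device)     # sample depths
    samples = points[:, None, :] + refl[:, None, :] * t[None, :, None]   # [N, S, 3]
    sigma, feat = nerf(samples)                                          # [N, S], [N, S, F]
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    w = trans * alpha
    return (w[..., None] * feat).sum(dim=1)                              # reflected feature per point
```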
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
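As a rough illustration of a DCT-parameterized trajectory (the basis size, normalization, and frame-index convention below are generic assumptions, not necessarily the paper's formulation), a point's motion over time can be written as a low-frequency cosine series:

```python
import math
import torch

def dct_trajectory(coeffs, t, num_frames):
    """Evaluate a DCT-parameterized trajectory at frame index t.
    coeffs: [K, 3] learnable DCT coefficients for one 3D point
    Returns the 3D displacement at time t (added to a canonical position).
    """
    k = torch.arange(1, coeffs.shape[0] + 1, dtype=coeffs.dtype)   # frequencies 1..K
    basis = torch.cos(math.pi / num_frames * (t + 0.5) * k)        # [K]
    return (basis[:, None] * coeffs).sum(dim=0)                    # [3]
```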
arXiv Detail & Related papers (2024-03-15T08:48:37Z) - DyBluRF: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video [25.964642223641057]
We propose a novel dynamic deblurring framework for blurry monocular video, called DyBluRF.
DyBluRF is the first to handle novel view synthesis for blurry monocular video, using a two-stage framework.
arXiv Detail & Related papers (2023-12-21T02:01:19Z) - PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields [54.8553158441296]
We propose a novel visual localization framework, i.e., PNeRFLoc, based on a unified point-based representation.
On the one hand, PNeRFLoc supports the initial pose estimation by matching 2D and 3D feature points.
On the other hand, it also enables pose refinement with novel view synthesis using rendering-based optimization.
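A generic sketch of rendering-based pose refinement (the `render` interface, pose parameterization, and optimizer settings below are assumptions for illustration; PNeRFLoc's actual formulation may differ):

```python
import torch

def refine_pose(render, query_image, init_pose, iters=100, lr=1e-3):
    """Refine a 6-DoF pose by minimizing photometric error against the query image.
    render(pose) -> differentiable rendering of the scene representation at `pose`.
    init_pose:     [6] initial pose parameters (e.g., axis-angle + translation)
                   obtained from 2D-3D feature matching.
    """
    pose = init_pose.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(render(pose), query_image)
        loss.backward()
        opt.step()
    return pose.detach()
```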
arXiv Detail & Related papers (2023-12-17T08:30:00Z) - NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences [53.8501224122952]
We propose a novel neural video-based radiance fields (NeVRF) representation.
NeVRF marries neural radiance field with image-based rendering to support photo-realistic novel view synthesis on long-duration dynamic inward-looking scenes.
Our experiments demonstrate the effectiveness of NeVRF in enabling long-duration sequence rendering, sequential data reconstruction, and compact data storage.
arXiv Detail & Related papers (2023-12-10T11:14:30Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
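A simplified sketch of occupancy-grid-guided ray sampling (the grid layout, threshold, and fallback behavior below are illustrative choices, not CLONeR's exact procedure):

```python
import torch

def ogm_guided_samples(ray_o, ray_d, ogm, voxel_size, t_vals):
    """Keep only ray samples that fall in occupied voxels of a 3D occupancy grid.
    ray_o, ray_d: [3] ray origin and unit direction (metric space)
    ogm:          [X, Y, Z] occupancy probabilities (e.g., built from LiDAR)
    t_vals:       [S] candidate sample depths along the ray
    Assumes the grid origin coincides with the world origin.
    """
    pts = ray_o[None, :] + t_vals[:, None] * ray_d[None, :]          # [S, 3] sample positions
    idx = (pts / voxel_size).long().clamp_min(0)                     # voxel indices
    for dim, size in enumerate(ogm.shape):                           # clamp to grid bounds
        idx[:, dim] = idx[:, dim].clamp_max(size - 1)
    occupied = ogm[idx[:, 0], idx[:, 1], idx[:, 2]] > 0.5            # occupancy test
    return t_vals[occupied] if occupied.any() else t_vals            # fall back to all samples
```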
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - T\"oRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z) - NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale 3D scenes.
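For reference, the volume rendering that NeRF uses composites per-sample density $\sigma_i$ and color $\mathbf{c}_i$ along each camera ray as

$$
\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr)\,\mathbf{c}_i,
\qquad
T_i = \exp\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr),
$$

where $\delta_i$ is the spacing between adjacent samples; NeRF++'s contribution concerns how the unbounded space around the cameras is parameterized before this compositing is applied.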
arXiv Detail & Related papers (2020-10-15T03:24:14Z)