Deblurred Neural Radiance Field with Physical Scene Priors
- URL: http://arxiv.org/abs/2211.12046v1
- Date: Tue, 22 Nov 2022 06:40:53 GMT
- Title: Deblurred Neural Radiance Field with Physical Scene Priors
- Authors: Dogyoon Lee, Minhyeok Lee, Chajin Shin, Sangyoun Lee
- Abstract summary: This paper proposes DP-NeRF, a NeRF framework for blurred images that is constrained by two physical priors.
We present extensive experimental results for synthetic and real scenes with two types of blur: camera motion blur and defocus blur.
- Score: 6.128295038453101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) has exhibited outstanding three-dimensional (3D)
reconstruction quality via the novel view synthesis from multi-view images and
paired calibrated camera parameters. However, previous NeRF-based systems have
been demonstrated under strictly controlled settings, with little attention
paid to less ideal scenarios, including the presence of degradations such as
exposure variation, illumination changes, and blur. In particular, though blur frequently
occurs in real situations, NeRF that can handle blurred images has received
little attention. The few studies that have investigated NeRF for blurred
images have not considered geometric and appearance consistency in 3D space,
which is one of the most important factors in 3D reconstruction. This leads to
inconsistency and the degradation of the perceptual quality of the constructed
scene. Hence, this paper proposes DP-NeRF, a novel clean NeRF framework for
blurred images, which is constrained by two physical priors. These priors are
derived from the actual blurring process during image acquisition by the
camera. DP-NeRF introduces a rigid blurring kernel that imposes 3D consistency
using the physical priors, and an adaptive weight proposal that refines the color
composition error by accounting for the relationship between depth and blur.
We present extensive experimental results for synthetic and real scenes with
two types of blur: camera motion blur and defocus blur. The results demonstrate
that DP-NeRF successfully improves the perceptual quality of the constructed
NeRF while ensuring 3D geometric and appearance consistency. We further demonstrate
the effectiveness of our model with comprehensive ablation analysis.
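At its core, the abstract describes modeling a blurred pixel as a weighted composition of colors rendered along several rigidly transformed rays. A minimal PyTorch sketch of that composition idea follows; it is illustrative only, with hypothetical names, and is not the authors' implementation:

```python
import torch

def composite_blurred_pixel(ray_colors: torch.Tensor,
                            logits: torch.Tensor) -> torch.Tensor:
    """Compose one blurred pixel from sharp colors rendered along K rays.

    ray_colors: (K, 3) colors rendered from K rigidly transformed rays
    logits:     (K,) learnable composition weights (softmax-normalized)
    """
    w = torch.softmax(logits, dim=0)             # positive weights summing to 1
    return (w[:, None] * ray_colors).sum(dim=0)  # (3,) blurred pixel color

# Hypothetical usage: 5 rays sampled from a rigid blurring kernel
colors = torch.rand(5, 3)                    # stand-ins for per-ray NeRF renders
logits = torch.zeros(5, requires_grad=True)  # learned jointly with the NeRF
blurred = composite_blurred_pixel(colors, logits)
```

Because the rays share rigid transformations of the camera rather than being perturbed independently per pixel, the composed blur stays consistent across views, which is the 3D consistency the rigid blurring kernel is designed to impose.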
Related papers
- Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment [23.15130387716121]
We propose Event-driven Bundle Adjustment for Deblurring Neural Radiance Fields (EBAD-NeRF) to jointly optimize the learnable camera poses and NeRF parameters.
EBAD-NeRF can recover an accurate camera trajectory during the exposure time and learn sharper 3D representations than prior works.
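A minimal sketch of what jointly optimizing learnable poses and NeRF parameters can look like in PyTorch; the tiny MLP and the 6-DoF pose deltas are hypothetical stand-ins, not EBAD-NeRF's actual architecture:

```python
import torch

# Stand-in radiance field and learnable 6-DoF corrections for the
# control poses that describe the camera trajectory during exposure.
nerf = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                           torch.nn.Linear(64, 4))
pose_deltas = torch.nn.Parameter(torch.zeros(8, 6))  # 8 poses x (rot, trans)

# One optimizer updates both groups, bundle-adjustment style.
opt = torch.optim.Adam([{"params": nerf.parameters(), "lr": 5e-4},
                        {"params": [pose_deltas],     "lr": 1e-3}])
```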
arXiv Detail & Related papers (2024-06-20T14:33:51Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis [99.06490355990354]
We propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.
Our approach can considerably enhance model performance in sparse view conditions, achieving improvements of up to 94% in PSNR and 31% in LPIPS, along with gains in SSIM.
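One common way to enforce depth-based multi-view consistency is to back-project a pixel with its depth and reproject it into another view, then compare the two observations. The sketch below shows only that warping step; it is a generic illustration, not the paper's exact formulation:

```python
import numpy as np

def reproject(pixel, depth, K, T_ij):
    """Warp a pixel from view i into view j using its depth.

    pixel: (u, v) in view i; depth: depth along the ray
    K:     (3, 3) camera intrinsics; T_ij: (4, 4) pose of view i in view j
    """
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    p_i = depth * (np.linalg.inv(K) @ uv1)   # back-project to 3D in view i
    p_j = T_ij[:3, :3] @ p_i + T_ij[:3, 3]   # transform into view j's frame
    uv_j = K @ p_j
    return uv_j[:2] / uv_j[2]                # perspective divide

# Hypothetical usage; a consistency loss would compare the colors
# (or depths) observed at the source pixel and its reprojection.
uv = reproject((32.0, 24.0), depth=2.0, K=np.eye(3), T_ij=np.eye(4))
```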
arXiv Detail & Related papers (2023-05-18T15:18:01Z)
- Dehazing-NeRF: Neural Radiance Fields from Hazy Images [13.92247691561793]
We propose Dehazing-NeRF, a method that can recover clear NeRF from hazy image inputs.
Our method simulates the physical imaging process of hazy images using an atmospheric scattering model.
Our method outperforms the simple combination of single-image dehazing and NeRF on both image dehazing and novel view synthesis.
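The atmospheric scattering model referenced here is the standard one, I = J * t + A * (1 - t) with transmission t = exp(-beta * d). A small NumPy sketch (parameter values are arbitrary examples):

```python
import numpy as np

def hazy_from_clear(J: np.ndarray, depth: np.ndarray,
                    beta: float = 0.1, A: float = 1.0) -> np.ndarray:
    """Atmospheric scattering model: I = J * t + A * (1 - t).

    J:     (H, W, 3) haze-free radiance
    depth: (H, W) scene depth; transmission t = exp(-beta * depth)
    beta:  scattering coefficient; A: global atmospheric light
    """
    t = np.exp(-beta * depth)[..., None]  # (H, W, 1) transmission map
    return J * t + A * (1.0 - t)

# Distant pixels (large depth) converge toward the airlight A.
I = hazy_from_clear(np.random.rand(4, 4, 3), depth=np.full((4, 4), 20.0))
```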
arXiv Detail & Related papers (2023-04-22T17:09:05Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
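The general pattern of such test-time distillation can be sketched as a loop that alternates between rendering virtual views from the NeRF, refining them with the diffusion model, and fitting the NeRF to the refined views. The skeleton below captures only that control flow; all callables are hypothetical stand-ins, not NerfDiff's actual components:

```python
import numpy as np

def distill(nerf_render, cdm_refine, nerf_update, virtual_poses, steps=3):
    """Alternate NeRF-guided synthesis, diffusion refinement, and finetuning."""
    for _ in range(steps):
        views = [nerf_render(p) for p in virtual_poses]  # NeRF-guided proposals
        refined = [cdm_refine(v) for v in views]         # refine with the CDM
        nerf_update(virtual_poses, refined)              # finetune NeRF on them

# Stand-ins so the skeleton runs end to end:
distill(lambda p: np.zeros((8, 8, 3)), lambda v: v,
        lambda ps, vs: None, virtual_poses=[0, 1, 2])
```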
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present BAD-NeRF, a novel bundle adjusted deblur Neural Radiance Fields framework.
BAD-NeRF can be robust to severe motion blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion blurred image, and jointly learns the parameters of NeRF.
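The physical model of motion blur is the time average of sharp images along the camera trajectory within the exposure, B = (1/n) * sum_i C_i. A minimal sketch of that formation step; BAD-NeRF interpolates poses in SE(3), whereas the linear interpolation here is a simplification and the renderer is a stand-in:

```python
import numpy as np

def motion_blurred_render(render_fn, pose_start, pose_end, n=7):
    """Average sharp renders along the exposure trajectory: B = mean_i C_i."""
    alphas = np.linspace(0.0, 1.0, n)
    frames = [render_fn((1 - a) * pose_start + a * pose_end) for a in alphas]
    return np.mean(frames, axis=0)

# Stand-in renderer so the sketch runs:
blurred = motion_blurred_render(lambda p: np.full((4, 4, 3), p.sum()),
                                np.zeros(6), np.ones(6))
```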
arXiv Detail & Related papers (2022-11-23T10:53:37Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentation into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
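Worst-case perturbations are typically found by gradient ascent on the training loss, as in adversarial training. The sketch below shows that generic pattern on input coordinates only; Aug-NeRF's actual scheme spans three pipeline levels and is more involved:

```python
import torch

def worst_case_perturb(model, x, eps=0.01, lr=0.01):
    """One-step adversarial perturbation of inputs (generic min-max sketch)."""
    delta = torch.zeros_like(x, requires_grad=True)
    loss = model(x + delta).pow(2).mean()  # stand-in for the training loss
    loss.backward()
    with torch.no_grad():
        delta += lr * delta.grad.sign()    # ascend the loss
        delta.clamp_(-eps, eps)            # keep the perturbation bounded
    return (x + delta).detach()

# Hypothetical usage with a stand-in model:
x_aug = worst_case_perturb(torch.nn.Linear(3, 1), torch.rand(16, 3))
```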
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
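For context, a continuous-wave ToF camera measures a phase shift between emitted and received modulated light, from which depth follows as d = c * phase / (4 * pi * f_mod), valid up to an ambiguity range of c / (2 * f_mod). A small sketch of that standard relation (not the paper's full image formation model):

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def cw_tof_depth(phase: np.ndarray, f_mod: float) -> np.ndarray:
    """Depth from a continuous-wave ToF phase: d = c * phase / (4 * pi * f_mod)."""
    return C * phase / (4.0 * np.pi * f_mod)

# Example: a pi/2 phase shift at 30 MHz modulation is about 1.25 m.
print(cw_tof_depth(np.array(np.pi / 2), f_mod=30e6))
```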
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.