DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction
using Neural Radiance Fields
- URL: http://arxiv.org/abs/2303.11364v1
- Date: Mon, 20 Mar 2023 18:03:32 GMT
- Title: DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction
using Neural Radiance Fields
- Authors: Wei-Ting Chen, Wang Yifan, Sy-Yen Kuo, Gordon Wetzstein
- Abstract summary: We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
- Score: 56.30120727729177
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRFs) have demonstrated state-of-the-art performance
for 3D computer vision tasks, including novel view synthesis and 3D shape
reconstruction. However, these methods fail in adverse weather conditions. To
address this challenge, we introduce DehazeNeRF as a framework that robustly
operates in hazy conditions. DehazeNeRF extends the volume rendering equation
by adding physically realistic terms that model atmospheric scattering. By
parameterizing these terms using suitable networks that match the physical
properties, we introduce effective inductive biases, which, together with the
proposed regularizations, allow DehazeNeRF to demonstrate successful multi-view
haze removal, novel view synthesis, and 3D shape reconstruction where existing
approaches fail.
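For intuition, the standard atmospheric scattering (Koschmieder) model that such physically realistic terms typically build on can be written as follows; this is the textbook form, and DehazeNeRF's exact parameterization may differ:

\[ I(\mathbf{x}) = J(\mathbf{x})\, e^{-\beta d(\mathbf{x})} + A\,\bigl(1 - e^{-\beta d(\mathbf{x})}\bigr), \]

where \(J\) is the haze-free scene radiance, \(d\) the scene depth, \(\beta\) the scattering coefficient, and \(A\) the global airlight. In a NeRF setting, the analogous attenuation and in-scattering (airlight) terms enter the volume rendering integral along each ray rather than acting on a single depth per pixel.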
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- Dehazing-NeRF: Neural Radiance Fields from Hazy Images [13.92247691561793]
We propose Dehazing-NeRF, a method that can recover clear NeRF from hazy image inputs.
Our method simulates the physical imaging process of hazy images using an atmospheric scattering model.
Our method outperforms the simple combination of single-image dehazing and NeRF on both image dehazing and novel view synthesis.
arXiv Detail & Related papers (2023-04-22T17:09:05Z)
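As a concrete, simplified illustration of the atmospheric scattering model mentioned in the Dehazing-NeRF summary above, the following sketch composites a hazy image from a clean rendering and a depth map; the function name and default constants are illustrative, not the paper's implementation:

    import numpy as np

    def add_haze(clean_rgb, depth, beta=0.1, airlight=(0.8, 0.8, 0.8)):
        """Composite a hazy image via the Koschmieder model
        I = J * t + A * (1 - t), with transmission t = exp(-beta * d).
        clean_rgb: (H, W, 3) haze-free radiance J in [0, 1]
        depth:     (H, W) per-pixel scene depth d
        """
        t = np.exp(-beta * depth)[..., None]  # transmission map, (H, W, 1)
        A = np.asarray(airlight)              # global airlight color
        return clean_rgb * t + A * (1.0 - t)  # direct term + airlight term

Inverting this model, i.e. estimating J, t, and A from hazy inputs alone, is the dehazing problem these papers address.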
- Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction [77.69363640021503]
3D-aware image synthesis encompasses a variety of tasks, such as scene generation and novel view synthesis from images.
We present SSDNeRF, a unified approach that employs an expressive diffusion model to learn a generalizable prior of neural radiance fields (NeRF) from multi-view images of diverse objects.
arXiv Detail & Related papers (2023-04-13T17:59:01Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that immediately benefits existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D-consistent virtual views from the CDM samples and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
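A loose sketch of the test-time distillation loop described in the NerfDiff summary above; all object and method names here (render, refine, finetune_step) are hypothetical placeholders, not the paper's actual API:

    def nerf_guided_distillation(nerf, cdm, input_image, virtual_poses, n_rounds=100):
        """Alternate between rendering 3D-consistent virtual views with the NeRF,
        refining them with the image-conditioned diffusion model (CDM), and
        fitting the NeRF to the refined views."""
        for _ in range(n_rounds):
            views = [nerf.render(pose) for pose in virtual_poses]             # consistent but blurry
            refined = [cdm.refine(view, cond=input_image) for view in views]  # sharper, may drift
            nerf.finetune_step(refined, virtual_poses)                        # pull NeRF toward refined views
        return nerf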
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis.
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
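The differentiable rendering formulation referred to above is, in discrete form, the standard NeRF compositing quadrature; a minimal NumPy sketch follows (in practice this runs in an autodiff framework so gradients flow back to the network):

    import numpy as np

    def composite_ray(sigmas, colors, deltas):
        """Discrete NeRF volume rendering along one ray:
        C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
        where T_i = exp(-sum_{j<i} sigma_j * delta_j).
        sigmas: (N,) densities, colors: (N, 3), deltas: (N,) sample spacings."""
        alphas = 1.0 - np.exp(-sigmas * deltas)                         # per-sample opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))  # accumulated transmittance T_i
        weights = trans * alphas
        return (weights[:, None] * colors).sum(axis=0)                  # composited RGB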
- OmniNeRF: Hybriding Omnidirectional Distance and Radiance fields for Neural Surface Reconstruction [22.994952933576684]
Ground-breaking research in the neural radiance field (NeRF) has dramatically improved the representation quality of 3D objects.
Later studies improved NeRF by building truncated signed distance fields (TSDFs), but these methods still suffer from blurred surfaces in 3D reconstruction.
In this work, this surface ambiguity is addressed by proposing a novel way of 3D shape representation, OmniNeRF.
arXiv Detail & Related papers (2022-09-27T14:39:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.