Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations
- URL: http://arxiv.org/abs/2303.14707v1
- Date: Sun, 26 Mar 2023 12:24:31 GMT
- Title: Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations
- Authors: Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang
- Abstract summary: This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that immediately benefits existing NeRF-based methods without additional input.
- Score: 67.54358911994967
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While Neural Radiance Fields (NeRFs) have achieved unprecedented novel
view synthesis results, they struggle with large-scale cluttered scenes that have
sparse input views and highly view-dependent appearances. Specifically, existing
NeRF-based models tend to produce blurry renderings and inaccurate volumetric
reconstructions, with many reconstruction errors appearing as foggy "floaters"
hovering throughout the volume of an otherwise opaque 3D scene. Such inaccuracies
impede NeRF's potential for accurate 3D registration, object detection,
segmentation, and related tasks, which may explain why only limited research
effort has so far directly addressed these fundamental 3D computer vision
problems. This paper analyzes NeRF's struggles in such settings and proposes
Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex
scenes. Our key insight is to enforce effective appearance and geometry
constraints, absent in conventional NeRF reconstruction, by 1) automatically
detecting and modeling view-dependent appearances in the training views to
prevent them from interfering with density estimation, complemented by 2) a
geometric correction procedure performed on each traced ray during inference.
Clean-NeRF can be implemented as a plug-in that immediately benefits existing
NeRF-based methods without additional input. Code will be released.
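To make the first insight concrete, below is a minimal NumPy sketch of standard NeRF alpha compositing with the per-sample radiance split into a view-independent color and a view-dependent residual; keeping the density branch blind to the view-dependent residual is one way to read the stated goal of preventing view-dependent appearances from interfering with density estimation. The decomposition, function name, and tensor shapes are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def composite_ray(sigma, rgb_static, rgb_view_dep, deltas):
    # Opacity of each sample from its density and the spacing to the next sample.
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    weights = trans * alpha
    # Full radiance per sample = view-independent color + view-dependent residual
    # (a hypothetical split, used only to illustrate the idea in the abstract).
    rgb = rgb_static + rgb_view_dep
    return (weights[:, None] * rgb).sum(axis=0), weights

# Toy usage on a single ray with 64 samples.
N = 64
rng = np.random.default_rng(0)
sigma = rng.random(N) * 5.0               # densities
rgb_static = rng.random((N, 3))           # view-independent colors
rgb_view_dep = 0.1 * rng.random((N, 3))   # small view-dependent residuals
deltas = np.full(N, 1.0 / N)              # uniform sample spacing
color, weights = composite_ray(sigma, rgb_static, rgb_view_dep, deltas)
print(color, weights.sum())
```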
Related papers
- Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
NeRF-based models are susceptible to interference issues caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, adopts joint color to balance the display of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields with very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrade the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively (see the decoupled-MLP sketch after this list).
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing the NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
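As a reading aid for the CLONeR entry above, the sketch below separates geometry and appearance into two small MLPs: one predicting an occupancy logit per 3D point (the kind of output that could be supervised from LiDAR returns) and one predicting RGB from position and view direction (supervised from camera images). Layer sizes, inputs, and training losses are assumptions for illustration, not CLONeR's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OccupancyMLP(nn.Module):
    """Predicts an occupancy logit per 3D point (could be supervised from LiDAR)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

class ColorMLP(nn.Module):
    """Predicts RGB from a 3D point and a view direction (supervised from images)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, viewdir):
        return self.net(torch.cat([xyz, viewdir], dim=-1))

# Toy forward pass: the two heads can be optimized with different losses,
# e.g. a geometric loss on occupancy vs. a photometric loss on color.
xyz = torch.rand(1024, 3)
viewdir = F.normalize(torch.rand(1024, 3), dim=-1)
occ_logit = OccupancyMLP()(xyz)
rgb = ColorMLP()(xyz, viewdir)
print(occ_logit.shape, rgb.shape)
```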
This list is automatically generated from the titles and abstracts of the papers on this site.