Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
- URL: http://arxiv.org/abs/2304.10532v3
- Date: Tue, 17 Oct 2023 18:15:06 GMT
- Title: Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
- Authors: Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski,
Angjoo Kanazawa
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such
as floaters or flawed geometry when rendered outside the camera trajectory.
Existing evaluation protocols often do not capture these effects, since they
usually only assess image quality at every 8th frame of the training capture.
To push forward progress in novel-view synthesis, we propose a new dataset and
evaluation procedure, where two camera trajectories are recorded of the scene:
one used for training, and the other for evaluation. In this more challenging
in-the-wild setting, we find that existing hand-crafted regularizers do not
remove floaters nor improve scene geometry. Thus, we propose a 3D
diffusion-based method that leverages local 3D priors and a novel density-based
score distillation sampling loss to discourage artifacts during NeRF
optimization. We show that this data-driven prior removes floaters and improves
scene geometry for casual captures.
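The abstract describes a density-based score distillation sampling (SDS) loss that uses a 3D diffusion prior to penalize floater-like density during NeRF optimization. The following is a toy sketch of the SDS gradient computation only, not the paper's implementation: the diffusion model is replaced by a stand-in denoiser, and all function names, the noise schedule, and the patch size are illustrative assumptions.

```python
import numpy as np

def alpha_bar(t, T=1000):
    """Cumulative noise schedule (simple linear betas); an assumed stand-in."""
    betas = np.linspace(1e-4, 2e-2, T)
    return np.cumprod(1.0 - betas)[t]

def toy_denoiser(x_t, t):
    """Stand-in for a trained 3D diffusion model's noise prediction on a
    local density patch; NOT the paper's learned prior."""
    # This toy model believes clean patches are all zeros (empty space),
    # so its predicted noise is the noisy input rescaled.
    return x_t / np.sqrt(1.0 - alpha_bar(t))

def sds_gradient(density_patch, t, rng):
    """One SDS step: noise the patch, query the denoiser, and return the
    gradient w(t) * (eps_pred - eps) that nudges density toward the prior."""
    ab = alpha_bar(t)
    eps = rng.standard_normal(density_patch.shape)
    x_t = np.sqrt(ab) * density_patch + np.sqrt(1.0 - ab) * eps
    eps_pred = toy_denoiser(x_t, t)
    w = 1.0 - ab  # a common SDS weighting choice
    return w * (eps_pred - eps)

rng = np.random.default_rng(0)
patch = rng.random((8, 8, 8))  # a local 3D density patch (floater candidate)
grad = sds_gradient(patch, t=500, rng=rng)
print(grad.shape)  # (8, 8, 8)
```

In the actual method this gradient would be backpropagated into the NeRF's density parameters; here it simply demonstrates the shape of the SDS update on a density patch.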
Related papers
- RoGUENeRF: A Robust Geometry-Consistent Universal Enhancer for NeRF
2D enhancers can be pre-trained to recover some detail but are agnostic to scene geometry.
Existing 3D enhancers are able to transfer detail from nearby training images in a generalizable manner.
We propose a neural rendering enhancer, RoGUENeRF, which exploits the best of both paradigms.
arXiv Detail & Related papers (2024-03-18T16:11:42Z)
- PlatoNeRF: 3D Reconstruction in Plato's Cave via Single-View Two-Bounce Lidar
3D reconstruction from a single-view is challenging because of the ambiguity from monocular cues and lack of information about occluded regions.
We propose using time-of-flight data captured by a single-photon avalanche diode to overcome these limitations.
We demonstrate that we can reconstruct visible and occluded geometry without data priors or reliance on controlled ambient lighting or scene albedo.
arXiv Detail & Related papers (2023-12-21T18:59:53Z)
- IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment
We propose IL-NeRF, a novel framework for incremental NeRF training.
We show that IL-NeRF handles incremental NeRF training and outperforms the baselines by up to 54.04% in rendering quality.
arXiv Detail & Related papers (2023-12-10T04:12:27Z)
- Re-Nerfing: Improving Novel Views Synthesis through Novel Views Synthesis
Re-Nerfing is a simple and general multi-stage data augmentation approach.
We train a NeRF with the available views, then use the optimized NeRF to synthesize pseudo-views around the original ones.
We also train a second NeRF with both the original images and the pseudo views masking out uncertain regions.
arXiv Detail & Related papers (2023-12-04T18:56:08Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography
We show that in a "long-burst" of forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Learning to Recover 3D Scene Shape from a Single Image
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
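The two-stage framework above ends by recovering metric 3D structure from a depth map once the focal length is known. A minimal sketch of that final unprojection step under the pinhole camera model, with made-up names, a toy 4x4 depth map, and illustrative intrinsics (this is not the paper's code):

```python
import numpy as np

def unproject(depth, focal, cx, cy):
    """Back-project a depth map into camera-space 3D points, shape (H, W, 3).

    Standard pinhole model: X = (u - cx) * Z / f, Y = (v - cy) * Z / f.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    x = (us - cx) * depth / focal
    y = (vs - cy) * depth / focal
    return np.stack([x, y, depth], axis=-1)

depth = np.full((4, 4), 2.0)  # toy example: a flat plane 2 m from the camera
pts = unproject(depth, focal=300.0, cx=1.5, cy=1.5)
print(pts.shape)  # (4, 4, 3)
```

With a wrong focal length or an unresolved depth shift the recovered cloud is distorted, which is exactly why the paper's second stage predicts those two quantities from the point cloud itself.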
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.