Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
- URL: http://arxiv.org/abs/2304.10532v3
- Date: Tue, 17 Oct 2023 18:15:06 GMT
- Title: Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
- Authors: Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski,
Angjoo Kanazawa
- Abstract summary: Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such as floaters or flawed geometry when rendered outside the camera trajectory.
We propose a new dataset and evaluation procedure in which two camera trajectories of each scene are recorded.
We show that this data-driven prior removes floaters and improves scene geometry for casual captures.
- Score: 78.75872372856597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Casually captured Neural Radiance Fields (NeRFs) suffer from artifacts such
as floaters or flawed geometry when rendered outside the camera trajectory.
Existing evaluation protocols often fail to capture these effects, since they
typically assess image quality only at every 8th frame of the training capture.
To push forward progress in novel-view synthesis, we propose a new dataset and
evaluation procedure in which two camera trajectories of each scene are
recorded: one used for training, the other for evaluation. In this more
challenging in-the-wild setting, we find that existing hand-crafted
regularizers neither remove floaters nor improve scene geometry. Thus, we propose a 3D
diffusion-based method that leverages local 3D priors and a novel density-based
score distillation sampling loss to discourage artifacts during NeRF
optimization. We show that this data-driven prior removes floaters and improves
scene geometry for casual captures.
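The density-based score distillation sampling (SDS) loss described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the cosine noise schedule, the occupancy threshold, and the `toy_denoiser` stand-in for a pretrained 3D diffusion prior are all illustrative assumptions. The key idea it shows is the SDS recipe applied to a local density cube: noise an occupancy proxy of the density, query a diffusion model for its noise prediction, and use the residual between predicted and injected noise as a gradient that discourages artifacts.

```python
import numpy as np

def density_sds_grad(density_cube, denoiser, t=0.5, rng=None):
    """SDS-style gradient for a local NeRF density cube (illustrative sketch).

    Noises a binarized occupancy proxy of the density, queries a 3D
    diffusion `denoiser` for its noise prediction, and returns the
    weighted residual (noise_pred - noise) as the gradient signal,
    following the score distillation sampling recipe.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    occ = (density_cube > 0.01).astype(np.float32)  # occupancy proxy (assumed threshold)
    noise = rng.standard_normal(occ.shape).astype(np.float32)
    alpha = np.cos(0.5 * np.pi * t)  # simple cosine schedule (assumption)
    sigma = np.sin(0.5 * np.pi * t)
    noisy = alpha * occ + sigma * noise  # forward-diffused occupancy cube
    noise_pred = denoiser(noisy, t)      # hypothetical pretrained 3D diffusion prior
    # SDS gradient: schedule-weighted residual between predicted and injected noise
    return sigma * (noise_pred - noise)

def toy_denoiser(x, t):
    """Stand-in for a pretrained 3D diffusion model over occupancy grids."""
    return np.zeros_like(x)  # predicts zero noise everywhere

cube = np.zeros((8, 8, 8), dtype=np.float32)
cube[2:6, 2:6, 2:6] = 5.0  # a solid blob of density
g = density_sds_grad(cube, toy_denoiser)
print(g.shape)  # → (8, 8, 8)
```

In the full method this gradient would be pushed back onto the NeRF's density field during optimization; here the cube and denoiser are toys chosen only to make the loss structure concrete.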
Related papers
- ZeroGS: Training 3D Gaussian Splatting from Unposed Images [62.34149221132978]
We propose ZeroGS to train 3DGS from hundreds of unposed and unordered images.
Our method leverages a pretrained foundation model as the neural scene representation.
Our method recovers more accurate camera poses than state-of-the-art pose-free NeRF/3DGS methods.
arXiv Detail & Related papers (2024-11-24T11:20:48Z)
- WaterSplatting: Fast Underwater 3D Scene Reconstruction Using Gaussian Splatting [39.58317527488534]
We propose a novel approach that fuses volumetric rendering with 3DGS to handle underwater data effectively.
Our method outperforms state-of-the-art NeRF-based methods in rendering quality on the underwater SeaThru-NeRF dataset.
arXiv Detail & Related papers (2024-08-15T15:16:49Z)
- RoGUENeRF: A Robust Geometry-Consistent Universal Enhancer for NeRF [1.828790674925926]
2D enhancers can be pre-trained to recover some detail but are agnostic to scene geometry.
Existing 3D enhancers are able to transfer detail from nearby training images in a generalizable manner.
We propose a neural rendering enhancer, RoGUENeRF, which exploits the best of both paradigms.
arXiv Detail & Related papers (2024-03-18T16:11:42Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that a "long-burst" (forty-two 12-megapixel RAW frames captured in a two-second sequence) contains enough parallax from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Depth-supervised NeRF: Fewer Views and Faster Training for Free [69.34556647743285]
DS-NeRF (Depth-supervised Neural Radiance Fields) is a loss for learning fields that takes advantage of readily-available depth supervision.
We show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal.
arXiv Detail & Related papers (2021-07-06T17:58:35Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.