NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild
- URL: http://arxiv.org/abs/2405.18715v2
- Date: Sun, 2 Jun 2024 14:15:27 GMT
- Title: NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild
- Authors: Weining Ren, Zihan Zhu, Boyang Sun, Jiaqi Chen, Marc Pollefeys, Songyou Peng
- Abstract summary: We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
- Score: 55.154625718222995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) have shown remarkable success in synthesizing photorealistic views from multi-view images of static scenes, but face challenges in dynamic, real-world environments with distractors like moving objects, shadows, and lighting changes. Existing methods manage controlled environments and low occlusion ratios but fall short in render quality, especially under high occlusion scenarios. In this paper, we introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes from only casually captured image sequences. Delving into uncertainty, our method not only efficiently eliminates distractors, even when they are predominant in captures, but also achieves a notably faster convergence speed. Through comprehensive experiments on various scenes, our method demonstrates a significant improvement over state-of-the-art techniques. This advancement opens new avenues for NeRF in diverse and dynamic real-world applications.
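The abstract does not spell out the loss formulation, but a common way to exploit per-ray uncertainty for distractor suppression is to down-weight the photometric residual of rays predicted to be uncertain. Below is a minimal PyTorch-style sketch under that assumption; the class name, the regularizer weight, and the heteroscedastic form of the loss are illustrative choices, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Illustrative per-ray uncertainty weighting for NeRF training.

    Rays with high predicted uncertainty (beta) contribute less to the
    photometric loss, so transient distractors are softly ignored. The
    log-uncertainty term keeps beta from growing without bound.
    This is a generic sketch, not the NeRF On-the-go implementation.
    """

    def __init__(self, reg_weight: float = 0.5):
        super().__init__()
        self.reg_weight = reg_weight

    def forward(self, rendered_rgb: torch.Tensor,
                target_rgb: torch.Tensor,
                beta: torch.Tensor) -> torch.Tensor:
        # rendered_rgb, target_rgb: (N_rays, 3); beta: (N_rays, 1), beta > 0
        residual = (rendered_rgb - target_rgb) ** 2      # per-ray squared error
        weighted = residual / (2.0 * beta ** 2)          # down-weight uncertain rays
        regularizer = self.reg_weight * torch.log(beta)  # penalize large uncertainty
        return (weighted.sum(dim=-1) + regularizer.squeeze(-1)).mean()

# Hypothetical usage inside a training step:
# loss = UncertaintyWeightedLoss()(rgb_pred, rgb_gt, beta_pred)
# loss.backward()
```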
Related papers
- IE-NeRF: Inpainting Enhanced Neural Radiance Fields in the Wild [15.86621086993995]
We present a novel approach for synthesizing realistic novel views using Neural Radiance Fields (NeRF) with uncontrolled photos in the wild.
Our framework, called Inpainting Enhanced NeRF (IE-NeRF), enhances the conventional NeRF by drawing inspiration from the technique of image inpainting.
arXiv Detail & Related papers (2024-07-15T13:10:23Z) - Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) semantic embedding, based on which we introduce the concept of gears to allow for stratified modeling of dynamic regions of the scene.
At the same time, almost for free, our approach enables free-viewpoint tracking of objects of interest, a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation [76.02304140027087]
We propose a novel paradigm, namely "Heuristics-Guided Segmentation" (HuGS).
HuGS significantly enhances the separation of static scenes from transient distractors by combining the strengths of hand-crafted heuristics and state-of-the-art segmentation models.
Experiments demonstrate the superiority and robustness of our method in mitigating transient distractors for NeRFs trained in non-static scenes.
arXiv Detail & Related papers (2024-03-26T09:42:28Z) - DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video [18.424138608823267]
We propose DyBluRF, a dynamic radiance field approach that synthesizes sharp novel views from a monocular video affected by motion blur.
To account for motion blur in input images, we simultaneously capture the camera trajectory and object Discrete Cosine Transform (DCT) trajectories within the scene.
arXiv Detail & Related papers (2024-03-15T08:48:37Z) - Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z) - Strata-NeRF : Neural Radiance Fields for Stratified Scenes [29.58305675148781]
In the real world, we may capture a scene at multiple levels, resulting in a layered capture.
We propose Strata-NeRF, a single neural radiance field that implicitly captures a scene with multiple levels.
We find that Strata-NeRF effectively captures stratified scenes, minimizes artifacts, and synthesizes high-fidelity views.
arXiv Detail & Related papers (2023-08-20T18:45:43Z) - Lighting up NeRF via Unsupervised Decomposition and Enhancement [40.89359754872889]
We propose a novel approach, called Low-Light NeRF (or LLNeRF), to enhance the scene representation and synthesize normal-light novel views directly from sRGB low-light images.
Our method is able to produce novel view images with proper lighting and vivid colors and details, given a collection of camera-finished low dynamic range (8-bits/channel) images from a low-light scene.
arXiv Detail & Related papers (2023-07-20T07:46:34Z) - E-NeRF: Neural Radiance Fields from a Moving Event Camera [83.91656576631031]
Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community.
We present E-NeRF, the first method which estimates a volumetric scene representation in the form of a NeRF from a fast-moving event camera.
arXiv Detail & Related papers (2022-08-24T04:53:32Z) - NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections [47.9463405062868]
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs.
We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates.
We introduce a series of extensions to NeRF to address variable illumination and transient occluders, thereby enabling accurate reconstructions from unstructured image collections taken from the internet.
arXiv Detail & Related papers (2020-08-05T17:51:16Z)
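The NeRF in the Wild entry above summarizes the base NeRF formulation: a multilayer perceptron maps a 3D coordinate to density and color. The sketch below illustrates that mapping with a positional encoding; the layer sizes and number of encoding frequencies are assumptions for illustration, not values taken from any of the listed papers.

```python
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, num_freqs: int = 10) -> torch.Tensor:
    """Map 3D coordinates to sin/cos features so the MLP can fit high-frequency detail."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device, dtype=x.dtype) * torch.pi
    angles = x[..., None] * freqs                    # (..., 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                 # (..., 3 * 2 * num_freqs)

class TinyNeRF(nn.Module):
    """Minimal NeRF-style MLP: 3D position -> (density, RGB). Illustrative sizes only."""

    def __init__(self, num_freqs: int = 10, hidden: int = 256):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                    # density (sigma) + RGB
        )

    def forward(self, xyz: torch.Tensor):
        out = self.mlp(positional_encoding(xyz))
        sigma = torch.relu(out[..., :1])             # non-negative density
        rgb = torch.sigmoid(out[..., 1:])            # colors in [0, 1]
        return sigma, rgb
```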
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.