RobustNeRF: Ignoring Distractors with Robust Losses
- URL: http://arxiv.org/abs/2302.00833v2
- Date: Fri, 26 Jul 2024 19:34:31 GMT
- Title: RobustNeRF: Ignoring Distractors with Robust Losses
- Authors: Sara Sabour, Suhani Vora, Daniel Duckworth, Ivan Krasin, David J. Fleet, Andrea Tagliasacchi
- Abstract summary: We advocate a form of robust estimation for NeRF training, modeling distractors in training data as outliers of an optimization problem.
Our method successfully removes outliers from a scene and improves upon our baselines, on synthetic and real-world scenes.
- Score: 32.2329459013342
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) excel at synthesizing new views given multi-view, calibrated images of a static scene. When scenes include distractors, which are not persistent during image capture (moving objects, lighting variations, shadows), artifacts appear as view-dependent effects or 'floaters'. To cope with distractors, we advocate a form of robust estimation for NeRF training, modeling distractors in training data as outliers of an optimization problem. Our method successfully removes outliers from a scene and improves upon our baselines, on synthetic and real-world scenes. Our technique is simple to incorporate in modern NeRF frameworks, with few hyper-parameters. It does not assume a priori knowledge of the types of distractors, and is instead focused on the optimization problem rather than pre-processing or modeling transient objects. More results on our page https://robustnerf.github.io.
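The robust-estimation idea in the abstract, treating high-residual pixels as outliers of the photometric loss, can be sketched as a trimmed loss. The quantile threshold, hard 0/1 weighting, and function names below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def robust_loss_weights(residuals, inlier_quantile=0.8):
    """Per-pixel weights for a trimmed robust loss: pixels whose squared
    residual exceeds the given quantile are treated as distractor outliers
    (weight 0). `inlier_quantile` is a hypothetical hyper-parameter."""
    sq = residuals ** 2
    threshold = np.quantile(sq, inlier_quantile)
    return (sq <= threshold).astype(np.float32)

def robust_photometric_loss(rendered, target, inlier_quantile=0.8):
    """Mean squared error over only the pixels kept as inliers."""
    residuals = rendered - target
    w = robust_loss_weights(residuals, inlier_quantile)
    return float(np.sum(w * residuals ** 2) / np.maximum(np.sum(w), 1.0))
```

In a NeRF training loop, these weights would simply multiply the per-ray photometric loss, so gradients from transient pixels never reach the radiance field.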
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation [76.02304140027087]
We propose a novel paradigm, namely "Heuristics-Guided Segmentation" (HuGS).
HuGS significantly enhances the separation of static scenes from transient distractors by combining the strengths of hand-crafted heuristics and state-of-the-art segmentation models.
Experiments demonstrate the superiority and robustness of our method in mitigating transient distractors for NeRFs trained in non-static scenes.
arXiv Detail & Related papers (2024-03-26T09:42:28Z) - Sharp-NeRF: Grid-based Fast Deblurring Neural Radiance Fields Using Sharpness Prior [4.602333448154979]
Sharp-NeRF is a technique that renders clean and sharp images from the input blurry images within half an hour of training.
We have conducted experiments on the benchmarks consisting of blurry images and have evaluated full-reference and non-reference metrics.
Our approach renders the sharp novel views with vivid colors and fine details, and it has considerably faster training time than the previous works.
arXiv Detail & Related papers (2024-01-01T17:48:38Z) - Self-NeRF: A Self-Training Pipeline for Few-Shot Neural Radiance Fields [17.725937326348994]
We propose Self-NeRF, a self-evolved NeRF that iteratively refines the radiance fields from very few input views.
In each iteration, we label unseen views with the predicted colors or warped pixels generated by the model from the preceding iteration.
These expanded pseudo-views are afflicted by imprecision in color and warping artifacts, which degrades the performance of NeRF.
arXiv Detail & Related papers (2023-03-10T08:22:36Z) - Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore the sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z) - BAD-NeRF: Bundle Adjusted Deblur Neural Radiance Fields [9.744593647024253]
We present a novel bundle adjusted deblur Neural Radiance Fields (BAD-NeRF)
BAD-NeRF can be robust to severe motion blurred images and inaccurate camera poses.
Our approach models the physical image formation process of a motion blurred image, and jointly learns the parameters of NeRF.
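The image formation model described above can be sketched as averaging sharp renders along the camera's exposure trajectory. Linear interpolation between two pose vectors is a simplification (the paper works with camera trajectories in SE(3)), and `n_samples` is an illustrative choice:

```python
import numpy as np

def blurred_render(render_fn, pose_start, pose_end, n_samples=7):
    """Sketch of a motion-blur image formation model: a blurred image is
    the average of sharp renders at poses interpolated along the exposure
    trajectory. Linear pose interpolation is a simplifying assumption."""
    ts = np.linspace(0.0, 1.0, n_samples)
    frames = [render_fn((1.0 - t) * pose_start + t * pose_end) for t in ts]
    return np.mean(frames, axis=0)
```

Comparing this synthesized blurry image against the captured one lets the poses and the NeRF be optimized jointly.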
arXiv Detail & Related papers (2022-11-23T10:53:37Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware
Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
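The ray-entropy regularization named in the title can be sketched as the Shannon entropy of a ray's normalized compositing weights; penalizing it encourages density to concentrate at a single surface rather than smearing along the ray. The exact normalization and epsilon handling here are illustrative assumptions:

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of a ray's normalized sample weights. Adding this
    term to the loss penalizes diffuse density along the ray; a peaked
    (surface-like) weight distribution has near-zero entropy."""
    p = weights / np.maximum(weights.sum(), eps)
    return float(-np.sum(p * np.log(p + eps)))
```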
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs [79.00855490550367]
We show that while NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its quality degrades significantly under sparse inputs.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
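The patch regularization mentioned for RegNeRF can be sketched as a depth smoothness penalty over small patches rendered from unobserved viewpoints. The squared-difference form and patch shape are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def depth_smoothness_loss(depth_patch):
    """Penalize depth differences between neighboring pixels of a rendered
    patch, discouraging noisy geometry at unobserved viewpoints."""
    dx = depth_patch[:, 1:] - depth_patch[:, :-1]  # horizontal neighbors
    dy = depth_patch[1:, :] - depth_patch[:-1, :]  # vertical neighbors
    return float(np.sum(dx ** 2) + np.sum(dy ** 2))
```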
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.