Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
- URL: http://arxiv.org/abs/2111.12077v2
- Date: Wed, 24 Nov 2021 18:51:06 GMT
- Title: Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
- Authors: Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan,
Peter Hedman
- Abstract summary: We present an extension of mip-NeRF that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes.
Our model, which we dub "mip-NeRF 360," reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps.
- Score: 43.69542675078766
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though neural radiance fields (NeRF) have demonstrated impressive view
synthesis results on objects and small bounded regions of space, they struggle
on "unbounded" scenes, where the camera may point in any direction and content
may exist at any distance. In this setting, existing NeRF-like models often
produce blurry or low-resolution renderings (due to the unbalanced detail and
scale of nearby and distant objects), are slow to train, and may exhibit
artifacts due to the inherent ambiguity of the task of reconstructing a large
scene from a small set of images. We present an extension of mip-NeRF (a NeRF
variant that addresses sampling and aliasing) that uses a non-linear scene
parameterization, online distillation, and a novel distortion-based regularizer
to overcome the challenges presented by unbounded scenes. Our model, which we
dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees
around a point, reduces mean-squared error by 54% compared to mip-NeRF, and is
able to produce realistic synthesized views and detailed depth maps for highly
intricate, unbounded real-world scenes.
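For reference, below is a minimal NumPy sketch of the two components named in the abstract: the non-linear scene contraction, which maps unbounded world coordinates into a bounded domain, and the distortion-based regularizer computed from per-ray interval endpoints and rendering weights. The function names and the `eps` argument are illustrative, not the authors' released code; this is a sketch of the ideas as stated in the paper, not a definitive implementation.
```python
import numpy as np

def contract(x, eps=1e-8):
    """Non-linear scene contraction (sketch).

    Points inside the unit ball are left unchanged; points outside are
    smoothly mapped into a ball of radius 2, so content at any distance
    fits in a bounded domain.
    """
    x = np.asarray(x, dtype=np.float64)
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    safe_norm = np.maximum(norm, eps)  # avoid division by zero at the origin
    contracted = (2.0 - 1.0 / safe_norm) * (x / safe_norm)
    return np.where(norm <= 1.0, x, contracted)

def distortion_loss(s, w):
    """Distortion-based regularizer (sketch).

    `s` holds normalized ray-interval endpoints, shape (..., N+1); `w` holds
    the N per-interval rendering weights. The loss encourages each ray's
    weights to concentrate in a compact region, suppressing floater artifacts.
    """
    s, w = np.asarray(s), np.asarray(w)
    mid = 0.5 * (s[..., 1:] + s[..., :-1])   # interval midpoints
    delta = s[..., 1:] - s[..., :-1]         # interval lengths
    # Pairwise term: weighted distances between all pairs of midpoints.
    pair = np.abs(mid[..., :, None] - mid[..., None, :])
    loss_pair = np.sum(w[..., :, None] * w[..., None, :] * pair, axis=(-2, -1))
    # Self term: penalizes weight spread within each individual interval.
    loss_self = np.sum(w ** 2 * delta, axis=-1) / 3.0
    return loss_pair + loss_self
```
In practice the regularizer is added to the reconstruction loss with a small weight, so it trades a little fidelity for markedly cleaner geometry and depth maps.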
Related papers
- Neural Light Spheres for Implicit Image Stitching and View Synthesis [32.396278546192995]
We propose a spherical neural light field model for implicit panoramic image stitching and re-rendering.
We show improved reconstruction quality over traditional image stitching and radiance field methods.
arXiv Detail & Related papers (2024-09-26T15:05:29Z)
- NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z)
- Redefining Recon: Bridging Gaps with UAVs, 360 degree Cameras, and Neural Radiance Fields [0.0]
We introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs) with 360-degree cameras and Neural Radiance Fields (NeRFs).
A NeRF, a specialized neural network, can deduce a 3D representation of any scene from 2D images and then render it from arbitrary viewpoints on request.
We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments.
arXiv Detail & Related papers (2023-11-30T14:21:29Z)
- Drone-NeRF: Efficient NeRF Based 3D Scene Reconstruction for Large-Scale Drone Survey [11.176205645608865]
We propose the Drone-NeRF framework to enhance the efficient reconstruction of large-scale drone photography scenes.
Our approach involves dividing the scene into uniform sub-blocks based on camera position and depth visibility.
Sub-scenes are trained in parallel using NeRF, then merged for a complete scene.
arXiv Detail & Related papers (2023-08-30T03:17:57Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods degrade in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Pre-NeRF 360: Enriching Unbounded Appearances for Neural Radiance Fields [8.634008996263649]
We propose a new framework to boost the performance of NeRF-based architectures.
Our solution overcomes several obstacles that plagued earlier versions of NeRF.
We introduce an updated version of the Nutrition5k dataset, known as the N5k360 dataset.
arXiv Detail & Related papers (2023-03-21T23:29:38Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, unbounded 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.