NeRFuser: Large-Scale Scene Representation by NeRF Fusion
- URL: http://arxiv.org/abs/2305.13307v1
- Date: Mon, 22 May 2023 17:59:05 GMT
- Title: NeRFuser: Large-Scale Scene Representation by NeRF Fusion
- Authors: Jiading Fang, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Rares
Ambrus, Adrien Gaidon, Gregory Shakhnarovich, Matthew R. Walter
- Abstract summary: A practical benefit of implicit visual representations like Neural Radiance Fields (NeRFs) is their memory efficiency.
We propose NeRFuser, a novel architecture for NeRF registration and blending that assumes only access to pre-generated NeRFs.
- Score: 35.749208740102546
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A practical benefit of implicit visual representations like Neural Radiance
Fields (NeRFs) is their memory efficiency: large scenes can be efficiently
stored and shared as small neural nets instead of collections of images.
However, operating on these implicit visual data structures requires extending
classical image-based vision techniques (e.g., registration, blending) from
image sets to neural fields. Towards this goal, we propose NeRFuser, a novel
architecture for NeRF registration and blending that assumes only access to
pre-generated NeRFs, and not the potentially large sets of images used to
generate them. We propose registration from re-rendering, a technique to infer
the transformation between NeRFs based on images synthesized from individual
NeRFs. For blending, we propose sample-based inverse distance weighting to
blend visual information at the ray-sample level. We evaluate NeRFuser on
public benchmarks and a self-collected object-centric indoor dataset, showing
the robustness of our method, including to views that are challenging to render
from the individual source NeRFs.
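The registration step admits a compact sketch: re-render images from each NeRF at known poses, run an off-the-shelf SfM tool (e.g., COLMAP) on the pooled renderings so every camera lands in one common frame, and then align each NeRF's known camera centers to their SfM estimates with a closed-form similarity solve. The sketch below implements only that final alignment (a standard Umeyama solve); the rendering, SfM, and pose-sampling steps are assumptions about the pipeline, not the authors' released code.

```python
# Minimal sketch of the alignment step in "registration from re-rendering".
# Assumption: SfM has already placed the renderings of both NeRFs in one
# common frame, so each NeRF contributes (known camera centers, SfM-estimated
# camera centers) correspondences.
import numpy as np

def umeyama(src: np.ndarray, dst: np.ndarray):
    """Closed-form similarity (s, R, t) minimizing ||dst - (s * R @ src + t)||.
    src, dst: (N, 3) corresponding 3D points (camera centers here)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                 # cross-covariance of centered sets
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                         # guard against reflections
    R = U @ S @ Vt                             # optimal rotation
    s = np.trace(np.diag(D) @ S) / xs.var(0).sum()  # optimal scale
    t = mu_d - s * R @ mu_s                    # optimal translation
    return s, R, t
```

Composing the two recovered transforms (NeRF A to the common frame, NeRF B to the common frame) yields the relative transform between the NeRF coordinate frames, including the scale factor that independently trained NeRFs do not share.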
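For blending, the abstract specifies sample-based inverse distance weighting at the ray-sample level. A minimal sketch under assumed conventions: at every sample along a ray, each NeRF's predicted density and color are weighted by the inverse distance from the sample point to a per-NeRF reference point; the choice of reference point (e.g., the centroid of a NeRF's training views) and the exponent `gamma` are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of sample-based inverse distance weighting (IDW) blending.
# Shapes and parameter names are hypothetical.
import numpy as np

def idw_blend(samples, centers, sigmas, rgbs, gamma=4.0, eps=1e-8):
    """
    samples: (S, 3) 3D sample points along a ray.
    centers: (K, 3) one reference point per NeRF (assumed, e.g., the
             centroid of that NeRF's training cameras).
    sigmas:  (K, S) per-NeRF densities at the samples.
    rgbs:    (K, S, 3) per-NeRF colors at the samples.
    Returns blended (S,) densities and (S, 3) colors.
    """
    # Distance from each sample to each NeRF's reference point: (K, S)
    d = np.linalg.norm(samples[None, :, :] - centers[:, None, :], axis=-1)
    w = 1.0 / np.maximum(d, eps) ** gamma      # inverse-distance weights
    w = w / w.sum(axis=0, keepdims=True)       # normalize across NeRFs
    sigma = (w * sigmas).sum(axis=0)           # blended density per sample
    rgb = (w[..., None] * rgbs).sum(axis=0)    # blended color per sample
    return sigma, rgb
```

The blended densities and colors then feed standard volume rendering along the ray; blending before compositing, rather than blending finished images, is what makes the scheme sample-based.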
Related papers
- SimpleNeRF: Regularizing Sparse Input Neural Radiance Fields with Simpler Solutions [6.9980855647933655]
Supervising the depth estimated by the NeRF helps train it effectively with fewer views.
We design augmented models that encourage simpler solutions by exploring the role of positional encoding and view-dependent radiance.
We achieve state-of-the-art view-synthesis performance on two popular datasets by employing the above regularizations.
arXiv Detail & Related papers (2023-09-07T18:02:57Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- StegaNeRF: Embedding Invisible Information within Neural Radiance Fields [61.653702733061785]
We present StegaNeRF, a method for steganographic information embedding in NeRF renderings.
We design an optimization framework allowing accurate extraction of hidden information from images rendered by NeRF.
StegaNeRF signifies an initial exploration into the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings.
arXiv Detail & Related papers (2022-12-03T12:14:19Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Recursive-NeRF: An Efficient and Dynamically Growing NeRF [34.768382663711705]
Recursive-NeRF is an efficient rendering and training approach for the Neural Radiance Field (NeRF) method.
Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level.
Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality.
arXiv Detail & Related papers (2021-05-19T12:51:54Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.