Adaptive Voronoi NeRFs
- URL: http://arxiv.org/abs/2303.16001v2
- Date: Thu, 30 Mar 2023 11:20:13 GMT
- Title: Adaptive Voronoi NeRFs
- Authors: Tim Elsner, Victor Czech, Julia Berger, Zain Selman, Isaak Lim, Leif
Kobbelt
- Abstract summary: Neural Radiance Fields learn to represent a 3D scene from just a set of registered images.
We show that a hierarchy of Voronoi diagrams is a suitable choice to partition the scene.
By equipping each Voronoi cell with its own NeRF, our approach is able to quickly learn a scene representation.
- Score: 9.973103531980838
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) learn to represent a 3D scene from just a set
of registered images. Increasing the size of a scene demands more complex
functions, typically represented by neural networks, to capture all details.
Training and inference then involve querying the neural network millions of
times per image, which becomes impractically slow. Since such complex functions
can be replaced by multiple simpler functions to improve speed, we show that a
hierarchy of Voronoi diagrams is a suitable choice to partition the scene. By
equipping each Voronoi cell with its own NeRF, our approach is able to quickly
learn a scene representation. We propose an intuitive partitioning of the space
that increases quality gains during training by distributing information evenly
among the networks and avoids artifacts through a top-down adaptive refinement.
Our framework is agnostic to the underlying NeRF method and easy to implement,
which allows it to be applied to various NeRF variants for improved learning
and rendering speeds.
Related papers
- HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation [4.53411151619456]
We propose a few-shot learning approach based on the hypernetwork paradigm that does not require gradient optimization during inference.
We have developed an efficient method for generating a high-quality 3D object representation from a small number of images in a single step.
arXiv Detail & Related papers (2024-02-02T16:10:29Z)
- Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering [3.8200916793910973]
Recent advances in Neural Radiance Fields (NeRF) have demonstrated significant potential for representing 3D scene appearances as implicit neural networks.
However, the lengthy training and rendering process hinders the widespread adoption of this promising technique for real-time rendering applications.
We present an effective adaptive multi-NeRF method designed to accelerate the neural rendering process for large scenes.
arXiv Detail & Related papers (2023-10-03T08:34:49Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations [111.08941206369508]
We propose Augmented NeRF (Aug-NeRF), which for the first time brings the power of robust data augmentations into regularizing NeRF training.
Our proposal learns to seamlessly blend worst-case perturbations into three distinct levels of the NeRF pipeline.
Aug-NeRF effectively boosts NeRF performance in both novel view synthesis and underlying geometry reconstruction.
arXiv Detail & Related papers (2022-07-04T02:27:07Z)
- Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel (a minimal sketch of this idea appears after the list below).
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- Recursive-NeRF: An Efficient and Dynamically Growing NeRF [34.768382663711705]
Recursive-NeRF is an efficient rendering and training approach for the Neural Radiance Field (NeRF) method.
Recursive-NeRF learns uncertainties for query coordinates, representing the quality of the predicted color and volumetric intensity at each level.
Our evaluation on three public datasets shows that Recursive-NeRF is more efficient than NeRF while providing state-of-the-art quality.
arXiv Detail & Related papers (2021-05-19T12:51:54Z)
- BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
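As referenced in the Mega-NeRF entry above, here is a minimal sketch of the kind of spatial clustering it describes: each training ray is assigned to the submodule whose centroid is nearest, so the submodules can be trained in parallel on disjoint pixel sets. The function name, the fixed representative depth `t_mid`, and the grid of centroids are illustrative assumptions rather than the paper's actual assignment rule.
```python
import numpy as np


def assign_rays_to_submodules(ray_origins: np.ndarray,
                              ray_dirs: np.ndarray,
                              centroids: np.ndarray,
                              t_mid: float = 5.0) -> np.ndarray:
    """For each ray, return the index of the nearest submodule centroid,
    measured at a representative depth t_mid along the ray."""
    points = ray_origins + t_mid * ray_dirs                                # (N, 3)
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)  # (N, C)
    return d2.argmin(axis=1)


# Usage: eight submodules on a 2x4 ground-plane grid, random training rays.
rng = np.random.default_rng(0)
centroids = np.array([[x, y, 0.0] for x in range(2) for y in range(4)], dtype=float)
origins = rng.normal(size=(10_000, 3))
dirs = rng.normal(size=(10_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
labels = assign_rays_to_submodules(origins, dirs, centroids)
print(np.bincount(labels, minlength=len(centroids)))  # rays per submodule
```
Each submodule's NeRF would then be trained only on its assigned rays, which is what enables the parallel training mentioned in the summary.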