VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field
- URL: http://arxiv.org/abs/2111.13112v1
- Date: Thu, 25 Nov 2021 14:56:53 GMT
- Title: VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field
- Authors: Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi
Ochiai, Shixiang Shane Gu
- Abstract summary: We propose Voxel-Accelerated NeRF (VaxNeRF), which integrates NeRF with the visual hull.
VaxNeRF achieves roughly 2-8x faster learning on top of the highly-performant JaxNeRF.
We hope VaxNeRF can empower and accelerate new NeRF extensions and applications.
- Score: 28.087183395793236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) is a popular method in data-driven 3D
reconstruction. Given its simplicity and high quality rendering, many NeRF
applications are being developed. However, NeRF's major limitation is its slow
speed. Many attempts have been made to speed up NeRF training and inference,
including intricate code-level optimizations and caching, the use of sophisticated
data structures, and amortization through multi-task and meta learning. In this
work, we revisit the basic building blocks of NeRF through the lens of classic
techniques that predate it. We propose Voxel-Accelerated NeRF (VaxNeRF),
integrating NeRF with the visual hull, a classic 3D reconstruction technique
that requires only binary foreground-background pixel labels per image. The
visual hull, which can be computed in about 10 seconds, provides a coarse
in-out field separation that lets NeRF skip a substantial fraction of its
network evaluations. We
provide a clean, fully Pythonic, JAX-based implementation on the popular JaxNeRF
codebase, consisting of only about 30 lines of code changes and a modular
visual hull subroutine, and achieve roughly 2-8x faster learning on top of the
highly-performant JaxNeRF baseline with zero degradation in rendering
quality. With sufficient compute, this effectively brings down full NeRF
training from hours to 30 minutes. We hope VaxNeRF -- a careful combination of
a classic technique with a deep method (that arguably replaced it) -- can
empower and accelerate new NeRF extensions and applications, with its
simplicity, portability, and reliable performance gains. Codes are available at
https://github.com/naruya/VaxNeRF .
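The mechanism the abstract describes is simple enough to sketch. Below is a minimal, hypothetical JAX sketch of the two ingredients: carving a binary voxel occupancy grid (the visual hull) from per-image foreground masks, and using that grid to decide which ray samples are worth sending through the NeRF network. This is not the authors' implementation (see the linked repository for that); the camera convention, grid bounds, and helper names are illustrative assumptions.

```python
# Hypothetical sketch of VaxNeRF's two ingredients (not the authors' code):
#   1) carve a visual hull occupancy grid from binary foreground masks,
#   2) use the grid to skip NeRF MLP evaluations outside the hull.
import jax.numpy as jnp

def carve_visual_hull(masks, projections, grid_res=128, bound=1.5):
    """masks: (n_views, H, W) binary foreground masks.
    projections: (n_views, 3, 4) world-to-pixel matrices
    (intrinsics @ extrinsics; an assumed camera convention).
    Returns a (grid_res, grid_res, grid_res) boolean occupancy grid."""
    xs = jnp.linspace(-bound, bound, grid_res)
    grid = jnp.stack(jnp.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    pts = grid.reshape(-1, 3)                                 # voxel centers
    pts_h = jnp.concatenate([pts, jnp.ones_like(pts[:, :1])], axis=-1)
    occupied = jnp.ones(pts.shape[0], dtype=bool)
    n_views, H, W = masks.shape
    for i in range(n_views):
        uvw = pts_h @ projections[i].T                        # homogeneous pixels
        u = (uvw[:, 0] / uvw[:, 2]).astype(jnp.int32)         # column index
        v = (uvw[:, 1] / uvw[:, 2]).astype(jnp.int32)         # row index
        in_image = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[:, 2] > 0)
        fg = masks[i][jnp.clip(v, 0, H - 1), jnp.clip(u, 0, W - 1)] > 0
        # A voxel survives only if every view sees it as foreground;
        # points projecting outside an image are carved away too.
        occupied &= in_image & fg
    return occupied.reshape(grid_res, grid_res, grid_res)

def filter_samples(points, hull, bound=1.5):
    """points: (n_rays, n_samples, 3) sample positions along rays.
    Returns a boolean mask; the NeRF MLP is only queried where True."""
    grid_res = hull.shape[0]
    idx = ((points + bound) / (2 * bound) * grid_res).astype(jnp.int32)
    idx = jnp.clip(idx, 0, grid_res - 1)
    return hull[idx[..., 0], idx[..., 1], idx[..., 2]]
```

In training, samples for which `filter_samples` returns False would be assigned zero density without ever touching the MLP, so only hull-interior points incur network evaluations; assuming most sample points along a ray fall outside the hull, this is where a speedup of the reported magnitude would come from.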
Related papers
- NeRF-XL: Scaling NeRFs with Multiple GPUs [72.75214892939411]
We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs.
We show improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs.
We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25 km² city area.
arXiv Detail & Related papers (2024-04-24T21:43:15Z) - Fast Sparse View Guided NeRF Update for Object Reconfigurations [42.947608325321475]
We develop the first method for updating NeRFs in response to physical scene changes.
Our method takes only sparse new images as extra inputs and updates the pre-trained NeRF in around 1 to 2 minutes.
Our core idea is the use of a second helper NeRF to learn the local geometry and appearance changes.
arXiv Detail & Related papers (2024-03-16T22:00:16Z) - FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline [10.252591107152503]
Super-resolution (SR) techniques have been proposed to upscale the outputs of neural radiance fields (NeRF).
In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes.
arXiv Detail & Related papers (2023-12-15T21:02:23Z) - ZeroRF: Fast Sparse View 360° Reconstruction with Zero Pretraining [28.03297623406931]
Current breakthroughs like Neural Radiance Fields (NeRF) have demonstrated high-fidelity image synthesis but struggle with sparse input views.
We propose ZeroRF, whose key idea is to integrate a tailored Deep Image Prior into a factorized NeRF representation.
Unlike traditional methods, ZeroRF parametrizes feature grids with a neural network generator, enabling efficient sparse view 360° reconstruction.
arXiv Detail & Related papers (2023-12-14T18:59:32Z) - Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small NeRF-like network while preserving rendering speed through a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z) - DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method outperforms state-of-the-art point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z) - From NeRFLiX to NeRFLiX++: A General NeRF-Agnostic Restorer Paradigm [57.73868344064043]
We propose NeRFLiX, a general NeRF-agnostic restorer paradigm that learns a degradation-driven inter-viewpoint mixer.
We also present NeRFLiX++ with a stronger two-stage NeRF degradation simulator and a faster inter-viewpoint mixer.
NeRFLiX++ is capable of restoring photo-realistic ultra-high-resolution outputs from noisy low-resolution NeRF-rendered views.
arXiv Detail & Related papers (2023-06-10T09:19:19Z) - FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization [32.1581416980828]
We present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods.
We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training.
arXiv Detail & Related papers (2023-03-13T18:59:03Z) - MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z) - Compressing Explicit Voxel Grid Representations: fast NeRFs become also small [3.1473798197405944]
Re:NeRF aims to reduce memory storage of NeRF models while maintaining comparable performance.
We benchmark our approach with three different EVG-NeRF architectures on four popular benchmarks.
arXiv Detail & Related papers (2022-10-23T16:42:29Z) - Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.