NeRF-XL: Scaling NeRFs with Multiple GPUs
- URL: http://arxiv.org/abs/2404.16221v1
- Date: Wed, 24 Apr 2024 21:43:15 GMT
- Title: NeRF-XL: Scaling NeRFs with Multiple GPUs
- Authors: Ruilong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams
- Abstract summary: We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs.
We show improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs.
We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25km^2 city area.
- Score: 72.75214892939411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with an arbitrarily large capacity. We begin by revisiting existing multi-GPU approaches, which decompose large scenes into multiple independently trained NeRFs, and identify several fundamental issues with these methods that hinder improvements in reconstruction quality as additional computational resources (GPUs) are used in training. NeRF-XL remedies these issues and enables the training and rendering of NeRFs with an arbitrary number of parameters by simply using more hardware. At the core of our method lies a novel distributed training and rendering formulation, which is mathematically equivalent to the classic single-GPU case and minimizes communication between GPUs. By unlocking NeRFs with arbitrarily large parameter counts, our approach is the first to reveal multi-GPU scaling laws for NeRFs, showing improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs. We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25km^2 city area.
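The claimed mathematical equivalence to the single-GPU case follows from a basic property of front-to-back volume rendering: if each GPU renders only the ray segment inside its spatial partition and reports that segment's accumulated color and transmittance, the partial results compose exactly into the full render, so only two small per-ray quantities must cross GPU boundaries. A minimal NumPy sketch of this property (an illustration of the compositing identity, not the paper's implementation):

```python
import numpy as np

def render_segment(alphas, colors):
    """Volume-render one ray segment with standard front-to-back compositing.

    alphas[i] is the opacity of sample i, colors[i] its RGB contribution.
    Returns the segment's accumulated color and its transmittance (the
    fraction of light that passes all the way through the segment).
    """
    seg_color = np.zeros(3)
    T = 1.0
    for a, c in zip(alphas, colors):
        seg_color += T * a * c
        T *= 1.0 - a
    return seg_color, T

def composite_segments(segments):
    """Merge per-partition (color, transmittance) pairs in depth order.

    This reproduces single-pass rendering exactly, so distributing the
    partitions across GPUs changes nothing mathematically.
    """
    final_color = np.zeros(3)
    T = 1.0
    for seg_color, seg_T in segments:
        final_color += T * seg_color
        T *= seg_T
    return final_color
```

Splitting a ray's samples at any depth and compositing the pieces yields bit-for-bit the same color as rendering all samples in one pass, which is why adding GPUs does not change the objective being optimized.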
Related papers
- GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration [4.06770650829784]
We propose GL-NeRF, a new perspective of computing volume rendering with the Gauss-Laguerre quadrature.
GL-NeRF significantly reduces the number of network calls needed for volume rendering while introducing no additional data structures or neural networks.
We show that, with a minimal drop in performance, GL-NeRF significantly reduces the number of network calls, showing the potential to speed up any NeRF model.
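The intuition behind the Gauss-Laguerre connection: after changing variables from ray distance to accumulated optical depth s, the volume rendering integral takes the form of an integral of the color against the weight e^{-s} over [0, ∞), which is exactly the weight function of Gauss-Laguerre quadrature, so a handful of fixed nodes can stand in for many uniform samples. A hedged NumPy illustration of the quadrature rule itself (not the paper's rendering code):

```python
import numpy as np

# Gauss-Laguerre nodes and weights approximate integrals of the form
#   \int_0^\infty f(x) e^{-x} dx
# using very few evaluations of f.
nodes, weights = np.polynomial.laguerre.laggauss(4)

# Example: \int_0^\infty x e^{-x} dx = 1. An n-point rule is exact for
# polynomials of degree <= 2n - 1, so 4 nodes recover this exactly.
approx = np.sum(weights * nodes)
```

The appeal for NeRF is that the nodes are fixed in advance, so the expensive network only needs to be queried at those few points per ray.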
arXiv Detail & Related papers (2024-10-19T04:49:13Z)
- FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline [10.252591107152503]
Super-resolution (SR) techniques have been proposed to upscale the outputs of neural radiance fields (NeRF).
In this paper, we aim to leverage SR for efficiency gains without costly training or architectural changes.
arXiv Detail & Related papers (2023-12-15T21:02:23Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's while preserving rendering speed, requiring only a single network forward pass per pixel as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method beats the SOTA point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- Federated Neural Radiance Fields [36.42289161746808]
We consider training NeRFs in a federated manner, whereby multiple compute nodes, each having acquired a distinct set of observations of the overall scene, learn a common NeRF in parallel.
Our contribution is the first federated learning algorithm for NeRF, which splits the training effort across multiple compute nodes and obviates the need to pool the images at a central node.
A technique based on low-rank decomposition of NeRF layers is introduced to reduce bandwidth consumption to transmit the model parameters for aggregation.
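The bandwidth saving from such a low-rank decomposition can be sketched as follows (illustrative only; the paper's exact factorization scheme may differ): transmitting two thin factors U and V instead of a full weight matrix W shrinks the per-layer payload from d_out * d_in to r * (d_out + d_in) parameters.

```python
import numpy as np

def low_rank_factors(W, rank):
    """Truncated-SVD factorization W ~= U_r @ V_r for cheaper transmission."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # shape (d_out, rank)
    V_r = Vt[:rank, :]             # shape (rank, d_in)
    return U_r, V_r

W = np.random.default_rng(0).normal(size=(256, 256))
U_r, V_r = low_rank_factors(W, rank=16)
full_params = W.size               # 65536 parameters per layer
sent_params = U_r.size + V_r.size  # 8192 parameters: an 8x reduction
```

Each node would transmit only the thin factors for aggregation; by the Eckart-Young theorem, truncated SVD gives the best rank-r approximation in Frobenius norm, bounding the distortion introduced.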
arXiv Detail & Related papers (2023-05-02T02:33:22Z)
- MEIL-NeRF: Memory-Efficient Incremental Learning of Neural Radiance Fields [49.68916478541697]
We develop a Memory-Efficient Incremental Learning algorithm for NeRF (MEIL-NeRF).
MEIL-NeRF takes inspiration from NeRF itself in that a neural network can serve as a memory that provides the pixel RGB values, given rays as queries.
As a result, MEIL-NeRF demonstrates constant memory consumption and competitive performance.
arXiv Detail & Related papers (2022-12-16T08:04:56Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
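A minimal sketch of such a partition, assuming a simple nearest-centroid rule (Mega-NeRF's actual geometry-aware assignment is more involved): each training sample, e.g. a camera position, is assigned to the closest submodule centroid, and each submodule then trains in parallel on only its assigned pixels.

```python
import numpy as np

def assign_to_submodules(positions, centroids):
    """Assign each training sample (e.g., a camera position) to the
    nearest submodule centroid; returns one centroid index per sample."""
    # Pairwise squared distances, shape (n_samples, n_centroids).
    d2 = ((positions[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

centroids = np.array([[0.0, 0.0], [10.0, 0.0]])  # two submodule centers
cams = np.array([[1.0, 1.0], [9.0, -1.0], [4.0, 0.0]])
labels = assign_to_submodules(cams, centroids)  # -> [0, 1, 0]
```

Because the submodules share no parameters, each cluster can train on its own GPU with no inter-GPU communication, which is the parallelism this family of methods exploits (and which the NeRF-XL abstract above contrasts with joint training).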
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field [28.087183395793236]
We propose Voxel-Accelerated NeRF (VaxNeRF), which integrates NeRF with the visual hull.
VaxNeRF achieves about 2-8x faster learning on top of the high-performance JaxNeRF.
We hope VaxNeRF can empower and accelerate new NeRF extensions and applications.
arXiv Detail & Related papers (2021-11-25T14:56:53Z)
- Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields [45.84983186882732]
Mip-NeRF (a la "mipmap") extends NeRF to represent the scene at a continuously-valued scale.
By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts.
Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset.
arXiv Detail & Related papers (2021-03-24T18:02:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.