Tile and Slide : A New Framework for Scaling NeRF from Local to Global 3D Earth Observation
- URL: http://arxiv.org/abs/2507.01631v2
- Date: Thu, 31 Jul 2025 13:32:03 GMT
- Title: Tile and Slide : A New Framework for Scaling NeRF from Local to Global 3D Earth Observation
- Authors: Camille Billouard, Dawa Derksen, Alexandre Constantin, Bruno Vallet
- Abstract summary: Snake-NeRF is a framework that scales to large scenes. We achieve this by dividing the region of interest into NeRFs that tile in 3D without overlap. We introduce a novel $2\times 2$ 3D tile progression strategy and segmented sampler, which together prevent 3D reconstruction errors along the tile edges.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) have recently emerged as a paradigm for 3D reconstruction from multiview satellite imagery. However, state-of-the-art NeRF methods are typically constrained to small scenes due to the memory footprint during training, which we study in this paper. Previous work on large-scale NeRFs mitigates this by dividing the scene into multiple NeRFs. This paper introduces Snake-NeRF, a framework that scales to large scenes. Our out-of-core method eliminates the need to load all images and networks simultaneously, and operates on a single device. We achieve this by dividing the region of interest into NeRFs that tile in 3D without overlap. Importantly, we crop the images with overlap to ensure each NeRF is trained with all the necessary pixels. We introduce a novel $2\times 2$ 3D tile progression strategy and segmented sampler, which together prevent 3D reconstruction errors along the tile edges. Our experiments conclude that large satellite images can effectively be processed with linear time complexity, on a single GPU, and without compromise in quality.
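The tiling idea in the abstract can be illustrated with a minimal sketch (not the authors' code): the region of interest is split into non-overlapping tile footprints, while each tile's training crop is expanded by a margin so that every ray crossing a tile edge is still fully supervised. The function names, the 2D simplification, and the fixed-margin heuristic are all assumptions for illustration.

```python
# Sketch of non-overlapping 3D tiles with overlapping image crops,
# in the spirit of Snake-NeRF (illustrative names and parameters).

def make_tiles(x_min, x_max, y_min, y_max, n):
    """Divide the ROI into an n x n grid of non-overlapping tile footprints."""
    dx = (x_max - x_min) / n
    dy = (y_max - y_min) / n
    tiles = []
    for i in range(n):
        for j in range(n):
            tiles.append((x_min + i * dx, x_min + (i + 1) * dx,
                          y_min + j * dy, y_min + (j + 1) * dy))
    return tiles

def crop_with_overlap(tile, margin):
    """Expand a tile footprint by a margin; pixels projecting into the
    expanded footprint are kept for that tile's NeRF, so each NeRF sees
    all pixels whose rays can intersect its (non-overlapping) tile."""
    x0, x1, y0, y1 = tile
    return (x0 - margin, x1 + margin, y0 - margin, y1 + margin)

tiles = make_tiles(0.0, 100.0, 0.0, 100.0, n=4)
crops = [crop_with_overlap(t, margin=5.0) for t in tiles]
```

Because the 3D tiles themselves never overlap, each tile can be trained and written out independently, which is what allows the out-of-core, single-device training described above.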
Related papers
- Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with Generative Diffusion Models [59.96172701917538]
Current Neural Radiance Fields (NeRF) can generate photorealistic novel views.
This paper proposes Inpaint4DNeRF to capitalize on state-of-the-art stable diffusion models.
arXiv Detail & Related papers (2023-12-30T11:26:55Z)
- PERF: Panoramic Neural Radiance Field from a Single Panorama [109.31072618058043]
PERF is a novel view synthesis framework that trains a panoramic neural radiance field from a single panorama.
We propose a novel collaborative RGBD inpainting method and a progressive inpainting-and-erasing method to lift a 360-degree 2D scene to a 3D scene.
Our PERF can be widely used for real-world applications, such as panorama-to-3D, text-to-3D, and 3D scene stylization applications.
arXiv Detail & Related papers (2023-10-25T17:59:01Z)
- Registering Neural Radiance Fields as 3D Density Images [55.64859832225061]
We propose to use universal pre-trained neural networks that can be trained and tested on different scenes.
We demonstrate that our method, as a global approach, can effectively register NeRF models.
arXiv Detail & Related papers (2023-05-22T09:08:46Z)
- MultiPlaneNeRF: Neural Radiance Field with Non-Trainable Representation [6.860380947025009]
NeRF is a popular model that efficiently represents 3D objects from 2D images. We present MultiPlaneNeRF -- a model that simultaneously solves the above problems.
arXiv Detail & Related papers (2023-05-17T21:27:27Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
- DeRF: Decomposed Radiance Fields [30.784481193893345]
In this paper, we propose a technique based on spatial decomposition capable of mitigating the inefficiency of rendering an entire scene with a single large network.
We show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm.
Our experiments show that for real-world scenes, our method provides up to 3x more efficient inference than NeRF.
arXiv Detail & Related papers (2020-11-25T02:47:16Z)
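The core of DeRF's decomposition, assigning each 3D point to its nearest Voronoi site so that a smaller sub-network can handle each cell and the cells can be composited back-to-front (the property that makes the scheme compatible with the Painter's Algorithm), can be sketched as follows. The sites and query point are illustrative, not taken from the paper.

```python
# Hedged sketch of Voronoi-based spatial decomposition in the spirit of DeRF.

def voronoi_cell(point, sites):
    """Return the index of the Voronoi site nearest to `point`,
    i.e. which sub-network's cell the point falls in."""
    def sqdist(a, b):
        # Squared Euclidean distance; ordering is the same as for
        # the true distance, so the square root can be skipped.
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(sites)), key=lambda i: sqdist(point, sites[i]))

# Three hypothetical cell centers partitioning the scene.
sites = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
cell = voronoi_cell((0.9, 0.1, 0.0), sites)
```

At render time, each cell's contribution can then be rendered independently and composited in depth order along the ray.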
This list is automatically generated from the titles and abstracts of the papers in this site.