Direct Voxel Grid Optimization: Super-fast Convergence for Radiance
Fields Reconstruction
- URL: http://arxiv.org/abs/2111.11215v1
- Date: Mon, 22 Nov 2021 14:02:07 GMT
- Title: Direct Voxel Grid Optimization: Super-fast Convergence for Radiance
Fields Reconstruction
- Authors: Cheng Sun, Min Sun, Hwann-Tzong Chen
- Abstract summary: We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images.
Our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU.
- Score: 42.3230709881297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a super-fast convergence approach to reconstructing the per-scene
radiance field from a set of images that capture the scene with known poses.
This task, which is often applied to novel view synthesis, has recently been
revolutionized by Neural Radiance Fields (NeRF) owing to their state-of-the-art
quality and flexibility. However, NeRF and its variants require a lengthy training time
ranging from hours to days for a single scene. In contrast, our approach
achieves NeRF-comparable quality and converges rapidly from scratch in less
than 15 minutes with a single GPU. We adopt a representation consisting of a
density voxel grid for scene geometry and a feature voxel grid with a shallow
network for complex view-dependent appearance. Modeling with explicit and
discretized volume representations is not new, but we propose two simple yet
non-trivial techniques that contribute to fast convergence speed and
high-quality output. First, we introduce post-activation interpolation on
voxel density, which can produce sharp surfaces even at a lower grid
resolution. Second, because direct voxel density optimization is prone to
suboptimal geometry solutions, we robustify the optimization process by
imposing several priors (a code sketch of both techniques follows the
abstract). Finally, evaluation on five inward-facing benchmarks shows that
our method matches, if not surpasses, NeRF's quality, yet it only takes about
15 minutes to train from scratch for a new scene.
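A minimal PyTorch sketch of the two techniques named above, assuming details the
abstract does not give (grid resolution, feature dimension, a plain softplus as
the density activation, and a low raw-density initialization as one plausible
"prior"); the class and method names are hypothetical, not the authors'
released code:

import torch
import torch.nn.functional as F

class VoxelRadianceField(torch.nn.Module):
    """Density voxel grid + feature voxel grid with a shallow color MLP."""
    def __init__(self, res=160, feat_dim=12, init_raw_density=-10.0):
        super().__init__()
        # Raw (pre-activation) density grid. The very low initial value is an
        # assumed prior that discourages premature opaque geometry.
        self.raw_density = torch.nn.Parameter(
            torch.full((1, 1, res, res, res), init_raw_density))
        # Feature grid feeding a shallow MLP for view-dependent appearance.
        self.features = torch.nn.Parameter(
            torch.zeros(1, feat_dim, res, res, res))
        self.rgb_mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3), torch.nn.Sigmoid())

    def query(self, xyz, viewdirs):
        # xyz, viewdirs: (N, 3) tensors, with xyz normalized to [-1, 1]^3.
        grid = xyz.view(1, 1, -1, 1, 3)
        # Post-activation interpolation: trilinearly interpolate the RAW grid
        # values first, then apply the nonlinearity. Activating after (rather
        # than before) interpolation lets a single voxel cell express a sharp
        # surface, which is why a lower grid resolution can suffice.
        raw = F.grid_sample(self.raw_density, grid, align_corners=True)
        sigma = F.softplus(raw.view(-1))                  # (N,) densities
        feat = F.grid_sample(self.features, grid, align_corners=True)
        feat = feat.view(self.features.shape[1], -1).t()  # (N, feat_dim)
        rgb = self.rgb_mlp(torch.cat([feat, viewdirs], dim=-1))
        return sigma, rgb

A full pipeline would composite these per-point densities and colors along rays
with standard volume rendering; beyond the interpolate-then-activate ordering,
the specifics here are illustrative guesses rather than the paper's exact
recipe.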
Related papers
- Spatial Annealing for Efficient Few-shot Neural Rendering [73.49548565633123]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z)
- INPC: Implicit Neural Point Clouds for Radiance Field Rendering [5.64500060725726]
We introduce a new approach for reconstruction and novel-view synthesis of real-world scenes.
We propose a hybrid scene representation, which implicitly encodes a point cloud in a continuous octree-based probability field and a multi-resolution hash grid.
Our method achieves fast inference at interactive frame rates, and can extract explicit point clouds to further enhance performance.
arXiv Detail & Related papers (2024-03-25T15:26:32Z)
- NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense SLAM [5.709880146357355]
We introduce NeuV-SLAM, a novel simultaneous localization and mapping pipeline based on neural multiresolution voxels.
NeuV-SLAM is characterized by ultra-fast convergence and incremental expansion capabilities.
arXiv Detail & Related papers (2024-02-03T04:26:35Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis.
We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays.
Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs [9.374561178958404]
VGOS is an approach for fast (3-5 minutes) radiance field reconstruction from sparse inputs (3-10 views).
We introduce an incremental voxel training strategy, which prevents overfitting by suppressing the optimization of peripheral voxels.
Experiments demonstrate that VGOS achieves state-of-the-art performance for sparse inputs with super-fast convergence.
arXiv Detail & Related papers (2023-04-26T08:52:55Z)
- Neural Deformable Voxel Grid for Fast Optimization of Dynamic View Synthesis [63.25919018001152]
We propose a fast deformable radiance field method to handle dynamic scenes.
Our method achieves performance comparable to D-NeRF while requiring only 20 minutes of training.
arXiv Detail & Related papers (2022-06-15T17:49:08Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing rendering performance similar to or even better than previous dynamic NeRF methods.
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.