NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense
SLAM
- URL: http://arxiv.org/abs/2402.02020v1
- Date: Sat, 3 Feb 2024 04:26:35 GMT
- Title: NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense
SLAM
- Authors: Wenzhi Guo, Bing Wang, Lijun Chen
- Abstract summary: We introduce NeuV-SLAM, a novel simultaneous localization and mapping pipeline based on neural multiresolution voxels.
NeuV-SLAM is characterized by ultra-fast convergence and incremental expansion capabilities.
- Score: 5.709880146357355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce NeuV-SLAM, a novel dense simultaneous localization and mapping
pipeline based on neural multiresolution voxels, characterized by ultra-fast
convergence and incremental expansion capabilities. This pipeline utilizes RGBD
images as input to construct multiresolution neural voxels, achieving rapid
convergence while maintaining robust incremental scene reconstruction and
camera tracking. Central to our methodology is a novel implicit representation,
termed VDF, which combines neural signed distance field (SDF) voxels with an
SDF activation strategy. This approach directly optimizes the color features
and SDF values anchored within the voxels, substantially accelerating scene
convergence. To preserve clear edge delineation, the SDF activation is designed
to maintain high scene representation fidelity even under voxel resolution
constraints. Furthermore, to achieve rapid incremental
expansion with low computational overhead, we developed hashMV, a novel
hash-based multiresolution voxel management structure. This architecture is
complemented by a strategically designed voxel generation technique that
synergizes with a two-dimensional scene prior. Our empirical evaluations,
conducted on the Replica and ScanNet Datasets, substantiate NeuV-SLAM's
exceptional efficacy in terms of convergence speed, tracking accuracy, scene
reconstruction, and rendering quality.
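The paper's code is not reproduced here, so the following is a minimal sketch of how a hash-based multiresolution voxel grid with directly optimized SDF values and color features, plus a bounded SDF activation, could look, loosely following the VDF and hashMV ideas described in the abstract. All names (HashVoxelLevel, MultiResVoxels), the tanh truncation activation, the hash primes, table sizes, and voxel sizes are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the authors' code): a hash-indexed multiresolution voxel
# grid storing optimizable SDF values and color features per corner, queried by
# trilinear interpolation and passed through an assumed tanh-based SDF activation.
import torch
import torch.nn as nn

class HashVoxelLevel(nn.Module):
    """One resolution level: voxel corners live in a hashed feature table."""

    def __init__(self, voxel_size: float, table_size: int = 2 ** 18, feat_dim: int = 4):
        super().__init__()
        self.voxel_size = voxel_size
        self.table_size = table_size
        # Directly optimized per-corner SDF value (1 channel) and color feature (feat_dim - 1).
        self.table = nn.Parameter(1e-3 * torch.randn(table_size, feat_dim))
        # Large primes for spatial hashing, as in common hash-grid implementations (assumed).
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def _hash(self, ijk: torch.Tensor) -> torch.Tensor:
        # ijk: (..., 3) integer corner indices -> bucket id in [0, table_size).
        h = (ijk.long() * self.primes).sum(-1)
        return h.remainder(self.table_size)

    def query(self, xyz: torch.Tensor) -> torch.Tensor:
        """Trilinearly interpolate corner features at world points xyz of shape (N, 3)."""
        grid = xyz / self.voxel_size
        base = torch.floor(grid)
        frac = grid - base                                   # (N, 3) in [0, 1)
        out = torch.zeros(xyz.shape[0], self.table.shape[1], device=xyz.device)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = base + torch.tensor([dx, dy, dz], device=xyz.device)
                    w = ((frac[:, 0] if dx else 1 - frac[:, 0])
                         * (frac[:, 1] if dy else 1 - frac[:, 1])
                         * (frac[:, 2] if dz else 1 - frac[:, 2]))
                    out += w.unsqueeze(-1) * self.table[self._hash(corner)]
        return out

class MultiResVoxels(nn.Module):
    """Coarse-to-fine stack of hash voxel levels with an SDF activation."""

    def __init__(self, voxel_sizes=(0.32, 0.08, 0.02)):
        super().__init__()
        self.levels = nn.ModuleList([HashVoxelLevel(s) for s in voxel_sizes])
        self.trunc = voxel_sizes[-1] * 4                     # assumed truncation band

    def forward(self, xyz: torch.Tensor):
        feats = torch.stack([lvl.query(xyz) for lvl in self.levels]).sum(0)
        raw_sdf, color_feat = feats[:, :1], feats[:, 1:]
        # "SDF activation": bound the raw value to a truncation band so surfaces
        # stay sharp even at coarse voxel resolution (assumed functional form).
        sdf = self.trunc * torch.tanh(raw_sdf / self.trunc)
        color = torch.sigmoid(color_feat)                    # assumed color decoding
        return sdf, color

With hypothetical inputs, sdf, rgb = MultiResVoxels()(torch.rand(1024, 3)) yields per-point truncated SDF values and colors that could be supervised directly from RGBD observations; the hash table makes incremental expansion cheap because no dense grid has to be pre-allocated.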
Related papers
- SP-SLAM: Neural Real-Time Dense SLAM With Scene Priors [32.42183561158492]
We introduce SP-SLAM, a novel neural RGB-D SLAM system that performs tracking and mapping in real-time.
In SP-SLAM, we introduce an effective optimization strategy for mapping, allowing the system to continuously optimize the poses of all historical input frames during runtime.
The results demonstrate that, compared to existing methods, we achieve superior tracking accuracy and reconstruction quality, while running at a significantly faster speed.
arXiv Detail & Related papers (2025-01-11T07:53:58Z) - Spatial Annealing for Efficient Few-shot Neural Rendering [73.49548565633123]
We introduce an accurate and efficient few-shot neural rendering method named Spatial Annealing regularized NeRF (SANeRF).
By adding merely one line of code, SANeRF delivers superior rendering quality and much faster reconstruction speed compared to current few-shot neural rendering methods.
arXiv Detail & Related papers (2024-06-12T02:48:52Z) - Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition [68.6707284662443]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by severe invisibility and noise.
One critical aspect is formulating a consistency constraint for the temporal-spatial illumination and appearance of the enhanced versions.
We present an innovative video Retinex-based decomposition strategy that operates without the need for explicit supervision.
arXiv Detail & Related papers (2024-05-24T15:56:40Z) - VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis.
We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays (a minimal sketch of this idea appears after this list).
Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z) - Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z) - Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z) - Direct Voxel Grid Optimization: Super-fast Convergence for Radiance
Fields Reconstruction [42.3230709881297]
We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images.
Our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU.
arXiv Detail & Related papers (2021-11-22T14:02:07Z) - NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor
Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
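The voxel-guided sampling mentioned in the VoxNeRF entry above is described only at a high level, so the following is a speculative sketch of the general idea: probe a ray coarsely, then keep and densify only the depths that land in occupied voxels. The function name, the boolean occupancy grid, and all bounds and sample counts are hypothetical, not taken from the paper.

# Speculative sketch (assumptions, not VoxNeRF's implementation): concentrate ray
# samples inside segments that intersect occupied voxels instead of sampling the
# whole ray uniformly.
import torch

def voxel_guided_samples(origin, direction, occupancy, voxel_size,
                         near=0.1, far=5.0, coarse=64, fine_per_hit=4):
    """Return sample depths along one ray, densified where voxels are occupied.

    origin, direction: (3,) tensors; occupancy: (X, Y, Z) bool grid anchored at the origin.
    """
    # Coarse, uniform probe of the ray.
    t = torch.linspace(near, far, coarse)
    pts = origin + t.unsqueeze(-1) * direction               # (coarse, 3)
    idx = torch.floor(pts / voxel_size).long()
    # Keep only probes that fall inside the grid and hit an occupied voxel.
    inside = ((idx >= 0) & (idx < torch.tensor(occupancy.shape))).all(-1)
    hit = torch.zeros_like(inside)
    hit[inside] = occupancy[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    if not hit.any():
        return t                                             # fall back to uniform sampling
    # Densify around occupied segments: add fine samples near each hit depth.
    step = (far - near) / coarse
    offsets = torch.linspace(-0.5, 0.5, fine_per_hit) * step
    fine = (t[hit].unsqueeze(-1) + offsets).reshape(-1)
    return torch.sort(torch.cat([t[hit], fine])).values

The design intent is simply to spend most samples near surfaces the voxel grid already knows about, which is the same budget-allocation idea behind the voxel-guided sampling described in that entry.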