NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense
SLAM
- URL: http://arxiv.org/abs/2402.02020v1
- Date: Sat, 3 Feb 2024 04:26:35 GMT
- Title: NeuV-SLAM: Fast Neural Multiresolution Voxel Optimization for RGBD Dense
SLAM
- Authors: Wenzhi Guo, Bing Wang, Lijun Chen
- Abstract summary: We introduce NeuV-SLAM, a novel simultaneous localization and mapping pipeline based on neural multiresolution voxels.
NeuV-SLAM is characterized by ultra-fast convergence and incremental expansion capabilities.
- Score: 5.709880146357355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce NeuV-SLAM, a novel dense simultaneous localization and mapping
pipeline based on neural multiresolution voxels, characterized by ultra-fast
convergence and incremental expansion capabilities. This pipeline utilizes RGBD
images as input to construct multiresolution neural voxels, achieving rapid
convergence while maintaining robust incremental scene reconstruction and
camera tracking. Central to our methodology is a novel implicit
representation, termed VDF, that combines neural signed distance field (SDF)
voxels with an SDF activation strategy. This approach directly optimizes the
color features and SDF values anchored within the voxels, substantially
accelerating scene convergence. To preserve clear edge delineation, the SDF
activation is designed to maintain high scene representation fidelity even
under the constraints of voxel resolution. Furthermore, to enable rapid incremental
expansion with low computational overhead, we developed hashMV, a novel
hash-based multiresolution voxel management structure. This architecture is
complemented by a strategically designed voxel generation technique that
synergizes with a two-dimensional scene prior. Our empirical evaluations,
conducted on the Replica and ScanNet Datasets, substantiate NeuV-SLAM's
exceptional efficacy in terms of convergence speed, tracking accuracy, scene
reconstruction, and rendering quality.
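The abstract describes hashMV as a hash-based multiresolution voxel structure whose voxels directly hold optimizable SDF values and color features, expanded incrementally as new RGBD observations arrive. The paper does not publish this data structure here, so the following is only a minimal illustrative sketch of the general idea (a dict-backed multi-level voxel map with incremental allocation), with all names and defaults chosen for illustration rather than taken from the paper:

```python
import numpy as np

class HashMultiResVoxels:
    """Toy sketch of a hash-based multiresolution voxel map, loosely in the
    spirit of hashMV: each level hashes integer grid coordinates to voxels,
    and each voxel directly stores an SDF value and a color feature that a
    real system would optimize during mapping."""

    def __init__(self, base_size=0.16, num_levels=3):
        # Voxel edge length halves at each finer level.
        self.sizes = [base_size / (2 ** lvl) for lvl in range(num_levels)]
        self.levels = [dict() for _ in range(num_levels)]

    def key(self, level, p):
        # Integer grid coordinate of point p at the given level.
        s = self.sizes[level]
        return tuple(np.floor(np.asarray(p, dtype=float) / s).astype(int))

    def allocate(self, points):
        """Incrementally create voxels around observed surface points
        (e.g. back-projected RGBD pixels); existing voxels are kept."""
        for level, voxels in enumerate(self.levels):
            for p in points:
                k = self.key(level, p)
                if k not in voxels:
                    # SDF initialized on-surface (0), color feature to gray.
                    voxels[k] = {"sdf": 0.0, "color": np.full(3, 0.5)}

    def query(self, level, p):
        """Constant-time voxel lookup; None outside the allocated map."""
        return self.levels[level].get(self.key(level, p))
```

The hash map keeps memory proportional to the observed surface rather than the full scene bounding box, which is what makes low-overhead incremental expansion possible; a faithful implementation would also optimize the stored SDF and color values against rendered RGBD residuals.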
Related papers
- Event-Stream Super Resolution using Sigma-Delta Neural Network [0.10923877073891444]
Event cameras present unique challenges due to their low resolution and sparse, asynchronous nature of the data they collect.
Current event super-resolution algorithms are not fully optimized for the distinct data structure produced by event cameras.
This research proposes a method that integrates binary spikes with Sigma Delta Neural Networks (SDNNs).
arXiv Detail & Related papers (2024-08-13T15:25:18Z)
- Low-Light Video Enhancement via Spatial-Temporal Consistent Illumination and Reflection Decomposition [68.6707284662443]
Low-Light Video Enhancement (LLVE) seeks to restore dynamic and static scenes plagued by severe invisibility and noise.
One critical aspect is formulating a consistency constraint specifically for temporal-spatial illumination and appearance enhanced versions.
We present an innovative video Retinex-based decomposition strategy that operates without the need for explicit supervision.
arXiv Detail & Related papers (2024-05-24T15:56:40Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail [54.03399077258403]
We present LoD-NeuS, an efficient neural representation for high-frequency geometry detail recovery and anti-aliased novel view rendering.
Our representation aggregates space features from a multi-convolved featurization within a conical frustum along a ray.
arXiv Detail & Related papers (2023-09-19T05:44:00Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction [42.3230709881297]
We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images.
Our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU.
arXiv Detail & Related papers (2021-11-22T14:02:07Z)
- Optical-Flow-Reuse-Based Bidirectional Recurrent Network for Space-Time Video Super-Resolution [52.899234731501075]
Space-time video super-resolution (ST-VSR) simultaneously increases the spatial resolution and frame rate for a given video.
Existing methods typically struggle to efficiently leverage information from a large range of neighboring frames.
We propose a coarse-to-fine bidirectional recurrent neural network instead of using ConvLSTM to leverage knowledge between adjacent frames.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.