NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric
Mapping
- URL: http://arxiv.org/abs/2110.09415v1
- Date: Mon, 18 Oct 2021 15:45:05 GMT
- Title: NeuralBlox: Real-Time Neural Representation Fusion for Robust Volumetric
Mapping
- Authors: Stefan Lionar, Lukas Schmid, Cesar Cadena, Roland Siegwart, Andrei
Cramariuc
- Abstract summary: We present a novel 3D mapping method leveraging the recent progress in neural implicit representation for 3D reconstruction.
We propose a fusion strategy and training pipeline to incrementally build and update neural implicit representations.
We show that incrementally built occupancy maps can be obtained in real-time even on a CPU.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel 3D mapping method leveraging the recent progress in neural
implicit representation for 3D reconstruction. Most existing state-of-the-art
neural implicit representation methods are limited to object-level
reconstructions and cannot incrementally perform updates given new data. In
this work, we propose a fusion strategy and training pipeline to incrementally
build and update neural implicit representations that enable the reconstruction
of large scenes from sequential partial observations. By representing an
arbitrarily sized scene as a grid of latent codes and performing updates
directly in latent space, we show that incrementally built occupancy maps can
be obtained in real-time even on a CPU. Compared to traditional approaches such
as Truncated Signed Distance Fields (TSDFs), our map representation is
significantly more robust in yielding a better scene completeness given noisy
inputs. We demonstrate the performance of our approach in thorough experimental
validation on real-world datasets with varying degrees of added pose noise.
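The core idea above, representing an arbitrarily sized scene as a grid of latent codes and performing updates directly in latent space, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class and method names, the sparse-dictionary storage, and the weighted-running-average fusion rule are all assumptions standing in for the learned fusion network and training pipeline described in the abstract.

```python
import numpy as np

class LatentGridMap:
    """A scene as a sparse grid of latent codes, updated incrementally.

    Hypothetical sketch: cells are created lazily, and new partial
    observations are merged by a weighted running average in latent
    space (a placeholder for the paper's learned fusion strategy).
    """

    def __init__(self, voxel_size=0.5):
        self.voxel_size = voxel_size
        self.codes = {}    # cell index (i, j, k) -> fused latent code
        self.weights = {}  # cell index -> accumulated observation weight

    def cell_index(self, xyz):
        """Map a continuous 3D point to the index of its grid cell."""
        return tuple(np.floor(np.asarray(xyz, dtype=float) / self.voxel_size).astype(int))

    def integrate(self, xyz, new_code, weight=1.0):
        """Fuse a newly encoded observation into the cell covering xyz."""
        idx = self.cell_index(xyz)
        new_code = np.asarray(new_code, dtype=float)
        if idx not in self.codes:
            self.codes[idx] = new_code
            self.weights[idx] = weight
        else:
            w_old = self.weights[idx]
            # Update directly in latent space: past observations never
            # need to be stored or re-encoded.
            self.codes[idx] = (w_old * self.codes[idx] + weight * new_code) / (w_old + weight)
            self.weights[idx] = w_old + weight

    def query(self, xyz):
        """Return the fused latent code for the cell covering xyz (or None)."""
        return self.codes.get(self.cell_index(xyz))
```

Because each update touches only the cells covered by the new observation, the cost of integrating a frame is independent of total scene size, which is what makes incremental CPU-rate mapping plausible.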
Related papers
- DNS SLAM: Dense Neural Semantic-Informed SLAM
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our experimental results achieve state-of-the-art performance on both synthetic data and real-world data tracking.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Neural Kernel Surface Reconstruction
We present a novel method for reconstructing a 3D implicit surface from a large-scale, sparse, and noisy point cloud.
Our approach builds upon the recently introduced Neural Kernel Fields representation.
arXiv Detail & Related papers (2023-05-31T06:25:18Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids
We propose to directly use a signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- Neural Poisson: Indicator Functions for Neural Fields
Implicit neural fields generating signed distance field (SDF) representations of 3D shapes have shown remarkable progress.
We introduce a new paradigm for neural field representations of 3D scenes.
We show that our approach demonstrates state-of-the-art reconstruction performance on both synthetic and real scanned 3D scene data.
arXiv Detail & Related papers (2022-11-25T17:28:22Z)
- Sphere-Guided Training of Neural Implicit Surfaces
Neural distance functions trained via ray marching have been widely adopted for multi-view 3D reconstruction.
These methods, however, apply the ray marching procedure for the entire scene volume, leading to reduced sampling efficiency.
We address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.
arXiv Detail & Related papers (2022-09-30T15:00:03Z)
- BNV-Fusion: Dense 3D Reconstruction using Bi-level Neural Volume Fusion
We present Bi-level Neural Volume Fusion (BNV-Fusion), which leverages recent advances in neural implicit representations and neural rendering for dense 3D reconstruction.
In order to incrementally integrate new depth maps into a global neural implicit representation, we propose a novel bi-level fusion strategy.
We evaluate the proposed method on multiple datasets quantitatively and qualitatively, demonstrating a significant improvement over existing methods.
arXiv Detail & Related papers (2022-04-03T19:33:09Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
- Convolutional Occupancy Networks
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
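The Convolutional Occupancy Networks blurb above describes a general pattern: a convolutional encoder produces a feature grid, and an implicit decoder maps any continuous query point to an occupancy value by interpolating that grid. A minimal sketch of the decoding side, assuming an already-encoded feature grid (here random and untrained) and a tiny placeholder MLP rather than the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 8
# Stand-in for the output of a convolutional encoder: a 4x4x4 grid of features.
GRID = rng.normal(size=(4, 4, 4, FEATURE_DIM))

def trilinear_feature(p):
    """Interpolate the feature grid at a continuous point p in [0, 3]^3."""
    p = np.clip(np.asarray(p, dtype=float), 0.0, GRID.shape[0] - 1 - 1e-9)
    i0 = np.floor(p).astype(int)
    f = p - i0  # fractional offset within the cell
    feat = np.zeros(FEATURE_DIM)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0])
                     * (f[1] if dy else 1 - f[1])
                     * (f[2] if dz else 1 - f[2]))
                feat += w * GRID[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return feat

# Placeholder decoder weights; in the real model these are learned.
W1 = rng.normal(size=(FEATURE_DIM, 16))
W2 = rng.normal(size=(16, 1))

def occupancy(p):
    """Decode the interpolated feature into an occupancy probability."""
    h = np.maximum(trilinear_feature(p) @ W1, 0.0)    # ReLU hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ W2)[0])))  # sigmoid output
```

Because the decoder reads interpolated local features rather than a single global code, the same network can be queried anywhere in a scene of arbitrary extent, which is the structural bias the blurb refers to.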