SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit
Neural Representations
- URL: http://arxiv.org/abs/2210.02299v1
- Date: Wed, 5 Oct 2022 14:38:49 GMT
- Title: SHINE-Mapping: Large-Scale 3D Mapping Using Sparse Hierarchical Implicit
Neural Representations
- Authors: Xingguang Zhong and Yue Pan and Jens Behley and Cyrill Stachniss
- Abstract summary: This paper addresses the problems of achieving large-scale 3D reconstructions with implicit representations using 3D LiDAR measurements.
We learn and store implicit features through an octree-based hierarchical structure, which is sparse and extensible.
Our experiments show that our 3D reconstructions are more accurate, complete, and memory-efficient than current state-of-the-art 3D mapping methods.
- Score: 37.733802382489515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate mapping of large-scale environments is an essential building block
of most outdoor autonomous systems. A key challenge for traditional mapping
methods is balancing memory consumption against mapping accuracy. This paper
addresses the problems of achieving large-scale 3D reconstructions with
implicit representations using 3D LiDAR measurements. We learn and store
implicit features through an octree-based hierarchical structure, which is
sparse and extensible. The features can be turned into signed distance values
through a shallow neural network. We leverage binary cross entropy loss to
optimize the local features with the 3D measurements as supervision. Based on
our implicit representation, we design an incremental mapping system with
regularization to tackle the issue of catastrophic forgetting in continual
learning. Our experiments show that our 3D reconstructions are more accurate,
complete, and memory-efficient than current state-of-the-art 3D mapping
methods.
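The abstract above walks through the full pipeline: implicit features stored in a sparse hierarchy, a shallow network that decodes them into signed distance values, and a binary cross entropy loss driven by the LiDAR measurements. A minimal PyTorch sketch of that pipeline follows; it is not the authors' implementation, and the hashed per-level feature tables (standing in for the octree), the network sizes, and the sigmoid scale `sigma` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseHierarchicalSDF(nn.Module):
    """Multi-level feature lookup plus a shallow MLP decoder (illustrative)."""

    def __init__(self, num_levels=3, base_res=16, feat_dim=8, table_size=2**18):
        super().__init__()
        self.resolutions = [base_res * (2 ** i) for i in range(num_levels)]
        # One sparse feature table per level, standing in for octree node features.
        self.tables = nn.ModuleList(
            nn.Embedding(table_size, feat_dim) for _ in self.resolutions
        )
        # Shallow decoder: concatenated per-level features -> signed distance.
        self.decoder = nn.Sequential(
            nn.Linear(num_levels * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    @staticmethod
    def _hash(ijk, table_size):
        # Simple spatial hash of integer voxel coordinates (illustrative only;
        # the paper indexes features through an actual octree).
        primes = torch.tensor([73856093, 19349669, 83492791], device=ijk.device)
        return (ijk * primes).sum(dim=-1) % table_size

    def forward(self, xyz):
        # xyz: (N, 3) query coordinates normalized to [0, 1].
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            ijk = torch.floor(xyz * res).long()          # nearest voxel at this level
            idx = self._hash(ijk, table.num_embeddings)  # (the paper interpolates)
            feats.append(table(idx))
        return self.decoder(torch.cat(feats, dim=-1)).squeeze(-1)


def bce_sdf_loss(pred_sdf, sampled_signed_dist, sigma=0.05):
    # Both the prediction and the per-sample signed distance (from points sampled
    # along each LiDAR ray) are squashed through a sigmoid; the prediction is then
    # supervised with binary cross entropy, as the abstract describes.
    target = torch.sigmoid(sampled_signed_dist / sigma)
    return F.binary_cross_entropy_with_logits(pred_sdf / sigma, target)
```

For the incremental mapping setting, a simple stand-in for the regularization mentioned above would be an extra term such as `lambda_reg * (features - features_old).pow(2).sum()` applied to features already optimized from earlier scans, discouraging continual updates from overwriting previously mapped regions (an assumed form, not necessarily the paper's exact term).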
Related papers
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- Decomposition of Neural Discrete Representations for Large-Scale 3D Mapping [15.085191496726967]
We introduce Decomposition-based Neural Mapping (DNMap), a storage-efficient large-scale 3D mapping method.
We learn low-resolution continuous embeddings that require tiny storage space.
arXiv Detail & Related papers (2024-07-22T11:32:33Z)
- 3D LiDAR Mapping in Dynamic Environments Using a 4D Implicit Neural Representation [33.92758288570465]
Building accurate maps is a key building block to enable reliable localization, planning, and navigation of autonomous vehicles.
We propose encoding the 4D scene into a novel implicit neural map representation.
Our method is capable of removing the dynamic part of the input point clouds while reconstructing accurate and complete 3D maps.
arXiv Detail & Related papers (2024-05-06T11:46:04Z)
- DeepMIF: Deep Monotonic Implicit Fields for Large-Scale LiDAR 3D Mapping [46.80755234561584]
Recent learning-based methods integrate neural implicit representations and optimizable feature grids to approximate surfaces of 3D scenes.
In this work we depart from fitting LiDAR data exactly, instead letting the network optimize a non-metric monotonic implicit field defined in 3D space.
Our algorithm achieves high-quality dense 3D mapping performance as captured by multiple quantitative and perceptual measures and visual results obtained for Mai City, Newer College, and KITTI benchmarks.
arXiv Detail & Related papers (2024-03-26T09:58:06Z)
- ALSTER: A Local Spatio-Temporal Expert for Online 3D Semantic Reconstruction [62.599588577671796]
We propose an online 3D semantic segmentation method that incrementally reconstructs a 3D semantic map from a stream of RGB-D frames.
Unlike offline methods, ours is directly applicable to scenarios with real-time constraints, such as robotics or mixed reality.
arXiv Detail & Related papers (2023-11-29T20:30:18Z)
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network with a 3D network built on a SLAM system that performs 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
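A minimal sketch of this convolutional-encoder-plus-implicit-decoder pairing is given after this list.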
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction in ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
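As noted in the Convolutional Occupancy Networks entry above, the idea it summarizes, a convolutional encoder over the input paired with an implicit occupancy decoder queried at arbitrary points, can be sketched as follows. This is a hedged illustration rather than the released code; the voxelization scheme, network sizes, and coordinate conventions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvOccupancySketch(nn.Module):
    """Convolutional encoder over a voxelized input + implicit occupancy decoder."""

    def __init__(self, grid_res=32, feat_dim=16):
        super().__init__()
        self.grid_res = grid_res
        # 3D convolutional encoder over a crude voxelization of the input points.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Implicit decoder: interpolated feature + query coordinate -> occupancy logit.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 3, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def voxelize(self, points):
        # points: (N, 3) in [-0.5, 0.5]; mark occupied voxels as a binary input grid.
        r = self.grid_res
        grid = torch.zeros(1, 1, r, r, r, device=points.device)
        idx = ((points + 0.5) * r).long().clamp(0, r - 1)
        grid[0, 0, idx[:, 2], idx[:, 1], idx[:, 0]] = 1.0
        return grid

    def forward(self, points, queries):
        # queries: (Q, 3) in [-0.5, 0.5]; returns one occupancy logit per query.
        volume = self.encoder(self.voxelize(points))            # (1, C, D, H, W)
        # grid_sample expects normalized coords in [-1, 1], ordered (x, y, z).
        coords = (queries * 2.0).view(1, -1, 1, 1, 3)
        feats = F.grid_sample(volume, coords, align_corners=True)
        feats = feats.view(volume.shape[1], -1).t()             # (Q, C)
        return self.decoder(torch.cat([feats, queries], dim=-1)).squeeze(-1)
```

Querying such a model on a dense grid of points and thresholding the predicted occupancy (e.g., at 0.5 after a sigmoid) is the usual way occupancy-based methods extract a surface mesh via marching cubes.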