On the descriptive power of LiDAR intensity images for segment-based
loop closing in 3-D SLAM
- URL: http://arxiv.org/abs/2108.01383v1
- Date: Tue, 3 Aug 2021 09:44:23 GMT
- Title: On the descriptive power of LiDAR intensity images for segment-based
loop closing in 3-D SLAM
- Authors: Jan Wietrzykowski and Piotr Skrzypczyński
- Abstract summary: We propose an extension to the segment-based global localization method for LiDAR SLAM using descriptors learned considering the visual context of the segments.
A new architecture of the deep neural network is presented that learns the visual context acquired from synthetic LiDAR intensity images.
- Score: 7.310043452300736
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an extension to the segment-based global localization method for
LiDAR SLAM using descriptors learned considering the visual context of the
segments. A new architecture of the deep neural network is presented that
learns the visual context acquired from synthetic LiDAR intensity images. This
approach allows a single multi-beam LiDAR to produce rich and highly
descriptive location signatures. The method is tested on two public datasets,
demonstrating an improved descriptiveness of the new descriptors, and more
reliable loop closure detection in SLAM. Attention analysis of the network is
used to show the importance of focusing on the broader context rather than only
on the 3-D segment.
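To make the notion of a synthetic LiDAR intensity image concrete, here is a minimal sketch assuming a spherical projection of a multi-beam scan; the function name `intensity_image_from_scan`, the image resolution, and the vertical field-of-view values are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def intensity_image_from_scan(points, intensities, h=64, w=1024,
                              fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR scan with per-point intensities onto a
    spherical (range-image-like) grid, keeping intensity as the pixel value.

    Minimal sketch: the vertical FOV values are placeholders for a typical
    multi-beam sensor, not parameters reported in the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-6), -1.0, 1.0))

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = ((yaw + np.pi) / (2.0 * np.pi) * w).astype(np.int32) % w
    v = np.clip((fov_up - pitch) / fov * h, 0, h - 1).astype(np.int32)

    image = np.zeros((h, w), dtype=np.float32)
    # Keep the intensity of the closest point per pixel: write far-to-near,
    # so nearer points overwrite farther ones.
    order = np.argsort(-depth)
    image[v[order], u[order]] = intensities[order]
    return image
```

Such an image can then be cropped around each extracted segment so that a descriptor network also sees the segment's surroundings, which roughly corresponds to the visual-context idea described in the abstract.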
Related papers
- Weakly Supervised LiDAR Semantic Segmentation via Scatter Image Annotation [38.715754110667916]
We implement LiDAR semantic segmentation using scatter image annotation.
We also propose ScatterNet, a network that includes three pivotal strategies to reduce the performance gap.
Our method requires less than 0.02% of the labeled points to achieve over 95% of the performance of fully-supervised methods.
arXiv Detail & Related papers (2024-04-19T13:01:30Z)
- DetCLIPv3: Towards Versatile Generative Open-vocabulary Object Detection [111.68263493302499]
We introduce DetCLIPv3, a high-performing detector that excels at both open-vocabulary object detection and generating hierarchical labels for detected objects.
DetCLIPv3 is characterized by three core designs: 1) Versatile model architecture; 2) High information density data; and 3) Efficient training strategy.
DetCLIPv3 demonstrates superior open-vocabulary detection performance, outperforming GLIPv2, GroundingDINO, and DetCLIPv2 by 18.0/19.6/6.6 AP, respectively.
arXiv Detail & Related papers (2024-04-14T11:01:44Z)
- CP-SLAM: Collaborative Neural Point-based SLAM System [54.916578456416204]
This paper presents a collaborative implicit neural simultaneous localization and mapping (SLAM) system that operates on RGB-D image sequences.
To enable all of its modules within a unified framework, a novel neural point-based 3D scene representation is proposed.
A distributed-to-centralized learning strategy is proposed for the collaborative implicit SLAM to improve consistency and cooperation.
arXiv Detail & Related papers (2023-11-14T09:17:15Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Point-SLAM: Dense Neural Point Cloud-based SLAM [61.96492935210654]
We propose a dense neural simultaneous localization and mapping (SLAM) approach for monocular RGB-D input.
We demonstrate that both tracking and mapping can be performed with the same point-based neural scene representation.
arXiv Detail & Related papers (2023-04-09T16:48:26Z)
- Improving Lidar-Based Semantic Segmentation of Top-View Grid Maps by Learning Features in Complementary Representations [3.0413873719021995]
We introduce a novel way to predict semantic information from sparse, single-shot LiDAR measurements in the context of autonomous driving.
The approach is aimed specifically at improving the semantic segmentation of top-view grid maps.
For each representation a tailored deep learning architecture is developed to effectively extract semantic information.
arXiv Detail & Related papers (2022-03-02T14:49:51Z)
- Depth-conditioned Dynamic Message Propagation for Monocular 3D Object Detection [86.25022248968908]
We learn context- and depth-aware feature representation to solve the problem of monocular 3D object detection.
We show state-of-the-art results among the monocular-based approaches on the KITTI benchmark dataset.
arXiv Detail & Related papers (2021-03-30T16:20:24Z)
- S3Net: 3D LiDAR Sparse Semantic Segmentation Network [1.330528227599978]
S3Net is a novel convolutional neural network for LiDAR point cloud semantic segmentation.
It adopts an encoder-decoder backbone that consists of a Sparse Intra-channel Attention Module (SIntraAM) and a Sparse Inter-channel Attention Module (SInterAM).
arXiv Detail & Related papers (2021-03-15T22:15:24Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection [26.209412893744094]
Loop closure detection is an essential and challenging problem in simultaneous localization and mapping (SLAM).
Existing works on 3D loop closure detection often leverage the matching of local or global geometrical-only descriptors.
We propose a novel global descriptor, intensity scan context (ISC), that explores both geometry and intensity characteristics; a simplified sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-03-12T08:11:09Z)
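Relating to the Intensity Scan Context entry above, the following is a simplified sketch of a descriptor that couples geometry and intensity: points are binned into a polar ring-and-sector grid, each cell keeps an intensity statistic, and descriptors are compared with a sector-shift-invariant distance. The function names (`intensity_scan_context`, `descriptor_distance`) and the grid parameters are assumptions for illustration, not the exact formulation of that paper.

```python
import numpy as np

def intensity_scan_context(points, intensities, n_rings=20, n_sectors=60,
                           max_range=80.0):
    """Bin a scan into a polar (ring x sector) grid and store the maximum
    intensity per cell -- a simplified stand-in for a descriptor that
    couples geometry (the polar layout) with intensity."""
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x, y)
    theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)

    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta[keep] / (2.0 * np.pi) * n_sectors).astype(int),
                        n_sectors - 1)

    desc = np.zeros((n_rings, n_sectors), dtype=np.float32)
    np.maximum.at(desc, (ring, sector), intensities[keep])
    return desc

def descriptor_distance(a, b):
    """Rotation-tolerant comparison: take the best cosine similarity over
    all sector shifts of the second descriptor."""
    best = np.inf
    for shift in range(b.shape[1]):
        shifted = np.roll(b, shift, axis=1)
        num = np.sum(a * shifted)
        den = np.linalg.norm(a) * np.linalg.norm(shifted) + 1e-12
        best = min(best, 1.0 - num / den)
    return best
```

Comparing over all sector shifts makes the distance tolerant to a change of heading when a place is revisited, which is the usual requirement when proposing loop closure candidates.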
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.