LatentSLAM: unsupervised multi-sensor representation learning for
localization and mapping
- URL: http://arxiv.org/abs/2105.03265v1
- Date: Fri, 7 May 2021 13:44:32 GMT
- Title: LatentSLAM: unsupervised multi-sensor representation learning for
localization and mapping
- Authors: Ozan Çatal, Wouter Jansen, Tim Verbelen, Bart Dhoedt and Jan Steckel
- Abstract summary: We propose an unsupervised representation learning method that yields low-dimensional latent state descriptors.
Our method is sensor agnostic and can be applied to any sensor modality.
We show how combining multiple sensors can increase robustness by reducing the number of false matches.
- Score: 7.857987850592964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biologically inspired algorithms for simultaneous localization and mapping
(SLAM) such as RatSLAM have been shown to yield effective and robust robot
navigation in both indoor and outdoor environments. One drawback, however, is the
sensitivity to perceptual aliasing due to the template matching of
low-dimensional sensory templates. In this paper, we propose an unsupervised
representation learning method that yields low-dimensional latent state
descriptors that can be used for RatSLAM. Our method is sensor agnostic and can
be applied to any sensor modality, as we illustrate for camera images, radar
range-Doppler maps and lidar scans. We also show how combining multiple sensors
can increase robustness by reducing the number of false matches. We
evaluate on a dataset captured with a mobile robot navigating in a
warehouse-like environment, moving through different aisles with similar
appearance, making it hard for the SLAM algorithms to disambiguate locations.
Related papers
- SemanticSLAM: Learning based Semantic Map Construction and Robust Camera
Localization [8.901799744401314]
We introduce SemanticSLAM, an end-to-end visual-inertial odometry system.
SemanticSLAM uses semantic features extracted from an RGB-D sensor.
It operates effectively in indoor settings, even with infrequent camera input.
arXiv Detail & Related papers (2024-01-23T20:02:02Z)
- LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition [11.206532393178385]
We present a novel neural network named LCPR for robust multimodal place recognition.
Our method can effectively utilize multi-view camera and LiDAR data to improve the place recognition performance.
arXiv Detail & Related papers (2023-11-06T15:39:48Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Graph-based Proprioceptive Localization Using a Discrete Heading-Length Feature Sequence Matching Approach [14.356113113268389]
Proprioceptive localization refers to a new class of robot egocentric localization methods.
These methods are naturally immune to bad weather, poor lighting conditions, or other extreme environmental conditions.
We provide a low-cost fallback solution for localization under challenging environmental conditions.
arXiv Detail & Related papers (2020-05-27T23:10:15Z)
- DynamicSLAM: Leveraging Human Anchors for Ubiquitous Low-Overhead Indoor Localization [5.198840934055703]
DynamicSLAM is an indoor localization technique that eliminates the need for the daunting calibration step.
We employ the phone's inertial sensors to keep track of the user's path.
DynamicSLAM introduces the novel concept of mobile human anchors, based on encounters with other users in the environment.
arXiv Detail & Related papers (2020-03-30T19:49:31Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)