Improved Signed Distance Function for 2D Real-time SLAM and Accurate
Localization
- URL: http://arxiv.org/abs/2101.08018v1
- Date: Wed, 20 Jan 2021 08:28:19 GMT
- Title: Improved Signed Distance Function for 2D Real-time SLAM and Accurate
Localization
- Authors: Xingyin Fu, Zheng Fang, Xizhen Xiao, Yijia He, Xiao Liu
- Abstract summary: We propose an improved Signed Distance Function (SDF) for both 2D SLAM and pure localization to improve the accuracy of mapping and localization.
Experimental results show that, based on the merged SDF map, a localization accuracy of a few millimeters (5 mm) can be achieved globally within the map.
- Score: 12.443507219951092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate mapping and localization are very important for many industrial
robotics applications. In this paper, we propose an improved Signed Distance
Function (SDF) for both 2D SLAM and pure localization to improve the accuracy
of mapping and localization. To achieve this goal, we first improve the
back-end mapping to build a more accurate SDF map, for example by extending
the update range and building free space. Second, to obtain more accurate pose
estimation in the front-end, we propose a new iterative registration method
that aligns the current scan to the SDF submap while removing random outliers
of the laser scanner. Third, we merge all SDF submaps into an integrated SDF
map for highly accurate pure localization. Experimental results show that,
based on the merged SDF map, a localization accuracy of a few millimeters
(5 mm) can be achieved globally within the map. We believe that this method is
important for mobile robots working in scenarios where high localization
accuracy matters.
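To make the front-end registration idea concrete, here is a minimal, hedged Python sketch of scan-to-submap alignment against a 2D SDF grid: Gauss-Newton refines the pose so that transformed scan points land on the SDF zero level set, and points whose |SDF| residual stays large are dropped as random outliers. The grid resolution, outlier threshold, bilinear interpolation, and function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of SDF-based scan-to-submap registration (not the paper's code).
import numpy as np

def interp_sdf(sdf, pts, res):
    """Bilinearly interpolate the SDF and its spatial gradient at metric points."""
    g = pts / res                                    # grid coordinates (x, y)
    i0 = np.clip(np.floor(g).astype(int), 0, np.array(sdf.shape)[::-1] - 2)
    f = g - i0                                       # fractional position in the cell
    x0, y0 = i0[:, 0], i0[:, 1]
    c00 = sdf[y0, x0];     c10 = sdf[y0, x0 + 1]
    c01 = sdf[y0 + 1, x0]; c11 = sdf[y0 + 1, x0 + 1]
    val = (c00 * (1 - f[:, 0]) * (1 - f[:, 1]) + c10 * f[:, 0] * (1 - f[:, 1])
           + c01 * (1 - f[:, 0]) * f[:, 1] + c11 * f[:, 0] * f[:, 1])
    gx = ((c10 - c00) * (1 - f[:, 1]) + (c11 - c01) * f[:, 1]) / res
    gy = ((c01 - c00) * (1 - f[:, 0]) + (c11 - c10) * f[:, 0]) / res
    return val, np.stack([gx, gy], axis=1)

def register_scan(sdf, scan_xy, pose, res=0.05, iters=20, outlier_thresh=0.15):
    """Refine pose = (x, y, theta) so the scan (N x 2, sensor frame) fits the SDF."""
    x, y, th = pose
    for _ in range(iters):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        world = scan_xy @ R.T + np.array([x, y])     # scan points in map frame
        r, grad = interp_sdf(sdf, world, res)        # residual = SDF at each point
        keep = np.abs(r) < outlier_thresh            # drop random outliers
        r, grad, pts = r[keep], grad[keep], scan_xy[keep]
        # Jacobian of the SDF residual w.r.t. (x, y, theta)
        dth = (grad[:, 0] * (-s * pts[:, 0] - c * pts[:, 1])
               + grad[:, 1] * (c * pts[:, 0] - s * pts[:, 1]))
        J = np.column_stack([grad[:, 0], grad[:, 1], dth])
        delta = np.linalg.solve(J.T @ J + 1e-6 * np.eye(3), -J.T @ r)
        x, y, th = x + delta[0], y + delta[1], th + delta[2]
    return np.array([x, y, th])

# Toy usage: a synthetic distance field of a wall at y = 1.0 m on a 0.05 m grid.
res = 0.05
yy, xx = np.meshgrid(np.arange(40) * res, np.arange(40) * res, indexing="ij")
sdf = np.abs(yy - 1.0)                               # unsigned here, fine for the demo
scan = np.column_stack([np.linspace(0.2, 1.6, 50), np.full(50, 0.95)])
print(register_scan(sdf, scan, pose=(0.0, 0.0, 0.0), res=res))
```

In the paper's pipeline such an alignment would be run per scan against the current SDF submap, with the resulting poses used for mapping and, on the merged SDF map, for pure localization.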
Related papers
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- LGSDF: Continual Global Learning of Signed Distance Fields Aided by Local Updating [22.948360879064758]
Implicit reconstruction of ESDF (Euclidean Signed Distance Field) involves training a neural network to regress the signed distance from any point to the nearest obstacle.
We propose LGSDF, an ESDF continual Global learning algorithm aided by Local updating.
The results on multiple scenes show that LGSDF can construct more accurate ESDF maps and meshes compared with SOTA (State Of The Art) explicit and implicit mapping algorithms.
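As a concrete illustration of the implicit ESDF regression summarized above, the following is a hedged PyTorch sketch of a network that maps a 3D query point to its signed distance; the architecture, sampling, and training step are generic assumptions and do not reproduce LGSDF's local-update or continual-learning scheme.

```python
# Hedged sketch of implicit ESDF learning: an MLP regresses the signed distance
# from a query point to the nearest obstacle. Sizes and data are placeholders.
import torch
import torch.nn as nn

class ImplicitESDF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):                  # xyz: (N, 3) query points
        return self.net(xyz).squeeze(-1)     # predicted signed distance, shape (N,)

def train_step(model, opt, points, gt_sdf):
    """One supervised step: regress ground-truth signed distances at sampled points."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(points), gt_sdf)
    loss.backward()
    opt.step()
    return loss.item()

model = ImplicitESDF()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pts = torch.rand(1024, 3)                    # toy batch with fake distances,
sdf = torch.rand(1024)                       # only to show the interface
train_step(model, opt, pts, sdf)
```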
arXiv Detail & Related papers (2024-04-08T04:27:36Z)
- Volumetric Semantically Consistent 3D Panoptic Mapping [77.13446499924977]
We introduce an online 2D-to-3D semantic instance mapping algorithm aimed at generating semantic 3D maps suitable for autonomous agents in unstructured environments.
It introduces novel ways of integrating semantic prediction confidence during mapping, producing semantic and instance-consistent 3D regions.
The proposed method achieves accuracy superior to the state of the art on public large-scale datasets, improving on a number of widely used metrics.
arXiv Detail & Related papers (2023-09-26T08:03:10Z)
- DDF-HO: Hand-Held Object Reconstruction via Conditional Directed Distance Field [82.81337273685176]
DDF-HO is a novel approach leveraging Directed Distance Field (DDF) as the shape representation.
We randomly sample multiple rays and collect local to global geometric features for them by introducing a novel 2D ray-based feature aggregation scheme.
Experiments on synthetic and real-world datasets demonstrate that DDF-HO consistently outperforms all baseline methods by a large margin.
arXiv Detail & Related papers (2023-08-16T09:06:32Z)
- A Survey on Visual Map Localization Using LiDARs and Cameras [0.0]
We define visual map localization as a two-stage process.
At the stage of place recognition, the initial position of the vehicle in the map is determined by comparing the visual sensor output with a set of geo-tagged map regions of interest.
At the stage of map metric localization, the vehicle is tracked while it moves across the map by continuously aligning the visual sensors' output with the current area of the map that is being traversed.
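A hedged skeleton of that two-stage process is sketched below; every helper name and its trivial stand-in body is an assumption made for illustration, since the survey defines the stages but not a specific interface.

```python
# Hedged skeleton of two-stage visual map localization: place recognition for a
# coarse initial position, then continuous metric alignment against the map.
from dataclasses import dataclass
from typing import Iterable, Iterator, List

@dataclass
class Pose2D:
    x: float = 0.0
    y: float = 0.0
    yaw: float = 0.0

def match_score(frame: str, region: str) -> float:
    """Placeholder similarity; a real system would use image/LiDAR retrieval."""
    return -abs(len(frame) - len(region))

def align(frame: str, region: str, prior: Pose2D) -> Pose2D:
    """Placeholder alignment; a real system would run scan/image registration."""
    return prior

def place_recognition(frame: str, regions: List[str]) -> str:
    """Stage 1: pick the geo-tagged map region that best matches the sensor output."""
    return max(regions, key=lambda r: match_score(frame, r))

def metric_localization(frames: Iterable[str], region: str) -> Iterator[Pose2D]:
    """Stage 2: track by continuously aligning sensor output with the traversed area."""
    pose = Pose2D()
    for frame in frames:
        pose = align(frame, region, pose)
        yield pose

frames = ["frame_000", "frame_001", "frame_002"]
region = place_recognition(frames[0], ["downtown_block_a", "ring_road_b"])
poses = list(metric_localization(frames[1:], region))
```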
arXiv Detail & Related papers (2022-08-05T20:11:18Z)
- Satellite Image Based Cross-view Localization for Autonomous Vehicle [59.72040418584396]
This paper shows that by using an off-the-shelf high-definition satellite image as a ready-to-use map, we are able to achieve cross-view vehicle localization up to a satisfactory accuracy.
Our method is validated on KITTI and Ford Multi-AV Seasonal datasets as ground view and Google Maps as the satellite view.
arXiv Detail & Related papers (2022-07-27T13:16:39Z)
- PlaneSDF-based Change Detection for Long-term Dense Mapping [10.159737713094119]
We look into the problem of change detection based on a novel map representation, dubbed Plane Signed Distance Fields (PlaneSDF).
Given point clouds of the source and target scenes, we propose a three-step PlaneSDF-based change detection approach.
We evaluate our approach on both synthetic and real-world datasets and demonstrate its effectiveness via the task of changed object detection.
arXiv Detail & Related papers (2022-07-18T00:19:45Z)
- iSDF: Real-Time Neural Signed Distance Fields for Robot Perception [64.80458128766254]
iSDF is a continuous learning system for real-time signed distance field reconstruction.
It produces more accurate reconstructions and better approximations of collision costs and gradients.
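As a generic aside on why SDF maps are convenient for collision checking (this is not iSDF's code), a point's collision cost can be a hinge on its signed distance and the cost gradient follows the SDF gradient; the safety margin below is an arbitrary assumption.

```python
# Hedged illustration: collision cost and gradient from a signed distance value.
import numpy as np

def collision_cost(sdf_value: float, margin: float = 0.2) -> float:
    """Zero beyond `margin` from any surface; grows linearly when closer."""
    return max(0.0, margin - sdf_value)

def collision_cost_grad(sdf_value: float, sdf_grad: np.ndarray,
                        margin: float = 0.2) -> np.ndarray:
    """Gradient of the hinge cost w.r.t. the query point."""
    return -sdf_grad if sdf_value < margin else np.zeros_like(sdf_grad)

# Example: a point 5 cm from the nearest obstacle, SDF increasing along +x.
print(collision_cost(0.05))                                  # ~0.15
print(collision_cost_grad(0.05, np.array([1.0, 0.0, 0.0])))  # [-1.  0.  0.]
```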
arXiv Detail & Related papers (2022-04-05T15:48:39Z)
- FD-SLAM: 3-D Reconstruction Using Features and Dense Matching [18.577229381683434]
We propose an RGB-D SLAM system that uses dense frame-to-model odometry to build accurate sub-maps.
We incorporate a learning-based loop closure component based on 3-D features which further stabilises map building.
The approach can also scale to large scenes where other systems often fail.
arXiv Detail & Related papers (2022-03-25T18:58:46Z)
- OmniSLAM: Omnidirectional Localization and Dense Mapping for Wide-baseline Multi-camera Systems [88.41004332322788]
We present an omnidirectional localization and dense mapping system for a wide-baseline multiview stereo setup with ultra-wide field-of-view (FOV) fisheye cameras.
For more practical and accurate reconstruction, we first introduce improved and light-weighted deep neural networks for the omnidirectional depth estimation.
We integrate our omnidirectional depth estimates into the visual odometry (VO) and add a loop closing module for global consistency.
arXiv Detail & Related papers (2020-03-18T05:52:10Z)