Improved LiDAR Odometry and Mapping using Deep Semantic Segmentation and
Novel Outliers Detection
- URL: http://arxiv.org/abs/2403.03111v1
- Date: Tue, 5 Mar 2024 16:53:24 GMT
- Title: Improved LiDAR Odometry and Mapping using Deep Semantic Segmentation and
Novel Outliers Detection
- Authors: Mohamed Afifi, Mohamed ElHelw
- Abstract summary: We propose a novel framework for real-time LiDAR odometry and mapping based on the LOAM architecture for fast-moving platforms.
Our framework utilizes semantic information produced by a deep learning model to improve point-to-line and point-to-plane matching.
We study the effect of improving the matching process on the robustness of LiDAR odometry against high-speed motion.
- Score: 1.0334138809056097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perception is a key element for enabling intelligent autonomous navigation.
Understanding the semantics of the surrounding environment and accurate vehicle
pose estimation are essential capabilities for autonomous vehicles, including
self-driving cars and mobile robots that perform complex tasks. Fast-moving
platforms such as self-driving cars pose a hard challenge for localization and
mapping algorithms. In this work, we propose a novel framework for real-time
LiDAR odometry and mapping based on the LOAM architecture for fast-moving
platforms. Our framework utilizes semantic information produced by a deep
learning model to improve point-to-line and point-to-plane matching between
LiDAR scans and build a semantic map of the environment, leading to more
accurate motion estimation using LiDAR data. We observe that including semantic
information in the matching process introduces a new type of outlier match,
where matches occur between different objects of the same
semantic class. To this end, we propose a novel algorithm that explicitly
identifies and discards potential outliers in the matching process. In our
experiments, we study the effect of improving the matching process on the
robustness of LiDAR odometry against high-speed motion. Our experimental
evaluations on the KITTI dataset demonstrate that utilizing semantic information
and rejecting outliers significantly enhance the robustness of LiDAR odometry
and mapping when there are large gaps between scan acquisition poses, which is
typical for fast-moving platforms.
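The abstract does not give implementation details, but the core idea of class-constrained matching with outlier rejection can be illustrated with a minimal sketch. The Python snippet below is an assumption-laden illustration, not the authors' code: it restricts nearest-neighbour correspondence search to points that share a semantic label and discards matches whose distance is an outlier within their class (a simple stand-in for the paper's outlier-detection algorithm); the function name, thresholds, and MAD-based filter are illustrative choices.

# Minimal sketch (not the authors' implementation) of semantic-constrained
# correspondence search between two LiDAR scans. Assumes each point carries a
# semantic label from a segmentation model; thresholds are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def semantic_correspondences(src_pts, src_labels, tgt_pts, tgt_labels,
                             max_dist=1.0, mad_factor=3.0):
    """Return (src_idx, tgt_idx) pairs matched within the same semantic class.

    src_pts, tgt_pts       : (N, 3) / (M, 3) NumPy arrays of LiDAR points
    src_labels, tgt_labels : per-point semantic class ids (NumPy arrays)
    max_dist               : hard cap on correspondence distance (assumed metres)
    mad_factor             : matches farther than median + mad_factor * MAD of
                             their class are treated as outliers (e.g. matches
                             that jump to a different object of the same class)
    """
    pairs = []
    for cls in np.intersect1d(np.unique(src_labels), np.unique(tgt_labels)):
        src_idx = np.flatnonzero(src_labels == cls)
        tgt_idx = np.flatnonzero(tgt_labels == cls)
        tree = cKDTree(tgt_pts[tgt_idx])            # per-class search tree
        dist, nn = tree.query(src_pts[src_idx])     # nearest neighbour per source point
        keep = dist < max_dist
        if keep.any():
            med = np.median(dist[keep])
            mad = np.median(np.abs(dist[keep] - med)) + 1e-9
            keep &= dist < med + mad_factor * mad   # reject distance outliers
        pairs.extend(zip(src_idx[keep], tgt_idx[nn[keep]]))
    return np.array(pairs)

In a LOAM-style pipeline, the retained pairs would then form the point-to-line and point-to-plane residuals used in the scan-to-map pose optimization.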
Related papers
- STCMOT: Spatio-Temporal Cohesion Learning for UAV-Based Multiple Object Tracking [13.269416985959404]
Multiple object tracking (MOT) in Unmanned Aerial Vehicle (UAV) videos is important for diverse applications in computer vision.
We propose a novel Spatio-Temporal Cohesion Multiple Object Tracking framework (STCMOT).
We use historical embedding features to model the representation of ReID and detection features in a sequential order.
Our framework sets a new state-of-the-art performance in MOTA and IDF1 metrics.
arXiv Detail & Related papers (2024-09-17T14:34:18Z) - Off-Road LiDAR Intensity Based Semantic Segmentation [11.684330305297523]
Learning-based LiDAR semantic segmentation utilizes machine learning techniques to automatically classify objects in LiDAR point clouds.
We address this problem by harnessing the LiDAR intensity parameter to enhance object segmentation in off-road environments.
Our approach was evaluated on the RELLIS-3D dataset and yielded promising results as a preliminary analysis, with improved mIoU for the classes "puddle" and "grass".
arXiv Detail & Related papers (2024-01-02T21:27:43Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation
around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Network (Faster R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path to implementing the feature recognition algorithms and integrating them into the spacecraft Guidance, Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z) - Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense
Forest Canopy [48.51396198176273]
We propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments.
We detect and model tree trunks and ground planes from LiDAR data, which are associated across scans and used to constrain robot poses as well as tree trunk models.
A drift-compensation mechanism is designed to minimize the odometry drift using semantic SLAM outputs in real time, while maintaining planner optimality and controller stability.
arXiv Detail & Related papers (2021-09-14T07:24:53Z) - Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing
Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z) - IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z) - Radar-based Dynamic Occupancy Grid Mapping and Object Detection [55.74894405714851]
In recent years, the classical occupancy grid map approach has been extended to dynamic occupancy grid maps.
This paper presents the further development of a previous approach.
The data of multiple radar sensors are fused, and a grid-based object tracking and mapping method is applied.
arXiv Detail & Related papers (2020-08-09T09:26:30Z) - Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for
Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)