Off-Road LiDAR Intensity Based Semantic Segmentation
- URL: http://arxiv.org/abs/2401.01439v1
- Date: Tue, 2 Jan 2024 21:27:43 GMT
- Title: Off-Road LiDAR Intensity Based Semantic Segmentation
- Authors: Kasi Viswanath, Peng Jiang, Sujit PB, Srikanth Saripalli
- Abstract summary: Learning-based LiDAR semantic segmentation utilizes machine learning techniques to automatically classify objects in LiDAR point clouds.
We address this problem by harnessing the LiDAR intensity parameter to enhance object segmentation in off-road environments.
Our approach was evaluated on the RELLIS-3D dataset and yielded promising results as a preliminary analysis, with improved mIoU for the classes "puddle" and "grass" compared to more complex deep learning-based benchmarks.
- Score: 11.684330305297523
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LiDAR is used in autonomous driving to provide 3D spatial information and
enable accurate perception in off-road environments, aiding in obstacle
detection, mapping, and path planning. Learning-based LiDAR semantic
segmentation utilizes machine learning techniques to automatically classify
objects and regions in LiDAR point clouds. Learning-based models struggle in
off-road environments due to the presence of diverse objects with varying
colors, textures, and undefined boundaries, which can lead to difficulties in
accurately classifying and segmenting objects using traditional geometric-based
features. In this paper, we address this problem by harnessing the LiDAR
intensity parameter to enhance object segmentation in off-road environments.
Our approach was evaluated on the RELLIS-3D dataset and, as a preliminary
analysis, yielded promising results with improved mIoU for the classes
"puddle" and "grass" compared to more complex deep learning-based benchmarks.
The methodology was verified for compatibility across both Velodyne and Ouster
LiDAR systems, ensuring its cross-platform applicability. This analysis
advocates for incorporating calibrated intensity as a supplementary input,
aiming to enhance the prediction accuracy of learning-based semantic
segmentation frameworks.
https://github.com/MOONLABIISERB/lidar-intensity-predictor/tree/main
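The abstract advocates feeding calibrated intensity to a segmentation network as a supplementary per-point input. The sketch below is not taken from the linked repository; it only illustrates one plausible way to stack intensity with the xyz coordinates as an extra feature channel. The function name `build_point_features`, the array shapes, and the min-max normalisation are illustrative assumptions.

```python
# Minimal sketch: append calibrated LiDAR intensity as a fourth per-point
# feature channel alongside the xyz coordinates. Illustrative only; this is
# not the authors' implementation.
import numpy as np

def build_point_features(points_xyz: np.ndarray, intensity: np.ndarray) -> np.ndarray:
    """Stack xyz coordinates with per-point intensity into an (N, 4) array.

    points_xyz : (N, 3) float array of x, y, z coordinates
    intensity  : (N,) float array of calibrated intensity / reflectivity
    """
    intensity = intensity.astype(np.float32)
    # Normalise intensity to [0, 1] so it is on a scale comparable to the coordinates.
    denom = max(float(intensity.max() - intensity.min()), 1e-6)
    intensity_norm = (intensity - intensity.min()) / denom
    return np.hstack([points_xyz.astype(np.float32), intensity_norm[:, None]])

# Example usage with random data standing in for a real scan:
points = np.random.rand(1000, 3) * 50.0
raw_intensity = np.random.rand(1000) * 255.0
features = build_point_features(points, raw_intensity)  # shape (1000, 4)
```

In practice the resulting (N, 4) array would replace the plain (N, 3) coordinate array wherever a point-wise segmentation model consumes per-point features.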
Related papers
- Object-Oriented Material Classification and 3D Clustering for Improved Semantic Perception and Mapping in Mobile Robots [6.395242048226456]
We propose a complement-aware deep learning approach for RGB-D-based material classification built on top of an object-oriented pipeline.
We show a significant improvement in material classification and 3D clustering accuracy compared to state-of-the-art approaches for 3D semantic scene mapping.
arXiv Detail & Related papers (2024-07-08T16:25:01Z)
- Reflectivity Is All You Need!: Advancing LiDAR Semantic Segmentation [11.684330305297523]
This paper explores the advantages of employing calibrated intensity (also referred to as reflectivity) within learning-based LiDAR semantic segmentation frameworks.
We show that replacing intensity with reflectivity results in a 4% improvement in mean Intersection over Union for off-road scenarios.
We demonstrate the potential benefits of using calibrated intensity for semantic segmentation in urban environments.
arXiv Detail & Related papers (2024-03-19T22:57:03Z)
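As a rough illustration of what calibrated intensity (reflectivity) involves, the snippet below normalises raw intensity for the expected quadratic range falloff. The exact calibration used in the paper (for example, handling of incidence angle or sensor-specific response curves) is not described here, so the reference range `r_ref` and the 1/r^2 model are assumptions.

```python
# Rough sketch of range-based intensity calibration ("reflectivity").
# Assumes returned power falls off roughly with the square of range;
# r_ref is an illustrative reference distance, not a value from the paper.
import numpy as np

def range_normalized_intensity(points_xyz: np.ndarray,
                               intensity: np.ndarray,
                               r_ref: float = 10.0) -> np.ndarray:
    """Scale raw intensity by (r / r_ref)^2 to reduce its range dependence."""
    r = np.linalg.norm(points_xyz, axis=1)  # per-point range in metres
    return intensity * (r / r_ref) ** 2

points = np.random.rand(500, 3) * 40.0
raw = np.random.rand(500) * 255.0
reflectivity = range_normalized_intensity(points, raw)
```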
- Improved LiDAR Odometry and Mapping using Deep Semantic Segmentation and Novel Outliers Detection [1.0334138809056097]
We propose a novel framework for real-time LiDAR odometry and mapping based on LOAM architecture for fast moving platforms.
Our framework utilizes semantic information produced by a deep learning model to improve point-to-line and point-to-plane matching.
We study the effect of improving the matching process on the robustness of LiDAR odometry against high-speed motion.
arXiv Detail & Related papers (2024-03-05T16:53:24Z)
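To make the idea of semantics-aided matching concrete, here is a hypothetical sketch of a point-to-plane residual that is only accepted when the query point and the matched plane carry the same semantic label. This illustrates the general technique of rejecting cross-class correspondences as outliers; it is not the paper's implementation.

```python
# Hypothetical illustration of semantic-aware point-to-plane matching:
# a residual is only computed when the query point and the plane's
# supporting points share the same semantic label.
import numpy as np

def point_to_plane_residual(p, plane_point, plane_normal):
    """Signed distance from point p to the plane (plane_point, plane_normal)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(p - plane_point, n))

def semantic_residual(p, p_label, plane_point, plane_normal, plane_label):
    """Return the residual only if the semantic labels agree, otherwise None."""
    if p_label != plane_label:
        return None  # reject the correspondence as a likely outlier
    return point_to_plane_residual(p, plane_point, plane_normal)

# Example: a "ground"-labelled point matched against a ground plane.
res = semantic_residual(np.array([1.0, 2.0, 0.1]), "ground",
                        np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
                        "ground")
```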
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Attention-Guided Lidar Segmentation and Odometry Using Image-to-Point Cloud Saliency Transfer [6.058427379240697]
SalLiDAR is a saliency-guided 3D semantic segmentation model that integrates saliency information to improve segmentation performance.
SalLONet is a self-supervised saliency-guided LiDAR odometry network that uses the semantic and saliency predictions of SalLiDAR to achieve better odometry estimation.
arXiv Detail & Related papers (2023-08-28T06:22:10Z)
- Benchmarking the Robustness of LiDAR Semantic Segmentation Models [78.6597530416523]
In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions.
We propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy.
We design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications.
arXiv Detail & Related papers (2023-01-03T06:47:31Z)
- Weighted Bayesian Gaussian Mixture Model for Roadside LiDAR Object Detection [0.5156484100374059]
Background modeling is widely used for intelligent surveillance systems to detect moving targets by subtracting the static background components.
Most roadside LiDAR object detection methods filter out foreground points by comparing new data points to pre-trained background references.
In this paper, we transform the raw LiDAR data into a structured representation based on the elevation and azimuth value of each LiDAR point.
The proposed method was compared against two state-of-the-art roadside LiDAR background models, a computer vision benchmark, and deep learning baselines, evaluated at the point, object, and path levels under heavy traffic and challenging weather.
arXiv Detail & Related papers (2022-04-20T22:48:05Z)
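A minimal sketch of the elevation/azimuth structuring described in this entry: each LiDAR point is mapped to a 2D cell indexed by its azimuth and elevation angles. The bin counts and vertical field of view are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: project points into a structured grid indexed by
# azimuth and elevation. Grid resolution and vertical field of view are
# illustrative assumptions.
import numpy as np

def to_elevation_azimuth_grid(points_xyz, az_bins=360, el_bins=32,
                              el_min=-25.0, el_max=15.0):
    """Return per-point (azimuth_bin, elevation_bin) indices."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    r = np.linalg.norm(points_xyz, axis=1) + 1e-9
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0   # horizontal angle, [0, 360)
    elevation = np.degrees(np.arcsin(z / r))          # vertical angle in degrees
    az_idx = np.clip((azimuth / 360.0 * az_bins).astype(int), 0, az_bins - 1)
    el_idx = np.clip(((elevation - el_min) / (el_max - el_min) * el_bins).astype(int),
                     0, el_bins - 1)
    return az_idx, el_idx

pts = np.random.randn(1000, 3) * 20.0
az_idx, el_idx = to_elevation_azimuth_grid(pts)
```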
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- GANav: Group-wise Attention Network for Classifying Navigable Regions in Unstructured Outdoor Environments [54.21959527308051]
We present a new learning-based method for identifying safe and navigable regions in off-road terrains and unstructured environments from RGB images.
Our approach consists of classifying groups of terrain classes based on their navigability levels using coarse-grained semantic segmentation.
We show through extensive evaluations on the RUGD and RELLIS-3D datasets that our learning algorithm improves the accuracy of visual perception in off-road terrains for navigation.
arXiv Detail & Related papers (2021-03-07T02:16:24Z)
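For illustration, coarse-grained grouping by navigability can be expressed as a simple mapping from fine terrain classes to navigability groups. The class names and group assignments below are examples only, not GANav's actual grouping.

```python
# Illustrative (not GANav's actual) grouping of fine-grained terrain classes
# into coarse navigability groups for coarse-grained semantic segmentation.
NAVIGABILITY_GROUPS = {
    "smooth":   ["asphalt", "concrete"],
    "rough":    ["grass", "gravel", "dirt"],
    "bumpy":    ["mud", "rubble"],
    "obstacle": ["tree", "pole", "fence", "vehicle", "water"],
}

# Invert the mapping so each fine class points to its coarse group.
CLASS_TO_GROUP = {cls: grp for grp, classes in NAVIGABILITY_GROUPS.items()
                  for cls in classes}

def to_navigability_label(fine_label: str) -> str:
    """Map a fine-grained terrain class to its navigability group."""
    return CLASS_TO_GROUP.get(fine_label, "obstacle")  # default to non-navigable

print(to_navigability_label("grass"))  # -> "rough"
```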
- LiDAR-based Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
LiDAR-based panoptic segmentation aims to parse both objects and scenes in a unified manner.
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods.
arXiv Detail & Related papers (2020-11-24T08:44:46Z)
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.