Analysis of LiDAR Configurations on Off-road Semantic Segmentation Performance
- URL: http://arxiv.org/abs/2306.16551v1
- Date: Wed, 28 Jun 2023 20:41:45 GMT
- Title: Analysis of LiDAR Configurations on Off-road Semantic Segmentation Performance
- Authors: Jinhee Yu, Jingdao Chen, Lalitha Dabbiru, Christopher T. Goodin
- Abstract summary: This paper investigates the impact of LiDAR configuration shifts on the performance of 3D LiDAR point cloud semantic segmentation models.
A Cylinder3D model is trained and tested on simulated 3D LiDAR point cloud datasets created using the Mississippi State University Autonomous Vehicle Simulator (MAVS) and on 32- and 64-channel 3D LiDAR point clouds of the RELLIS-3D dataset collected in a real-world off-road environment.
- Score: 0.6882042556551609
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates the impact of LiDAR configuration shifts on the
performance of 3D LiDAR point cloud semantic segmentation models, a topic not
extensively studied before. We explore the effect of using different LiDAR
channels when training and testing a 3D LiDAR point cloud semantic segmentation
model, utilizing Cylinder3D for the experiments. A Cylinder3D model is trained
and tested on simulated 3D LiDAR point cloud datasets created using the
Mississippi State University Autonomous Vehicle Simulator (MAVS) and 32- and
64-channel 3D LiDAR point clouds of the RELLIS-3D dataset collected in a
real-world off-road environment. Our experimental results demonstrate that
sensor and spatial domain shifts significantly impact the performance of
LiDAR-based semantic segmentation models. In the absence of spatial domain
changes between training and testing, models trained and tested on the same
sensor type generally exhibited better performance. Moreover, higher-resolution
sensors showed improved performance compared to lower-resolution ones.
However, results varied when spatial domain changes were present. In some
cases, the advantage of a sensor's higher resolution led to better performance
both with and without sensor domain shifts. In other instances, the higher
resolution resulted in overfitting within a specific domain, causing a lack of
generalization capability and decreased performance when tested on data with
different sensor configurations.
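The cross-configuration protocol described in the abstract (train on one sensor/domain combination, test on every other) can be sketched as follows. This is a minimal illustration only: the configuration names, the scoring function, and the penalty values are placeholders, not the paper's models or results.

```python
from itertools import product

# Hypothetical labels for the configurations studied: simulated (MAVS)
# and real (RELLIS-3D) point clouds at 32 and 64 channels.
CONFIGS = ["MAVS-32", "MAVS-64", "RELLIS-32", "RELLIS-64"]

def evaluate(train_cfg: str, test_cfg: str) -> float:
    """Placeholder for training Cylinder3D on `train_cfg` data and
    reporting mIoU on `test_cfg` data. A real run would train and
    evaluate the segmentation model; here we only flag which kinds
    of domain shift are present and apply dummy penalties."""
    sensor_shift = train_cfg.split("-")[1] != test_cfg.split("-")[1]
    spatial_shift = train_cfg.split("-")[0] != test_cfg.split("-")[0]
    # Dummy score: penalize each kind of shift (illustration only).
    return 1.0 - 0.2 * sensor_shift - 0.3 * spatial_shift

# Build the full cross-configuration result matrix.
results = {(tr, te): evaluate(tr, te) for tr, te in product(CONFIGS, CONFIGS)}
for (tr, te), score in sorted(results.items()):
    print(f"train={tr:10s} test={te:10s} score={score:.2f}")
```

Reading the resulting matrix row by row shows the two effects the paper separates: the diagonal (no shift) versus off-diagonal cells with sensor-only, spatial-only, or combined shifts.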
Related papers
- Study of Dropout in PointPillars with 3D Object Detection [0.0]
3D object detection is critical for autonomous driving, leveraging deep learning techniques to interpret LiDAR data.
This study provides an analysis of enhancing the performance of PointPillars model under various dropout rates.
arXiv Detail & Related papers (2024-09-01T09:30:54Z)
- Revisiting Cross-Domain Problem for LiDAR-based 3D Object Detection [5.149095033945412]
We deeply analyze the cross-domain performance of the state-of-the-art models.
We observe that most models will overfit the training domains and it is challenging to adapt them to other domains directly.
We propose additional evaluation metrics -- the side-view and front-view AP -- to better analyze the core issues of the methods' heavy drops in accuracy levels.
arXiv Detail & Related papers (2024-08-22T19:52:44Z)
- Improving LiDAR 3D Object Detection via Range-based Point Cloud Density Optimization [13.727464375608765]
Existing 3D object detectors tend to perform better on point cloud regions close to the LiDAR sensor than on regions farther away.
We observe that there is a learning bias in detection models towards the dense objects near the sensor and show that the detection performance can be improved by simply manipulating the input point cloud density at different distance ranges.
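The density manipulation this abstract describes can be sketched as below. This is a hedged illustration of the general idea (randomly dropping near-range points to reduce the near/far density imbalance), not the paper's actual optimization; the function name, range threshold, and keep ratio are assumptions.

```python
import numpy as np

def rebalance_density(points: np.ndarray, near_range: float = 20.0,
                      keep_ratio: float = 0.5, seed: int = 0) -> np.ndarray:
    """Randomly drop a fraction of points closer than `near_range` metres,
    reducing the density imbalance between near and far regions.
    `points` is an (N, 3+) array with x, y, z in the first columns."""
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, :3], axis=1)
    near = dist < near_range
    # Keep all far points; keep each near point with probability keep_ratio.
    keep = ~near | (rng.random(len(points)) < keep_ratio)
    return points[keep]
```

Applying this at training time exposes the detector to sparser near-range objects, which is one simple way to counter the learning bias toward dense nearby regions.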
arXiv Detail & Related papers (2023-06-09T04:11:43Z)
- Instant Domain Augmentation for LiDAR Semantic Segmentation [10.250046817380458]
This paper presents a fast and flexible LiDAR augmentation method for the semantic segmentation task, called 'LiDomAug'.
Our on-demand augmentation module runs at 330 FPS, so it can be seamlessly integrated into the data loader in the learning framework.
arXiv Detail & Related papers (2023-03-25T06:59:12Z)
- LiDAR-CS Dataset: LiDAR Point Cloud Dataset with Cross-Sensors for 3D Object Detection [36.77084564823707]
Deep learning methods heavily rely on annotated data and often face domain generalization issues.
LiDAR-CS dataset is the first dataset that addresses the sensor-related gaps in the domain of 3D object detection in real traffic.
arXiv Detail & Related papers (2023-01-29T19:10:35Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
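The cylindrical partition underlying this line of work can be sketched as a coordinate transform followed by voxel binning. The sketch below assumes nothing beyond the standard Cartesian-to-cylindrical mapping; the grid sizes and range bounds are illustrative defaults, not the paper's settings.

```python
import numpy as np

def cartesian_to_cylindrical(xyz: np.ndarray) -> np.ndarray:
    """Map (x, y, z) points to (rho, phi, z). Voxelizing in this space
    yields a cylindrical partition: near-sensor regions get finer cells,
    far regions coarser ones, matching LiDAR's radial density falloff."""
    rho = np.linalg.norm(xyz[:, :2], axis=1)   # radial distance
    phi = np.arctan2(xyz[:, 1], xyz[:, 0])     # azimuth in [-pi, pi]
    return np.stack([rho, phi, xyz[:, 2]], axis=1)

def cylindrical_voxel_index(cyl: np.ndarray, grid=(480, 360, 32),
                            rho_max=50.0, z_min=-4.0, z_max=2.0) -> np.ndarray:
    """Assign each (rho, phi, z) point an integer voxel index on a
    cylindrical grid (sizes here are placeholders for illustration)."""
    r_idx = np.clip(cyl[:, 0] / rho_max * grid[0], 0, grid[0] - 1).astype(int)
    p_idx = np.clip((cyl[:, 1] + np.pi) / (2 * np.pi) * grid[1],
                    0, grid[1] - 1).astype(int)
    z_idx = np.clip((cyl[:, 2] - z_min) / (z_max - z_min) * grid[2],
                    0, grid[2] - 1).astype(int)
    return np.stack([r_idx, p_idx, z_idx], axis=1)
```

A 3D convolution network then operates on features aggregated per cylindrical voxel rather than per Cartesian cube.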
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework at KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- PLUME: Efficient 3D Object Detection from Stereo Images [95.31278688164646]
Existing methods tackle the problem in two steps: first, depth estimation is performed; a pseudo-LiDAR point cloud representation is computed from the depth estimates; and then object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
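The pseudo-LiDAR step in the two-stage pipeline above is standard pinhole back-projection, which can be sketched as follows. The intrinsics are caller-supplied; the values in the test are illustrative, not from any real camera.

```python
import numpy as np

def depth_to_pseudo_lidar(depth: np.ndarray, fx: float, fy: float,
                          cx: float, cy: float) -> np.ndarray:
    """Back-project an (H, W) depth map into an (H*W, 3) pseudo-LiDAR
    point cloud using the pinhole camera model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```

PLUME's contribution is to avoid this explicit intermediate representation by unifying depth estimation and detection in one metric space, which is where its inference-time savings come from.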
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- Complete & Label: A Domain Adaptation Approach to Semantic Segmentation of LiDAR Point Clouds [49.47017280475232]
We study an unsupervised domain adaptation problem for the semantic labeling of 3D point clouds.
We take a Complete and Label approach to recover the underlying surfaces before passing them to a segmentation network.
The recovered 3D surfaces serve as a canonical domain, from which semantic labels can transfer across different LiDAR sensors.
arXiv Detail & Related papers (2020-07-16T17:42:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.