Road Segmentation on low resolution Lidar point clouds for autonomous
vehicles
- URL: http://arxiv.org/abs/2005.13102v1
- Date: Wed, 27 May 2020 00:38:39 GMT
- Title: Road Segmentation on low resolution Lidar point clouds for autonomous
vehicles
- Authors: Leonardo Gigli, B Ravi Kiran, Thomas Paul, Andres Serna, Nagarjuna
Vemuri, Beatriz Marcotegui, Santiago Velasco-Forero
- Abstract summary: We evaluate the effect of subsampling image-based representations of dense point clouds on the accuracy of the road segmentation task.
We introduce the local normal vector, together with the LIDAR's spherical coordinates, as an input channel to existing LoDNN architectures.
- Score: 3.6020689500145653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud datasets for perception tasks in the context of autonomous
driving often rely on high resolution 64-layer Light Detection and Ranging
(LIDAR) scanners. These scanners are expensive to deploy on real-world
autonomous driving sensor architectures, which usually employ 16/32-layer
LIDARs. We evaluate the effect of subsampling image-based representations of
dense point clouds on the accuracy of the road segmentation task. In our
experiments, low resolution 16/32-layer LIDAR point clouds are simulated by
subsampling the original 64-layer data, which is subsequently transformed into
feature maps in the Bird's Eye View (BEV) and Spherical View (SV)
representations of the point cloud. We introduce the local normal vector,
together with the LIDAR's spherical coordinates, as an input channel to
existing LoDNN architectures. We demonstrate that this local normal feature,
in conjunction with classical features, not only improves performance for
binary road segmentation on full resolution point clouds, but also reduces the
loss of accuracy incurred when subsampling dense point clouds, compared to
using classical features alone. We assess our method with several experiments
on two datasets: the KITTI road segmentation benchmark and the recently
released SemanticKITTI dataset.
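As an illustration of the pipeline described above, the sketch below simulates a low resolution scan by ring subsampling of a 64-layer cloud, projects it into a spherical-view range image, and derives a per-pixel normal channel. This is a minimal reconstruction from the abstract alone, not the authors' code: the `ring_ids` array, the vertical field of view (typical of KITTI's HDL-64E), the image size, and the gradient-based normal estimate are all assumptions.

```python
import numpy as np

def subsample_rings(points, ring_ids, keep_every=4):
    """Simulate a lower-resolution LIDAR (e.g. 16 of 64 layers) by
    keeping every `keep_every`-th ring of a dense 64-layer scan."""
    mask = (ring_ids % keep_every) == 0
    return points[mask], ring_ids[mask]

def spherical_view(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) point cloud onto an (h, w) spherical-view
    range image; points sharing a pixel simply overwrite each other."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation angle
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = (((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int)) % w
    v = np.clip((up - pitch) / (up - down) * h, 0, h - 1).astype(int)
    range_img = np.zeros((h, w), dtype=np.float32)
    xyz_img = np.zeros((h, w, 3), dtype=np.float32)
    range_img[v, u] = r
    xyz_img[v, u] = points
    return range_img, xyz_img

def normal_channel(xyz_img):
    """Approximate per-pixel surface normals as the cross product of
    image-space gradients of the xyz image (an assumption; the paper's
    exact normal computation is not given in this summary)."""
    du = np.gradient(xyz_img, axis=1)  # horizontal neighbour differences
    dv = np.gradient(xyz_img, axis=0)  # vertical neighbour differences
    n = np.cross(du, dv)
    return n / np.maximum(np.linalg.norm(n, axis=2, keepdims=True), 1e-8)
```

Stacking `range_img` with the three components of `normal_channel(xyz_img)` yields a spherical-view input tensor in the spirit of the paper's "classical features plus local normal" channels; the exact channel set fed to LoDNN is not specified in this summary.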
Related papers
- EEPNet-V2: Patch-to-Pixel Solution for Efficient Cross-Modal Registration between LiDAR Point Cloud and Camera Image [15.975048012603914]
We propose a framework that projects point clouds into several 2D representations for matching with camera images.
To tackle the challenges of cross-modal differences and the limited overlap between LiDAR point clouds and images in the image matching task, we introduce a multi-scale feature extraction network.
We validate the performance of our model through experiments on the KITTI and nuScenes datasets.
arXiv Detail & Related papers (2025-03-19T15:04:01Z)
- P2P-Bridge: Diffusion Bridges for 3D Point Cloud Denoising [81.92854168911704]
We tackle the task of point cloud denoising through a novel framework that adapts Diffusion Schrödinger bridges to point clouds.
Experiments on object datasets show that P2P-Bridge achieves significant improvements over existing methods.
arXiv Detail & Related papers (2024-08-29T08:00:07Z)
- FASTC: A Fast Attentional Framework for Semantic Traversability Classification Using Point Cloud [7.711666704468952]
We address the problem of traversability assessment using point clouds.
We propose a pillar feature extraction module that utilizes PointNet to capture features from point clouds organized in vertical volumes.
We then propose a new temporal attention module to fuse multi-frame information, which can properly handle the varying density problem of LIDAR point clouds.
arXiv Detail & Related papers (2024-06-24T12:01:55Z)
- Improved Multi-Scale Grid Rendering of Point Clouds for Radar Object Detection Networks [3.3787383461150045]
The transfer from irregular point cloud data to a dense grid structure is often associated with a loss of information.
We propose a novel architecture, multi-scale KPPillarsBEV, that aims to mitigate the negative effects of grid rendering.
arXiv Detail & Related papers (2023-05-25T08:26:42Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- I2P-Rec: Recognizing Images on Large-scale Point Cloud Maps through Bird's Eye View Projections [18.7557037030769]
Place recognition is an important technique for autonomous cars to achieve full autonomy.
We propose the I2P-Rec method to solve the problem by transforming the cross-modal data into the same modality (a bird's-eye-view projection; see the sketch after this list).
With only a small set of training data, I2P-Rec achieves recall rates at Top-1% over 80% and 90% when localizing monocular and stereo images on point cloud maps.
arXiv Detail & Related papers (2023-03-02T07:56:04Z)
- LiDAR-based 4D Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
We extend DS-Net to 4D panoptic LiDAR segmentation through temporally unified instance clustering on aligned LiDAR frames.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods in both tasks.
arXiv Detail & Related papers (2022-03-14T15:25:42Z)
- PCSCNet: Fast 3D Semantic Segmentation of LiDAR Point Cloud for Autonomous Car using Point Convolution and Sparse Convolution Network [8.959391124399925]
We propose PCSCNet, a fast voxel-based semantic segmentation model using point convolution and 3D sparse convolution.
The proposed model is designed to outperform alternatives at both high and low voxel resolutions, using point convolution-based feature extraction.
arXiv Detail & Related papers (2022-02-21T08:31:37Z)
- LiDAR-based Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
LiDAR-based panoptic segmentation aims to parse both objects and scenes in a unified manner.
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods.
arXiv Detail & Related papers (2020-11-24T08:44:46Z)
- ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z)
- MNEW: Multi-domain Neighborhood Embedding and Weighting for Sparse Point Clouds Segmentation [1.2380933178502298]
We propose MNEW, which combines multi-domain neighborhood embedding with attention weighting based on geometric distance, feature similarity, and neighborhood sparsity.
MNEW achieves top performance on sparse point clouds, which is important for LiDAR-based automated driving perception.
arXiv Detail & Related papers (2020-04-05T18:02:07Z)
- Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time, high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z)
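Several of the entries above, like the main paper's LoDNN input, rasterize the point cloud into a Bird's Eye View feature map. Below is a minimal sketch of that projection; the grid extents, cell size, and the density/mean-height/max-height channels are illustrative assumptions rather than any paper's exact configuration.

```python
import numpy as np

def bev_feature_map(points, x_range=(0.0, 40.0), y_range=(-10.0, 10.0),
                    cell=0.10):
    """Rasterize an (N, 3) point cloud into a (3, nx, ny) BEV grid with
    point-density, mean-height, and max-height channels."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1])
            & (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    nx = int(round((x_range[1] - x_range[0]) / cell))
    ny = int(round((y_range[1] - y_range[0]) / cell))
    # Flatten 2D cell indices so per-cell statistics reduce to bincounts.
    flat = (((x - x_range[0]) / cell).astype(int) * ny
            + ((y - y_range[0]) / cell).astype(int))
    density = np.bincount(flat, minlength=nx * ny).astype(np.float32)
    mean_z = (np.bincount(flat, weights=z, minlength=nx * ny)
              / np.maximum(density, 1.0))
    max_z = np.full(nx * ny, -np.inf)
    np.maximum.at(max_z, flat, z)   # running per-cell maximum height
    max_z[density == 0] = 0.0       # empty cells carry no height info
    return np.stack([density, mean_z, max_z]).reshape(3, nx, ny)
```

The `np.bincount`/`np.maximum.at` calls keep the rasterization linear in the number of points; subsampling the input cloud, as studied in the main paper, simply thins the density channel and coarsens the height statistics.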