Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental
Study
- URL: http://arxiv.org/abs/2004.11803v3
- Date: Fri, 24 Sep 2021 07:28:13 GMT
- Title: Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental
Study
- Authors: Larissa T. Triess, David Peter, Christoph B. Rist, J. Marius Zöllner
- Abstract summary: State-of-the-art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan.
A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections.
We demonstrate various techniques that boost performance and reduce runtime and memory requirements.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles need to have a semantic understanding of the
three-dimensional world around them in order to reason about their environment.
State-of-the-art methods use deep neural networks to predict semantic classes
for each point in a LiDAR scan. A powerful and efficient way to process LiDAR
measurements is to use two-dimensional, image-like projections. In this work,
we perform a comprehensive experimental study of image-based semantic
segmentation architectures for LiDAR point clouds. We demonstrate various
techniques that boost performance and reduce runtime and memory requirements.
First, we examine the effect of network size and suggest that much faster
inference times can be achieved at a very low cost in accuracy. Next, we
introduce an improved point cloud projection technique that does not suffer
from systematic occlusions. We use a cyclic padding mechanism that provides
context at the horizontal field-of-view boundaries. Third, we perform
experiments with a soft Dice loss function that directly optimizes for the
intersection-over-union metric. Finally, we propose a new kind of convolution
layer with reduced weight-sharing along one of the two spatial dimensions,
addressing the large difference in appearance along the vertical axis of a
LiDAR scan. Combining these methods, the final model achieves a 3.2% increase
in mIoU segmentation performance over the baseline while requiring only 42% of
the original inference time.
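The cyclic padding mentioned above exploits the fact that a 360-degree LiDAR range image is periodic along the horizontal axis, so the left and right image borders are actually adjacent in the world. A minimal sketch (not the paper's implementation; the shapes and the function name are illustrative assumptions):

```python
import numpy as np

def cyclic_pad(scan, pad):
    """Pad a projected LiDAR scan of shape (H, W, C) along the horizontal
    axis by wrapping columns around, since a 360-degree scan is periodic.
    A convolution applied afterwards then sees real context at the
    field-of-view boundaries instead of zeros."""
    left = scan[:, -pad:]   # columns copied from the right edge
    right = scan[:, :pad]   # columns copied from the left edge
    return np.concatenate([left, scan, right], axis=1)

# Tiny example: a 1x4 "scan" with one channel.
scan = np.arange(4).reshape(1, 4, 1)
padded = cyclic_pad(scan, 1)
print(padded[..., 0])  # [[3 0 1 2 3 0]]
```

In a deep-learning framework the same effect is typically available as a circular padding mode on the convolution layer; only the horizontal axis should wrap, while the vertical axis still needs ordinary padding.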
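The soft Dice loss referred to above makes the set-based Dice coefficient differentiable by replacing set sizes with sums over predicted probabilities. A minimal single-class sketch under that assumption (eps is a small smoothing constant, an implementation detail not taken from the paper):

```python
import numpy as np

def soft_dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss for one class.

    probs:   predicted probabilities in [0, 1]
    targets: binary ground-truth mask of the same shape

    The Dice coefficient 2|P∩T| / (|P| + |T|) is made differentiable by
    replacing set sizes with sums over probabilities; the loss is 1 - Dice,
    so minimizing it directly pushes up the overlap-based score.
    """
    inter = np.sum(probs * targets)
    denom = np.sum(probs) + np.sum(targets)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

t = np.array([1.0, 0.0, 1.0, 0.0])
print(soft_dice_loss(t, t))        # perfect prediction -> loss near 0
print(soft_dice_loss(1.0 - t, t))  # inverted prediction -> loss near 1
</imports_placeholder>
```

Because Dice is monotonically related to intersection-over-union, optimizing it is a common proxy for the mIoU metric reported in the paper; a multi-class variant would average this loss over the per-class probability maps.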
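The proposed convolution layer with reduced weight-sharing can be pictured as splitting the range image into horizontal bands and giving each band its own filter, since rows of a LiDAR scan correspond to fixed elevation angles with very different appearance. A naive single-channel sketch of that idea (the band split, loop-based filtering, and function name are illustrative assumptions, not the paper's layer):

```python
import numpy as np

def semi_local_conv(img, kernels):
    """Semi-local convolution sketch: the (H, W) input is split into
    len(kernels) equal vertical bands, and each band is filtered with its
    own k x k kernel. Weights are shared along the horizontal axis but
    not across bands. Zero padding keeps the output size equal to the
    input size. (Cross-correlation, as in most deep-learning layers.)"""
    h, w = img.shape
    band_h = h // len(kernels)
    k = kernels[0].shape[0]
    padded = np.pad(img, k // 2)
    out = np.empty_like(img, dtype=float)
    for b, kern in enumerate(kernels):
        for i in range(b * band_h, (b + 1) * band_h):
            for j in range(w):
                out[i, j] = np.sum(padded[i:i + k, j:j + k] * kern)
    return out

# Two bands with different kernels: an identity kernel and a doubling one.
img = np.ones((4, 4))
ident = np.zeros((3, 3))
ident[1, 1] = 1.0
out = semi_local_conv(img, [ident, 2.0 * ident])
print(out[0, 0], out[3, 3])  # 1.0 2.0
```

The example shows the defining property: the top band is filtered differently from the bottom band, which a standard convolution (one shared kernel) cannot do.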
Related papers
- LiOn-XA: Unsupervised Domain Adaptation via LiDAR-Only Cross-Modal Adversarial Training [61.26381389532653]
LiOn-XA is an unsupervised domain adaptation (UDA) approach that combines LiDAR-Only Cross-Modal (X) learning with Adversarial training for 3D LiDAR point cloud semantic segmentation.
Our experiments on three real-to-real adaptation scenarios demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-21T09:50:17Z)
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose a learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z)
- Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
Existing SSL-based methods suffer from severe training bias due to class imbalance and the long-tail distribution of point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to effectively shift the biased decision boundary.
arXiv Detail & Related papers (2024-01-13T04:16:40Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Improving Lidar-Based Semantic Segmentation of Top-View Grid Maps by Learning Features in Complementary Representations [3.0413873719021995]
We introduce a novel way to predict semantic information from sparse, single-shot LiDAR measurements in the context of autonomous driving.
The approach is aimed specifically at improving the semantic segmentation of top-view grid maps.
For each representation, a tailored deep learning architecture is developed to effectively extract semantic information.
arXiv Detail & Related papers (2022-03-02T14:49:51Z)
- Semi-Local Convolutions for LiDAR Scan Processing [0.42970700836450487]
A number of applications, such as mobile robots or automated vehicles, use LiDAR sensors to obtain detailed information about their surroundings.
Many methods use image-like projections to efficiently process these LiDAR measurements and deep convolutional neural networks to predict semantic classes for each point in the scan.
We propose semi-local convolution (SLC), a convolution layer with reduced weight-sharing along the vertical dimension.
arXiv Detail & Related papers (2021-11-30T18:09:43Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize 3D voxelization and 3D convolution networks.
We propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- S3Net: 3D LiDAR Sparse Semantic Segmentation Network [1.330528227599978]
S3Net is a novel convolutional neural network for LiDAR point cloud semantic segmentation.
It adopts an encoder-decoder backbone that consists of a Sparse Intra-channel Attention Module (SIntraAM) and a Sparse Inter-channel Attention Module (SInterAM).
arXiv Detail & Related papers (2021-03-15T22:15:24Z)
- Multi Projection Fusion for Real-time Semantic Segmentation of 3D LiDAR Point Clouds [2.924868086534434]
This paper introduces a novel approach for 3D point cloud semantic segmentation that exploits multiple projections of the point cloud.
Our Multi-Projection Fusion framework analyzes spherical and bird's-eye view projections using two separate, highly efficient 2D fully convolutional models.
arXiv Detail & Related papers (2020-11-03T19:40:43Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.