Analyzing Deep Learning Representations of Point Clouds for Real-Time
In-Vehicle LiDAR Perception
- URL: http://arxiv.org/abs/2210.14612v3
- Date: Mon, 15 May 2023 08:03:26 GMT
- Authors: Marc Uecker, Tobias Fleck, Marcel Pflugfelder, and J. Marius Zöllner
- Abstract summary: We propose a novel computational taxonomy of LiDAR point cloud representations used in modern deep neural networks for 3D point cloud processing.
Thereby, we uncover common advantages and limitations in terms of computational efficiency, memory requirements, and representational capacity.
- Score: 2.365702128814616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: LiDAR sensors are an integral part of modern autonomous vehicles as they
provide an accurate, high-resolution 3D representation of the vehicle's
surroundings. However, it is computationally difficult to make use of the
ever-increasing amounts of data from multiple high-resolution LiDAR sensors. As
frame-rates, point cloud sizes and sensor resolutions increase, real-time
processing of these point clouds must still extract semantics from this
increasingly precise picture of the vehicle's environment. One deciding factor
of the run-time performance and accuracy of deep neural networks operating on
these point clouds is the underlying data representation and the way it is
computed. In this work, we examine the relationship between the computational
representations used in neural networks and their performance characteristics.
To this end, we propose a novel computational taxonomy of LiDAR point cloud
representations used in modern deep neural networks for 3D point cloud
processing. Using this taxonomy, we perform a structured analysis of different
families of approaches. Thereby, we uncover common advantages and limitations
in terms of computational efficiency, memory requirements, and representational
capacity as measured by semantic segmentation performance. Finally, we provide
some insights and guidance for future developments in neural point cloud
processing methods.
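To make the abstract's notion of "data representation" concrete, here is a minimal sketch (not from the paper) of two representation families such a taxonomy typically covers: a sparse voxel grid and a spherical range-image projection. The synthetic random cloud and the helper names `voxelize` and `range_image` are assumptions for illustration only.

```python
import numpy as np

# Hypothetical toy point cloud: N x 3 array of (x, y, z) coordinates in meters.
rng = np.random.default_rng(0)
points = rng.uniform(-10.0, 10.0, size=(1000, 3))

def voxelize(points, voxel_size=0.5):
    """Map each point to an integer voxel index (a sparse grid representation).

    Memory scales with the number of *occupied* cells, not the full grid volume.
    """
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0)  # one occupancy entry per occupied voxel

def range_image(points, h=32, w=256):
    """Project points onto a spherical range image (a dense 2D representation)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-12
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))  # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((pitch - pitch.min()) / (np.ptp(pitch) + 1e-9) * (h - 1)).astype(int)
    img = np.full((h, w), np.inf)
    np.minimum.at(img, (v, u), r)                 # keep nearest return per pixel
    return img

vox = voxelize(points)
img = range_image(points)
```

The trade-off the paper's taxonomy organizes is visible even in this sketch: the voxel set stays compact for sparse scenes but loses sub-voxel detail, while the range image is dense and convolution-friendly but discards points that collide in the same pixel.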
Related papers
- Application of Tensorized Neural Networks for Cloud Classification [0.0]
Convolutional neural networks (CNNs) have gained widespread usage across various fields such as weather forecasting, computer vision, autonomous driving, and medical image analysis.
However, the practical implementation and commercialization of CNNs in these domains are hindered by challenges related to model sizes, overfitting, and computational time.
We propose a groundbreaking approach that involves tensorizing the dense layers in the CNN to reduce model size and computational time.
arXiv Detail & Related papers (2024-03-21T06:28:22Z)
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43x speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD show that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method that applies convolutions to radar detections represented as point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of the input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Ponder: Point Cloud Pre-training via Neural Rendering [93.34522605321514]
We propose a novel approach to self-supervised learning of point cloud representations via differentiable neural rendering.
The learned point-cloud representation can be easily integrated into various downstream tasks, including not only high-level tasks like 3D detection and segmentation, but also low-level tasks like 3D reconstruction and image rendering.
arXiv Detail & Related papers (2022-12-31T08:58:39Z)
- Self-Supervised Learning with Multi-View Rendering for 3D Point Cloud Analysis [33.31864436614945]
We propose a novel pre-training method for 3D point cloud models.
Our pre-training is self-supervised by a local pixel/point level correspondence loss and a global image/point cloud level loss.
These improved models outperform existing state-of-the-art methods on various datasets and downstream tasks.
arXiv Detail & Related papers (2022-10-28T05:23:03Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Continual learning on 3D point clouds with random compressed rehearsal [10.667104977730304]
This work proposes a novel neural network architecture capable of continual learning on 3D point cloud data.
We utilize point cloud structure properties for preserving a heavily compressed set of past data.
arXiv Detail & Related papers (2022-05-16T22:59:52Z)
- PCSCNet: Fast 3D Semantic Segmentation of LiDAR Point Cloud for Autonomous Car using Point Convolution and Sparse Convolution Network [8.959391124399925]
We propose a fast voxel-based semantic segmentation model using Point Convolution and 3D Sparse Convolution (PCSCNet).
The proposed model is designed to perform well at both high and low voxel resolutions using point convolution-based feature extraction.
arXiv Detail & Related papers (2022-02-21T08:31:37Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined, number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- Learning Rotation-Invariant Representations of Point Clouds Using Aligned Edge Convolutional Neural Networks [29.3830445533532]
Point cloud analysis is an area of increasing interest due to the development of 3D sensors that are able to rapidly measure the depth of scenes accurately.
Applying deep learning techniques to perform point cloud analysis is non-trivial due to the inability of these methods to generalize to unseen rotations.
To address this limitation, one usually has to augment the training data, which can lead to extra computation and require larger model complexity.
This paper proposes a new neural network, the Aligned Edge Convolutional Neural Network (AECNN), that learns a feature representation of point clouds relative to Local Reference Frames (LRFs).
arXiv Detail & Related papers (2021-01-02T17:36:00Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.