A Realism Metric for Generated LiDAR Point Clouds
- URL: http://arxiv.org/abs/2208.14958v1
- Date: Wed, 31 Aug 2022 16:37:57 GMT
- Title: A Realism Metric for Generated LiDAR Point Clouds
- Authors: Larissa T. Triess, Christoph B. Rist, David Peter, J. Marius Zöllner
- Abstract summary: This paper presents a novel metric to quantify the realism of LiDAR point clouds.
Relevant features are learned from real-world and synthetic point clouds by training on a proxy classification task.
In a series of experiments, we demonstrate the application of our metric to determine the realism of generated LiDAR data and compare the realism estimation of our metric to the performance of a segmentation model.
- Score: 2.6205925938720833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A considerable amount of research is concerned with the generation of
realistic sensor data. LiDAR point clouds are generated by complex simulations
or learned generative models. The generated data is usually exploited to enable
or improve downstream perception algorithms. Two major questions arise from
these procedures: First, how to evaluate the realism of the generated data?
Second, does more realistic data also lead to better perception performance?
This paper addresses both questions and presents a novel metric to quantify the
realism of LiDAR point clouds. Relevant features are learned from real-world
and synthetic point clouds by training on a proxy classification task. In a
series of experiments, we demonstrate the application of our metric to
determine the realism of generated LiDAR data and compare the realism
estimation of our metric to the performance of a segmentation model. We confirm
that our metric provides an indication for the downstream segmentation
performance.
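The abstract describes the metric only at a high level. As a rough illustration of the underlying idea, the sketch below trains a small network on a proxy classification task that distinguishes real-world from synthetic data sources and reads a realism score off the probability mass assigned to the real-world sources. The encoder architecture, class layout, and scoring rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a proxy-classification realism score for LiDAR point clouds.
# Assumptions (not from the paper): a PointNet-style max-pooled MLP encoder over
# local point neighborhoods, a classifier over data sources, and the summed
# softmax probability of the real-world sources used as the realism score.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalFeatureEncoder(nn.Module):
    """Encodes a local point neighborhood (N x 3) into a single feature vector."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) -> per-point features -> max-pool over the neighborhood
        return self.mlp(points).max(dim=1).values  # (B, feat_dim)


class RealismProxyClassifier(nn.Module):
    """Proxy task: classify which training data source a local region came from."""

    def __init__(self, num_sources, real_class_ids, feat_dim: int = 128):
        super().__init__()
        self.encoder = LocalFeatureEncoder(feat_dim)
        self.head = nn.Linear(feat_dim, num_sources)
        self.real_class_ids = real_class_ids  # list of indices of real-world sources

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(points))  # (B, num_sources) logits

    @torch.no_grad()
    def realism_score(self, points: torch.Tensor) -> torch.Tensor:
        # Score = probability mass assigned to the real-world sources, in [0, 1].
        probs = F.softmax(self.forward(points), dim=-1)
        return probs[:, self.real_class_ids].sum(dim=-1)  # (B,)


if __name__ == "__main__":
    # Toy usage: sources 0-1 are real-world datasets, source 2 is synthetic.
    model = RealismProxyClassifier(num_sources=3, real_class_ids=[0, 1])
    regions = torch.randn(4, 256, 3)      # 4 local regions, 256 points each
    labels = torch.tensor([0, 1, 2, 2])   # proxy-task labels: data source
    loss = F.cross_entropy(model(regions), labels)
    loss.backward()                       # one proxy-training step (optimizer omitted)
    print(model.realism_score(regions))   # per-region realism scores
```

In the paper, the features are learned from local regions of real-world and synthetic point clouds; here the PointNet-style encoder merely stands in for whatever feature extractor is trained on the proxy task.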
Related papers
- Are NeRFs ready for autonomous driving? Towards closing the real-to-simulation gap [6.393953433174051]
We propose a novel perspective for addressing the real-to-simulated data gap.
We conduct the first large-scale investigation into the real-to-simulated data gap in an autonomous driving setting.
Our results show notable improvements in model robustness to simulated data, even improving real-world performance in some cases.
arXiv Detail & Related papers (2024-03-24T11:09:41Z) - GAN-Based LiDAR Intensity Simulation [3.8697834534260447]
We train GANs to translate between camera images and LiDAR scans from real test drives.
We evaluate the LiDAR simulation by testing how well an object detection network generalizes between real and synthetic point clouds.
arXiv Detail & Related papers (2023-11-26T20:44:09Z) - Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z) - Bidirectional Knowledge Reconfiguration for Lightweight Point Cloud Analysis [74.00441177577295]
Point cloud analysis incurs substantial computational overhead, limiting its application on mobile or edge devices.
This paper explores feature distillation for lightweight point cloud models.
We propose bidirectional knowledge reconfiguration to distill informative contextual knowledge from the teacher to the student.
arXiv Detail & Related papers (2023-10-08T11:32:50Z) - Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z) - PCGen: Point Cloud Generator for LiDAR Simulation [10.692184635629792]
Existing methods generate data that is noisier and more complete than real point clouds.
We propose FPA raycasting and surrogate model raydrop.
With minimal training data, the surrogate model can generalize to different geographies and scenes.
Results show that object detection models trained on simulated data achieve results similar to models trained on real data.
arXiv Detail & Related papers (2022-10-17T04:13:21Z) - Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z) - GIPSO: Geometrically Informed Propagation for Online Adaptation in 3D LiDAR Segmentation [60.07812405063708]
3D point cloud semantic segmentation is fundamental for autonomous driving.
Most approaches in the literature neglect an important aspect: how to deal with domain shift when handling dynamic scenes.
This paper advances the state of the art in this research field.
arXiv Detail & Related papers (2022-07-20T09:06:07Z) - Quantifying point cloud realism through adversarially learned latent representations [0.38233569758620056]
This paper presents a novel approach to quantify the realism of local regions in LiDAR point clouds.
The resulting metric can assign a quality score to samples without requiring any task specific annotations.
As one important application, we demonstrate how the local realism score can be used for anomaly detection in point clouds.
arXiv Detail & Related papers (2021-09-24T07:17:27Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, the dynamic generator consists of an emission model and a transition model that simultaneously encode the spatial structure and the temporally continuous changes of rain streaks.
Different priors are designed for the labeled synthetic and unlabeled real data to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.