Contrastive Learning-Based Framework for Sim-to-Real Mapping of Lidar
Point Clouds in Autonomous Driving Systems
- URL: http://arxiv.org/abs/2312.15817v1
- Date: Mon, 25 Dec 2023 21:55:00 GMT
- Title: Contrastive Learning-Based Framework for Sim-to-Real Mapping of Lidar
Point Clouds in Autonomous Driving Systems
- Authors: Hamed Haghighi, Mehrdad Dianati, Kurt Debattista, Valentina Donzella
- Abstract summary: This paper focuses on sim-to-real mapping of Lidar point clouds; Lidar is a widely used perception sensor in automated driving systems.
We introduce a novel Contrastive-Learning-based Sim-to-Real mapping framework, namely CLS2R, inspired by the recent advancements in image-to-image translation techniques.
- Score: 10.964549009068344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Perception sensor models are essential elements of automotive simulation
environments; they also serve as powerful tools for creating synthetic datasets
to train deep learning-based perception models. Developing realistic perception
sensor models poses a significant challenge due to the large gap between
simulated sensor data and real-world sensor outputs, known as the sim-to-real
gap. To address this problem, learning-based models have emerged as promising
solutions in recent years, with unparalleled potential to map low-fidelity
simulated sensor data into highly realistic outputs. Motivated by this
potential, this paper focuses on sim-to-real mapping of the point clouds
produced by Lidar, a widely used perception sensor in automated driving
systems. We introduce a
novel Contrastive-Learning-based Sim-to-Real mapping framework, namely CLS2R,
inspired by the recent advancements in image-to-image translation techniques.
The proposed CLS2R framework employs a lossless representation of Lidar point
clouds, considering all essential Lidar attributes such as depth, reflectance,
and raydrop. We extensively evaluate the proposed framework, comparing it with
state-of-the-art image-to-image translation methods using a diverse range of
metrics to assess realness, faithfulness, and the impact on the performance of
a downstream task. Our results show that CLS2R demonstrates superior
performance across nearly all metrics. Source code is available at
https://github.com/hamedhaghighi/CLS2R.git.
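The abstract describes a "lossless representation" of Lidar point clouds that keeps depth, reflectance, and raydrop. A common way to build such an image-like representation is a spherical (range-image) projection; the sketch below illustrates the idea under assumed settings (the 64x1024 resolution, field-of-view values, and function name are illustrative choices, not taken from the CLS2R code).

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024,
                          fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) array of [x, y, z, reflectance] points into a
    (3, h, w) range image: depth, reflectance, and a hit/raydrop mask."""
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    y, z, refl = points[:, 1], points[:, 2], points[:, 3]
    depth = np.linalg.norm(points[:, :3], axis=1)

    yaw = np.arctan2(y, points[:, 0])  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(depth, 1e-8), -1.0, 1.0))

    # Normalise angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w            # column
    v = (1.0 - (pitch - fov_down_r) / fov) * h   # row
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    image = np.zeros((3, h, w), dtype=np.float32)
    image[0, v, u] = depth   # depth channel
    image[1, v, u] = refl    # reflectance channel
    image[2, v, u] = 1.0     # 1 = ray returned, 0 = raydrop
    return image
```

Pixels never hit by a ray stay zero in the mask channel, so raydrop is represented explicitly rather than discarded; handling of multiple points landing in the same pixel (e.g. keeping the nearest) is omitted for brevity.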
Related papers
- Automatically Learning Hybrid Digital Twins of Dynamical Systems [56.69628749813084]
Digital Twins (DTs) simulate the states and temporal dynamics of real-world systems.
DTs often struggle to generalize to unseen conditions in data-scarce settings.
In this paper, we present an evolutionary algorithm (HDTwinGen) that autonomously proposes, evaluates, and optimizes HDTwins.
arXiv Detail & Related papers (2024-10-31T07:28:22Z)
- SimGen: Simulator-conditioned Driving Scene Generation [50.03358485083602]
We introduce a simulator-conditioned scene generation framework called SimGen.
SimGen learns to generate diverse driving scenes by mixing data from the simulator and the real world.
It achieves superior generation quality and diversity while preserving controllability based on the text prompt and the layout pulled from a simulator.
arXiv Detail & Related papers (2024-06-13T17:58:32Z)
- Exploring Generative AI for Sim2Real in Driving Data Synthesis [6.769182994217369]
Driving simulators offer a solution by automatically generating various driving scenarios with corresponding annotations, but the simulation-to-reality (Sim2Real) domain gap remains a challenge.
This paper applies three different generative AI methods, leveraging semantic label maps from a driving simulator as a bridge for creating realistic datasets.
Experiments show that although GAN-based methods are adept at generating high-quality images when provided with manually annotated labels, ControlNet produces synthetic datasets with fewer artefacts and more structural fidelity when using simulator-generated labels.
arXiv Detail & Related papers (2024-04-14T01:23:19Z)
- Are NeRFs ready for autonomous driving? Towards closing the real-to-simulation gap [6.393953433174051]
We propose a novel perspective for addressing the real-to-simulated data gap.
We conduct the first large-scale investigation into the real-to-simulated data gap in an autonomous driving setting.
Our results show notable improvements in model robustness to simulated data, even improving real-world performance in some cases.
arXiv Detail & Related papers (2024-03-24T11:09:41Z)
- Review of the Learning-based Camera and Lidar Simulation Methods for Autonomous Driving Systems [7.90336803821407]
This paper reviews the current state-of-the-art in learning-based sensor simulation methods and validation approaches.
It focuses on two main types of perception sensors: cameras and Lidars.
arXiv Detail & Related papers (2024-01-29T16:56:17Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- PCGen: Point Cloud Generator for LiDAR Simulation [10.692184635629792]
Existing methods generate data that are noisier and more complete than real point clouds.
We propose FPA raycasting and surrogate model raydrop.
With minimal training data, the surrogate model can generalize to different geographies and scenes.
Results show that object detection models trained on simulated data can achieve results similar to those of models trained on real data.
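The summary above mentions a "surrogate model raydrop", i.e. a learned model that decides which simulated rays should be discarded to mimic a real sensor. As a hedged illustration only, a minimal per-ray drop model could look like the following; the logistic form, the feature set, and the function names are assumptions for this sketch, not PCGen's actual design.

```python
import numpy as np

def raydrop_probability(features, weights, bias):
    """Logistic surrogate: probability that a simulated ray is dropped,
    given per-ray features such as [depth, incidence_angle, intensity]."""
    logits = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))

def apply_raydrop(points, features, weights, bias, rng=None):
    """Remove simulated points whose predicted drop probability exceeds
    a uniform random draw, mimicking real-sensor raydrop."""
    if rng is None:
        rng = np.random.default_rng(0)
    p_drop = raydrop_probability(features, weights, bias)
    keep = rng.uniform(size=len(points)) >= p_drop
    return points[keep]
```

Sampling against the predicted probability, rather than thresholding it, preserves the stochastic character of real raydrop across repeated scans.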
arXiv Detail & Related papers (2022-10-17T04:13:21Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine learning inspired models and physics-based models.
We are using such models for real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.