Generative Adversarial Synthesis of Radar Point Cloud Scenes
- URL: http://arxiv.org/abs/2410.13526v1
- Date: Thu, 17 Oct 2024 13:14:25 GMT
- Title: Generative Adversarial Synthesis of Radar Point Cloud Scenes
- Authors: Muhammad Saad Nawaz, Thomas Dallmann, Torsten Schoen, Dirk Heberling
- Abstract summary: We introduce radar scene synthesis using GANs as an alternative to the real dataset acquisition and simulation-based approaches.
We train a PointNet++ based GAN model to generate realistic radar point cloud scenes and use a binary classifier to evaluate the performance of scenes generated using this model against a test set of real scenes.
- Abstract: For the validation and verification of automotive radars, datasets of realistic traffic scenarios are required, which, however, are laborious to acquire. In this paper, we introduce radar scene synthesis using GANs as an alternative to real dataset acquisition and simulation-based approaches. We train a PointNet++ based GAN model to generate realistic radar point cloud scenes and use a binary classifier to evaluate the performance of scenes generated using this model against a test set of real scenes. We demonstrate that our GAN model achieves similar performance (~87%) to the real scenes test set.
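The evaluation protocol in the abstract (a binary classifier scoring GAN-generated scenes against a held-out set of real scenes) can be illustrated with a minimal sketch. This is not the paper's PointNet++ pipeline: the scene data is synthetic toy data, the feature extractor is a plain per-scene statistic, and the classifier is a hand-rolled logistic regression, all introduced here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def scene_features(points):
    # Summarize one radar point cloud (N x 3: e.g. x, y, Doppler) by
    # simple statistics; the paper uses a learned point cloud encoder
    # (PointNet++) instead of these hand-crafted features.
    return np.concatenate([points.mean(axis=0), points.std(axis=0)])

# Toy stand-ins: "real" scenes and slightly shifted "generated" scenes.
real_scenes = [rng.normal(0.0, 1.0, size=(64, 3)) for _ in range(100)]
fake_scenes = [rng.normal(0.3, 1.2, size=(64, 3)) for _ in range(100)]

X = np.array([scene_features(s) for s in real_scenes + fake_scenes])
y = np.array([1] * len(real_scenes) + [0] * len(fake_scenes))

# Binary (real-vs-generated) classifier: logistic regression trained
# by batch gradient descent on the scene features.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(real)
    g = p - y                                 # gradient of log-loss
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) >= 0.5
accuracy = (pred == y).mean()
print(f"classifier accuracy: {accuracy:.2f}")
```

The harder the classifier finds it to separate the two sets (accuracy approaching chance), the closer the generated scenes are to the real distribution; a downstream-task score on generated scenes matching the real test set, as in the paper's ~87%, is the analogous idea.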
Related papers
- CERES: Critical-Event Reconstruction via Temporal Scene Graph Completion [7.542220697870245]
This paper proposes a method for on-demand scenario generation in simulation, grounded in real-world data.
By integrating scenarios derived from real-world datasets into the simulation, we enhance the plausibility and validity of testing.
arXiv Detail & Related papers (2024-10-17T13:02:06Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing Applications [20.589332431911842]
The dataset, called RASPNet, consists of 100 realistic scenarios compiled over a variety of topographies and land types from across the contiguous United States.
RASPNet intends to fill a prominent gap in the availability of a large-scale, realistic dataset that standardizes the evaluation of adaptive radar processing techniques.
arXiv Detail & Related papers (2024-06-14T00:07:52Z)
- Learning to Simulate Realistic LiDARs [66.7519667383175]
We introduce a pipeline for data-driven simulation of a realistic LiDAR sensor.
We show that our model can learn to encode realistic effects such as dropped points on transparent surfaces.
We use our technique to learn models of two distinct LiDAR sensors and use them to improve simulated LiDAR data accordingly.
arXiv Detail & Related papers (2022-09-22T13:12:54Z)
- CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- There and Back Again: Learning to Simulate Radar Data for Real-World Applications [21.995474023869388]
We learn a radar sensor model capable of synthesising faithful radar observations based on simulated elevation maps.
We adopt an adversarial approach to learning a forward sensor model from unaligned radar examples.
We demonstrate the efficacy of our approach by evaluating a down-stream segmentation model trained purely on simulated data in a real-world deployment.
arXiv Detail & Related papers (2020-11-29T15:49:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.