RadarGen: Automotive Radar Point Cloud Generation from Cameras
- URL: http://arxiv.org/abs/2512.17897v1
- Date: Fri, 19 Dec 2025 18:57:33 GMT
- Title: RadarGen: Automotive Radar Point Cloud Generation from Cameras
- Authors: Tomer Borreda, Fangqiang Ding, Sanja Fidler, Shengyu Huang, Or Litany,
- Abstract summary: We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form. We show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data.
- Score: 64.69976771710057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form that encodes spatial structure together with radar cross section (RCS) and Doppler attributes. A lightweight recovery step reconstructs point clouds from the generated maps. To better align generation with the visual scene, RadarGen incorporates BEV-aligned depth, semantic, and motion cues extracted from pretrained foundation models, which guide the stochastic generation process toward physically plausible radar patterns. Conditioning on images makes the approach broadly compatible, in principle, with existing visual datasets and simulation frameworks, offering a scalable direction for multimodal generative simulation. Evaluations on large-scale driving data show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data, marking a step toward unified generative simulation across sensing modalities.
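The abstract describes, but does not define, the BEV encoding and the lightweight recovery step. The sketch below is a minimal illustration of that representation, assuming a 128x128 grid over +/-50 m, per-cell averaging of RCS and Doppler, and a one-point-per-cell recovery rule; none of the names or parameters come from the paper.

```python
import numpy as np

# Hypothetical BEV grid: 128 x 128 cells over +/- 50 m (assumed, not from the paper).
GRID, EXTENT = 128, 50.0

def points_to_bev(points, rcs, doppler):
    """Rasterize radar points (N, 2) with per-point RCS and Doppler
    into a 3-channel BEV map: [occupancy, mean RCS, mean Doppler]."""
    bev = np.zeros((3, GRID, GRID), dtype=np.float32)
    counts = np.zeros((GRID, GRID), dtype=np.float32)
    ij = ((points + EXTENT) / (2 * EXTENT) * GRID).astype(int)
    ij = np.clip(ij, 0, GRID - 1)
    for (i, j), s, d in zip(ij, rcs, doppler):
        bev[0, i, j] = 1.0          # occupancy
        bev[1, i, j] += s           # accumulate RCS
        bev[2, i, j] += d           # accumulate Doppler
        counts[i, j] += 1
    nz = counts > 0
    bev[1][nz] /= counts[nz]        # mean RCS per occupied cell
    bev[2][nz] /= counts[nz]        # mean Doppler per occupied cell
    return bev

def bev_to_points(bev, thresh=0.5):
    """Recovery: one point per occupied cell, carrying the cell's
    RCS/Doppler values (an assumed stand-in for the paper's step)."""
    i, j = np.where(bev[0] > thresh)
    xy = (np.stack([i, j], axis=1) + 0.5) / GRID * 2 * EXTENT - EXTENT
    return xy, bev[1, i, j], bev[2, i, j]

pts = np.random.uniform(-EXTENT, EXTENT, size=(200, 2))
bev = points_to_bev(pts, np.random.randn(200), np.random.randn(200))
rec_xy, rec_rcs, rec_dop = bev_to_points(bev)
print(rec_xy.shape)
```

Averaging the attributes per cell is one plausible aggregation; the paper may rasterize RCS and Doppler differently.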
Related papers
- Sim2Radar: Toward Bridging the Radar Sim-to-Real Gap with VLM-Guided Scene Reconstruction [2.3510064024442374]
Sim2Radar is an end-to-end framework that synthesizes training radar data directly from single-view RGB images. Sim2Radar reconstructs a material-aware 3D scene by combining monocular depth estimation, segmentation, and vision-language reasoning. Evaluated on real-world indoor scenes, Sim2Radar improves downstream 3D radar perception via transfer learning.
arXiv Detail & Related papers (2026-02-10T10:56:47Z)
- Synthetic FMCW Radar Range Azimuth Maps Augmentation with Generative Diffusion Model [9.764772760421792]
We propose a conditional generative framework for synthesizing realistic Frequency-Modulated Continuous-Wave radar Range-Azimuth Maps. Our approach leverages a generative diffusion model to generate radar data for multiple object categories, including pedestrians, cars, and cyclists.
arXiv Detail & Related papers (2026-01-09T10:59:46Z)
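As a rough illustration of class-conditional generation of range-azimuth maps, the sketch below conditions a toy denoiser on an object-category embedding. The map size, network, and single fixed noise level are assumptions made for brevity; this is not the paper's model.

```python
import torch
import torch.nn as nn

# Assumed (not from the paper): 1 x 64 x 64 range-azimuth maps and three
# classes (pedestrian, car, cyclist). Real systems use a UNet and a full
# noise schedule; this only illustrates the conditioning mechanism.
NUM_CLASSES, H, W = 3, 64, 64

class CondDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.class_emb = nn.Embedding(NUM_CLASSES, 16)
        self.net = nn.Sequential(
            nn.Conv2d(1 + 16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy_map, class_id):
        # Broadcast the class embedding over the spatial grid and
        # concatenate it with the noisy map as extra input channels.
        emb = self.class_emb(class_id)[:, :, None, None].expand(-1, -1, H, W)
        return self.net(torch.cat([noisy_map, emb], dim=1))

model = CondDenoiser()
x0 = torch.randn(4, 1, H, W)                 # dummy clean range-azimuth maps
noise = torch.randn_like(x0)
x_t = 0.7 * x0 + 0.7 * noise                 # one fixed noise level, for brevity
pred = model(x_t, torch.tensor([0, 1, 2, 1]))
loss = nn.functional.mse_loss(pred, noise)   # standard epsilon-prediction objective
loss.backward()
print(float(loss))
```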
- Simulating Automotive Radar with Lidar and Camera Inputs [10.56730571225466]
Low-cost millimeter-wave automotive radar has received more and more attention due to its ability to handle adverse weather and lighting conditions in autonomous driving. We report a new method that is able to simulate 4D millimeter-wave radar signals using camera images, light detection and ranging (lidar) point clouds, and ego-velocity.
arXiv Detail & Related papers (2025-03-11T05:59:43Z)
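The ego-velocity input matters because radar Doppler measures relative radial velocity: the projection of the point's velocity relative to the ego vehicle onto the line of sight. A minimal sketch of that standard geometry, with toy values (not code from the paper):

```python
import numpy as np

def radial_doppler(points, point_vels, ego_vel):
    """Relative radial velocity seen by the radar for each point:
    the projection of (point velocity - ego velocity) onto the
    line of sight."""
    rel_vel = point_vels - ego_vel                                # (N, 3) relative velocity
    los = points / np.linalg.norm(points, axis=1, keepdims=True)  # unit line of sight
    return np.einsum("nd,nd->n", rel_vel, los)                    # signed Doppler velocity

pts = np.array([[10.0, 0.0, 0.0], [0.0, 20.0, 0.0]])   # points in the radar frame
vels = np.zeros_like(pts)                               # static scene
ego = np.array([5.0, 0.0, 0.0])                         # ego moving 5 m/s along +x
print(radial_doppler(pts, vels, ego))                   # [-5, 0]: closing vs. tangential
```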
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance, while radar suffers from noise and positional ambiguity. We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- Toward a Low-Cost Perception System in Autonomous Vehicles: A Spectrum Learning Approach [19.23732332126651]
We introduce a novel pixel positional encoding algorithm inspired by Bartlett's spatial spectrum estimation technique. Our method effectively leverages high-resolution camera images to train radar depth map generative models. Our results demonstrate that our approach outperforms the state-of-the-art (SOTA) by 27.95% in terms of Unidirectional Chamfer Distance (UCD).
arXiv Detail & Related papers (2025-02-04T02:20:52Z)
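Unidirectional Chamfer Distance, the metric cited above, is commonly defined as the mean nearest-neighbor distance from one point set to the other, measured in a single direction only. A sketch under that assumption (the paper's exact variant, e.g. squared distances, may differ):

```python
import numpy as np

def unidirectional_chamfer(pred, ref):
    """Mean distance from each predicted point to its nearest reference
    point; the reverse direction is not counted. Plain Euclidean
    distance is assumed here rather than squared distance."""
    diff = pred[:, None, :] - ref[None, :, :]        # (Np, Nr, D) pairwise offsets
    dists = np.linalg.norm(diff, axis=-1)            # (Np, Nr) pairwise distances
    return dists.min(axis=1).mean()                  # nearest-neighbor mean

pred = np.random.rand(100, 3)
ref = np.random.rand(200, 3)
print(unidirectional_chamfer(pred, ref))
```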
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
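The physics-informed part can be illustrated with the radar range equation, under which returned power falls off as 1/R^4. In the toy sketch below, a dummy function stands in for the implicit neural reflectance field; antenna gains and constants are omitted, and nothing here reproduces the paper's actual sensor model.

```python
import numpy as np

def dummy_reflectance(xyz):
    """Stand-in for the implicit neural reflectance field
    (a real system would query an MLP here)."""
    return np.exp(-np.linalg.norm(xyz - np.array([20.0, 0.0, 0.0]), axis=-1))

def render_return_power(ray_dir, ranges):
    """Toy physics-informed rendering: sample reflectance along a ray
    and weight by the 1/R^4 falloff of the radar range equation."""
    samples = ranges[:, None] * ray_dir[None, :]       # points along the ray
    sigma = dummy_reflectance(samples)                 # reflectance at each sample
    return np.sum(sigma / ranges**4)                   # radar-equation weighting

ranges = np.linspace(1.0, 50.0, 256)
print(render_return_power(np.array([1.0, 0.0, 0.0]), ranges))
```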
- DART: Implicit Doppler Tomography for Radar Novel View Synthesis [9.26298115522881]
DART is a Neural Radiance Field-inspired method which uses radar-specific physics to create a reflectance and transmittance-based rendering pipeline for range-Doppler images.
In comparison to state-of-the-art baselines, DART synthesizes superior radar range-Doppler images from novel views across all datasets.
arXiv Detail & Related papers (2024-03-06T17:54:50Z)
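A reflectance/transmittance rendering pipeline typically composites returns over range bins, NeRF-style: the energy reaching a bin is attenuated by everything closer to the sensor. The sketch below assumes that form, with a squared transmittance term for the two-way radar path as a further assumption; it is not DART's actual renderer.

```python
import numpy as np

def composite_range_profile(alpha, sigma):
    """NeRF-style compositing over range bins: energy reaching bin i is
    attenuated by all closer bins, then squared (assumed) for the
    two-way path. alpha = per-bin attenuation in [0, 1),
    sigma = per-bin reflectance."""
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # one-way transmittance
    return trans**2 * sigma                                        # per-bin returned energy

alpha = np.full(8, 0.3)
sigma = np.ones(8)
print(composite_range_profile(alpha, sigma))  # returns decay as closer bins absorb energy
```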
- Diffusion Models for Interferometric Satellite Aperture Radar [73.01013149014865]
Probabilistic Diffusion Models (PDMs) have recently emerged as a very promising class of generative models.
Here, we leverage PDMs to generate several radar-based satellite image datasets.
We show that PDMs succeed in generating images with complex and realistic structures, but that sampling time remains an issue.
arXiv Detail & Related papers (2023-08-31T16:26:17Z)
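The sampling-time issue follows directly from how diffusion sampling works: one denoiser call per timestep, run sequentially. A minimal standard-DDPM sampling loop (with a dummy network standing in for the trained model) makes the cost concrete:

```python
import torch

# Standard DDPM schedule and update rule; only the network is a dummy.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def dummy_eps_model(x, t):
    return torch.zeros_like(x)   # placeholder for the trained denoiser

x = torch.randn(1, 1, 32, 32)    # start from pure noise
for t in reversed(range(T)):     # 1000 sequential network calls
    eps = dummy_eps_model(x, t)
    mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
    x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
print(x.shape)
```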
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
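Distance-dependent clustering is not spelled out in the summary. One plausible reading, sketched below with assumed parameters, is to grow the neighborhood radius with range (radar detections thin out with distance) by normalizing points before a standard DBSCAN pass; this is not necessarily the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def distance_dependent_clusters(points, base_eps=0.5, scale=0.02):
    """Divide each point by an eps that grows with its range, then run
    standard DBSCAN with eps = 1 in the normalized space, so distant
    detections cluster with a larger effective radius."""
    ranges = np.linalg.norm(points, axis=1)
    per_point_eps = base_eps + scale * ranges          # larger eps farther out
    normalized = points / per_point_eps[:, None]
    return DBSCAN(eps=1.0, min_samples=3).fit_predict(normalized)

near = np.random.randn(30, 2) * 0.2 + [5, 0]     # dense cluster near the sensor
far = np.random.randn(30, 2) * 0.8 + [60, 0]     # sparser cluster far away
labels = distance_dependent_clusters(np.vstack([near, far]))
print(np.unique(labels))
```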
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
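The two fusion points named above can be sketched generically: early fusion concatenates rasterized lidar and radar BEV features channel-wise, and late fusion lets per-object queries attend over radar detections. All shapes and layers below are assumptions, not RadarNet's actual architecture.

```python
import torch
import torch.nn as nn

class TwoStageFusion(nn.Module):
    def __init__(self, lidar_c=32, radar_c=8, obj_c=64):
        super().__init__()
        self.early = nn.Conv2d(lidar_c + radar_c, 64, 3, padding=1)
        self.attn = nn.MultiheadAttention(obj_c, num_heads=4, batch_first=True)
        self.radar_proj = nn.Linear(4, obj_c)   # (x, y, rcs, doppler) -> feature

    def forward(self, lidar_bev, radar_bev, obj_feats, radar_dets):
        # Early fusion: channel-wise concat of the two BEV grids.
        fused_bev = self.early(torch.cat([lidar_bev, radar_bev], dim=1))
        # Late fusion: objects (queries) attend over radar detections.
        dets = self.radar_proj(radar_dets)
        refined, _ = self.attn(obj_feats, dets, dets)
        return fused_bev, refined

m = TwoStageFusion()
out_bev, out_obj = m(torch.randn(1, 32, 64, 64), torch.randn(1, 8, 64, 64),
                     torch.randn(1, 5, 64), torch.randn(1, 20, 4))
print(out_bev.shape, out_obj.shape)
```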
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.