4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis
- URL: http://arxiv.org/abs/2503.03637v1
- Date: Wed, 05 Mar 2025 16:16:46 GMT
- Title: 4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis
- Authors: Woo-Jin Jung, Dong-Hee Paek, Seung-Hyun Kong
- Abstract summary: Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object detection. We propose 4D Radar Ground Truth Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS) module. In doing so, it produces 4D Radar data distributions that more closely resemble real-world measurements.
- Score: 6.605694475813286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object detection, as it enhances object density by leveraging ground truth bounding boxes (GT bboxes). However, directly applying GT-Aug to 4D Radar tensor data overlooks important measurements outside the GT bboxes, such as sidelobes, leading to synthetic distributions that deviate from real-world 4D Radar data. To address this limitation, we propose 4D Radar Ground Truth Augmentation (4DR GT-Aug). Our approach first augments LiDAR data and then converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS) module, which explicitly accounts for measurements both inside and outside GT bboxes. In doing so, it produces 4D Radar data distributions that more closely resemble real-world measurements, thereby improving object detection accuracy. Experiments on the K-Radar dataset show that the proposed method achieves improved performance compared to conventional GT-Aug in object detection for 4D Radar. The implementation code is available at https://github.com/kaist-avelab/K-Radar.
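The abstract specifies only the ordering of the pipeline (LiDAR GT-Aug first, then L2RDaS conversion of the whole scene), so the following is a minimal sketch of that ordering; the function names, the gt_db format, and the l2rdas callable are illustrative placeholders, not the released K-Radar code.

```python
import numpy as np

def fourdr_gt_aug(lidar_points, gt_db, l2rdas, num_samples=5, rng=None):
    """Sketch of the 4DR GT-Aug ordering: augment LiDAR first, then
    synthesize 4D Radar from the full augmented scene.

    lidar_points: (N, 4) array of x, y, z, intensity
    gt_db:        list of (M_i, 4) point arrays cropped from GT bboxes offline
    l2rdas:       callable mapping a LiDAR scene to a 4D Radar tensor
    """
    rng = rng or np.random.default_rng()
    # 1) Conventional LiDAR GT-Aug: paste sampled object point clouds into
    #    the scene (collision checks against existing objects omitted here).
    picks = rng.choice(len(gt_db), size=num_samples, replace=False)
    scene = np.concatenate([lidar_points] + [gt_db[i] for i in picks], axis=0)
    # 2) Convert the whole augmented scene to 4D Radar, so measurements
    #    outside GT bboxes (e.g. sidelobes) are synthesized consistently.
    return scene, l2rdas(scene)
```

The point the abstract stresses is step 2: because the entire augmented scene is converted at once, effects such as sidelobes around pasted objects appear in the synthetic tensor, rather than only the points inside GT bboxes.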
Related papers
- RadarSplat: Radar Gaussian Splatting for High-Fidelity Data Synthesis and 3D Reconstruction of Autonomous Driving Scenes [8.117954246685976]
High-fidelity 3D scene reconstruction plays a crucial role in autonomous driving by enabling novel data generation from existing datasets. Radar is crucial for autonomous driving due to its robustness in adverse weather conditions like rain, fog, and snow, where optical sensors often struggle. This paper proposes RadarSplat, which integrates Gaussian Splatting with novel radar noise modeling to enable realistic radar data synthesis and enhanced 3D reconstruction.
arXiv Detail & Related papers (2025-06-02T07:14:27Z)
- 4D-ROLLS: 4D Radar Occupancy Learning via LiDAR Supervision [7.099012213719072]
We propose 4D-ROLLS, the first weakly supervised occupancy estimation method for 4D radar, using the LiDAR point cloud as the supervisory signal. The model transfers seamlessly to the downstream tasks of BEV segmentation and point cloud occupancy prediction. Its lightweight network enables 4D-ROLLS to achieve fast inference at about 30 Hz on a 4060 GPU.
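The summary implies that a LiDAR point cloud stands in for occupancy labels. As a rough illustration of that idea (not 4D-ROLLS code; the grid parameters and names are assumptions), one could rasterize LiDAR returns into a binary grid and train the radar network against it:

```python
import numpy as np

def lidar_occupancy_target(points, origin, extent, voxel_size):
    """Rasterize LiDAR returns into a binary occupancy grid that can act
    as a weak label for a radar occupancy network (illustrative only).

    points:     (N, 3+) array; columns 0..2 are x, y, z in metres
    origin:     (3,) minimum corner of the grid in metres
    extent:     (3,) grid size in metres
    voxel_size: edge length of a cubic voxel in metres
    """
    dims = np.floor(np.asarray(extent) / voxel_size).astype(int)
    idx = np.floor((points[:, :3] - np.asarray(origin)) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < dims), axis=1)
    grid = np.zeros(dims, dtype=np.float32)
    grid[tuple(idx[keep].T)] = 1.0  # occupied wherever any LiDAR return falls
    return grid  # e.g. the target of a BCE loss on radar-predicted occupancy
```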
arXiv Detail & Related papers (2025-05-20T04:12:44Z)
- ZFusion: An Effective Fuser of Camera and 4D Radar for 3D Object Perception in Autonomous Driving [7.037019489455008]
We propose a 3D object detection method, termed ZFusion, which fuses the 4D radar and vision modalities.
Its FP-DDCA fuser packs Transformer blocks to interactively fuse multi-modal features at different scales.
Experiments show that ZFusion achieves state-of-the-art mAP in the region of interest.
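As a sketch of what Transformer-based interactive fusion at a single scale might look like (loosely in the spirit of the FP-DDCA description, not the authors' implementation; the dimensions are assumed):

```python
import torch
import torch.nn as nn

class CrossModalFuser(nn.Module):
    """Single-scale cross-attention fusion: camera BEV features attend to
    radar BEV features, with a residual connection on the camera path."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cam_feats, radar_feats):
        # cam_feats, radar_feats: (B, H*W, C) flattened BEV feature maps
        fused, _ = self.attn(query=cam_feats, key=radar_feats, value=radar_feats)
        return self.norm(cam_feats + fused)
```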
arXiv Detail & Related papers (2025-04-04T13:29:32Z)
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance, while radar suffers from noise and positional ambiguity. We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception [47.55064735186109]
We present V2X-Radar, the first large-scale, real-world multi-modal dataset featuring 4D Radar.
The dataset consists of 20K LiDAR frames, 40K camera images, and 20K 4D Radar frames, with 350K annotated boxes across five categories.
To support various research domains, we have established V2X-Radar-C for cooperative perception, V2X-Radar-I for roadside perception, and V2X-Radar-V for single-vehicle perception.
arXiv Detail & Related papers (2024-11-17T04:59:00Z)
- RadarPillars: Efficient Object Detection from 4D Radar Point Clouds [42.9356088038035]
We present RadarPillars, a pillar-based object detection network.
By decomposing radial velocity data, RadarPillars significantly outperforms state-of-the-art detection results on the View-of-Delft dataset.
It does so with a significantly reduced parameter count, surpassing existing methods in efficiency and enabling real-time performance on edge devices.
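One plausible reading of the radial-velocity decomposition, sketched below under the assumption that each radar point carries x, y, z and a radial velocity (this is not the RadarPillars code):

```python
import numpy as np

def decompose_radial_velocity(points):
    """Split per-point radial velocity into Cartesian BEV components.

    points: (N, 4) array of x, y, z, v_r (radial velocity)
    """
    x, y, v_r = points[:, 0], points[:, 1], points[:, 3]
    azimuth = np.arctan2(y, x)           # direction of the line of sight
    v_x = v_r * np.cos(azimuth)          # velocity component along x
    v_y = v_r * np.sin(azimuth)          # velocity component along y
    return np.stack([v_x, v_y], axis=1)  # extra per-point pillar features
```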
arXiv Detail & Related papers (2024-08-09T12:13:38Z)
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Efficient 4D Radar Data Auto-labeling Method using LiDAR-based Object Detection Network [5.405156980077946]
Existing 4D radar datasets lack sufficient sensor data and labels.
To address these issues, we propose an auto-labeling method for the 4D radar tensor (4DRT) data in the K-Radar dataset.
arXiv Detail & Related papers (2024-05-13T04:28:06Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
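A minimal sketch of the described lookup, assuming a range-azimuth spectrum feature map, metric BEV query positions, and a full-circle azimuth axis (the names and binning are illustrative, not EchoFusion's implementation):

```python
import torch

def sample_spectrum_features(bev_xy, spectrum, r_max):
    """Fetch radar spectrum features at the range-azimuth bin matching
    each BEV query position, prior to fusion with other sensors.

    bev_xy:   (Q, 2) query positions in metres
    spectrum: (C, R, A) range-azimuth feature map
    r_max:    maximum range in metres covered by the R range bins
    """
    C, R, A = spectrum.shape
    rng = torch.linalg.norm(bev_xy, dim=1)          # range of each query
    az = torch.atan2(bev_xy[:, 1], bev_xy[:, 0])    # azimuth of each query
    r_idx = (rng / r_max * (R - 1)).clamp(0, R - 1).long()
    a_idx = ((az + torch.pi) / (2 * torch.pi) * (A - 1)).long()
    return spectrum[:, r_idx, a_idx].T              # (Q, C) per-query features
```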
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on the nuScenes and ORR datasets and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions [9.705678194028895]
KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution LiDARs, surround stereo cameras, and RTK-GPS.
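For intuition, a 4DRT is simply a dense power tensor over those four axes; the sketch below uses invented dimension sizes (not K-Radar's actual resolution) and shows one common reduction to a BEV-style map:

```python
import numpy as np

# Illustrative 4DRT layout; the bin counts below are made up.
D, R, A, E = 64, 256, 107, 37          # Doppler, range, azimuth, elevation bins
tensor_4drt = np.random.rand(D, R, A, E).astype(np.float32)  # power per bin

# A common reduction: collapse Doppler and elevation to obtain a
# range-azimuth (BEV-like) power map for a 2D detection backbone.
bev_power = tensor_4drt.mean(axis=(0, 3))  # shape (R, A)
```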
arXiv Detail & Related papers (2022-06-16T13:39:21Z)
- LiDAR-based 4D Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
We extend DS-Net to 4D panoptic LiDAR segmentation by temporally unified instance clustering on aligned LiDAR frames.
DS-Net achieves superior accuracy over current state-of-the-art methods in both tasks.
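Dynamic shifting is a learnable, mean-shift-like clustering; the sketch below shows one plain mean-shift update with a fixed scalar bandwidth (in DS-Net the bandwidth is adaptively learned per point), as a simplified illustration rather than the authors' code:

```python
import torch

def dynamic_shift_step(points, bandwidth):
    """One mean-shift update: every point moves toward the local density
    peak under a Gaussian kernel of the given bandwidth.

    points:    (N, 2) BEV coordinates of thing-class points
    bandwidth: scalar kernel width (learned per point in DS-Net)
    """
    d2 = torch.cdist(points, points) ** 2             # pairwise squared distances
    w = torch.exp(-d2 / (2 * bandwidth ** 2))         # Gaussian kernel weights
    return (w @ points) / w.sum(dim=1, keepdim=True)  # shifted points
```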
arXiv Detail & Related papers (2022-03-14T15:25:42Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
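A minimal sketch of what the voxel-based early fusion could look like (the channel counts and the shared BEV grid are assumptions; this is not RadarNet's implementation):

```python
import torch
import torch.nn as nn

class EarlyFusionBEV(nn.Module):
    """Concatenate LiDAR and radar features rasterized onto a shared BEV
    grid, then mix channels with a convolution before the backbone."""

    def __init__(self, c_lidar=32, c_radar=8, c_out=64):
        super().__init__()
        self.mix = nn.Conv2d(c_lidar + c_radar, c_out, kernel_size=3, padding=1)

    def forward(self, lidar_bev, radar_bev):
        # both inputs: (B, C, H, W) on the same BEV grid
        return self.mix(torch.cat([lidar_bev, radar_bev], dim=1))
```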
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content and is not responsible for any consequences arising from its use.