Efficient 4D Radar Data Auto-labeling Method using LiDAR-based Object Detection Network
- URL: http://arxiv.org/abs/2407.04709v1
- Date: Mon, 13 May 2024 04:28:06 GMT
- Title: Efficient 4D Radar Data Auto-labeling Method using LiDAR-based Object Detection Network
- Authors: Min-Hyeok Sun, Dong-Hee Paek, Seung-Hyun Song, Seung-Hyun Kong
- Abstract summary: Existing 4D radar datasets lack sufficient sensor data and labels.
To address these issues, we propose an auto-labeling method for 4D radar tensor (4DRT) data in the K-Radar dataset.
- Score: 5.405156980077946
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Focusing on the strengths of 4D (4-Dimensional) radar, research on robust 3D object detection networks for adverse weather conditions has gained attention. To train such networks, datasets that contain large amounts of 4D radar data and ground truth labels are essential. However, existing 4D radar datasets (e.g., K-Radar) lack sufficient sensor data and labels, which hinders advancement in this research domain. Furthermore, enlarging 4D radar datasets requires a time-consuming and expensive manual labeling process. To address these issues, we propose an auto-labeling method for the 4D radar tensor (4DRT) data in the K-Radar dataset. The proposed method first trains a LiDAR-based object detection network (LODN) using calibrated LiDAR point clouds (LPC). The trained LODN then automatically generates ground truth labels (i.e., auto-labels, ALs) for the K-Radar train dataset without human intervention. The generated ALs are used to train the 4D radar-based object detection network (4DRODN), Radar Tensor Network with Height (RTNH). Experimental results demonstrate that RTNH trained with ALs achieves detection performance similar to that of the original RTNH trained with manually annotated ground truth labels, verifying the effectiveness of the proposed auto-labeling method. All relevant code will soon be available at the following GitHub project: https://github.com/kaist-avelab/K-Radar
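The three-stage pipeline in the abstract maps directly onto a short training script. Below is a minimal sketch of that flow; `DetectorStub`, `auto_label_pipeline`, and all argument names are hypothetical placeholders, not the released K-Radar code.

```python
"""Minimal sketch of the auto-labeling pipeline from the abstract.
All class and function names are hypothetical stand-ins, not the
actual K-Radar codebase API."""

from typing import Any, List, Sequence


class DetectorStub:
    """Stand-in for a trainable 3D object detector (LODN or RTNH)."""

    def fit(self, frames: Sequence[Any], labels: Sequence[Any]) -> None:
        pass  # a real implementation would run a training loop here

    def predict(self, frame: Any) -> List[dict]:
        # a real detector would return 3D boxes, e.g.
        # [{"xyz": ..., "lwh": ..., "yaw": ..., "class": ...}]
        return []


def auto_label_pipeline(lidar_frames, radar_tensors, manual_labels):
    # 1) Train the LiDAR-based object detection network (LODN) on
    #    calibrated LiDAR point clouds (LPC) with manual ground truth.
    lodn = DetectorStub()
    lodn.fit(lidar_frames, manual_labels)

    # 2) Run the trained LODN over the train split to generate
    #    auto-labels (ALs) without human intervention; calibration puts
    #    the boxes in the same frame as the 4D radar tensor (4DRT).
    auto_labels = [lodn.predict(f) for f in lidar_frames]

    # 3) Train the 4D-radar detector (RTNH) with ALs as supervision.
    rtnh = DetectorStub()
    rtnh.fit(radar_tensors, auto_labels)
    return rtnh
```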
Related papers
- Multistream Network for LiDAR and Camera-based 3D Object Detection in Outdoor Scenes [59.78696921486972]
Fusion of LiDAR and RGB data has the potential to enhance outdoor 3D object detection accuracy.
We propose a MultiStream Detection (MuStD) network that meticulously extracts task-relevant information from both data modalities.
arXiv Detail & Related papers (2025-07-25T14:20:16Z)
- 4D Radar Ground Truth Augmentation with LiDAR-to-4D Radar Data Synthesis [6.605694475813286]
Ground truth augmentation (GT-Aug) is a common method for LiDAR-based object detection.
We propose 4D Radar Ground Truth Augmentation (4DR GT-Aug).
Our approach first augments LiDAR data and then converts it to 4D Radar data via a LiDAR-to-4D Radar data synthesis (L2RDaS) module.
In doing so, it produces 4D Radar data distributions that more closely resemble real-world measurements.
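The two-stage flow reads naturally as a function composition. The sketch below assumes `object_bank` holds pre-cropped ground-truth object point clouds and `l2rdas` is a callable standing in for the learned L2RDaS module; neither name comes from the paper's code.

```python
import numpy as np

def four_dr_gt_aug(scene_lpc: np.ndarray, object_bank: list, l2rdas):
    """Sketch of 4DR GT-Aug: (1) classic GT-Aug pastes stored
    ground-truth object point clouds into the LiDAR scene (collision
    checks omitted for brevity), then (2) an L2RDaS-style model
    synthesizes 4D radar data from the augmented scene."""
    augmented_lpc = np.concatenate([scene_lpc, *object_bank], axis=0)
    return l2rdas(augmented_lpc)  # 4D radar data resembling real measurements
```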
arXiv Detail & Related papers (2025-03-05T16:16:46Z)
- Enhanced 3D Object Detection via Diverse Feature Representations of 4D Radar Tensor [5.038148262901536]
Raw 4D radar tensors (4DRT) offer richer spatial and Doppler information than conventional point clouds.
We propose a novel 3D object detection framework that maximizes the utility of 4DRT while preserving efficiency.
We show that our framework achieves improvements of 7.3% in $AP_{3D}$ and 9.5% in $AP_{BEV}$ over the baseline RTNH model when using extremely sparse inputs.
arXiv Detail & Related papers (2025-02-10T02:48:56Z)
- SpikingRTNH: Spiking Neural Network for 4D Radar Object Detection [6.636342419996716]
SpikingRTNH is the first spiking neural network (SNN) for 3D object detection using 4D Radar data.
We introduce biological top-down inference (BTI), which processes point clouds sequentially from higher to lower densities.
Results establish the viability of SNNs for energy-efficient 4D Radar-based object detection in autonomous driving systems.
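As a rough illustration of the high-to-low-density schedule, the sketch below subsamples the point cloud at decreasing keep ratios and runs a detector callable at each level; the random subsampling and simple pooling are simplifications of the paper's BTI, and `detector` is a hypothetical stand-in for the spiking network.

```python
import numpy as np

def bti_inference(points: np.ndarray, detector, keep_ratios=(1.0, 0.5, 0.25)):
    """Run the detector over progressively sparser point clouds,
    densest first, and pool the per-level detections."""
    rng = np.random.default_rng(0)
    detections = []
    for keep in keep_ratios:
        n = max(1, int(len(points) * keep))
        idx = rng.choice(len(points), size=n, replace=False)
        detections.extend(detector(points[idx]))
    return detections  # a real system would fuse/suppress duplicates
```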
arXiv Detail & Related papers (2025-01-31T07:33:30Z)
- RadarPillars: Efficient Object Detection from 4D Radar Point Clouds [42.9356088038035]
We present RadarPillars, a pillar-based object detection network.
By decomposing radial velocity data, RadarPillars significantly outperforms state-of-the-art methods on the View-of-Delft dataset.
This comes at a significantly reduced parameter count, surpassing existing methods in terms of efficiency and enabling real-time performance on edge devices.
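The radial-velocity decomposition mentioned above is a small geometric step: each point's Doppler reading is projected back onto the Cartesian axes. The column layout (x, y, z, v_r) below is an assumption for illustration, not the View-of-Delft format.

```python
import numpy as np

def decompose_radial_velocity(points: np.ndarray) -> np.ndarray:
    """Append Cartesian components (v_x, v_y) of each point's radial
    (Doppler) velocity v_r as extra per-point features."""
    x, y, v_r = points[:, 0], points[:, 1], points[:, 3]
    r = np.hypot(x, y) + 1e-6                 # planar range, avoid div-by-zero
    v_x, v_y = v_r * x / r, v_r * y / r       # project v_r onto the x/y axes
    return np.concatenate([points, np.stack([v_x, v_y], axis=1)], axis=1)
```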
arXiv Detail & Related papers (2024-08-09T12:13:38Z)
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Human Detection from 4D Radar Data in Low-Visibility Field Conditions [17.1888913327586]
Modern 4D imaging radars provide target responses across the range, vertical angle, horizontal angle and Doppler velocity dimensions.
We propose TMVA4D, a CNN architecture that leverages this 4D radar modality for semantic segmentation.
Using TMVA4D on this dataset, we achieve an mIoU score of 78.2% and an mDice score of 86.1%, evaluated on the two classes, background and person.
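For reference, mIoU and mDice over the two classes can be computed as below; `pred` and `gt` are integer class maps, and this is a generic metric implementation, not the paper's evaluation code.

```python
import numpy as np

def miou_mdice(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2):
    """Mean IoU and mean Dice over classes (here background and person)."""
    ious, dices = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        total = p.sum() + g.sum()
        ious.append(inter / union if union else 1.0)
        dices.append(2 * inter / total if total else 1.0)
    return float(np.mean(ious)), float(np.mean(dices))
```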
arXiv Detail & Related papers (2024-04-08T08:53:54Z)
- RTNH+: Enhanced 4D Radar Object Detection Network using Combined CFAR-based Two-level Preprocessing and Vertical Encoding [8.017543518311196]
RTNH+ is an enhanced version of RTNH, a 4D Radar object detection network.
We show that RTNH+ achieves significant performance improvements of 10.14% in $AP_{3D}^{IoU=0.3}$ and 16.12% in $AP_{3D}^{IoU=0.5}$ over RTNH.
arXiv Detail & Related papers (2023-10-19T06:45:19Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
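A toy version of that query-to-spectrum lookup: each BEV query at (x, y) is converted to polar coordinates and the matching range-azimuth bin of the raw spectrum is gathered. Bin counts, ranges, and the nearest-bin sampling are all illustrative assumptions; EchoFusion itself uses learned attention rather than hard indexing.

```python
import numpy as np

def gather_spectrum_features(bev_xy: np.ndarray, spectrum: np.ndarray,
                             r_max: float = 50.0) -> np.ndarray:
    """For each BEV query at (x, y), fetch the raw radar spectrum
    feature at the corresponding (range, azimuth) bin.
    spectrum: (R, A, C); returns (N, C)."""
    R, A, _ = spectrum.shape
    rng = np.hypot(bev_xy[:, 0], bev_xy[:, 1])            # query range
    azi = np.arctan2(bev_xy[:, 1], bev_xy[:, 0])          # query azimuth
    r_idx = np.clip((rng / r_max * R).astype(int), 0, R - 1)
    a_idx = np.clip(((azi + np.pi) / (2 * np.pi) * A).astype(int), 0, A - 1)
    return spectrum[r_idx, a_idx]
```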
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions [9.705678194028895]
KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution LiDARs, surround stereo cameras, and RTK-GPS.
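To make the tensor layout concrete, the snippet below builds a random 4DRT with illustrative (not K-Radar's exact) bin counts and collapses it to a bird's-eye-view power map, a common first step for detectors like RTNH.

```python
import numpy as np

# Power measurements along Doppler, range, azimuth, and elevation.
# Bin counts here are illustrative placeholders.
D, R, A, E = 64, 256, 107, 37
tensor_4drt = np.random.rand(D, R, A, E).astype(np.float32)

# Collapse Doppler and elevation (max power) to get a (range, azimuth)
# bird's-eye-view map suitable for a 2D detection backbone.
bev_power = tensor_4drt.max(axis=(0, 3))
print(bev_power.shape)  # (256, 107)
```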
arXiv Detail & Related papers (2022-06-16T13:39:21Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion-prediction-based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
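The labeling loop itself is simple; the sketch below assumes a `camera_detector` callable returning image-space person detections and a `project_fn` mapping each detection into the 2D LiDAR frame via the calibration. Both names are hypothetical.

```python
def make_pseudo_labels(scans, images, camera_detector, project_fn):
    """Generate pseudo-labels for a 2D LiDAR person detector from a
    calibrated camera detector, with no manual annotation."""
    pseudo_labels = []
    for scan, image in zip(scans, images):
        detections = camera_detector(image)            # image-space persons
        pseudo_labels.append([project_fn(d, scan) for d in detections])
    return pseudo_labels
```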
arXiv Detail & Related papers (2020-12-16T12:10:04Z)
- Manual-Label Free 3D Detection via An Open-Source Simulator [50.74299948748722]
We propose a manual-label free 3D detection algorithm that leverages the CARLA simulator to generate a large amount of self-labeled training samples.
Domain Adaptive VoxelNet (DA-VoxelNet) bridges the distribution gap from synthetic data to the real-world scenario.
Experimental results show that the proposed unsupervised DA 3D detector achieves 76.66% and 56.64% mAP on the KITTI evaluation set.
arXiv Detail & Related papers (2020-11-16T08:29:01Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.