K-Radar: 4D Radar Object Detection for Autonomous Driving in Various
Weather Conditions
- URL: http://arxiv.org/abs/2206.08171v4
- Date: Tue, 7 Nov 2023 17:06:09 GMT
- Title: K-Radar: 4D Radar Object Detection for Autonomous Driving in Various
Weather Conditions
- Authors: Dong-Hee Paek, Seung-Hyun Kong, Kevin Tirta Wijaya
- Abstract summary: KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution Lidars, surround stereo cameras, and RTK-GPS.
- Score: 9.705678194028895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unlike RGB cameras that use visible light bands (384$\sim$769 THz) and Lidars
that use infrared bands (331$\sim$361 THz), Radars use relatively longer-wavelength
radio bands (77$\sim$81 GHz), resulting in robust measurements in
adverse weather. Unfortunately, existing Radar datasets only contain a
relatively small number of samples compared to the existing camera and Lidar
datasets. This may hinder the development of sophisticated data-driven deep
learning techniques for Radar-based perception. Moreover, most of the existing
Radar datasets only provide 3D Radar tensor (3DRT) data that contain power
measurements along the Doppler, range, and azimuth dimensions. As there is no
elevation information, it is challenging to estimate the 3D bounding box of an
object from 3DRT. In this work, we introduce KAIST-Radar (K-Radar), a novel
large-scale object detection dataset and benchmark that contains 35K frames of
4D Radar tensor (4DRT) data with power measurements along the Doppler, range,
azimuth, and elevation dimensions, together with carefully annotated 3D
bounding box labels of objects on the roads. K-Radar includes challenging
driving conditions such as adverse weather (fog, rain, and snow) on various
road structures (urban, suburban roads, alleyways, and highways). In addition
to the 4DRT, we provide auxiliary measurements from carefully calibrated
high-resolution Lidars, surround stereo cameras, and RTK-GPS. We also provide
4DRT-based object detection baseline neural networks (baseline NNs) and show
that height information is crucial for 3D object detection. By
comparing the baseline NN with a similarly-structured Lidar-based neural
network, we demonstrate that 4D Radar is a more robust sensor for adverse
weather conditions. All code is available at
https://github.com/kaist-avelab/k-radar.
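To make the data layout concrete, the sketch below contrasts a 4DRT frame with the 3DRT data provided by most earlier datasets. This is a minimal sketch, not the K-Radar loader API: the bin counts, array names, and reduction choices are illustrative assumptions and do not reflect the actual tooling at https://github.com/kaist-avelab/k-radar.

```python
import numpy as np

# Illustrative assumption: one 4DRT frame stores power measurements indexed
# along the Doppler, range, azimuth, and elevation dimensions.
D, R, A, E = 64, 256, 107, 37           # hypothetical bin counts per dimension
rt_4d = np.random.rand(D, R, A, E)      # stand-in for one 4D Radar tensor

# A 3DRT-style tensor (Doppler, range, azimuth) marginalizes the elevation
# axis; this discards exactly the height cue the paper argues is crucial for
# estimating 3D bounding boxes.
rt_3d = rt_4d.sum(axis=-1)

# A bird's-eye-view (range x azimuth) map, a common detector input, further
# marginalizes the Doppler axis.
bev = rt_4d.sum(axis=(0, 3))

print(rt_4d.shape, rt_3d.shape, bev.shape)
# (64, 256, 107, 37) (64, 256, 107) (256, 107)
```

Collapsing the elevation axis reproduces the 3DRT setting described above, which is why the elevation bins, and the height information they carry, are the key addition of the 4DRT.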
Related papers
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z)
- CenterRadarNet: Joint 3D Object Detection and Tracking Framework using 4D FMCW Radar [28.640714690346353]
CenterRadarNet is designed to facilitate high-resolution representation learning from 4D (Doppler-range-azimuth-elevation) radar data.
As a single-stage 3D object detector, CenterRadarNet infers the BEV object distribution confidence maps, corresponding 3D bounding box attributes, and appearance embedding for each pixel.
In diverse driving scenarios, CenterRadarNet shows consistent, robust performance, emphasizing its wide applicability.
arXiv Detail & Related papers (2023-11-02T17:36:40Z)
- Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving [22.633794566422687]
We introduce a novel large-scale multi-modal dataset featuring, for the first time, two types of 4D radars captured simultaneously.
Our dataset consists of 151 consecutive series, most of which last 20 seconds, with 10,007 meticulously synchronized and annotated frames in total.
We experimentally validate our dataset, providing valuable results for studying different types of 4D radars.
arXiv Detail & Related papers (2023-10-11T15:41:52Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving [16.205201694162092]
We introduce an autonomous driving dataset named TJ4DRadSet, comprising multi-modal sensors (4D radar, lidar, and camera) and sequences with about 40K frames in total.
We provide a 4D radar-based 3D object detection baseline for our dataset to demonstrate the effectiveness of deep learning methods for 4D radar point clouds.
arXiv Detail & Related papers (2022-04-28T13:17:06Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Deep Learning on Radar Centric 3D Object Detection [4.822598110892847]
We introduce a deep learning approach to 3D object detection with radar only.
To overcome the lack of labeled radar data, we propose a novel way of making use of abundant LiDAR data.
arXiv Detail & Related papers (2020-02-27T10:16:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.