Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous
Driving
- URL: http://arxiv.org/abs/2310.07602v3
- Date: Thu, 9 Nov 2023 07:18:27 GMT
- Title: Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous
Driving
- Authors: Xinyu Zhang, Li Wang, Jian Chen, Cheng Fang, Lei Yang, Ziying Song,
Guangqi Yang, Yichen Wang, Xiaofei Zhang, Jun Li, Zhiwei Li, Qingshan Yang,
Zhenlin Zhang, Shuzhi Sam Ge
- Abstract summary: We introduce a novel large-scale multi-modal dataset featuring, for the first time, two types of 4D radars captured simultaneously.
Our dataset consists of 151 consecutive series, most of which last 20 seconds, and contains 10,007 meticulously synchronized and annotated frames.
We experimentally validate our dataset, providing valuable results for studying different types of 4D radars.
- Score: 22.633794566422687
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radar adapts better to adverse scenarios than the widely adopted
cameras and LiDARs for environmental perception in autonomous driving.
Compared with commonly used 3D radars, the latest 4D radars offer precise
vertical resolution and higher point cloud density, making them highly
promising sensors for perception in complex driving environments. However,
because 4D radar is much noisier than LiDAR, manufacturers adopt different
filtering strategies, which trade point cloud density against noise level.
There is still no comparative analysis of which strategy benefits deep
learning-based perception algorithms in autonomous driving, largely because
existing datasets include only one type of 4D radar, making it difficult to
compare different 4D radars in the same scene. Therefore, in this paper, we
introduce a novel large-scale multi-modal dataset featuring, for the first
time, two types of 4D radars captured simultaneously. This dataset enables
further research into effective 4D radar perception algorithms. Our dataset
consists of 151 consecutive series, most of which last 20 seconds, and
contains 10,007 meticulously synchronized and annotated frames. Moreover, it
captures a variety of challenging driving scenarios, including diverse road
and weather conditions as well as daytime and nighttime scenes with different
lighting intensities. The dataset annotates consecutive frames, supporting 3D
object detection and tracking as well as multi-modal tasks. We experimentally
validate our dataset, providing valuable results for studying different types
of 4D radars. The dataset is released at
https://github.com/adept-thu/Dual-Radar.
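
Because the dataset pairs two different 4D radars with LiDAR and camera data in every synchronized frame, a small loader sketch may help illustrate how such data could be consumed. Everything below (directory names, file extensions, per-point fields, the Frame container) is a hypothetical illustration rather than the dataset's actual API; the official format and tools are in the repository linked above.

```python
# Hypothetical loader sketch for a Dual-Radar-style multi-modal dataset.
# The directory layout, file names, and per-point field order below are
# assumptions for illustration only; the real format is documented in the
# repository at https://github.com/adept-thu/Dual-Radar.
from dataclasses import dataclass, field
from pathlib import Path
import numpy as np


@dataclass
class Frame:
    radar_a: np.ndarray   # first 4D radar point cloud, assumed shape (N, 5)
    radar_b: np.ndarray   # second 4D radar point cloud, assumed shape (M, 5)
    lidar: np.ndarray     # LiDAR point cloud, assumed shape (K, 4)
    image_path: Path      # camera image for the same timestamp
    boxes: list = field(default_factory=list)  # 3D boxes: class, pose, size, track id


def load_sequence(root: str, seq_id: str) -> list:
    """Load one synchronized sequence; all modalities share a frame index."""
    seq = Path(root) / seq_id
    frames = []
    for radar_a_file in sorted((seq / "radar_a").glob("*.bin")):
        idx = radar_a_file.stem
        frames.append(Frame(
            radar_a=np.fromfile(radar_a_file, dtype=np.float32).reshape(-1, 5),
            radar_b=np.fromfile(seq / "radar_b" / f"{idx}.bin",
                                dtype=np.float32).reshape(-1, 5),
            lidar=np.fromfile(seq / "lidar" / f"{idx}.bin",
                              dtype=np.float32).reshape(-1, 4),
            image_path=seq / "camera" / f"{idx}.jpg",
            boxes=[],  # parse the per-frame annotation file here
        ))
    return frames
```

With two synchronized point clouds per frame, comparing the two radars' point density reduces to comparing len(frame.radar_a) and len(frame.radar_b) over the same scenes, which mirrors the side-by-side comparison the dataset is designed to enable.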
Related papers
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Human Detection from 4D Radar Data in Low-Visibility Field Conditions [17.1888913327586]
Modern 4D imaging radars provide target responses across the range, vertical angle, horizontal angle and Doppler velocity dimensions.
We propose TMVA4D, a CNN architecture that leverages this 4D radar modality for semantic segmentation.
Using TMVA4D on this dataset, we achieve an mIoU score of 78.2% and an mDice score of 86.1%, evaluated on the two classes background and person (a generic sketch of these metrics appears after the Related papers list below).
arXiv Detail & Related papers (2024-04-08T08:53:54Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar [12.842457981088378]
This paper introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar.
SMURF mitigates measurement inaccuracy caused by limited angular resolution and multi-path propagation of radar signals.
Experimental evaluations on View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF.
arXiv Detail & Related papers (2023-07-20T11:33:46Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions [9.705678194028895]
KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution Lidars, surround stereo cameras, and RTK-GPS.
arXiv Detail & Related papers (2022-06-16T13:39:21Z)
- TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving [16.205201694162092]
We introduce an autonomous driving dataset named TJ4DRadSet, comprising multi-modal data from 4D radar, LiDAR, and camera, with sequences totaling about 40K frames.
We provide a 4D radar-based 3D object detection baseline for our dataset to demonstrate the effectiveness of deep learning methods for 4D radar point clouds.
arXiv Detail & Related papers (2022-04-28T13:17:06Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
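
As flagged in the TMVA4D entry above, mIoU and mDice are standard segmentation metrics; the snippet below shows one conventional way to compute them for a two-class (background vs. person) label map. It is a generic illustration under the usual definitions, not the evaluation code of any paper listed here.

```python
# Generic mean IoU / mean Dice over class-indexed label maps
# (0 = background, 1 = person). Illustrative only.
import numpy as np


def miou_mdice(pred: np.ndarray, gt: np.ndarray, num_classes: int = 2):
    ious, dices = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        total = p.sum() + g.sum()
        ious.append(inter / union if union else 1.0)
        dices.append(2 * inter / total if total else 1.0)
    return float(np.mean(ious)), float(np.mean(dices))
```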
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.