TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving
- URL: http://arxiv.org/abs/2204.13483v2
- Date: Sat, 30 Apr 2022 06:15:11 GMT
- Title: TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving
- Authors: Lianqing Zheng, Zhixiong Ma, Xichan Zhu, Bin Tan, Sen Li, Kai Long,
Weiqi Sun, Sihan Chen, Lu Zhang, Mengyue Wan, Libo Huang, Jie Bai
- Abstract summary: We introduce an autonomous driving dataset named TJ4DRadSet, comprising multi-modal sensors, namely 4D radar, lidar, camera and GNSS, with about 40K frames in total.
We provide a 4D radar-based 3D object detection baseline for our dataset to demonstrate the effectiveness of deep learning methods for 4D radar point clouds.
- Score: 16.205201694162092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The new generation of 4D high-resolution imaging radar provides
not only dense point clouds but also an additional elevation measurement,
giving it great potential for 3D sensing in autonomous driving. In this paper,
we introduce an autonomous driving dataset named TJ4DRadSet, comprising
multi-modal sensors, namely 4D radar, lidar, camera and GNSS, with about 40K
frames in total. Of these, 7,757 frames across 44 consecutive sequences
covering various driving scenarios are annotated with 3D bounding boxes and
track IDs. We provide a 4D radar-based 3D object detection baseline for our
dataset to demonstrate the effectiveness of deep learning methods on 4D radar
point clouds.
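To make the record structure concrete, below is a minimal Python sketch of a per-frame sample as the abstract describes it: 4D radar, lidar, camera and GNSS per frame, with 3D boxes and track IDs on the annotated frames. All class, field and channel names here are hypothetical illustrations, not TJ4DRadSet's actual devkit format.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np

@dataclass
class Box3D:
    center: np.ndarray   # (3,) x, y, z in the ego frame
    size: np.ndarray     # (3,) length, width, height
    yaw: float           # heading angle around the z-axis
    track_id: int        # identity kept consistent across a sequence
    label: str           # e.g. "Car", "Pedestrian", "Cyclist"

@dataclass
class Frame:
    radar_points: np.ndarray  # (N, 6) x, y, z, Doppler velocity, SNR, power
    lidar_points: np.ndarray  # (M, 4) x, y, z, intensity
    image_path: str           # path to the camera frame on disk
    gnss_pose: np.ndarray     # (6,) position and orientation from GNSS
    boxes: List[Box3D] = field(default_factory=list)  # empty if unannotated
```

Note that the radar cloud is kept separate from the lidar cloud: a 4D radar point typically carries Doppler velocity and reflectivity channels on top of its 3D position, which is what downstream detectors exploit.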
Related papers
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
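For intuition about the target representation, a toy voxelization is sketched below: occupancy prediction discretizes 3D space into voxels and marks which ones are occupied. The grid extents and the `voxelize_occupancy` helper are hypothetical, not RadarOcc's actual pipeline.

```python
import numpy as np

def voxelize_occupancy(points: np.ndarray,
                       grid_min=(-40.0, -40.0, -3.0),
                       voxel_size=0.4,
                       grid_shape=(200, 200, 16)) -> np.ndarray:
    """Toy occupancy target: mark voxels containing at least one point.

    points is an (N, 3) array of x, y, z; the grid extents are made up.
    """
    occ = np.zeros(grid_shape, dtype=bool)
    idx = np.floor((points - np.asarray(grid_min)) / voxel_size).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < np.asarray(grid_shape)), axis=1)
    occ[tuple(idx[in_bounds].T)] = True
    return occ
```

A learned occupancy model predicts such a grid from sensor input rather than rasterizing points directly; the sparsity and noise of radar returns are what make the radar-only variant of this task hard.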
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data [68.18735997052265]
We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection.
Our method requires only a small number of 3D points, that can be obtained from a low-cost, low-resolution sensor.
The accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods.
arXiv Detail & Related papers (2024-04-10T03:54:53Z)
- MVFAN: Multi-View Feature Assisted Network for 4D Radar Object Detection [15.925365473140479]
4D radar is recognized for its resilience and cost-effectiveness under adverse weather; unlike LiDAR and cameras, it remains unimpaired by harsh conditions.
We propose a framework for developing radar-based 3D object detection for autonomous vehicles.
arXiv Detail & Related papers (2023-10-25T06:10:07Z)
- Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving [22.633794566422687]
We introduce a novel large-scale multi-modal dataset featuring, for the first time, two types of 4D radars captured simultaneously.
Our dataset consists of 151 consecutive series, most of which last 20 seconds and contain 10,007 meticulously synchronized and annotated frames.
We experimentally validate our dataset, providing valuable results for studying different types of 4D radars.
arXiv Detail & Related papers (2023-10-11T15:41:52Z)
- SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar [12.842457981088378]
This paper introduces spatial multi-representation fusion (SMURF), a novel approach to 3D object detection using a single 4D imaging radar.
SMURF mitigates measurement inaccuracy caused by limited angular resolution and multi-path propagation of radar signals.
Experimental evaluations on View-of-Delft (VoD) and TJ4DRadSet datasets demonstrate the effectiveness and generalization ability of SMURF.
arXiv Detail & Related papers (2023-07-20T11:33:46Z)
- 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aimed at augmenting camera driving scenes in 3D space.
We first utilize Neural Radiance Fields (NeRF) to reconstruct 3D models of the background and foreground objects.
Augmented driving scenes are then obtained by placing the 3D objects, with adapted location and orientation, in pre-defined valid regions of the backgrounds.
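As a rough sketch of the placement step just described, the snippet below samples an adapted location and orientation inside a valid region; the region format and the `sample_placement` helper are hypothetical stand-ins for the paper's actual pipeline, and the NeRF rendering itself is omitted.

```python
from typing import Tuple

import numpy as np

rng = np.random.default_rng(0)

def sample_placement(valid_region: Tuple[float, float, float, float]
                     ) -> Tuple[np.ndarray, float]:
    """Sample a pose for a foreground object on the ground plane.

    valid_region is a hypothetical (x_min, x_max, y_min, y_max) box where
    insertions are allowed, e.g. the drivable area of the background model.
    """
    x_min, x_max, y_min, y_max = valid_region
    x = rng.uniform(x_min, x_max)
    y = rng.uniform(y_min, y_max)
    yaw = rng.uniform(-np.pi, np.pi)   # adapted orientation
    return np.array([x, y, 0.0]), yaw  # z = 0: object rests on the ground
```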
arXiv Detail & Related papers (2023-03-18T05:51:05Z)
- K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions [9.705678194028895]
KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution Lidars, surround stereo cameras, and RTK-GPS.
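To make the 4DRT layout concrete, the sketch below builds a random stand-in tensor indexed along Doppler, range, azimuth and elevation; the bin counts are made up and may differ from K-Radar's actual dimensions.

```python
import numpy as np

# Hypothetical bin counts, not K-Radar's actual tensor dimensions.
N_DOPPLER, N_RANGE, N_AZIMUTH, N_ELEVATION = 64, 256, 107, 37

# A 4D radar tensor (4DRT): received power per
# (Doppler, range, azimuth, elevation) cell, random values as a stand-in.
rt4d = np.random.rand(N_DOPPLER, N_RANGE, N_AZIMUTH,
                      N_ELEVATION).astype(np.float32)

# Collapsing the Doppler axis yields a conventional range-azimuth-elevation
# power volume; argmax along Doppler recovers a per-cell radial-velocity bin.
power_3d = rt4d.sum(axis=0)
doppler_bin = rt4d.argmax(axis=0)
```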
arXiv Detail & Related papers (2022-06-16T13:39:21Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- 4D-Net for Learned Multi-Modal Alignment [87.58354992455891]
We present 4D-Net, a 3D object detection approach, which utilizes 3D Point Cloud and RGB sensing information, both in time.
We are able to incorporate the 4D information by performing a novel connection learning across various feature representations and levels of abstraction, as well as by observing geometric constraints.
arXiv Detail & Related papers (2021-09-02T16:35:00Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)