NTU4DRadLM: 4D Radar-centric Multi-Modal Dataset for Localization and
Mapping
- URL: http://arxiv.org/abs/2309.00962v1
- Date: Sat, 2 Sep 2023 15:12:20 GMT
- Title: NTU4DRadLM: 4D Radar-centric Multi-Modal Dataset for Localization and
Mapping
- Authors: Jun Zhang, Huayang Zhuge, Yiyao Liu, Guohao Peng, Zhenyu Wu, Haoyuan
Zhang, Qiyang Lyu, Heshan Li, Chunyang Zhao, Dogan Kircali, Sanat Mharolkar,
Xun Yang, Su Yi, Yuanzhe Wang and Danwei Wang
- Abstract summary: SLAM based on 4D radar, a thermal camera and an IMU can work
robustly in adverse conditions. Its main characteristic is that it is the only
dataset that simultaneously includes all 6 sensors: 4D radar, thermal camera,
IMU, 3D LiDAR, visual camera and RTK GPS.
- Score: 32.0536548410301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simultaneous Localization and Mapping (SLAM) is moving towards a robust
perception age. However, LiDAR- and visual-SLAM may easily fail in adverse
conditions (rain, snow, smoke, fog, etc.). In comparison, SLAM based on 4D
radar, a thermal camera and an IMU can work robustly, yet only a little related
literature can be found. A major reason is the lack of suitable datasets, which
seriously hinders the research. Even though several 4D-radar datasets have been
released in the past four years, they are mainly designed for object detection
rather than SLAM, and they normally do not include a thermal camera. Therefore,
this paper presents NTU4DRadLM to meet this requirement. Its main
characteristics are: 1) It is the only dataset that simultaneously includes all
6 sensors: 4D radar, thermal camera, IMU, 3D LiDAR, visual camera and RTK GPS.
2) It is specifically designed for SLAM tasks, providing fine-tuned ground-truth
odometry and intentionally formulated loop closures. 3) It considers both a
low-speed robot platform and a high-speed unmanned vehicle platform. 4) It
covers structured, unstructured and semi-structured environments. 5) It spans
both medium- and large-scale outdoor environments, i.e., the 6 trajectories
range from 246 m to 6.95 km. 6) Three types of SLAM algorithms are
comprehensively evaluated on it. In total, the dataset covers around 17.6 km,
85 minutes and 50 GB, and it is accessible from this link:
https://github.com/junzhang2016/NTU4DRadLM
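Since the dataset provides fine-tuned ground-truth odometry (with RTK GPS among
the sensors), a standard way to evaluate a radar/thermal/inertial SLAM estimate
against it is the absolute trajectory error (ATE). The snippet below is a
minimal sketch of that metric, not the authors' evaluation code; the TUM-style
"timestamp x y z qx qy qz qw" pose files and the file names are assumptions.

```python
import numpy as np

def associate(est_t, gt_t, max_dt=0.02):
    """Match each estimated timestamp to the nearest ground-truth timestamp."""
    pairs = []
    for i, t in enumerate(est_t):
        j = int(np.argmin(np.abs(gt_t - t)))
        if abs(gt_t[j] - t) <= max_dt:
            pairs.append((i, j))
    return pairs

def ate_rmse(est_xyz, gt_xyz):
    """Rigidly align the estimate to ground truth (Kabsch/Umeyama, no scale)
    and return the translational RMSE in metres."""
    mu_e, mu_g = est_xyz.mean(0), gt_xyz.mean(0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    U, _, Vt = np.linalg.svd(G.T @ E)                 # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                    # rotation: estimate -> ground truth
    t = mu_g - R @ mu_e
    aligned = est_xyz @ R.T + t
    return float(np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1))))

if __name__ == "__main__":
    # Hypothetical file names; each row: timestamp x y z qx qy qz qw
    est = np.loadtxt("radar_slam_estimate.txt")
    gt = np.loadtxt("rtk_ground_truth.txt")
    idx_e, idx_g = map(list, zip(*associate(est[:, 0], gt[:, 0])))
    print("ATE RMSE [m]:", ate_rmse(est[idx_e, 1:4], gt[idx_g, 1:4]))
```

For trajectories ranging from a few hundred metres to several kilometres, the
relative pose error (RPE) over fixed path lengths is usually reported alongside
ATE, since it separates local drift from globally accumulated error.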
Related papers
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- DIDLM: A SLAM Dataset for Difficult Scenarios Featuring Infrared, Depth Cameras, LIDAR, 4D Radar, and Others under Adverse Weather, Low Light Conditions, and Rough Roads [20.600516423425688]
We introduce a multi-sensor dataset covering challenging scenarios such as snowy weather, rainy weather, nighttime conditions, speed bumps, and rough terrains.
The dataset includes rarely utilized sensors for extreme conditions, such as 4D millimeter-wave radar, infrared cameras, and depth cameras, alongside 3D LiDAR, RGB cameras, GPS, and IMU.
It supports both autonomous driving and ground robot applications and provides reliable GPS/INS ground truth data, covering structured and semi-structured terrains.
arXiv Detail & Related papers (2024-04-15T09:49:33Z)
- Human Detection from 4D Radar Data in Low-Visibility Field Conditions [17.1888913327586]
Modern 4D imaging radars provide target responses across the range, vertical angle, horizontal angle and Doppler velocity dimensions.
We propose TMVA4D, a CNN architecture that leverages this 4D radar modality for semantic segmentation.
Using TMVA4D on this dataset, we achieve an mIoU score of 78.2% and an mDice score of 86.1%, evaluated on the two classes background and person.
arXiv Detail & Related papers (2024-04-08T08:53:54Z)
- Dual Radar: A Multi-modal Dataset with Dual 4D Radar for Autonomous Driving [22.633794566422687]
We introduce a novel large-scale multi-modal dataset featuring, for the first time, two types of 4D radars captured simultaneously.
Our dataset consists of 151 consecutive series, most of which last 20 seconds, and contains 10,007 meticulously synchronized and annotated frames in total.
We experimentally validate our dataset, providing valuable results for studying different types of 4D radars.
arXiv Detail & Related papers (2023-10-11T15:41:52Z)
- ThermRad: A Multi-modal Dataset for Robust 3D Object Detection under Challenging Conditions [15.925365473140479]
We present a new multi-modal dataset called ThermRad, which includes a 3D LiDAR, a 4D radar, an RGB camera and a thermal camera.
We propose a new multi-modal fusion method called RTDF-RCNN, which leverages the complementary strengths of 4D radars and thermal cameras to boost object detection performance.
Our method achieves significant enhancements in detecting cars, pedestrians, and cyclists, with improvements of over 7.98%, 24.27%, and 27.15%, respectively.
arXiv Detail & Related papers (2023-08-20T04:34:30Z)
- Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-time Semantic Mapping of Dynamic Scenarios [70.70421502784598]
RDS-SLAM can build semantic maps at object level for dynamic scenarios in real time using only one commonly used Intel Core i7 CPU.
We evaluate RDS-SLAM in TUM RGB-D dataset, and experimental results show that RDS-SLAM can run with 30.3 ms per frame in dynamic scenarios.
arXiv Detail & Related papers (2022-10-10T11:03:32Z)
- K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions [9.705678194028895]
KAIST-Radar is a novel large-scale object detection dataset and benchmark.
It contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions.
We provide auxiliary measurements from carefully calibrated high-resolution LiDARs, surround stereo cameras, and RTK-GPS.
arXiv Detail & Related papers (2022-06-16T13:39:21Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- End-to-End Pseudo-LiDAR for Image-Based 3D Object Detection [62.34374949726333]
Pseudo-LiDAR (PL) has led to a drastic reduction in the accuracy gap between methods based on LiDAR sensors and those based on cheap stereo cameras.
PL combines state-of-the-art deep neural networks for 3D depth estimation with those for 3D object detection by converting 2D depth map outputs to 3D point cloud inputs (a minimal back-projection sketch follows this list).
We introduce a new framework based on differentiable Change of Representation (CoR) modules that allow the entire PL pipeline to be trained end-to-end.
arXiv Detail & Related papers (2020-04-07T02:18:38Z)
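As noted in the Pseudo-LiDAR entry above, the core operation is back-projecting
a predicted 2D depth map into a 3D point cloud that LiDAR-style 3D detectors can
consume. The following is a minimal sketch of that pinhole back-projection; the
intrinsic parameters and the dummy depth map are placeholders, not values from
any of the papers listed here.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """depth: (H, W) array in metres; returns (N, 3) points in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column / row indices
    z = depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # keep pixels with valid depth

# Usage with a dummy depth map; a real pipeline would feed in the output of a
# stereo or monocular depth-estimation network instead.
depth = np.full((4, 6), 10.0)                        # 4x6 image, everything 10 m away
cloud = depth_to_point_cloud(depth, fx=700.0, fy=700.0, cx=3.0, cy=2.0)
print(cloud.shape)                                   # (24, 3)
```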