RADIATE: A Radar Dataset for Automotive Perception in Bad Weather
- URL: http://arxiv.org/abs/2010.09076v3
- Date: Mon, 5 Apr 2021 14:00:22 GMT
- Title: RADIATE: A Radar Dataset for Automotive Perception in Bad Weather
- Authors: Marcel Sheeny, Emanuele De Pellegrin, Saptarshi Mukherjee, Alireza
Ahrabian, Sen Wang, Andrew Wallace
- Abstract summary: RADIATE includes 3 hours of annotated radar images with more than 200K labelled road actors in total.
It covers 8 different categories of actors in a variety of weather conditions.
RADIATE also has stereo images, 32-channel LiDAR and GPS data, directed at other applications.
- Score: 13.084162751635239
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Datasets for autonomous cars are essential for the development and
benchmarking of perception systems. However, most existing datasets are
captured with camera and LiDAR sensors in good weather conditions. In this
paper, we present the RAdar Dataset In Adverse weaThEr (RADIATE), aiming to
facilitate research on object detection, tracking and scene understanding using
radar sensing for safe autonomous driving. RADIATE includes 3 hours of
annotated radar images with more than 200K labelled road actors in total, on
average about 4.6 instances per radar image. It covers 8 different categories
of actors in a variety of weather conditions (e.g., sun, night, rain, fog and
snow) and driving scenarios (e.g., parked, urban, motorway and suburban),
representing different levels of challenge. To the best of our knowledge, this
is the first public radar dataset which provides high-resolution radar images
on public roads with a large amount of road actors labelled. The data collected
in adverse weather, e.g., fog and snowfall, is unique. Some baseline results of
radar-based object detection and recognition are given to show that the use of
radar data is promising for automotive applications in bad weather, where
vision and LiDAR can fail. RADIATE also has stereo images, 32-channel LiDAR and
GPS data, directed at other applications such as sensor fusion, localisation
and mapping. The public dataset can be accessed at
http://pro.hw.ac.uk/radiate/.
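The abstract's headline figures (3 hours of radar data, more than 200K labelled road actors, about 4.6 instances per radar image) are simple aggregates over the per-frame annotations. The snippet below is a minimal sketch of how such statistics could be computed; the directory layout, the annotations.json file name, and the per-frame record keys are assumptions made for illustration only and do not describe the official RADIATE development kit.

```python
import json
from pathlib import Path

# Hypothetical layout: one folder per driving sequence, each containing an
# annotations.json that maps a frame id to a list of labelled actors.
# This does NOT mirror the official RADIATE SDK; it only illustrates how
# per-image label density is obtained from the annotation files.
DATASET_ROOT = Path("radiate")   # assumed local path to the downloaded data
categories = set()               # distinct actor classes encountered

total_frames = 0
total_instances = 0

for ann_file in DATASET_ROOT.glob("*/annotations.json"):
    with ann_file.open() as f:
        frames = json.load(f)    # assumed structure: {frame_id: [actor, ...]}
    for frame_id, actors in frames.items():
        total_frames += 1
        total_instances += len(actors)
        for actor in actors:
            categories.add(actor["class_name"])   # assumed key name

if total_frames:
    print(f"frames:              {total_frames}")
    print(f"labelled actors:     {total_instances}")
    print(f"instances per image: {total_instances / total_frames:.1f}")
    print(f"actor categories:    {sorted(categories)}")
```

If the dataset distribution ships its own development kit, that should be used in practice; the point here is only that the per-image density quoted in the abstract is a straightforward pass over the annotations.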
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Vision meets mmWave Radar: 3D Object Perception Benchmark for Autonomous
Driving [30.456314610767667]
We introduce the CRUW3D dataset, including 66K synchronized and well-calibrated camera, radar, and LiDAR frames.
This format enables machine learning models to produce more reliable perception results after fusing information or features from the camera and radar.
arXiv Detail & Related papers (2023-11-17T01:07:37Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take the corresponding spectrum features from radar to fuse with other sensors (a loose illustrative sketch of this BEV-query lookup is given after the related-papers list).
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection
Model [13.214257841152033]
Radar-centric datasets have received little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that leverages state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z) - Raw High-Definition Radar for Multi-Task Learning [0.0]
We propose a novel HD radar sensing model, FFT-RadNet, that eliminates the overhead of computing the Range-Azimuth-Doppler 3D tensor.
FFT-RadNet is trained both to detect vehicles and to segment free driving space.
On both tasks, it competes with the most recent radar-based models while requiring less compute and memory.
arXiv Detail & Related papers (2021-12-20T16:15:26Z) - R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of
Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z) - Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic
Annotator via Coordinate Alignment [38.24705460170415]
We propose a new dataset, named CRUW, with a systematic annotator and performance evaluation system.
CRUW aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.
To the best of our knowledge, CRUW is the first public large-scale dataset with a systematic annotation and evaluation system.
arXiv Detail & Related papers (2021-05-11T17:13:45Z) - Radar Artifact Labeling Framework (RALF): Method for Plausible Radar
Detections in Datasets [2.5899040911480187]
We propose a cross-sensor Radar Artifact Labeling Framework (RALF) for labeling sparse radar point clouds.
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
We validate the results by evaluating error metrics on a semi-manually labelled ground-truth dataset of $3.28 \cdot 10^6$ points.
arXiv Detail & Related papers (2020-12-03T15:11:31Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar
Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for a fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
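As noted in the EchoFusion entry above, several of the listed works query radar features directly in bird's-eye-view (BEV) coordinates rather than first converting radar returns into point clouds. The sketch below is only a loose, assumption-laden illustration of that lookup-and-fuse step: the grid size, the 0.5 m cell resolution, and the concatenation-based fusion are invented for the example, whereas the published methods learn the radar-to-BEV association end to end.

```python
import numpy as np

def polar_index(x, y, range_res, az_bins):
    """Map a BEV location (metres) to (range_bin, azimuth_bin) indices of a
    radar range-azimuth spectrum. Resolutions here are assumptions."""
    rng = np.hypot(x, y)
    az = np.arctan2(y, x)                                 # angle in [-pi, pi]
    r_bin = int(rng / range_res)
    a_bin = int((az + np.pi) / (2 * np.pi) * az_bins)
    return r_bin, min(a_bin, az_bins - 1)

def fuse_bev(radar_spectrum, cam_bev, range_res=0.5):
    """Illustrative BEV-query fusion: for every BEV cell, look up the radar
    spectrum feature at the matching (range, azimuth) bin and concatenate it
    with a camera feature already expressed on the same BEV grid.
    radar_spectrum: (R, A, Cr) range-azimuth features (assumed shape)
    cam_bev:        (H, W, Cc) camera features in BEV (assumed shape)
    """
    R, A, Cr = radar_spectrum.shape
    H, W, Cc = cam_bev.shape
    fused = np.zeros((H, W, Cr + Cc), dtype=np.float32)
    for i in range(H):
        for j in range(W):
            # BEV grid assumed centred on the radar, 0.5 m per cell
            x, y = (i - H / 2) * 0.5, (j - W / 2) * 0.5
            r_bin, a_bin = polar_index(x, y, range_res, A)
            radar_feat = radar_spectrum[min(r_bin, R - 1), a_bin]
            fused[i, j] = np.concatenate([radar_feat, cam_bev[i, j]])
    return fused
```

In the actual models the correspondence between BEV queries and spectrum features is learned rather than computed by this fixed polar lookup; the sketch only makes the underlying geometric relationship concrete.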