CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations
- URL: http://arxiv.org/abs/2005.01456v6
- Date: Wed, 26 May 2021 13:52:09 GMT
- Title: CARRADA Dataset: Camera and Automotive Radar with Range-Angle-Doppler Annotations
- Authors: A. Ouaknine, A. Newson, J. Rebut, F. Tupin and P. Pérez
- Abstract summary: We introduce CARRADA, a dataset of synchronized camera and radar recordings with range-angle-Doppler annotations.
We also present a semi-automatic annotation approach, which was used to annotate the dataset, and a radar semantic segmentation baseline.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High quality perception is essential for autonomous driving (AD) systems. To
reach the accuracy and robustness that are required by such systems, several
types of sensors must be combined. Currently, mostly cameras and laser scanners
(lidar) are deployed to build a representation of the world around the vehicle.
While radar sensors have been used for a long time in the automotive industry,
they are still under-used for AD despite their appealing characteristics
(notably, their ability to measure the relative speed of obstacles and to
operate even in adverse weather conditions). To a large extent, this situation
is due to the relative lack of automotive datasets with real radar signals that
are both raw and annotated. In this work, we introduce CARRADA, a dataset of
synchronized camera and radar recordings with range-angle-Doppler annotations.
We also present a semi-automatic annotation approach, which was used to
annotate the dataset, and a radar semantic segmentation baseline, which we
evaluate on several metrics. Both our code and dataset are available online.
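The abstract mentions evaluating the segmentation baseline on several metrics; per-class intersection-over-union (IoU) is a standard one for semantic segmentation. A minimal sketch, using plain nested lists of class ids (the function name and representation are illustrative assumptions, not taken from the paper's code):

```python
def per_class_iou(pred, target, num_classes):
    """Per-class intersection-over-union for two 2D label grids.

    pred, target: lists of lists of int class ids, same shape.
    Returns {class_id: iou}, skipping classes absent from both grids.
    """
    inter = [0] * num_classes
    union = [0] * num_classes
    for row_p, row_t in zip(pred, target):
        for p, t in zip(row_p, row_t):
            if p == t:
                # Pixel counted once in both intersection and union.
                inter[p] += 1
                union[p] += 1
            else:
                # Mismatch: pixel enlarges the union of both classes.
                union[p] += 1
                union[t] += 1
    return {c: inter[c] / union[c] for c in range(num_classes) if union[c] > 0}
```

The same routine applies unchanged to range-Doppler or range-angle annotation masks, since both are just 2D grids of class labels.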
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review focuses on the different radar data representations utilized in autonomous driving systems.
We describe the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Exploiting Temporal Relations on Radar Perception for Autonomous Driving [26.736501544682294]
We exploit the temporal information from successive ego-centric bird's-eye-view radar image frames for radar object recognition.
We propose a temporal relational layer to explicitly model the relations between objects within successive radar images.
arXiv Detail & Related papers (2022-04-03T23:52:25Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end, deep-learning-driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
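The clustering-based variant can be illustrated with a toy sketch: detection points are merged into one instance when they share a semantic label and lie within a distance threshold of each other. The union-find scheme and the `eps` threshold below are illustrative assumptions, not the paper's exact method:

```python
import math

def cluster_detections(points, labels, eps=1.5):
    """Group radar detection points into instances.

    points: list of coordinate tuples (e.g. (x, y) in the radar plane).
    labels: one semantic class id per point.
    Points with the same label within `eps` of each other are merged
    transitively (union-find). Returns one compact instance id per point.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:
                continue  # Never merge across semantic classes.
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)

    # Relabel cluster roots to compact instance ids 0, 1, 2, ...
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n)]
```

Real radar point clouds would also carry Doppler velocity per point, which could be added as an extra coordinate to sharpen the separation of nearby objects moving at different speeds.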
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic Annotator via Coordinate Alignment [38.24705460170415]
We propose a new dataset, named CRUW, with a systematic annotator and performance evaluation system.
CRUW aims to classify and localize the objects in 3D purely from radar's radio frequency (RF) images.
To the best of our knowledge, CRUW is the first public large-scale dataset with a systematic annotation and evaluation system.
arXiv Detail & Related papers (2021-05-11T17:13:45Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training and substantially improve the preservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of raw radar data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically.
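The "views" in question are 2D projections of the 3D range-angle-Doppler tensor. A minimal sketch of extracting the range-angle, range-Doppler and angle-Doppler views by max-aggregation over the remaining axis (the aggregation choice is an illustrative assumption; the paper's architectures learn from such views rather than computing segmentation this way):

```python
def rad_views(rad):
    """Collapse a range x angle x Doppler tensor (nested lists) into
    the three 2D views: range-angle, range-Doppler and angle-Doppler,
    each obtained by taking the max over the axis that is dropped.
    """
    R, A, D = len(rad), len(rad[0]), len(rad[0][0])
    # Range-angle: max over the Doppler axis.
    ra = [[max(rad[r][a]) for a in range(A)] for r in range(R)]
    # Range-Doppler: max over the angle axis.
    rd = [[max(rad[r][a][d] for a in range(A)) for d in range(D)]
          for r in range(R)]
    # Angle-Doppler: max over the range axis.
    ad = [[max(rad[r][a][d] for r in range(R)) for d in range(D)]
          for a in range(A)]
    return ra, rd, ad
```

In practice the tensor would be a NumPy or PyTorch array and the projections single `max`-reduction calls; nested lists are used here only to keep the sketch dependency-free.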
arXiv Detail & Related papers (2021-03-30T09:56:41Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.