Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic
Annotator via Coordinate Alignment
- URL: http://arxiv.org/abs/2105.05207v1
- Date: Tue, 11 May 2021 17:13:45 GMT
- Title: Rethinking of Radar's Role: A Camera-Radar Dataset and Systematic
Annotator via Coordinate Alignment
- Authors: Yizhou Wang, Gaoang Wang, Hung-Min Hsu, Hui Liu, Jenq-Neng Hwang
- Abstract summary: We propose a new dataset, named CRUW, with a systematic annotator and performance evaluation system.
The associated radar object detection (ROD) task aims to classify and localize objects in 3D purely from radar's radio frequency (RF) images.
To the best of our knowledge, CRUW is the first public large-scale dataset with a systematic annotation and evaluation system.
- Score: 38.24705460170415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radar has long been a common sensor on autonomous vehicles for obstacle
ranging and speed estimation. However, despite being robust to all weather
conditions, radar's capability has not been well exploited compared with
camera or LiDAR. Instead of just serving as a supplementary sensor, the rich
information hidden in radar's radio frequency signals can potentially provide
useful clues for more complicated tasks, such as object classification and
detection. In this paper, we propose a new dataset, named CRUW, with a
systematic annotator and performance evaluation system to address the radar
object detection (ROD) task, which aims to classify and localize the objects in
3D purely from radar's radio frequency (RF) images. To the best of our
knowledge, CRUW is the first public large-scale dataset with a systematic
annotation and evaluation system, which involves camera RGB images and radar RF
images, collected in various driving scenarios.
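As a rough illustration of the kind of camera-to-radar coordinate alignment such an annotator can rely on, the sketch below projects an object location estimated in the camera frame into the radar's range-azimuth coordinates. The extrinsic transform, axis conventions, and function names are assumptions for illustration, not the paper's actual annotation pipeline.

```python
import numpy as np

def camera_to_radar_range_azimuth(p_cam, R_cam_to_radar, t_cam_to_radar):
    """Project a 3D object position from the camera frame into radar
    range-azimuth coordinates (hypothetical extrinsics, illustrative only)."""
    # Rigid transform: camera frame -> radar frame.
    p_radar = R_cam_to_radar @ p_cam + t_cam_to_radar
    x, y, z = p_radar            # assume x: right, z: forward in the radar frame
    rng = np.hypot(x, z)         # radial distance on the ground plane
    azimuth = np.arctan2(x, z)   # angle from the radar boresight (radians)
    return rng, azimuth

# Example: an object 10 m ahead and 2 m to the right of the camera,
# with an (assumed) identity rotation and a small camera-radar offset.
p_cam = np.array([2.0, 0.0, 10.0])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.3])
print(camera_to_radar_range_azimuth(p_cam, R, t))
```

Once object labels are expressed in range-azimuth, they can be placed directly on the RF image grid for training and evaluation.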
Related papers
- A Resource Efficient Fusion Network for Object Detection in Bird's-Eye View using Camera and Raw Radar Data [7.2508100569856975]
We use the raw range-Doppler (RD) spectrum of radar data and independently process camera images.
We extract the corresponding features with our camera encoder-decoder architecture.
The resultant feature maps are fused with Range-Azimuth features, recovered from the RD spectrum input to perform object detection.
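A minimal sketch of this kind of feature-level fusion is given below: camera BEV features and range-azimuth radar features on a shared grid are concatenated and passed to a small detection head. The channel sizes, grid shape, and layer choices are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class SimpleBEVFusionHead(nn.Module):
    """Toy fusion head: concatenate camera BEV features with range-azimuth
    radar features on a shared grid, then predict a detection heatmap."""
    def __init__(self, cam_ch=64, radar_ch=32, num_classes=3):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_ch + radar_ch, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),  # per-cell class heatmap
        )

    def forward(self, cam_bev, radar_ra):
        # Both inputs are assumed to be aligned to the same BEV grid.
        return self.fuse(torch.cat([cam_bev, radar_ra], dim=1))

head = SimpleBEVFusionHead()
cam_bev = torch.randn(1, 64, 128, 128)    # camera encoder-decoder output
radar_ra = torch.randn(1, 32, 128, 128)   # features recovered from the RD spectrum
print(head(cam_bev, radar_ra).shape)      # torch.Size([1, 3, 128, 128])
```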
arXiv Detail & Related papers (2024-11-20T13:26:13Z) - Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
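To give a flavor of what "physics-informed synthesis of raw radar measurements" can mean, the sketch below queries an implicit reflectance field along a ray and accumulates the returned power into range bins with a radar-equation-style falloff. The field, the attenuation model, and all constants are illustrative assumptions, not the method of Radar Fields.

```python
import numpy as np

def render_range_profile(field_fn, origin, direction, num_bins=64,
                         max_range=50.0, samples_per_bin=4):
    """Rough sketch of physics-informed range-profile rendering: query an
    implicit reflectance field along a ray and bin the returned power."""
    bin_edges = np.linspace(0.0, max_range, num_bins + 1)
    profile = np.zeros(num_bins)
    for b in range(num_bins):
        # Sample a few depths inside this range bin.
        depths = np.linspace(bin_edges[b], bin_edges[b + 1], samples_per_bin)
        pts = origin[None, :] + depths[:, None] * direction[None, :]
        reflectance = field_fn(pts)                           # (samples_per_bin,)
        power = reflectance / np.maximum(depths, 1e-3) ** 2   # 1/r^2-style falloff
        profile[b] = power.sum()
    return profile

# Toy field: a single reflective blob 20 m ahead of the sensor.
field = lambda p: np.exp(-np.sum((p - np.array([0.0, 0.0, 20.0]))**2, axis=-1))
prof = render_range_profile(field, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(prof.argmax())  # strongest return falls in the bin around 20 m
```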
arXiv Detail & Related papers (2024-05-07T20:44:48Z) - Vision meets mmWave Radar: 3D Object Perception Benchmark for Autonomous
Driving [30.456314610767667]
We introduce the CRUW3D dataset, including 66K synchronized and well-calibrated camera, radar, and LiDAR frames.
This kind of format enables machine learning models to produce more reliable perception results after fusing the information or features from the camera and radar.
arXiv Detail & Related papers (2023-11-17T01:07:37Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
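The sketch below shows one simple way such query-based sampling could look: each BEV query position is converted to (range, azimuth) and used to bilinearly sample a radar spectrum feature map. Shapes, field of view, and normalization are assumptions, not EchoFusion's implementation.

```python
import torch
import torch.nn.functional as F

def sample_spectrum_features(spec_feat, query_xy, max_range=50.0,
                             azimuth_fov=(-1.0, 1.0)):
    """Toy BEV-query sampling: map each (x, y) query to (range, azimuth) and
    bilinearly sample a range-azimuth feature map."""
    # spec_feat: (1, C, R, A) with R range bins and A azimuth bins.
    x, y = query_xy[:, 0], query_xy[:, 1]          # y forward, x right (assumed)
    rng = torch.sqrt(x**2 + y**2)
    azi = torch.atan2(x, y)
    # Normalize to [-1, 1] for grid_sample: width axis = azimuth, height axis = range.
    azi_n = 2 * (azi - azimuth_fov[0]) / (azimuth_fov[1] - azimuth_fov[0]) - 1
    rng_n = 2 * rng / max_range - 1
    grid = torch.stack([azi_n, rng_n], dim=-1).view(1, -1, 1, 2)
    sampled = F.grid_sample(spec_feat, grid, align_corners=True)
    return sampled.squeeze(-1).squeeze(0).t()       # (num_queries, C)

spec_feat = torch.randn(1, 16, 128, 64)             # (1, C, range, azimuth)
queries = torch.tensor([[2.0, 10.0], [-5.0, 30.0]]) # BEV (x, y) in metres
print(sample_spectrum_features(spec_feat, queries).shape)  # torch.Size([2, 16])
```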
arXiv Detail & Related papers (2023-07-31T09:53:50Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training and substantially improve the conservation of phase information during interference removal.
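A complex-valued convolution is commonly built from two real convolutions, following (a + ib) * (w_r + i w_i) = (a w_r - b w_i) + i(a w_i + b w_r). The minimal block below shows that construction; it is a generic building block, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution from two real convolutions (bias omitted so the
    complex arithmetic stays exact)."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)

    def forward(self, x_real, x_imag):
        out_real = self.conv_r(x_real) - self.conv_i(x_imag)
        out_imag = self.conv_i(x_real) + self.conv_r(x_imag)
        return out_real, out_imag

layer = ComplexConv2d(1, 8)
real, imag = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
out_r, out_i = layer(real, imag)
print(out_r.shape, out_i.shape)  # torch.Size([1, 8, 64, 64]) twice
```

Keeping real and imaginary parts throughout the network is what preserves phase information, which a purely magnitude-based pipeline discards.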
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by
Camera-Radar Fused Object 3D Localization [30.42848269877982]
We propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm.
Our proposed RODNet takes a sequence of RF images as input to predict the likelihood of objects in the radar field of view (FoV).
With intensive experiments, our proposed cross-supervised RODNet achieves 86% average precision and 88% average recall for object detection.
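Cross-supervision of this kind turns camera-radar fused localizations into dense training targets on the RF grid. Below is a toy version of such confidence-map labels, with a Gaussian peak per object and one channel per class; the grid sizes, field of view, and Gaussian width are assumptions, not RODNet's actual settings.

```python
import numpy as np

def object_confidence_map(objects, num_range=128, num_azimuth=128,
                          max_range=25.0, azimuth_fov=(-1.0, 1.0), sigma=2.0):
    """Toy confidence-map labels: a Gaussian peak at each object's
    (range, azimuth) cell, one channel per class."""
    num_classes = 1 + max(cls for _, _, cls in objects)
    conf = np.zeros((num_classes, num_range, num_azimuth))
    rr, aa = np.meshgrid(np.arange(num_range), np.arange(num_azimuth), indexing="ij")
    for rng, azi, cls in objects:
        r_idx = rng / max_range * (num_range - 1)
        a_idx = (azi - azimuth_fov[0]) / (azimuth_fov[1] - azimuth_fov[0]) * (num_azimuth - 1)
        gauss = np.exp(-((rr - r_idx)**2 + (aa - a_idx)**2) / (2 * sigma**2))
        conf[cls] = np.maximum(conf[cls], gauss)
    return conf

# One pedestrian (class 0) at 8 m, slightly left; one car (class 1) at 15 m, centre.
labels = object_confidence_map([(8.0, -0.2, 0), (15.0, 0.0, 1)])
print(labels.shape)  # (2, 128, 128)
```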
arXiv Detail & Related papers (2021-02-09T22:01:55Z) - Radar Artifact Labeling Framework (RALF): Method for Plausible Radar
Detections in Datasets [2.5899040911480187]
We propose a cross sensor Radar Artifact Labeling Framework (RALF) for labeling sparse radar point clouds.
RALF provides plausibility labels for radar raw detections, distinguishing between artifacts and targets.
We validate the results by evaluating error metrics on a semi-manually labeled ground truth dataset of $3.28 \cdot 10^6$ points.
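A grossly simplified version of the cross-sensor idea behind such plausibility labels is sketched below: a radar detection is labeled a target if some LiDAR point lies nearby, otherwise an artifact. RALF's actual criteria are richer; the threshold and data here are purely illustrative.

```python
import numpy as np

def plausibility_labels(radar_pts, lidar_pts, max_dist=0.5):
    """Label each radar detection 'target' if any LiDAR point is within
    max_dist of it, else 'artifact' (simplified cross-sensor check)."""
    # Pairwise distances between radar detections and LiDAR points.
    d = np.linalg.norm(radar_pts[:, None, :] - lidar_pts[None, :, :], axis=-1)
    return np.where(d.min(axis=1) < max_dist, "target", "artifact")

radar = np.array([[10.0, 1.0, 0.5], [40.0, -3.0, 0.2]])        # hypothetical detections
lidar = np.array([[10.1, 1.1, 0.4], [10.2, 0.9, 0.6], [5.0, 0.0, 0.3]])
print(plausibility_labels(radar, lidar))  # ['target' 'artifact']
```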
arXiv Detail & Related papers (2020-12-03T15:11:31Z) - LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar
Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
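Voxel-based early fusion typically means rasterizing both point sources into a shared BEV grid and stacking them as input channels for a detector. The sketch below shows that idea only; the grid resolution, extent, and random data are assumptions, not RadarNet's configuration.

```python
import numpy as np

def bev_occupancy(points, grid_size=128, extent=50.0):
    """Rasterize (x, y) points into a BEV occupancy grid (illustrative
    resolution and extent)."""
    grid = np.zeros((grid_size, grid_size), dtype=np.float32)
    ij = ((points[:, :2] + extent) / (2 * extent) * (grid_size - 1)).astype(int)
    ij = ij[(ij >= 0).all(axis=1) & (ij < grid_size).all(axis=1)]
    grid[ij[:, 1], ij[:, 0]] = 1.0
    return grid

# Early fusion: stack LiDAR and radar BEV occupancy as input channels for a detector.
lidar_pts = np.random.default_rng(0).uniform(-50, 50, size=(2000, 3))
radar_pts = np.random.default_rng(1).uniform(-50, 50, size=(200, 3))
early_fused = np.stack([bev_occupancy(lidar_pts), bev_occupancy(radar_pts)])
print(early_fused.shape)  # (2, 128, 128)
```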
arXiv Detail & Related papers (2020-07-28T17:15:02Z)