Radar Camera Fusion via Representation Learning in Autonomous Driving
- URL: http://arxiv.org/abs/2103.07825v1
- Date: Sun, 14 Mar 2021 01:32:03 GMT
- Title: Radar Camera Fusion via Representation Learning in Autonomous Driving
- Authors: Xu Dong, Binnan Zhuang, Yunxiang Mao, Langechuan Liu
- Abstract summary: Key to successful radar-camera fusion is accurate data association.
Traditional rule-based association methods are susceptible to performance degradation in challenging scenarios and failure in corner cases.
We propose to address radar-camera association via deep representation learning, to explore feature-level interaction and global reasoning.
- Score: 4.278336455989584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Radars and cameras are mature, cost-effective, and robust sensors and have
been widely used in the perception stack of mass-produced autonomous driving
systems. Due to their complementary properties, outputs from radar detection
(radar pins) and camera perception (2D bounding boxes) are usually fused to
generate the best perception results. The key to successful radar-camera fusion
is accurate data association. The challenges in radar-camera association can be
attributed to the complexity of driving scenes, the noisy and sparse nature of
radar measurements, and the depth ambiguity from 2D bounding boxes. Traditional
rule-based association methods are susceptible to performance degradation in
challenging scenarios and failure in corner cases. In this study, we propose to
address radar-camera association via deep representation learning, to explore
feature-level interaction and global reasoning. Concretely, we design a loss
sampling mechanism and an innovative ordinal loss to overcome the difficulty of
imperfect labeling and to enforce critical human reasoning. Despite being
trained with noisy labels generated by a rule-based algorithm, our proposed
method achieves a performance of 92.2% F1 score, which is 11.6% higher than the
rule-based teacher. Moreover, this data-driven method also lends itself to
continuous improvement via corner case mining.
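To make the association task concrete, below is a minimal, hypothetical sketch of a learned radar-camera association model: each radar pin and each 2D box is embedded, every pin-box pair is scored by a small MLP, and each pin selects a box via a softmax over candidates supervised by noisy rule-based assignments. The feature choices, network sizes, and the plain cross-entropy objective are illustrative assumptions standing in for the paper's loss sampling mechanism and ordinal loss.

```python
# Minimal sketch of learned radar-camera association (illustrative, not the
# authors' implementation). Every (radar pin, 2D box) pair is scored by a small
# MLP and each pin picks a box via a softmax over candidates, supervised by
# noisy rule-based assignments. Feature choices and the loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AssociationNet(nn.Module):
    def __init__(self, pin_dim=4, box_dim=4, hidden=64):
        super().__init__()
        self.pin_enc = nn.Sequential(nn.Linear(pin_dim, hidden), nn.ReLU())
        self.box_enc = nn.Sequential(nn.Linear(box_dim, hidden), nn.ReLU())
        self.score = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, pins, boxes):
        # pins:  (P, pin_dim)  e.g. range, azimuth, radial velocity, RCS
        # boxes: (B, box_dim)  e.g. normalized cx, cy, w, h of 2D detections
        p = self.pin_enc(pins)                               # (P, H)
        b = self.box_enc(boxes)                              # (B, H)
        pairs = torch.cat([p.unsqueeze(1).expand(-1, b.size(0), -1),
                           b.unsqueeze(0).expand(p.size(0), -1, -1)], dim=-1)
        return self.score(pairs).squeeze(-1)                 # (P, B) association logits

net = AssociationNet()
pins, boxes = torch.randn(6, 4), torch.randn(3, 4)
logits = net(pins, boxes)                                    # (6, 3)
teacher_assignment = torch.randint(0, 3, (6,))               # noisy rule-based labels
loss = F.cross_entropy(logits, teacher_assignment)           # stand-in for the ordinal loss
loss.backward()
```

Note that a per-pin softmax in this sketch assumes every pin matches exactly one box; handling unmatched pins would require an explicit background class or a gating step.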
Related papers
- Bootstrapping Autonomous Driving Radars with Self-Supervised Learning [13.13679517730015]
Training radar models is hindered by the cost and difficulty of annotating large-scale radar data.
We propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks.
When used for downstream object detection, we demonstrate that the proposed self-supervision framework can improve the accuracy of state-of-the-art supervised baselines by 5.8% in mAP.
arXiv Detail & Related papers (2023-12-07T18:38:39Z)
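The radar-only pre-training described in the entry above could, for example, use a contrastive objective over two augmented views of the same unlabeled radar frame; this is an assumption for illustration rather than the paper's exact framework, and the encoder, augmentations, and hyperparameters below are placeholders.

```python
# Illustrative sketch of self-supervised pre-training of radar-only embeddings.
# A contrastive (InfoNCE-style) objective over two augmented views of the same
# unlabeled radar point cloud is assumed here; the cited paper's framework may
# differ. Encoder, augmentations, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(4, 128), nn.ReLU(), nn.Linear(128, 64))

def embed(points):
    # points: (N, 4) radar detections (x, y, doppler, RCS); mean-pool to one vector
    return F.normalize(encoder(points).mean(dim=0, keepdim=True), dim=-1)

def augment(points):
    # Placeholder augmentation: random point dropout plus coordinate jitter
    keep = torch.rand(points.size(0)) > 0.2
    return points[keep] + 0.05 * torch.randn_like(points[keep])

frames = [torch.randn(50, 4) for _ in range(8)]        # toy batch of unlabeled radar frames
z1 = torch.cat([embed(augment(f)) for f in frames])    # (8, 64)
z2 = torch.cat([embed(augment(f)) for f in frames])    # (8, 64)

logits = z1 @ z2.t() / 0.1                             # cosine similarities / temperature
targets = torch.arange(len(frames))                    # matching views are the positives
loss = F.cross_entropy(logits, targets)                # InfoNCE-style contrastive loss
loss.backward()
```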
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
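A minimal sketch of the query-based fusion idea summarized above: a grid of learnable BEV queries cross-attends to flattened radar spectrum features, producing a BEV feature map that downstream detection heads could consume. The dimensions and the single attention layer are illustrative assumptions, not the EchoFusion implementation.

```python
# Illustrative sketch of BEV-query fusion with radar spectrum features (not the
# EchoFusion code). A grid of learnable BEV queries cross-attends to flattened
# radar spectrum features; image features could be fused with a second,
# analogous cross-attention block.
import torch
import torch.nn as nn

d_model = 128
bev_h, bev_w = 16, 16

bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, d_model))   # learnable BEV grid
radar_proj = nn.Linear(64, d_model)                                # lift spectrum features to d_model
cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)

# Fake radar range-azimuth spectrum features: (batch, range_bins * azimuth_bins, feat)
radar_spectrum = torch.randn(1, 64 * 32, 64)

kv = radar_proj(radar_spectrum)                                    # (1, R*A, d_model)
q = bev_queries.unsqueeze(0).expand(1, -1, -1)                     # (1, H*W, d_model)
bev_features, _ = cross_attn(q, kv, kv)                            # (1, H*W, d_model)
bev_features = bev_features.reshape(1, bev_h, bev_w, d_model)      # BEV map for detection heads
print(bev_features.shape)
```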
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-doppler spectrum and images which are integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
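One plausible reading of the point-wise fusion described above, sketched under assumed shapes: for every radar point, a feature is gathered from a range-Doppler feature map at its (range, Doppler) bin and from an image feature map at its projected pixel, and the two are concatenated into a per-point multi-modal representation.

```python
# Rough sketch of point-wise radar-optical feature fusion (illustrative only).
# For each radar point, a feature is gathered from the range-Doppler spectrum at
# its (range, doppler) bin and from an image feature map at its projected pixel,
# then the two are concatenated into a multi-modal per-point representation.
import torch

num_points = 100
rd_feat = torch.randn(32, 128, 64)      # (C, range_bins, doppler_bins) radar feature map
img_feat = torch.randn(32, 60, 80)      # (C, H, W) image feature map, e.g. from a CNN backbone

# Per-point indices: assumed to come from radar signal processing / camera projection
rng = torch.randint(0, 128, (num_points,))
dop = torch.randint(0, 64, (num_points,))
px_y = torch.randint(0, 60, (num_points,))
px_x = torch.randint(0, 80, (num_points,))

radar_per_point = rd_feat[:, rng, dop].t()                      # (num_points, 32)
image_per_point = img_feat[:, px_y, px_x].t()                   # (num_points, 32)
fused = torch.cat([radar_per_point, image_per_point], dim=1)    # (num_points, 64)
print(fused.shape)                                              # per-point features for a detection head
```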
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning-based method that applies convolutions to radar detection point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
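A toy illustration of distance-dependent clustering as described above (one plausible interpretation, not the paper's exact pre-processing): because radar returns become sparser with range, points are grouped into range bands and DBSCAN is run with a larger neighborhood radius in farther bands. The band boundaries and eps values are arbitrary placeholders.

```python
# Illustrative sketch of distance-dependent clustering for radar point clouds.
# Points are split into range bands, and DBSCAN uses a larger neighborhood
# radius (eps) for farther, sparser bands. Band limits and eps are placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

def distance_dependent_clustering(points, bands=((0, 25, 1.0), (25, 50, 2.0), (50, 200, 4.0))):
    # points: (N, 2) array of x, y radar detections in meters
    labels = -np.ones(len(points), dtype=int)
    next_id = 0
    ranges = np.linalg.norm(points, axis=1)
    for lo, hi, eps in bands:                       # (min range, max range, DBSCAN eps)
        mask = (ranges >= lo) & (ranges < hi)
        if mask.sum() == 0:
            continue
        band_labels = DBSCAN(eps=eps, min_samples=2).fit_predict(points[mask])
        band_labels[band_labels >= 0] += next_id    # keep cluster ids globally unique
        labels[mask] = band_labels
        next_id = labels.max() + 1
    return labels                                   # -1 marks noise points

pts = np.random.rand(200, 2) * 100.0
print(distance_dependent_clustering(pts)[:20])
```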
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric datasets receive relatively little attention in the development of deep learning techniques for radar perception.
We propose a transformers-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Exploiting Temporal Relations on Radar Perception for Autonomous Driving [26.736501544682294]
We exploit the temporal information from successive ego-centric bird's-eye-view radar image frames for radar object recognition.
We propose a temporal relational layer to explicitly model the relations between objects within successive radar images.
arXiv Detail & Related papers (2022-04-03T23:52:25Z)
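A minimal sketch of what a temporal relational layer could look like, assuming a standard multi-head attention block: object features from the current radar frame attend to object features from the previous frame, with a residual connection preserving the original features. This is illustrative and not the cited paper's exact layer.

```python
# Illustrative sketch of a temporal relational layer: object features from the
# current radar frame attend to object features from the previous frame, so
# relations across successive frames are modeled explicitly. Using a standard
# multi-head attention block here is an assumption for illustration.
import torch
import torch.nn as nn

d_model = 64
temporal_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
norm = nn.LayerNorm(d_model)

curr_objs = torch.randn(1, 5, d_model)   # (batch, objects in frame t, feature)
prev_objs = torch.randn(1, 7, d_model)   # (batch, objects in frame t-1, feature)

# Each current object aggregates evidence from previous-frame objects, then a
# residual connection and normalization keep the original per-object features.
related, _ = temporal_attn(curr_objs, prev_objs, prev_objs)
curr_objs = norm(curr_objs + related)
print(curr_objs.shape)                   # (1, 5, 64) temporally enriched object features
```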
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end, deep-learning-driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
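The second, clustering-based variant could proceed roughly as sketched below: each radar detection point first receives a semantic class (random placeholders here), and points are then clustered into instances separately within each class so that nearby points of different classes are not merged. This is an illustrative assumption, not the paper's exact method.

```python
# Illustrative sketch of clustering-based instance segmentation on radar points.
# Per-point semantic classes (placeholders here, normally predicted by a
# semantic segmentation network) restrict clustering to points of the same
# class, yielding class-consistent instances. Not the cited paper's exact method.
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(120, 2) * 50.0            # (N, 2) radar detections in meters
semantic = np.random.randint(0, 3, size=120)      # per-point class ids (placeholder)

instance = -np.ones(120, dtype=int)
next_id = 0
for cls in np.unique(semantic):
    mask = semantic == cls
    labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(points[mask])
    labels[labels >= 0] += next_id                # keep instance ids unique across classes
    instance[mask] = labels
    next_id = max(next_id, instance.max() + 1)
print(instance[:20])                              # -1 marks unassigned / noise points
```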
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
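The core CVCNN building block can be written as two real-valued convolutions applied to the real and imaginary parts, following (a + ib)(w_r + i w_i) = (a w_r - b w_i) + i(a w_i + b w_r); the sketch below shows this generic construction, not the cited paper's architecture.

```python
# Minimal sketch of a complex-valued convolution built from two real-valued
# convolutions, following (a + ib)(wr + i wi) = (a*wr - b*wi) + i(a*wi + b*wr).
# This illustrates the generic CVCNN building block, not the exact architecture
# used in the cited paper.
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, real, imag):
        # real, imag: (batch, channels, samples) parts of a complex radar signal
        out_real = self.conv_r(real) - self.conv_i(imag)
        out_imag = self.conv_i(real) + self.conv_r(imag)
        return out_real, out_imag      # phase information is carried through explicitly

conv = ComplexConv1d(1, 8, kernel_size=5, padding=2)
real = torch.randn(2, 1, 1024)         # in-phase (I) samples of a radar snapshot
imag = torch.randn(2, 1, 1024)         # quadrature (Q) samples
re, im = conv(real, imag)
print(re.shape, im.shape)              # (2, 8, 1024) each
```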
- Automotive Radar Interference Mitigation with Unfolded Robust PCA based on Residual Overcomplete Auto-Encoder Blocks [88.46770122522697]
In autonomous driving, radar systems play an important role in detecting targets such as other vehicles on the road.
Deep learning methods for automotive radar interference mitigation can successfully estimate the amplitude of targets, but fail to recover the phase of the respective targets.
We propose an efficient and effective technique that is able to estimate both amplitude and phase in the presence of interference.
arXiv Detail & Related papers (2020-10-14T09:41:06Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
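A rough sketch of the two fusion stages named in the RadarNet entry above, under assumed shapes and modules: voxel-based early fusion concatenates LiDAR and radar BEV feature maps channel-wise before a shared backbone, while attention-based late fusion lets each detected object weight nearby radar returns when aggregating evidence. This is an illustrative reading, not the RadarNet implementation.

```python
# Illustrative sketch of voxel-based early fusion and attention-based late
# fusion. All feature shapes and the backbone stand-in are placeholders.
import torch
import torch.nn as nn

# --- Early fusion: concatenate BEV voxel features from both sensors ---
lidar_bev = torch.randn(1, 32, 128, 128)     # (batch, C, H, W) voxelized LiDAR features
radar_bev = torch.randn(1, 8, 128, 128)      # (batch, C, H, W) voxelized radar features
early = torch.cat([lidar_bev, radar_bev], dim=1)
backbone = nn.Conv2d(40, 64, kernel_size=3, padding=1)   # stand-in for the detection backbone
bev_features = backbone(early)

# --- Late fusion: per-detection attention over candidate radar returns ---
det_feat = torch.randn(1, 64)                # feature of one detected object
radar_feats = torch.randn(5, 64)             # features of nearby radar returns
scores = torch.softmax(radar_feats @ det_feat.t() / 8.0, dim=0)   # attention weights (5, 1)
fused = (scores * radar_feats).sum(dim=0)    # radar evidence aggregated for this object
print(bev_features.shape, fused.shape)
```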
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.