RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model
- URL: http://arxiv.org/abs/2304.08447v1
- Date: Mon, 17 Apr 2023 17:07:35 GMT
- Title: RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model
- Authors: Yahia Dalbah, Jean Lahoud, Hisham Cholakkal
- Abstract summary: Radar-centric datasets receive comparatively little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
- Score: 13.214257841152033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of perception systems developed for autonomous driving
vehicles has seen significant improvements over the last few years. This
improvement was associated with the increasing use of LiDAR sensors and point
cloud data to facilitate the task of object detection and recognition in
autonomous driving. However, LiDAR and camera systems show deteriorating
performance when used in unfavorable conditions such as dusty and rainy weather.
Radars, on the other hand, operate at relatively longer wavelengths, which allows
for much more robust measurements in these conditions. Despite that,
radar-centric datasets receive little attention in the development of
deep learning techniques for radar perception. In this work, we consider the
radar object detection problem, in which radar frequency data is the only
input into the detection framework. We further investigate the challenges of
using radar-only data in deep learning models. We propose a transformer-based
model, named RadarFormer, that utilizes state-of-the-art developments in vision
deep learning. Our model also introduces a channel-chirp-time merging module
that reduces the size and complexity of our models by more than 10 times
without compromising accuracy. Comprehensive experiments on the CRUW radar
dataset demonstrate the advantages of the proposed method. Our RadarFormer
performs favorably against the state-of-the-art methods while being 2x faster
during inference and requiring only one-tenth of their model parameters. The
code associated with this paper is available at
https://github.com/YahiDar/RadarFormer.
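The channel-chirp-time merging idea described in the abstract can be pictured as folding the chirp and frame (time) axes of the raw RF input into the channel axis and compressing the result before the backbone, so the rest of the network operates on a single, much smaller channel dimension. The PyTorch sketch below only illustrates that idea under assumed tensor shapes and an assumed 1x1-convolution compressor; it is not the authors' implementation, which is available at the linked repository.

```python
import torch
import torch.nn as nn

class ChannelChirpTimeMerge(nn.Module):
    """Illustrative sketch only: fold the chirp and time (frame) axes into the
    channel axis and compress them with a 1x1 convolution. Tensor layout and
    layer choices are assumptions, not the RadarFormer implementation."""
    def __init__(self, in_channels: int, chirps: int, frames: int, out_channels: int):
        super().__init__()
        self.merge = nn.Conv2d(in_channels * chirps * frames, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, chirps, frames, range, azimuth)
        b, c, k, t, r, a = x.shape
        x = x.reshape(b, c * k * t, r, a)  # merge channel, chirp, and time axes
        return self.merge(x)               # (batch, out_channels, range, azimuth)

# Example: 2 RF channels, 4 chirps, 16 frames on a 128x128 range-azimuth map
x = torch.randn(1, 2, 4, 16, 128, 128)
merged = ChannelChirpTimeMerge(2, 4, 16, out_channels=32)(x)
print(merged.shape)  # torch.Size([1, 32, 128, 128])
```

Under this reading, the downstream network processes one compact feature map rather than a full chirp-time stack, which is consistent with the reduction in model size and inference time reported in the abstract.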
Related papers
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Bootstrapping Autonomous Driving Radars with Self-Supervised Learning [13.13679517730015]
Training radar models is hindered by the cost and difficulty of annotating large-scale radar data.
We propose a self-supervised learning framework to leverage the large amount of unlabeled radar data to pre-train radar-only embeddings for self-driving perception tasks.
We demonstrate that, when used for downstream object detection, the proposed self-supervision framework improves the accuracy of state-of-the-art supervised baselines by 5.8% in mAP.
arXiv Detail & Related papers (2023-12-07T18:38:39Z)
- TransRadar: Adaptive-Directional Transformer for Real-Time Multi-View Radar Semantic Segmentation [21.72892413572166]
We propose a novel approach to the semantic segmentation of radar scenes using a multi-input fusion of radar data.
Our method, TransRadar, outperforms state-of-the-art methods on the CARRADA and RADIal datasets.
arXiv Detail & Related papers (2023-10-03T17:59:05Z)
- Timely Fusion of Surround Radar/Lidar for Object Detection in Autonomous Driving Systems [13.998883144668941]
Fusing Radar and Lidar sensor data can fully utilize their complementary advantages and provide a more accurate reconstruction of the surroundings.
Existing Radar/Lidar fusion methods have to work at the low frequency of surround Radar.
This paper develops techniques to fuse surround Radar/Lidar at a working frequency limited only by the faster surround Lidar.
arXiv Detail & Related papers (2023-09-09T14:22:12Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Radars for Autonomous Driving: A Review of Deep Learning Methods and Challenges [0.021665899581403605]
Radar is a key component of the suite of perception sensors used for autonomous vehicles.
Radar data is characterized by low resolution, sparsity, clutter, high uncertainty, and a lack of good datasets.
Current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data.
arXiv Detail & Related papers (2023-06-15T17:37:52Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal (see the sketch after this entry).
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
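As a generic illustration of the complex-valued convolution idea (not the architecture of the cited paper), a complex convolution can be assembled from two real convolutions, so the real and imaginary parts of a radar spectrum are processed jointly and phase information survives the layer:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Toy complex-valued convolution built from two real convolutions.
    For input x = xr + i*xi and weights W = Wr + i*Wi:
        y = (Wr*xr - Wi*xi) + i*(Wr*xi + Wi*xr)
    Generic illustration only, not the cited paper's network."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, xr: torch.Tensor, xi: torch.Tensor):
        yr = self.conv_r(xr) - self.conv_i(xi)
        yi = self.conv_r(xi) + self.conv_i(xr)
        return yr, yi  # phase is carried by the (real, imaginary) pair

# Example on a synthetic complex-valued range-Doppler map
xr, xi = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
yr, yi = ComplexConv2d(1, 8)(xr, xi)
print(yr.shape, yi.shape)  # torch.Size([1, 8, 64, 64]) each
```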
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.