MVFAN: Multi-View Feature Assisted Network for 4D Radar Object Detection
- URL: http://arxiv.org/abs/2310.16389v1
- Date: Wed, 25 Oct 2023 06:10:07 GMT
- Title: MVFAN: Multi-View Feature Assisted Network for 4D Radar Object Detection
- Authors: Qiao Yan, Yihan Wang
- Abstract summary: 4D radar is recognized for its resilience and cost-effectiveness under adverse weather conditions.
Unlike LiDAR and cameras, radar remains unimpaired by harsh weather conditions.
We propose an end-to-end, anchor-free, single-stage framework for 4D-radar-based 3D object detection for autonomous vehicles.
- Score: 15.925365473140479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 4D radar is recognized for its resilience and cost-effectiveness under
adverse weather conditions, thus playing a pivotal role in autonomous driving.
While cameras and LiDAR are typically the primary sensors used in perception
modules for autonomous vehicles, radar serves as a valuable supplementary
sensor. Unlike LiDAR and cameras, radar remains unimpaired by harsh weather
conditions, thereby offering a dependable alternative in challenging
environments. Developing radar-based 3D object detection not only augments the
competency of autonomous vehicles but also provides economic benefits. In
response, we propose the Multi-View Feature Assisted Network (MVFAN), an
end-to-end, anchor-free, and single-stage framework for 4D-radar-based 3D
object detection for autonomous vehicles. We tackle the issue of insufficient
feature utilization by introducing a novel Position Map Generation module that
enhances feature learning by reweighting foreground and background points and
their features, accounting for the irregular distribution of radar point clouds.
Additionally, we propose a pioneering backbone, the Radar Feature Assisted
backbone, explicitly crafted to fully exploit the valuable Doppler velocity and
reflectivity data provided by the 4D radar sensor. Comprehensive experiments
and ablation studies carried out on Astyx and VoD datasets attest to the
efficacy of our framework. The incorporation of Doppler velocity and RCS
reflectivity dramatically improves the detection performance for small moving
objects such as pedestrians and cyclists. Consequently, our approach culminates
in a highly optimized 4D-radar-based 3D object detection capability for
autonomous driving systems, setting a new standard in the field.
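The abstract describes two mechanisms only at a high level: a Position Map Generation module that reweights foreground and background points, and a backbone fed with per-point Doppler velocity and RCS reflectivity. The paper's actual design is not reproduced here; the short PyTorch sketch below only illustrates what a point-wise reweighting over [x, y, z, Doppler, RCS] features could look like, and the module name, network sizes, and sigmoid-scored foreground map are assumptions rather than MVFAN's implementation.

```python
import torch
import torch.nn as nn

class PositionMapReweighting(nn.Module):
    """Illustrative sketch (not the paper's implementation): score each radar
    point as foreground/background and use the score to reweight its features."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Per-point inputs here are assumed to be [x, y, z, Doppler, RCS].
        self.point_encoder = nn.Sequential(
            nn.Linear(5, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Hypothetical "position map": a per-point foreground probability.
        self.foreground_head = nn.Sequential(
            nn.Linear(feat_dim, 1), nn.Sigmoid(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 5) = x, y, z, Doppler velocity, RCS reflectivity
        feats = self.point_encoder(points)      # (N, feat_dim)
        weights = self.foreground_head(feats)   # (N, 1), close to 1 for likely foreground
        # Reweight features so sparse foreground points dominate later pooling.
        return feats * weights


if __name__ == "__main__":
    radar_points = torch.randn(1024, 5)          # toy 4D radar point cloud
    reweighted = PositionMapReweighting()(radar_points)
    print(reweighted.shape)                       # torch.Size([1024, 64])
```

The intuition sketched here follows the abstract's motivation: radar returns are sparse and irregularly distributed, so down-weighting likely background points before any pooling keeps the few informative foreground returns from being drowned out.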
Related papers
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- A Novel Multi-Teacher Knowledge Distillation for Real-Time Object Detection using 4D Radar [5.038148262901536]
3D object detection is crucial for safe autonomous navigation, requiring reliable performance across diverse weather conditions.
Traditional Radars have limitations due to their lack of elevation data.
4D Radars overcome this by measuring elevation alongside range, azimuth, and Doppler velocity, making them invaluable for autonomous vehicles.
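To make this concrete: a conventional automotive radar reports only range, azimuth, and Doppler velocity, which fixes a return in the ground plane, whereas the added elevation angle of a 4D radar locates it in full 3D. The snippet below is plain spherical-to-Cartesian geometry (not code from the cited paper), with the angle conventions assumed.

```python
import numpy as np

def radar_to_cartesian(r, azimuth, elevation, doppler):
    """Convert one 4D radar return to a Cartesian point plus radial velocity.
    Angles in radians; azimuth measured in the x-y plane, elevation above it."""
    x = r * np.cos(elevation) * np.cos(azimuth)
    y = r * np.cos(elevation) * np.sin(azimuth)
    z = r * np.sin(elevation)  # only recoverable with an elevation-capable (4D) radar
    return np.array([x, y, z, doppler])

print(radar_to_cartesian(20.0, np.deg2rad(30), np.deg2rad(5), -3.2))
```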
arXiv Detail & Related papers (2025-02-10T02:48:56Z)
- RadarNeXt: Real-Time and Reliable 3D Object Detector Based On 4D mmWave Imaging Radar [1.93832811391491]
RadarNeXt is a real-time and reliable 3D object detector based on 4D mmWave radar point clouds.
We show that RadarNeXt brings a novel and effective paradigm for 3D perception based on 4D mmWave radar.
arXiv Detail & Related papers (2025-01-04T15:40:46Z)
- RCBEVDet++: Toward High-accuracy Radar-Camera Fusion 3D Perception Network [34.45694077040797]
We present a radar-camera fusion 3D object detection framework called RCBEVDet.
RadarBEVNet encodes sparse radar points into a dense bird's-eye-view feature.
Our method achieves state-of-the-art radar-camera fusion results in 3D object detection, BEV semantic segmentation, and 3D multi-object tracking tasks.
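As a rough illustration of what "encodes sparse radar points into a dense bird's-eye-view feature" usually involves, the sketch below scatters per-point features onto a fixed BEV grid with max-pooling per cell. The grid size, extent, and pooling rule are assumptions for illustration, not RadarBEVNet's actual architecture.

```python
import torch

def scatter_points_to_bev(points, feats, grid=(128, 128), extent=50.0):
    """Scatter per-point features onto a dense BEV grid, max-pooling per cell.
    points: (N, 2) x-y positions in metres; feats: (N, C). Generic sketch only."""
    H, W = grid
    C = feats.shape[1]
    # Map metric x, y in [-extent, extent) to integer cell indices.
    ix = ((points[:, 0] + extent) / (2 * extent) * W).long().clamp(0, W - 1)
    iy = ((points[:, 1] + extent) / (2 * extent) * H).long().clamp(0, H - 1)
    bev = torch.zeros(C, H, W)
    flat = bev.view(C, -1)
    # Max-reduce point features into their cells; empty cells stay zero.
    flat.index_reduce_(1, iy * W + ix, feats.t(), reduce="amax", include_self=False)
    return bev

bev_map = scatter_points_to_bev(torch.randn(512, 2) * 20, torch.randn(512, 32))
print(bev_map.shape)  # torch.Size([32, 128, 128])
```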
arXiv Detail & Related papers (2024-09-08T05:14:27Z)
- RadarPillars: Efficient Object Detection from 4D Radar Point Clouds [42.9356088038035]
We present RadarPillars, a pillar-based object detection network.
By decomposing radial velocity data, RadarPillars significantly outperforms the state of the art on the View-of-Delft dataset.
This comes at a significantly reduced parameter count, surpassing existing methods in terms of efficiency and enabling real-time performance on edge devices.
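"Decomposing radial velocity data" presumably means splitting each point's scalar Doppler reading into Cartesian components along that point's line of sight, so the network sees direction-aware motion channels. The sketch below illustrates that reading under the assumption of a sensor at the origin; it is not RadarPillars' code.

```python
import numpy as np

def decompose_radial_velocity(points_xy, v_radial):
    """Split each point's radial (Doppler) speed into x/y components along the
    line of sight from a sensor at the origin. Sketch under assumed geometry."""
    az = np.arctan2(points_xy[:, 1], points_xy[:, 0])  # per-point azimuth
    vx = v_radial * np.cos(az)
    vy = v_radial * np.sin(az)
    return np.stack([vx, vy], axis=1)                  # extra per-point channels

pts = np.array([[10.0, 0.0], [5.0, 5.0]])
print(decompose_radial_velocity(pts, np.array([-2.0, 3.0])))
```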
arXiv Detail & Related papers (2024-08-09T12:13:38Z)
- RadarOcc: Robust 3D Occupancy Prediction with 4D Imaging Radar [15.776076554141687]
The 3D occupancy-based perception pipeline has significantly advanced autonomous driving.
Current methods rely on LiDAR or camera inputs for 3D occupancy prediction.
We introduce a novel approach that utilizes 4D imaging radar sensors for 3D occupancy prediction.
arXiv Detail & Related papers (2024-05-22T21:48:17Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
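To give one concrete (and heavily simplified) picture of pairing an explicit sensor model with an implicit neural field: below, a small MLP stands in for a learned reflectance field, and a toy FMCW-style renderer samples it along a ray and applies a 1/r^4 radar-equation falloff per range bin. Every detail here (network size, falloff law, binning) is an assumption for illustration, not Radar Fields' actual model.

```python
import torch
import torch.nn as nn

# Implicit model: non-negative reflectance at a 3D location.
field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())

def render_range_profile(origin, direction, n_bins=64, max_range=50.0):
    """Toy physics-informed sensor model: sample the field along one ray and
    return per-range-bin echo power with a 1/r^4 radar-equation-style falloff."""
    r = torch.linspace(0.5, max_range, n_bins)      # range bin centres
    pts = origin + r[:, None] * direction           # (n_bins, 3) sample points
    reflectance = field(pts).squeeze(-1)            # (n_bins,)
    return reflectance / r.pow(4)                   # synthetic raw measurement

profile = render_range_profile(torch.zeros(3), torch.tensor([1.0, 0.0, 0.0]))
print(profile.shape)  # torch.Size([64])
```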
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often suffer inferior performance compared to LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gain across multiple state-of-the-art models and datasets with a negligible additional latency of 9.66 ms and a small storage cost.
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
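The summary names two fusion points, voxel-based early fusion and attention-based late fusion. As a generic illustration of the latter idea only, the sketch below learns per-location attention weights that blend LiDAR and radar BEV feature maps; RadarNet's actual late fusion operates on object detections and radar returns, so the shapes and gating form here are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveLateFusion(nn.Module):
    """Generic sketch: per-location attention weights blend LiDAR and radar BEV
    features. Not RadarNet's exact mechanism; layout and gating are assumed."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.score = nn.Conv2d(2 * channels, 2, kernel_size=1)  # one logit per sensor

    def forward(self, lidar_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        logits = self.score(torch.cat([lidar_bev, radar_bev], dim=1))  # (B, 2, H, W)
        w = torch.softmax(logits, dim=1)                               # attention weights
        return w[:, :1] * lidar_bev + w[:, 1:] * radar_bev

fused = AttentiveLateFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```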
arXiv Detail & Related papers (2020-07-28T17:15:02Z)