RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object
Detection Systems
- URL: http://arxiv.org/abs/2211.06108v5
- Date: Tue, 6 Feb 2024 12:41:20 GMT
- Title: RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object
Detection Systems
- Authors: Yanlong Yang, Jianan Liu, Tao Huang, Qing-Long Han, Gang Ma and Bing
Zhu
- Abstract summary: In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
- Score: 13.046347364043594
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In autonomous driving, LiDAR and radar are crucial for environmental
perception. LiDAR offers precise 3D spatial sensing information but struggles
in adverse weather like fog. Conversely, radar signals can penetrate rain or
mist due to their specific wavelength but are prone to noise disturbances.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can
lead to robust detection in adverse weather. The existing works adopt
convolutional neural network architecture to extract features from each sensor
data, then align and aggregate the two branch features to predict object
detection results. However, these methods yield low bounding-box accuracy
because of their simplistic label assignment and fusion strategies. In this
paper, we propose a bird's-eye view fusion learning-based
anchor box-free object detection system, which fuses the feature derived from
the radar range-azimuth heatmap and the LiDAR point cloud to estimate possible
objects. Different label assignment strategies are designed to enforce
consistency between the classification of anchor points as foreground or
background and the corresponding bounding box regressions. Furthermore, the
performance of the proposed object detector is further enhanced by employing a
novel interactive transformer module. The superior performance of the methods
proposed in this paper has been demonstrated using the recently published
Oxford Radar RobotCar dataset. Our system's average precision significantly
outperforms the state-of-the-art method by 13.1% and 19.0% at an Intersection
over Union (IoU) of 0.8 under 'Clear+Foggy' training conditions for 'Clear' and
'Foggy' testing, respectively.
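As a rough illustration of the pipeline the abstract describes, the minimal PyTorch sketch below wires two BEV branches (a radar range-azimuth heatmap and a rasterized LiDAR point cloud) into a cross-attention block standing in for the interactive transformer, followed by a per-cell anchor-free head. All module names, channel sizes, and the exact attention layout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): BEV radar-LiDAR fusion with a
# cross-attention block standing in for the "interactive transformer",
# followed by an anchor-free (per-BEV-cell) detection head.
import torch
import torch.nn as nn


class BEVBackbone(nn.Module):
    """Small conv encoder used by both branches (illustrative)."""
    def __init__(self, in_ch, out_ch=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class InteractiveFusion(nn.Module):
    """LiDAR BEV tokens attend to radar BEV tokens (one possible reading
    of 'interactive transformer'; the paper's module may differ)."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, lidar_bev, radar_bev):
        b, c, h, w = lidar_bev.shape
        q = lidar_bev.flatten(2).transpose(1, 2)          # (B, HW, C)
        kv = radar_bev.flatten(2).transpose(1, 2)         # (B, HW, C)
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)                      # residual connection
        return fused.transpose(1, 2).reshape(b, c, h, w)


class AnchorFreeHead(nn.Module):
    """Per-cell outputs: objectness heatmap + box parameters (x, y, w, l, yaw)."""
    def __init__(self, dim=128):
        super().__init__()
        self.cls = nn.Conv2d(dim, 1, 1)
        self.box = nn.Conv2d(dim, 5, 1)

    def forward(self, x):
        return torch.sigmoid(self.cls(x)), self.box(x)


class RadarLidarBEVDetector(nn.Module):
    def __init__(self, radar_ch=1, lidar_ch=32):
        super().__init__()
        self.radar_enc = BEVBackbone(radar_ch)
        self.lidar_enc = BEVBackbone(lidar_ch)
        self.fusion = InteractiveFusion()
        self.head = AnchorFreeHead()

    def forward(self, radar_ra_bev, lidar_bev):
        return self.head(self.fusion(self.lidar_enc(lidar_bev),
                                     self.radar_enc(radar_ra_bev)))


if __name__ == "__main__":
    # Dummy inputs: radar range-azimuth heatmap in BEV, LiDAR pillar features.
    model = RadarLidarBEVDetector()
    heatmap, boxes = model(torch.rand(2, 1, 256, 256), torch.rand(2, 32, 256, 256))
    print(heatmap.shape, boxes.shape)  # (2, 1, 64, 64) (2, 5, 64, 64)
```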
Related papers
- LEROjD: Lidar Extended Radar-Only Object Detection [0.22870279047711525]
3+1D imaging radar sensors offer a cost-effective, robust alternative to lidar.
Although lidar should not be used during inference, it can aid the training of radar-only object detectors.
We explore two strategies for transferring knowledge from the lidar domain to radar-only object detectors.
arXiv Detail & Related papers (2024-09-09T12:43:25Z)
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection [3.1212590312985986]
The sparsity of radar point clouds poses challenges for precise object detection.
This paper introduces a comprehensive feature extraction method for radar point clouds.
We achieve state-of-the-art results among radar-based methods on the VoD dataset with an mAP of 50.24%.
arXiv Detail & Related papers (2024-08-01T13:52:18Z)
- Radar-Lidar Fusion for Object Detection by Designing Effective Convolution Networks [18.17057711053028]
We propose a dual-branch framework to integrate radar and LiDAR data for enhanced object detection.
The results show that it surpasses state-of-the-art methods by 1.89% and 2.61% in favorable and adverse weather conditions, respectively.
arXiv Detail & Related papers (2023-10-30T10:18:40Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
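The BEV-query idea summarized above for EchoFusion can be pictured roughly as learnable BEV queries cross-attending to embedded radar spectrum features. The sketch below assumes pre-embedded spectrum tokens and illustrative shapes; it is not the EchoFusion code.

```python
# Illustrative only: learnable BEV queries cross-attending to radar
# spectrum features, in the spirit of the EchoFusion summary above.
import torch
import torch.nn as nn


class BEVQueryFusion(nn.Module):
    def __init__(self, num_queries=64 * 64, dim=128, heads=4):
        super().__init__()
        # One learnable query per BEV cell (assumed 64x64 grid).
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, spectrum_feats):
        """spectrum_feats: (B, N_bins, dim) features from the raw radar
        spectrum (e.g. range-azimuth bins), assumed already embedded."""
        b = spectrum_feats.size(0)
        q = self.queries.unsqueeze(0).expand(b, -1, -1)   # (B, HW, dim)
        bev, _ = self.attn(q, spectrum_feats, spectrum_feats)
        return bev                                        # one token per BEV cell


if __name__ == "__main__":
    fusion = BEVQueryFusion()
    bev = fusion(torch.rand(2, 1024, 128))                # 1024 spectrum bins
    print(bev.shape)                                      # (2, 4096, 128)
```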
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-doppler spectrum and images, which is integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
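Point-wise radar-optical fusion of the kind the ROFusion summary mentions can be sketched as gathering, for each radar point, the image feature at its projected pixel and the range-doppler feature at its (range, doppler) bin, then concatenating the two. Projection and binning are assumed to be precomputed; this is an illustration, not the ROFusion implementation.

```python
# Rough illustration of point-wise fusion: per radar point, gather an image
# feature and a range-doppler feature and concatenate them.
import torch


def pointwise_fuse(img_feats, rd_feats, pix_uv, rd_idx):
    """img_feats: (C1, H, W) image feature map
    rd_feats:  (C2, R, D) range-doppler feature map
    pix_uv:    (N, 2) integer pixel coords (u, v) of each radar point
    rd_idx:    (N, 2) integer (range_bin, doppler_bin) of each radar point
    returns    (N, C1 + C2) fused per-point features."""
    img_part = img_feats[:, pix_uv[:, 1], pix_uv[:, 0]].t()     # (N, C1)
    rd_part = rd_feats[:, rd_idx[:, 0], rd_idx[:, 1]].t()       # (N, C2)
    return torch.cat([img_part, rd_part], dim=1)


if __name__ == "__main__":
    img = torch.rand(64, 120, 160)                       # C1=64, H=120, W=160
    rd = torch.rand(32, 128, 64)                         # C2=32, R=128, D=64
    uv = torch.stack([torch.randint(0, 160, (10,)),      # u (column)
                      torch.randint(0, 120, (10,))], 1)  # v (row)
    bins = torch.stack([torch.randint(0, 128, (10,)),
                        torch.randint(0, 64, (10,))], 1)
    print(pointwise_fuse(img, rd, uv, bins).shape)       # torch.Size([10, 96])
```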
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method to convolve radar detections into point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
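The "simulate multi-modality" idea above amounts to cross-modal distillation: during training a LiDAR-only student mimics the features and responses of a LiDAR-image teacher, and only the student runs at inference. The generic feature- and response-imitation loss below is a hedged sketch; the paper's actual losses and where they are applied may differ.

```python
# Generic cross-modal distillation sketch (not the paper's exact losses):
# a LiDAR-only student imitates BEV features and detection responses of a
# frozen LiDAR+image teacher; at inference only the student is needed.
import torch
import torch.nn.functional as F


def distillation_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                      feat_weight=1.0, resp_weight=1.0, temperature=2.0):
    """student_feat/teacher_feat: (B, C, H, W) BEV feature maps
    student_logits/teacher_logits: (B, K, H, W) per-cell class logits."""
    # Feature imitation: match the teacher's BEV features.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())
    # Response imitation: match softened class distributions per cell.
    t = temperature
    resp_loss = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),
        F.softmax(teacher_logits.detach() / t, dim=1),
        reduction="batchmean",
    ) * (t * t)
    return feat_weight * feat_loss + resp_weight * resp_loss


if __name__ == "__main__":
    s_f, t_f = torch.rand(2, 128, 64, 64), torch.rand(2, 128, 64, 64)
    s_l, t_l = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
    print(distillation_loss(s_f, t_f, s_l, t_l).item())
```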
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
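The RadarNet summary mentions voxel-based early fusion and attention-based late fusion. One loose way to picture those two stages, under assumed shapes and a simple learned per-cell gate, is sketched below; it is not the RadarNet architecture.

```python
# Loose sketch of early (BEV-grid concatenation) and late (attention-weighted)
# fusion in the spirit of the RadarNet summary; details are assumptions.
import torch
import torch.nn as nn


class EarlyLateFusion(nn.Module):
    def __init__(self, lidar_ch=32, radar_ch=8, dim=64):
        super().__init__()
        # Early fusion: concatenate rasterized LiDAR and radar BEV grids.
        self.encoder = nn.Sequential(
            nn.Conv2d(lidar_ch + radar_ch, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Late fusion: per-cell attention weights over the two feature sources.
        self.radar_proj = nn.Conv2d(radar_ch, dim, 1)
        self.gate = nn.Conv2d(2 * dim, 2, 1)

    def forward(self, lidar_bev, radar_bev):
        early = self.encoder(torch.cat([lidar_bev, radar_bev], dim=1))
        radar_late = self.radar_proj(radar_bev)
        w = torch.softmax(self.gate(torch.cat([early, radar_late], dim=1)), dim=1)
        return w[:, :1] * early + w[:, 1:] * radar_late


if __name__ == "__main__":
    fused = EarlyLateFusion()(torch.rand(1, 32, 128, 128), torch.rand(1, 8, 128, 128))
    print(fused.shape)  # (1, 64, 128, 128)
```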