Radar-Lidar Fusion for Object Detection by Designing Effective
Convolution Networks
- URL: http://arxiv.org/abs/2310.19405v1
- Date: Mon, 30 Oct 2023 10:18:40 GMT
- Title: Radar-Lidar Fusion for Object Detection by Designing Effective
Convolution Networks
- Authors: Farzeen Munir, Shoaib Azam, Tomasz Kucner, Ville Kyrki, Moongu Jeon
- Abstract summary: We propose a dual-branch framework to integrate radar and Lidar data for enhanced object detection.
The results show that it surpasses state-of-the-art methods by $1.89\%$ and $2.61\%$ in favorable and adverse weather conditions, respectively.
- Score: 18.17057711053028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection is a core component of perception systems, providing the ego
vehicle with information about its surroundings to ensure safe route planning.
While cameras and Lidar have significantly advanced perception systems, their
performance can be limited in adverse weather conditions. In contrast,
millimeter-wave technology enables radars to function effectively in such
conditions. However, relying solely on radar for building a perception system
doesn't fully capture the environment due to the data's sparse nature. To
address this, sensor fusion strategies have been introduced. We propose a
dual-branch framework to integrate radar and Lidar data for enhanced object
detection. The primary branch focuses on extracting radar features, while the
auxiliary branch extracts Lidar features. These are then combined using
additive attention. Subsequently, the integrated features are processed through
a novel Parallel Forked Structure (PFS) to manage scale variations. A region
proposal head is then utilized for object detection. We evaluated the
effectiveness of our proposed method on the RADIATE dataset using COCO metrics.
The results show that it surpasses state-of-the-art methods by $1.89\%$ and
$2.61\%$ in favorable and adverse weather conditions, respectively. This
underscores the value of radar-Lidar fusion in achieving precise object
detection and localization, especially in challenging weather conditions.
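
As a concrete illustration of the pipeline described in the abstract, below is a minimal PyTorch sketch of a dual-branch detector with additive-attention fusion, a parallel multi-scale stage standing in for the Parallel Forked Structure (PFS), and a simple proposal-style head. All module names, layer sizes, and the exact attention formulation are assumptions for illustration; the authors' actual architecture may differ.

```python
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Small strided-convolution feature extractor, same design for both sensors."""

    def __init__(self, in_ch: int, out_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class AdditiveAttentionFusion(nn.Module):
    """Bahdanau-style additive attention used as a spatial gate: a score map is
    computed from the sum of projected radar and Lidar features and weights the
    auxiliary (Lidar) contribution before adding it to the primary branch."""

    def __init__(self, ch: int):
        super().__init__()
        self.proj_radar = nn.Conv2d(ch, ch, 1)
        self.proj_lidar = nn.Conv2d(ch, ch, 1)
        self.score = nn.Conv2d(ch, 1, 1)

    def forward(self, f_radar, f_lidar):
        gate = torch.sigmoid(self.score(torch.tanh(
            self.proj_radar(f_radar) + self.proj_lidar(f_lidar))))
        return f_radar + gate * f_lidar


class ParallelForkedStructure(nn.Module):
    """Parallel dilated-convolution forks merged by a 1x1 conv, a stand-in for
    the paper's PFS as one way to handle objects at several scales."""

    def __init__(self, ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.forks = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in dilations)
        self.merge = nn.Conv2d(ch * len(dilations), ch, 1)

    def forward(self, x):
        return self.merge(torch.cat([torch.relu(f(x)) for f in self.forks], dim=1))


class RadarLidarDetector(nn.Module):
    def __init__(self, radar_ch=1, lidar_ch=1, ch=64, num_anchors=9, num_classes=8):
        super().__init__()
        self.radar_branch = ConvBranch(radar_ch, ch)  # primary branch
        self.lidar_branch = ConvBranch(lidar_ch, ch)  # auxiliary branch
        self.fusion = AdditiveAttentionFusion(ch)
        self.pfs = ParallelForkedStructure(ch)
        # Region-proposal-style head: per-anchor class scores and box offsets.
        self.cls_head = nn.Conv2d(ch, num_anchors * num_classes, 1)
        self.reg_head = nn.Conv2d(ch, num_anchors * 4, 1)

    def forward(self, radar_bev, lidar_bev):
        fused = self.fusion(self.radar_branch(radar_bev),
                            self.lidar_branch(lidar_bev))
        feats = self.pfs(fused)
        return self.cls_head(feats), self.reg_head(feats)


if __name__ == "__main__":
    model = RadarLidarDetector()
    radar = torch.randn(2, 1, 256, 256)  # e.g. a rasterized radar intensity map
    lidar = torch.randn(2, 1, 256, 256)  # e.g. a Lidar BEV occupancy map
    cls_map, box_map = model(radar, lidar)
    print(cls_map.shape, box_map.shape)  # (2, 72, 64, 64) and (2, 36, 64, 64)
```

The sketch only fixes the data flow stated in the abstract: radar-primary and Lidar-auxiliary feature extraction, additive-attention fusion, multi-scale processing, then a proposal head.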
Related papers
- MUFASA: Multi-View Fusion and Adaptation Network with Spatial Awareness for Radar Object Detection [3.1212590312985986]
The sparsity of radar point clouds poses challenges in achieving precise object detection.
This paper introduces a comprehensive feature extraction method for radar point clouds.
We achieve state-of-the-art results among radar-based methods on the VoD dataset with an mAP of 50.24%.
arXiv Detail & Related papers (2024-08-01T13:52:18Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
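As a hedged illustration of that BEV-query idea, the snippet below lets learnable BEV queries cross-attend to flattened radar spectrum features; all dimensions and the single attention layer are assumptions, not EchoFusion's actual design.

```python
import torch
import torch.nn as nn

bev_h, bev_w, d = 32, 32, 128
queries = nn.Parameter(torch.randn(bev_h * bev_w, d))   # one query per BEV cell
cross_attn = nn.MultiheadAttention(d, num_heads=8, batch_first=True)

# Flattened range-azimuth spectrum features (batch of 2, 500 bins, d channels).
spectrum = torch.randn(2, 500, d)
q = queries.unsqueeze(0).expand(2, -1, -1)
bev_feats, _ = cross_attn(q, spectrum, spectrum)        # (2, 1024, 128)
bev_map = bev_feats.transpose(1, 2).reshape(2, d, bev_h, bev_w)
```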
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Semantic Segmentation of Radar Detections using Convolutions on Point Clouds [59.45414406974091]
We introduce a deep-learning based method for semantic segmentation of radar detections using convolutions on point clouds.
We adapt this algorithm to radar-specific properties through distance-dependent clustering and pre-processing of input point clouds.
Our network outperforms state-of-the-art approaches that are based on PointNet++ on the task of semantic segmentation of radar point clouds.
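One plausible reading of the distance-dependent clustering step is that radar detections thin out with range, so the clustering radius should grow with distance from the sensor. A minimal sketch with scikit-learn's DBSCAN; the linear scaling rule is an illustrative assumption, not the paper's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

points = np.random.rand(200, 2) * 100.0   # toy radar detections in x, y (meters)
ranges = np.linalg.norm(points, axis=1)   # distance of each point from the sensor

# Shrink far-away points before clustering, so a fixed eps in the normalized
# space corresponds to a clustering radius that grows with range.
scale = 1.0 + 0.05 * ranges               # assumed linear growth rule
labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points / scale[:, None])
```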
arXiv Detail & Related papers (2023-05-22T07:09:35Z)
- RaLiBEV: Radar and LiDAR BEV Fusion Learning for Anchor Box Free Object Detection Systems [13.046347364043594]
In autonomous driving, LiDAR and radar are crucial for environmental perception.
Recent state-of-the-art works reveal that the fusion of radar and LiDAR can lead to robust detection in adverse weather.
We propose a bird's-eye view fusion learning-based anchor box-free object detection system.
arXiv Detail & Related papers (2022-11-11T10:24:42Z)
- Deep Instance Segmentation with High-Resolution Automotive Radar [2.167586397005864]
We propose two efficient methods for instance segmentation with radar detection points.
One is implemented in an end-to-end, deep-learning-driven fashion using the PointNet++ framework.
The other is based on clustering of the radar detection points with semantic information.
arXiv Detail & Related papers (2021-10-05T01:18:27Z)
- Channel Boosting Feature Ensemble for Radar-based Object Detection [6.810856082577402]
Radar-based object detection is explored as a counterpart sensor modality that can be deployed and used in adverse weather conditions.
The proposed method's efficacy is extensively evaluated using the COCO evaluation metric.
arXiv Detail & Related papers (2021-01-10T12:20:58Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the key reasons existing fusion methods cannot be applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion; a rough sketch of this two-stage pattern follows this list.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
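
The voxel-grid early fusion plus attention-based late fusion pattern named in the RadarNet entry can be sketched as follows. All shapes, the stand-in backbone, and the dot-product attention over per-detection radar returns are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Early fusion: rasterize both sensors onto the same BEV grid and concatenate
# their channels before the shared backbone.
lidar_bev = torch.randn(1, 32, 200, 200)   # LiDAR voxel features
radar_bev = torch.randn(1, 8, 200, 200)    # radar voxel features
early = torch.cat([lidar_bev, radar_bev], dim=1)           # (1, 40, 200, 200)
backbone = nn.Conv2d(40, 64, 3, padding=1)                 # stands in for a full CNN
feats = backbone(early)

# Late fusion: for one detection, attention-weight K candidate radar returns
# (e.g. for velocity refinement) and pool them into a single refined feature.
det_feat = torch.randn(1, 64)                              # per-detection feature
radar_returns = torch.randn(1, 5, 64)                      # K=5 candidate returns
scores = torch.softmax(
    (radar_returns @ det_feat.unsqueeze(-1)).squeeze(-1), dim=1)
refined = (scores.unsqueeze(-1) * radar_returns).sum(dim=1)  # (1, 64)
```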