ROFusion: Efficient Object Detection using Hybrid Point-wise
Radar-Optical Fusion
- URL: http://arxiv.org/abs/2307.08233v1
- Date: Mon, 17 Jul 2023 04:25:46 GMT
- Title: ROFusion: Efficient Object Detection using Hybrid Point-wise
Radar-Optical Fusion
- Authors: Liu Liu, Shuaifeng Zhi, Zhenhua Du, Li Liu, Xinyu Zhang, Kai Huo, and
Weidong Jiang
- Abstract summary: We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation.
- Score: 14.419658061805507
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Radars, due to their robustness to adverse weather conditions and ability to measure object motion, have served autonomous driving and intelligent agents for years. However, Radar-based perception suffers from its unintuitive sensing data, which lacks the semantic and structural information of scenes. To tackle this problem, camera and Radar sensor fusion has been investigated as a trending strategy with low cost, high reliability, and easy maintenance. While most recent works explore how to fuse Radar point clouds and images, the rich contextual information within Radar observations is discarded. In this paper, we propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios. The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation. Furthermore, we propose a novel local coordinate formulation that tackles the object detection task in an object-centric coordinate frame. Extensive results show that, with the information gained from optical images, we achieve leading object detection performance (97.69% recall) compared to the recent state-of-the-art method FFT-RadNet (82.86% recall). Ablation studies verify the key design choices and the practicability of our approach given imperfect, machine-generated detections.
The code will be available at https://github.com/LiuLiu-55/ROFusion.
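To make the point-wise fusion idea concrete, below is a minimal, hypothetical PyTorch sketch of one way to pair range-Doppler (RD) spectrum features with image features at radar point locations; all module names, shapes, and the MLP head are illustrative assumptions rather than the authors' released implementation (see the repository above for the actual code).

```python
# Hypothetical sketch of point-wise Radar-Optical fusion, assuming a radar
# backbone over the range-Doppler (RD) spectrum and an image backbone; names
# and shapes are illustrative, not the authors' code.
import torch
import torch.nn as nn


class PointwiseROFusion(nn.Module):
    """Fuse RD-spectrum and image features at radar point locations."""

    def __init__(self, rd_channels: int, img_channels: int, out_channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(rd_channels + img_channels, out_channels),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels, out_channels),
        )

    def forward(self, rd_feat, img_feat, rd_idx, pix_idx):
        # rd_feat:  (C_r, H_rd, W_rd)  features over the range-Doppler map
        # img_feat: (C_i, H_im, W_im)  features over the camera image
        # rd_idx:   (N, 2) integer (range_bin, doppler_bin) per radar point
        # pix_idx:  (N, 2) integer (row, col) of each point projected into the image
        f_rd = rd_feat[:, rd_idx[:, 0], rd_idx[:, 1]].T     # (N, C_r)
        f_im = img_feat[:, pix_idx[:, 0], pix_idx[:, 1]].T  # (N, C_i)
        return self.mlp(torch.cat([f_rd, f_im], dim=-1))    # (N, C_out)


# Toy usage: 32 radar points with made-up feature-map sizes.
fusion = PointwiseROFusion(rd_channels=64, img_channels=128, out_channels=256)
rd_feat = torch.randn(64, 256, 64)
img_feat = torch.randn(128, 120, 160)
rd_idx = torch.stack([torch.randint(0, 256, (32,)), torch.randint(0, 64, (32,))], dim=1)
pix_idx = torch.stack([torch.randint(0, 120, (32,)), torch.randint(0, 160, (32,))], dim=1)
point_feat = fusion(rd_feat, img_feat, rd_idx, pix_idx)  # (32, 256)
```

The point the abstract emphasizes is that fusion happens per radar point, so dense RD context and image semantics are both sampled exactly where the radar observes targets.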
Related papers
- Multi-Object Tracking based on Imaging Radar 3D Object Detection [0.13499500088995461]
This paper presents an approach for tracking surrounding traffic participants with a classical tracking algorithm.
Learning-based object detectors have been shown to work adequately on lidar and camera data, while learning-based detectors using standard radar data as input have proven inferior.
With improvements in radar sensor technology in the form of imaging radars, object detection performance on radar has greatly improved, but it remains limited compared to lidar sensors due to the sparsity of the radar point cloud.
The tracking algorithm must overcome the limited detection quality while generating consistent tracks.
arXiv Detail & Related papers (2024-06-03T05:46:23Z)
- Radar Fields: Frequency-Space Neural Scene Representations for FMCW Radar [62.51065633674272]
We introduce Radar Fields - a neural scene reconstruction method designed for active radar imagers.
Our approach unites an explicit, physics-informed sensor model with an implicit neural geometry and reflectance model to directly synthesize raw radar measurements.
We validate the effectiveness of the method across diverse outdoor scenarios, including urban scenes with dense vehicles and infrastructure.
arXiv Detail & Related papers (2024-05-07T20:44:48Z)
- Radar-Lidar Fusion for Object Detection by Designing Effective Convolution Networks [18.17057711053028]
We propose a dual-branch framework to integrate radar and Lidar data for enhanced object detection.
The results show that it surpasses state-of-the-art methods by 1.89% and 2.61% in favorable and adverse weather conditions, respectively.
arXiv Detail & Related papers (2023-10-30T10:18:40Z) - Echoes Beyond Points: Unleashing the Power of Raw Radar Data in
Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- Multi-Task Cross-Modality Attention-Fusion for 2D Object Detection [6.388430091498446]
We propose two new radar preprocessing techniques to better align radar and camera data.
We also introduce a Multi-Task Cross-Modality Attention-Fusion Network (MCAF-Net) for object detection.
Our approach outperforms current state-of-the-art radar-camera fusion-based object detectors on the nuScenes dataset.
arXiv Detail & Related papers (2023-07-17T09:26:13Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and Camera Fusion [6.639648061168067]
Multi-view radar-camera fused 3D object detection provides a farther detection range and more helpful features for autonomous driving.
Current radar-camera fusion methods offer a variety of designs for fusing radar information with camera data.
We present MVFusion, a novel Multi-View radar-camera Fusion method to achieve semantic-aligned radar features.
arXiv Detail & Related papers (2023-02-21T08:25:50Z)
- RODNet: A Real-Time Radar Object Detection Network Cross-Supervised by Camera-Radar Fused Object 3D Localization [30.42848269877982]
We propose a deep radar object detection network, named RODNet, which is cross-supervised by a camera-radar fused algorithm.
Our proposed RODNet takes a sequence of RF images as input to predict the likelihood of objects in the radar field of view (FoV).
Through extensive experiments, our proposed cross-supervised RODNet achieves 86% average precision and 88% average recall for object detection.
arXiv Detail & Related papers (2021-02-09T22:01:55Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the key reasons the existing fusion methods cannot be applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
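As a rough illustration of the two fusion stages named in the RadarNet summary, here is a hedged PyTorch sketch of voxel-based early fusion (channel-wise concatenation of LiDAR and Radar BEV grids) and attention-based late fusion (attention-weighting per-detection radar evidence); module names, shapes, and the scoring head are assumptions, not the paper's implementation.

```python
# Hedged sketch of voxel-based early fusion and attention-based late fusion;
# shapes and names are illustrative assumptions.
import torch
import torch.nn as nn


class EarlyVoxelFusion(nn.Module):
    """Concatenate LiDAR and Radar BEV voxel features channel-wise."""

    def __init__(self, lidar_c: int, radar_c: int, out_c: int):
        super().__init__()
        self.conv = nn.Conv2d(lidar_c + radar_c, out_c, kernel_size=3, padding=1)

    def forward(self, lidar_bev, radar_bev):
        # lidar_bev: (B, C_l, H, W), radar_bev: (B, C_r, H, W) on a shared BEV grid
        return self.conv(torch.cat([lidar_bev, radar_bev], dim=1))


class LateAttentionFusion(nn.Module):
    """Attention-weight the radar returns associated with each detection."""

    def __init__(self, det_dim: int, radar_dim: int):
        super().__init__()
        self.score = nn.Linear(det_dim + radar_dim, 1)

    def forward(self, det_feat, radar_feat):
        # det_feat:   (B, N, D)     one feature per detection
        # radar_feat: (B, N, K, R)  K associated radar returns per detection
        q = det_feat.unsqueeze(2).expand(-1, -1, radar_feat.shape[2], -1)
        attn = torch.softmax(self.score(torch.cat([q, radar_feat], dim=-1)), dim=2)
        return (attn * radar_feat).sum(dim=2)  # (B, N, R) aggregated radar evidence


# Toy usage: a shared 128x128 BEV grid, then 2 detections with 4 returns each.
early = EarlyVoxelFusion(lidar_c=32, radar_c=8, out_c=64)
bev = early(torch.randn(1, 32, 128, 128), torch.randn(1, 8, 128, 128))  # (1, 64, 128, 128)
late = LateAttentionFusion(det_dim=64, radar_dim=16)
agg = late(torch.randn(1, 2, 64), torch.randn(1, 2, 4, 16))  # (1, 2, 16)
```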