HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework
- URL: http://arxiv.org/abs/2307.11323v1
- Date: Fri, 21 Jul 2023 03:08:28 GMT
- Title: HVDetFusion: A Simple and Robust Camera-Radar Fusion Framework
- Authors: Kai Lei, Zhan Chen, Shuman Jia, Xiaoteng Zhang
- Abstract summary: Current SOTA algorithms combine camera and LiDAR sensors, but deployment is limited by the high price of LiDAR.
HVDetFusion is a multi-modal detection algorithm that not only supports pure camera data as input for detection, but can also fuse radar and camera data.
HVDetFusion achieves the new state-of-the-art 67.4% NDS on the challenging nuScenes test set among all camera-radar 3D object detectors.
- Score: 10.931114142452895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of autonomous driving, 3D object detection is a very important
perception module. Although current SOTA algorithms combine camera and LiDAR sensors,
the high price of LiDAR means that the mainstream deployed configurations are
pure-camera or camera+radar setups. In this study, we propose a new detection
algorithm called HVDetFusion, a multi-modal detector that not only supports pure
camera data as input, but can also fuse radar and camera data. The camera stream does
not depend on radar input, which addresses a downside of previous methods. In the pure
camera stream, we modify the BEVDet4D framework for better perception and more
efficient inference, and this stream produces the complete 3D detection output. To
further incorporate the benefits of radar signals, we use prior information about
object positions to filter false positives out of the raw radar data, and then use the
position and radial-velocity measurements recorded by the radar sensors to supplement
and fuse the BEV features generated from the camera data; the effect is further
improved during fusion training (a sketch of this filtering-and-fusion step follows
the abstract). Finally, HVDetFusion achieves a new state-of-the-art 67.4% NDS on the
challenging nuScenes test set among all camera-radar 3D object detectors. The code is
available at https://github.com/HVXLab/HVDetFusion
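The filtering-and-fusion step described in the abstract can be illustrated with a short sketch. The code below is not taken from the HVDetFusion repository; the function names, tensor shapes, and the 2 m threshold are illustrative assumptions showing how camera-predicted object centres might gate radar returns before the surviving position and radial-velocity measurements are rasterised into extra BEV channels.

```python
# Illustrative sketch (not from the HVDetFusion repo): filter radar returns using
# camera-derived object priors, then add the surviving radar measurements as extra
# BEV channels alongside the camera BEV features.
import torch

def filter_radar_by_priors(radar_xy, radar_vr, prior_centers_xy, max_dist=2.0):
    """Keep radar returns within `max_dist` metres of a camera-predicted object centre."""
    if prior_centers_xy.numel() == 0:
        keep = torch.zeros(radar_xy.shape[0], dtype=torch.bool)
        return radar_xy[keep], radar_vr[keep]
    dists = torch.cdist(radar_xy, prior_centers_xy)   # (N_radar, N_priors) BEV distances
    keep = dists.min(dim=1).values < max_dist
    return radar_xy[keep], radar_vr[keep]

def radar_to_bev_channels(radar_xy, radar_vr, bev_size=128, bev_range=51.2):
    """Rasterise the surviving radar points into a 2-channel BEV map: occupancy and radial velocity."""
    bev = torch.zeros(2, bev_size, bev_size)
    res = 2 * bev_range / bev_size
    cols = ((radar_xy[:, 0] + bev_range) / res).long().clamp(0, bev_size - 1)
    rows = ((radar_xy[:, 1] + bev_range) / res).long().clamp(0, bev_size - 1)
    bev[0, rows, cols] = 1.0        # occupancy
    bev[1, rows, cols] = radar_vr   # radial velocity
    return bev

# Hypothetical usage: concatenate the radar channels with the camera BEV features before the head.
camera_bev = torch.randn(64, 128, 128)            # e.g. BEVDet4D-style BEV features
radar_xy, radar_vr = torch.randn(200, 2) * 30, torch.randn(200)
priors = torch.randn(10, 2) * 30                  # object centres predicted by the camera stream
xy, vr = filter_radar_by_priors(radar_xy, radar_vr, priors)
fused_bev = torch.cat([camera_bev, radar_to_bev_channels(xy, vr)], dim=0)   # (66, 128, 128)
```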
Related papers
- RCBEVDet++: Toward High-accuracy Radar-Camera Fusion 3D Perception Network [34.45694077040797]
We present a radar-camera fusion 3D object detection framework called RCBEVDet++.
RadarBEVNet encodes sparse radar points into a dense bird's-eye-view feature.
Our method achieves state-of-the-art radar-camera fusion results in 3D object detection, BEV semantic segmentation, and 3D multi-object tracking tasks.
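As a rough illustration of how sparse radar points can be turned into a dense bird's-eye-view feature, the sketch below embeds each point with a small MLP and sum-pools the embeddings into a BEV grid. Class and parameter names are hypothetical and do not come from RadarBEVNet.

```python
# Generic sketch of scattering sparse radar points into a dense BEV feature map.
import torch
import torch.nn as nn

class RadarToBEV(nn.Module):
    """Hypothetical module: embed radar points with an MLP, sum-pool them into a dense BEV grid."""
    def __init__(self, in_dim=5, feat_dim=32, bev_size=128, bev_range=51.2):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
        self.bev_size, self.bev_range, self.feat_dim = bev_size, bev_range, feat_dim

    def forward(self, points):                      # points: (N, 5) = x, y, vx, vy, rcs
        feats = self.mlp(points)                    # (N, feat_dim) per-point embeddings
        res = 2 * self.bev_range / self.bev_size
        cols = ((points[:, 0] + self.bev_range) / res).long().clamp(0, self.bev_size - 1)
        rows = ((points[:, 1] + self.bev_range) / res).long().clamp(0, self.bev_size - 1)
        flat_idx = rows * self.bev_size + cols      # one BEV cell index per point
        bev = feats.new_zeros(self.bev_size * self.bev_size, self.feat_dim)
        bev.index_add_(0, flat_idx, feats)          # sum-pool point features per cell
        return bev.view(self.bev_size, self.bev_size, self.feat_dim).permute(2, 0, 1)

dense_radar_bev = RadarToBEV()(torch.randn(300, 5))   # (32, 128, 128) dense radar BEV feature
```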
arXiv Detail & Related papers (2024-09-08T05:14:27Z)
- RCBEVDet: Radar-camera Fusion in Bird's Eye View for 3D Object Detection [33.07575082922186]
Three-dimensional object detection is one of the key tasks in autonomous driving.
Relying solely on cameras makes it difficult to achieve highly accurate and robust 3D object detection.
RCBEVDet is a radar-camera fusion 3D object detection method in the bird's eye view (BEV).
RadarBEVNet consists of a dual-stream radar backbone and a Radar Cross-Section (RCS) aware BEV encoder.
arXiv Detail & Related papers (2024-03-25T06:02:05Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
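The query-based fusion described above can be sketched with standard cross-attention: BEV queries first attend to flattened radar-spectrum tokens, then to image tokens. This is an assumption-laden illustration, not EchoFusion's actual architecture; the dimensions and token counts are made up.

```python
# Illustrative sketch (not EchoFusion's code): BEV queries cross-attend to radar-spectrum
# features and then to image features, accumulating multi-modal context per query.
import torch
import torch.nn as nn

embed_dim, n_queries = 256, 900
bev_queries = nn.Parameter(torch.randn(1, n_queries, embed_dim))          # learnable BEV queries
radar_attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
image_attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)

radar_spectrum = torch.randn(1, 2048, embed_dim)   # e.g. flattened range-azimuth spectrum tokens
image_tokens = torch.randn(1, 1500, embed_dim)     # e.g. flattened multi-view image tokens

q, _ = radar_attn(bev_queries, radar_spectrum, radar_spectrum)   # fuse radar evidence
q, _ = image_attn(q, image_tokens, image_tokens)                 # fuse camera evidence
# `q` (1, 900, 256) would then feed a detection head that decodes one box per query.
```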
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- RCM-Fusion: Radar-Camera Multi-Level Fusion for 3D Object Detection [15.686167262542297]
We propose Radar-Camera Multi-level fusion (RCM-Fusion), which attempts to fuse both modalities at both feature and instance levels.
For feature-level fusion, we propose a Radar Guided BEV encoder, which transforms camera features into precise BEV representations.
For instance-level fusion, we propose a Radar Grid Point Refinement module that reduces localization error.
arXiv Detail & Related papers (2023-07-17T07:22:25Z)
- Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object Detection [78.59426158981108]
We introduce a bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the challenges and improve 3D detection for dynamic objects.
We conduct extensive experiments on nuScenes and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art performance for detecting dynamic objects.
arXiv Detail & Related papers (2023-06-02T10:57:41Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, fusion for detection can be performed effectively by combining their ROI features.
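A toy version of this box-matching idea scores every 3D/2D proposal pair, picks the best image proposal for each 3D proposal, and concatenates the paired ROI features before a small head. Names and dimensions are illustrative, not FBMNet's.

```python
# Toy sketch of box matching and ROI-feature fusion (not FBMNet's implementation).
import torch
import torch.nn as nn

n3d, n2d, dim = 100, 80, 256
feats_3d = torch.randn(n3d, dim)       # ROI features of 3D proposals
feats_2d = torch.randn(n2d, dim)       # ROI features of image 2D proposals

similarity = feats_3d @ feats_2d.t() / dim ** 0.5            # (n3d, n2d) matching scores
assignment = similarity.argmax(dim=1)                        # best 2D proposal per 3D proposal
fused = torch.cat([feats_3d, feats_2d[assignment]], dim=1)   # (n3d, 2*dim) fused ROI features

head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 7 + 1))
boxes_and_scores = head(fused)         # per-proposal refined box parameters + confidence
```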
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- TransCAR: Transformer-based Camera-And-Radar Fusion for 3D Object Detection [13.986963122264633]
TransCAR is a Transformer-based Camera-And-Radar fusion solution for 3D object detection.
Our model estimates a bounding box per query using set-to-set Hungarian loss.
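Set-to-set matching of this kind is typically implemented with the Hungarian algorithm. The sketch below uses a simplified cost (L1 centre distance plus a class term) via scipy's linear_sum_assignment; TransCAR's actual cost and loss terms may differ.

```python
# Minimal sketch of DETR-style set-to-set matching with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred_centers, pred_probs, gt_centers, gt_labels):
    """Return (pred_idx, gt_idx) pairs minimising a simplified matching cost."""
    # (n_pred, n_gt) L1 distance between predicted and ground-truth box centres
    center_cost = np.abs(pred_centers[:, None, :] - gt_centers[None, :, :]).sum(-1)
    # Favour predictions that already score the correct class highly
    class_cost = -pred_probs[:, gt_labels]
    return linear_sum_assignment(center_cost + class_cost)

pred_centers = np.random.randn(900, 3)          # one box hypothesis per query
pred_probs = np.random.rand(900, 10)            # per-query class probabilities
gt_centers = np.random.randn(12, 3)
gt_labels = np.random.randint(0, 10, size=12)
pred_idx, gt_idx = hungarian_match(pred_centers, pred_probs, gt_centers, gt_labels)
# Regression/classification losses are then computed only on the matched pairs.
```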
arXiv Detail & Related papers (2023-04-30T05:35:03Z)
- CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection [12.557361522985898]
We propose a camera-radar matching network CramNet to fuse the sensor readings from camera and radar in a joint 3D space.
Our method supports training with sensor modality dropout, which leads to robust 3D object detection, even when a camera or radar sensor suddenly malfunctions on a vehicle.
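Training with sensor modality dropout can be sketched in a few lines: with some probability, one modality's features are zeroed out during training so the fused detector learns to cope with a missing sensor. This is a generic illustration, not CramNet's implementation, and the drop probability is arbitrary.

```python
# Generic sketch of sensor modality dropout during training (not CramNet's code).
import torch

def modality_dropout(cam_feat, radar_feat, p_drop=0.2, training=True):
    if training and torch.rand(()) < p_drop:
        # Drop exactly one modality, chosen at random, never both.
        if torch.rand(()) < 0.5:
            cam_feat = torch.zeros_like(cam_feat)
        else:
            radar_feat = torch.zeros_like(radar_feat)
    return cam_feat, radar_feat

cam, radar = torch.randn(1, 64, 128, 128), torch.randn(1, 32, 128, 128)
cam, radar = modality_dropout(cam, radar)
fused = torch.cat([cam, radar], dim=1)   # fusion proceeds as usual on the (possibly zeroed) inputs
```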
arXiv Detail & Related papers (2022-10-17T17:18:47Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that noise in radar measurements is one of the main reasons that existing fusion methods cannot be applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature camera, radar, and LiDAR recordings in diverse scenes and weather conditions.
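A common way to fuse monocular images with sparse radar for depth estimation is to project the radar points into the image plane as an extra sparse-depth channel. The sketch below shows only that projection step; the intrinsics, image size, and shapes are illustrative and not taken from the paper.

```python
# Illustrative sketch: build a sparse radar-depth channel and stack it with the RGB image.
import torch

def radar_depth_channel(points_cam, K, height, width):
    """points_cam: (N, 3) radar points in the camera frame; K: (3, 3) camera intrinsics."""
    pts = points_cam[points_cam[:, 2] > 0.1]          # keep points in front of the camera
    uv = (K @ pts.t()).t()                            # (N, 3) homogeneous pixel coordinates
    u = (uv[:, 0] / uv[:, 2]).long()
    v = (uv[:, 1] / uv[:, 2]).long()
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = torch.zeros(1, height, width)
    depth[0, v[inside], u[inside]] = pts[inside, 2]   # sparse metric depth at radar hits
    return depth

rgb = torch.rand(3, 900, 1600)
K = torch.tensor([[1266.0, 0.0, 800.0], [0.0, 1266.0, 450.0], [0.0, 0.0, 1.0]])  # assumed intrinsics
radar_points = torch.randn(120, 3) * torch.tensor([20.0, 5.0, 30.0])
x = torch.cat([rgb, radar_depth_channel(radar_points, K, 900, 1600)], dim=0)  # (4, H, W) network input
```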
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
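Attention-based late fusion can be sketched as each detection attending over a set of nearby radar returns and aggregating their velocity evidence with learned weights. The following is an illustrative toy, not RadarNet's implementation; all shapes and the scoring MLP are assumptions.

```python
# Toy sketch of attention-based late fusion of radar velocity evidence into detections.
import torch
import torch.nn as nn

det_dim, radar_dim, hidden = 64, 8, 64
score = nn.Sequential(nn.Linear(det_dim + radar_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

det_feats = torch.randn(20, det_dim)          # one feature vector per detected object
radar_feats = torch.randn(20, 16, radar_dim)  # 16 candidate radar returns per detection
radar_vel = torch.randn(20, 16, 2)            # their (vx, vy) velocity evidence

pairs = torch.cat([det_feats[:, None, :].expand(-1, 16, -1), radar_feats], dim=-1)
attn = torch.softmax(score(pairs).squeeze(-1), dim=1)        # (20, 16) association weights
refined_velocity = (attn[..., None] * radar_vel).sum(dim=1)  # (20, 2) fused velocity per object
```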
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.