Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird's-Eye View
- URL: http://arxiv.org/abs/2309.15465v1
- Date: Wed, 27 Sep 2023 08:02:58 GMT
- Title: Cross-Dataset Experimental Study of Radar-Camera Fusion in Bird's-Eye View
- Authors: Lukas Stäcker, Philipp Heidenreich, Jason Rambach, Didier Stricker
- Abstract summary: Radar and camera fusion systems have the potential to provide a highly robust and reliable perception system.
Recent advances in camera-based object detection offer new radar-camera fusion possibilities with bird's eye view feature maps.
We propose a novel and flexible fusion network and evaluate its performance on two datasets.
- Score: 12.723455775659414
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: By exploiting complementary sensor information, radar and camera fusion
systems have the potential to provide a highly robust and reliable perception
system for advanced driver assistance systems and automated driving functions.
Recent advances in camera-based object detection offer new radar-camera fusion
possibilities with bird's eye view feature maps. In this work, we propose a
novel and flexible fusion network and evaluate its performance on two datasets:
nuScenes and View-of-Delft. Our experiments reveal that while the camera branch
needs large and diverse training data, the radar branch benefits more from a
high-performance radar. Using transfer learning, we improve the camera's
performance on the smaller dataset. Our results further demonstrate that the
radar-camera fusion approach significantly outperforms the camera-only and
radar-only baselines.
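As a rough illustration of the BEV-level fusion idea described in the abstract, the sketch below fuses a camera BEV feature map with a radar BEV feature map by channel-wise concatenation followed by a small convolutional block. The module name, channel sizes, and the concatenation-plus-convolution design are illustrative assumptions and do not reproduce the fusion network proposed in the paper.

```python
# Illustrative sketch only: a generic BEV-level radar-camera feature fusion block.
# Channel sizes and the concat-plus-convolution design are assumptions, not the
# architecture proposed in the paper.
import torch
import torch.nn as nn

class SimpleBEVFusion(nn.Module):
    def __init__(self, cam_channels=80, radar_channels=32, out_channels=128):
        super().__init__()
        # Mix the concatenated camera and radar BEV features with one conv block.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, out_channels,
                      kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, radar_bev):
        # cam_bev:   (B, cam_channels,   H, W) camera features lifted to the BEV grid
        # radar_bev: (B, radar_channels, H, W) radar features rasterized to the same grid
        return self.fuse(torch.cat([cam_bev, radar_bev], dim=1))

# Example: fuse 128x128 BEV grids from both sensors.
fusion = SimpleBEVFusion()
fused = fusion(torch.randn(2, 80, 128, 128), torch.randn(2, 32, 128, 128))
print(fused.shape)  # torch.Size([2, 128, 128, 128])
```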
Related papers
- Boosting Online 3D Multi-Object Tracking through Camera-Radar Cross Check [24.764602040003403]
CRAFTBooster is a pioneering effort to enhance radar-camera fusion in the tracking stage, contributing to improved 3D MOT accuracy.
Experimental results on the K-Radar dataset, which show a 5-6% gain in IDF1 tracking performance, validate the potential of effective sensor fusion for advancing autonomous driving.
arXiv Detail & Related papers (2024-07-18T23:32:27Z)
- Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review focuses on the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
- RC-BEVFusion: A Plug-In Module for Radar-Camera Bird's Eye View Feature Fusion [11.646949644683755]
We present RC-BEVFusion, a modular radar-camera fusion network on the BEV plane.
We show significant performance gains of up to 28% in the nuScenes detection score.
arXiv Detail & Related papers (2023-05-25T09:26:04Z)
- MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and Camera Fusion [6.639648061168067]
Multi-view radar-camera fused 3D object detection provides a farther detection range and more helpful features for autonomous driving.
Current radar-camera fusion methods offer various designs for fusing radar information with camera data.
We present MVFusion, a novel Multi-View radar-camera Fusion method to achieve semantic-aligned radar features.
arXiv Detail & Related papers (2023-02-21T08:25:50Z)
- CramNet: Camera-Radar Fusion with Ray-Constrained Cross-Attention for Robust 3D Object Detection [12.557361522985898]
We propose CramNet, a camera-radar matching network that fuses the sensor readings from camera and radar in a joint 3D space.
Our method supports training with sensor modality dropout, which leads to robust 3D object detection even when a camera or radar sensor suddenly malfunctions on a vehicle (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2022-10-17T17:18:47Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution integrates a fish-eye camera as well to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution on par with the video camera, even though the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the preservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise present in Radar measurements is one of the main reasons that prevents the direct application of existing fusion methods.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
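For the sensor modality dropout mentioned in the CramNet entry above, the sketch below shows the general idea of randomly suppressing one modality's features during training so the detector learns to cope with a missing camera or radar input. The function name, drop probability, and zeroing strategy are assumptions for illustration and are not CramNet's actual implementation.

```python
# Illustrative sketch only: training-time sensor modality dropout.
# Function name, probabilities, and the zeroing strategy are assumptions,
# not CramNet's actual mechanism.
import torch

def modality_dropout(cam_feat, radar_feat, p_drop=0.2, training=True):
    """Randomly zero out one modality's features (never both at once).

    cam_feat, radar_feat: tensors of shape (B, C, H, W)
    p_drop: probability of dropping each individual modality
    """
    if not training:
        return cam_feat, radar_feat
    r = torch.rand(1).item()
    if r < p_drop:
        cam_feat = torch.zeros_like(cam_feat)      # simulate a failed camera
    elif r < 2 * p_drop:
        radar_feat = torch.zeros_like(radar_feat)  # simulate a failed radar
    return cam_feat, radar_feat

# Example: with p_drop=0.2, each modality is suppressed in roughly 20% of training steps.
cam, radar = modality_dropout(torch.randn(2, 64, 128, 128), torch.randn(2, 64, 128, 128))
```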