Radar-Camera Fusion for Object Detection and Semantic Segmentation in
Autonomous Driving: A Comprehensive Review
- URL: http://arxiv.org/abs/2304.10410v2
- Date: Wed, 23 Aug 2023 15:15:59 GMT
- Title: Radar-Camera Fusion for Object Detection and Semantic Segmentation in
Autonomous Driving: A Comprehensive Review
- Authors: Shanliang Yao, Runwei Guan, Xiaoyu Huang, Zhuoxiao Li, Xiangyu Sha,
Yong Yue, Eng Gee Lim, Hyungjoon Seo, Ka Lok Man, Xiaohui Zhu, Yutao Yue
- Abstract summary: This review focuses on perception tasks related to object detection and semantic segmentation.
In the review, we address interrogative questions, including "why to fuse", "what to fuse", "where to fuse", "when to fuse", and "how to fuse".
- Score: 7.835577409160127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driven by deep learning techniques, perception technology in autonomous
driving has developed rapidly in recent years, enabling vehicles to accurately
detect and interpret the surrounding environment for safe and efficient navigation.
To achieve accurate and robust perception capabilities, autonomous vehicles are
often equipped with multiple sensors, making sensor fusion a crucial part of
the perception system. Among these fused sensors, radars and cameras enable a
complementary and cost-effective perception of the surrounding environment
regardless of lighting and weather conditions. This review aims to provide a
comprehensive guideline for radar-camera fusion, particularly concentrating on
perception tasks related to object detection and semantic segmentation. Based on
the principles of the radar and camera sensors, we delve into the data
processing pipelines and representations, followed by an in-depth analysis and
summary of radar-camera fusion datasets. In the review of methodologies in
radar-camera fusion, we address interrogative questions, including "why to
fuse", "what to fuse", "where to fuse", "when to fuse", and "how to fuse",
subsequently discussing various challenges and potential research directions
within this domain. To ease the retrieval and comparison of datasets and fusion
methods, we also provide an interactive website:
https://radar-camera-fusion.github.io.
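To make the "where to fuse" and "how to fuse" questions concrete, the sketch below shows one common option, feature-level fusion, in which camera and radar feature maps on a shared bird's-eye-view grid are concatenated and refined by a small convolutional head. This is an illustrative PyTorch sketch under assumed module names, channel counts, and tensor shapes, not an implementation from the reviewed paper.

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Illustrative feature-level radar-camera fusion on a shared BEV grid.

    Assumes both modalities have already been encoded and projected into
    bird's-eye-view feature maps of the same spatial size (an assumption,
    not a step taken from the paper).
    """

    def __init__(self, cam_channels: int = 64, radar_channels: int = 32,
                 fused_channels: int = 128, num_classes: int = 3):
        super().__init__()
        # Fuse by channel-wise concatenation followed by convolutions.
        self.fuse = nn.Sequential(
            nn.Conv2d(cam_channels + radar_channels, fused_channels, 3, padding=1),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused_channels, fused_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Per-cell classification head (e.g., BEV semantic segmentation).
        self.head = nn.Conv2d(fused_channels, num_classes, 1)

    def forward(self, cam_bev: torch.Tensor, radar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev: (B, cam_channels, H, W); radar_bev: (B, radar_channels, H, W)
        fused = self.fuse(torch.cat([cam_bev, radar_bev], dim=1))
        return self.head(fused)

if __name__ == "__main__":
    model = FeatureLevelFusion()
    cam = torch.randn(1, 64, 128, 128)    # camera BEV features
    radar = torch.randn(1, 32, 128, 128)  # radar BEV features
    print(model(cam, radar).shape)        # torch.Size([1, 3, 128, 128])
```

Decision-level (late) fusion would instead combine the outputs of per-sensor detectors, and data-level (early) fusion would merge raw or lightly processed measurements; the same concatenate-then-convolve pattern simply moves to a different stage of the pipeline.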
Related papers
- Radar and Camera Fusion for Object Detection and Tracking: A Comprehensive Survey [26.812387135057584]
We focus on the fundamental principles, methodologies, and applications of radar-camera fusion perception.
We provide a detailed taxonomy covering the research topics related to object detection and tracking in the context of radar and camera technologies.
arXiv Detail & Related papers (2024-10-24T07:37:57Z)
- Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
This review focuses on the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
arXiv Detail & Related papers (2023-12-08T06:31:19Z)
- Echoes Beyond Points: Unleashing the Power of Raw Radar Data in Multi-modality Fusion [74.84019379368807]
We propose a novel method named EchoFusion to skip the existing radar signal processing pipeline.
Specifically, we first generate the Bird's Eye View (BEV) queries and then take corresponding spectrum features from radar to fuse with other sensors.
arXiv Detail & Related papers (2023-07-31T09:53:50Z)
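A minimal sketch of the BEV-query idea summarized above, assuming learnable bird's-eye-view queries that cross-attend to flattened radar spectrum features; the module names, dimensions, and attention layout are illustrative assumptions, not the EchoFusion implementation.

```python
import torch
import torch.nn as nn

class BEVQueryRadarFusion(nn.Module):
    """Illustrative sketch: learnable BEV queries attend to raw radar spectrum features."""

    def __init__(self, embed_dim: int = 128, num_heads: int = 4,
                 bev_h: int = 50, bev_w: int = 50):
        super().__init__()
        self.bev_queries = nn.Parameter(torch.randn(bev_h * bev_w, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, radar_spectrum_feats: torch.Tensor) -> torch.Tensor:
        # radar_spectrum_feats: (B, N, embed_dim) flattened range-azimuth spectrum features.
        b = radar_spectrum_feats.shape[0]
        queries = self.bev_queries.unsqueeze(0).expand(b, -1, -1)
        # Each BEV cell gathers the radar evidence relevant to its location.
        attended, _ = self.cross_attn(queries, radar_spectrum_feats, radar_spectrum_feats)
        return self.norm(queries + attended)  # (B, bev_h * bev_w, embed_dim)

# Usage: bev = BEVQueryRadarFusion()(torch.randn(2, 1024, 128))
```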
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
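The point-wise fusion described above can be pictured roughly as follows: image features are sampled at the pixel locations where radar points project, then concatenated with per-point range-Doppler features and passed through a small MLP. The projection convention, shapes, and layer sizes below are assumptions for illustration, not the ROFusion code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointwiseRadarOpticalFusion(nn.Module):
    """Illustrative point-wise fusion of per-point radar features with sampled image features."""

    def __init__(self, img_channels: int = 64, radar_channels: int = 32, out_channels: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_channels + radar_channels, out_channels),
            nn.ReLU(inplace=True),
            nn.Linear(out_channels, out_channels),
        )

    def forward(self, img_feats, radar_point_feats, pixel_coords):
        # img_feats: (B, C_img, H, W) image feature map
        # radar_point_feats: (B, N, C_radar) per-point range-Doppler features
        # pixel_coords: (B, N, 2) projected point locations, normalized to [-1, 1]
        grid = pixel_coords.unsqueeze(2)                               # (B, N, 1, 2)
        sampled = F.grid_sample(img_feats, grid, align_corners=False)  # (B, C_img, N, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)                  # (B, N, C_img)
        return self.mlp(torch.cat([sampled, radar_point_feats], dim=-1))  # (B, N, out_channels)
```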
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird's-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution additionally integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the one employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- MmWave Radar and Vision Fusion based Object Detection for Autonomous Driving: A Survey [15.316597644398188]
Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection.
This article presents a detailed survey on mmWave radar and vision fusion based obstacle detection methods.
arXiv Detail & Related papers (2021-08-06T08:38:42Z)
- Complex-valued Convolutional Neural Networks for Enhanced Radar Signal Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
arXiv Detail & Related papers (2021-04-29T10:06:29Z)
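A complex-valued convolution, the building block named above, can be realized with two real-valued convolutions following the expansion (a + bi)(c + di) = (ac - bd) + (ad + bc)i. The sketch below is a minimal illustration of that idea, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Minimal complex-valued 2D convolution: (a + bi)(c + di) = (ac - bd) + (ad + bc)i."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3, padding: int = 1):
        super().__init__()
        # bias=False so the two real convolutions together behave like one complex weight.
        self.conv_real = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, bias=False)
        self.conv_imag = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding, bias=False)

    def forward(self, x_real: torch.Tensor, x_imag: torch.Tensor):
        # x_real, x_imag: (B, C, H, W) real and imaginary parts of a radar spectrum.
        out_real = self.conv_real(x_real) - self.conv_imag(x_imag)
        out_imag = self.conv_real(x_imag) + self.conv_imag(x_real)
        return out_real, out_imag  # phase information is preserved in the (real, imag) pair

# Usage: real, imag = ComplexConv2d(1, 8)(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```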
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
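One plausible reading of the two fusion stages named above, offered purely as an illustrative sketch rather than the RadarNet implementation: early fusion concatenates voxelized radar and LiDAR bird's-eye-view grids before the backbone, while late fusion uses attention weights over nearby radar returns to refine a per-object estimate. All shapes and modules are assumptions.

```python
import torch
import torch.nn as nn

class EarlyVoxelFusion(nn.Module):
    """Illustrative early fusion: concatenate voxelized LiDAR and radar BEV grids."""

    def __init__(self, lidar_channels: int = 32, radar_channels: int = 8, out_channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(lidar_channels + radar_channels, out_channels, 3, padding=1)

    def forward(self, lidar_bev, radar_bev):
        # Both inputs: (B, C, H, W) occupancy/feature grids on the same BEV lattice.
        return torch.relu(self.conv(torch.cat([lidar_bev, radar_bev], dim=1)))

class AttentionLateFusion(nn.Module):
    """Illustrative late fusion: attention over radar returns refines a per-object estimate."""

    def __init__(self, obj_dim: int = 64, radar_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(obj_dim + radar_dim, 1)  # pairwise association score
        self.refine = nn.Linear(radar_dim, obj_dim)

    def forward(self, obj_feat, radar_feats):
        # obj_feat: (B, obj_dim) one detection; radar_feats: (B, K, radar_dim) nearby returns.
        k = radar_feats.shape[1]
        pairs = torch.cat([obj_feat.unsqueeze(1).expand(-1, k, -1), radar_feats], dim=-1)
        weights = torch.softmax(self.score(pairs), dim=1)          # (B, K, 1)
        aggregated = (weights * self.refine(radar_feats)).sum(1)   # (B, obj_dim)
        return obj_feat + aggregated
```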