MmWave Radar and Vision Fusion based Object Detection for Autonomous
Driving: A Survey
- URL: http://arxiv.org/abs/2108.03004v1
- Date: Fri, 6 Aug 2021 08:38:42 GMT
- Title: MmWave Radar and Vision Fusion based Object Detection for Autonomous
Driving: A Survey
- Authors: Zhiqing Wei, Fengkai Zhang, Shuo Chang, Yangyang Liu, Huici Wu,
Zhiyong Feng
- Abstract summary: Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for accurate obstacle detection.
This article presents a detailed survey on mmWave radar and vision fusion based obstacle detection methods.
- Score: 15.316597644398188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As autonomous driving develops rapidly, accurate object detection in
complex scenarios attracts wide attention as a prerequisite for driving safety.
Millimeter wave (mmWave) radar and vision fusion is a mainstream solution for
accurate obstacle detection. This article presents a detailed survey of mmWave
radar and vision fusion based obstacle detection methods. First, we introduce
the tasks, evaluation criteria and datasets of object detection for autonomous
driving. Then, the process of mmWave radar and vision fusion is divided into
three parts: sensor deployment, sensor calibration and sensor fusion, each of
which is reviewed comprehensively. In particular, we classify the fusion methods
into data-level, decision-level and feature-level fusion methods. In addition,
we review the fusion of lidar and vision in autonomous driving for obstacle
detection, object classification and road segmentation, which is a promising
direction for the future. Finally, we conclude the article.
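To make the data-level / decision-level / feature-level distinction from the abstract concrete, the following is a minimal, illustrative Python sketch. All function names, array shapes, and thresholds are hypothetical and do not come from the surveyed papers: data-level fusion uses radar returns to seed image regions of interest before a detector runs, feature-level fusion concatenates per-modality features for a joint head, and decision-level fusion merges the outputs of independent detectors.

```python
import numpy as np

# Hypothetical inputs: a camera frame and radar points (x, y, velocity, RCS).
image = np.zeros((480, 640, 3), dtype=np.uint8)
radar_points = np.array([[12.0, -1.5, 4.2, 9.0],
                         [30.0,  2.0, 0.0, 3.5]])

def project_to_image(points):
    """Stand-in for radar-to-camera projection after extrinsic/intrinsic calibration."""
    u = 320 + 10.0 * points[:, 1]   # fabricated pixel coordinates for illustration
    v = 240 - 2.0 * points[:, 0]
    return np.stack([u, v], axis=1)

def data_level_fusion(image, radar_points):
    """Data-level: project raw radar points onto the image to generate
    regions of interest before any detector runs."""
    pixels = project_to_image(radar_points)
    return [(int(u) - 40, int(v) - 40, 80, 80) for u, v in pixels]  # (x, y, w, h)

def feature_level_fusion(image, radar_points):
    """Feature-level: extract features from each modality and concatenate them
    so a joint detection head can learn from both."""
    image_feat = image.astype(np.float32).mean(axis=(0, 1))  # toy global image feature
    radar_feat = radar_points.mean(axis=0)                   # toy radar feature
    return np.concatenate([image_feat, radar_feat])

def decision_level_fusion(camera_dets, radar_dets):
    """Decision-level: run independent detectors, then merge their outputs,
    e.g. keep camera boxes confirmed by a nearby radar return."""
    return [d for d in camera_dets
            if any(abs(d["range"] - r["range"]) < 2.0 for r in radar_dets)]

if __name__ == "__main__":
    print("data-level ROIs:", data_level_fusion(image, radar_points))
    print("feature vector shape:", feature_level_fusion(image, radar_points).shape)
    cam = [{"label": "car", "range": 12.3}, {"label": "car", "range": 55.0}]
    rad = [{"range": 12.0}, {"range": 30.0}]
    print("decision-level confirmed:", decision_level_fusion(cam, rad))
```

The three functions differ only in where radar information enters the pipeline (raw data, learned features, or final detections), which is the axis along which the survey classifies fusion methods.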
Related papers
- A Survey of Deep Learning Based Radar and Vision Fusion for 3D Object Detection in Autonomous Driving [9.962648957398923]
This paper presents a comprehensive survey of deep-learning-based radar-vision (RV) fusion for 3D object detection in autonomous driving.
For end-to-end fusion, currently the most promising fusion strategy, we provide a deeper classification of methods, including 3D bounding box prediction based and BEV based approaches.
arXiv Detail & Related papers (2024-06-02T11:37:50Z)
- Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in Autonomous Driving [17.618527727914163]
This study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection.
We introduce an attack technique that, by simply adding a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model.
arXiv Detail & Related papers (2023-12-03T17:48:40Z)
- ROFusion: Efficient Object Detection using Hybrid Point-wise Radar-Optical Fusion [14.419658061805507]
We propose a hybrid point-wise Radar-Optical fusion approach for object detection in autonomous driving scenarios.
The framework benefits from dense contextual information from both the range-Doppler spectrum and images, which are integrated to learn a multi-modal feature representation.
arXiv Detail & Related papers (2023-07-17T04:25:46Z)
- Radar-Camera Fusion for Object Detection and Semantic Segmentation in Autonomous Driving: A Comprehensive Review [7.835577409160127]
This review focuses on perception tasks related to object detection and semantic segmentation.
In the review, we address interrogative questions, including "why to fuse", "what to fuse", "where to fuse", "when to fuse", and "how to fuse".
arXiv Detail & Related papers (2023-04-20T15:48:50Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- 3D Object Detection for Autonomous Driving: A Comprehensive Survey [48.30753402458884]
3D object detection, which intelligently predicts the locations, sizes, and categories of the critical 3D objects near an autonomous vehicle, is an important part of a perception system.
This paper reviews the advances in 3D object detection for autonomous driving.
arXiv Detail & Related papers (2022-06-19T19:43:11Z)
- Multi-Modal 3D Object Detection in Autonomous Driving: a Survey [10.913958563906931]
Self-driving cars are equipped with a suite of sensors to conduct robust and accurate environment perception.
As the number and type of sensors keep increasing, combining them for better perception is becoming a natural trend.
This survey reviews recent fusion-based 3D detection deep learning models that leverage multiple sensor data sources.
arXiv Detail & Related papers (2021-06-24T02:52:12Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
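As a purely illustrative companion to the RadarNet summary above, the sketch below shows one way the two fusion stages it names could be organized: an early stage that rasterizes LiDAR and radar points into a shared bird's-eye-view voxel grid, and a late stage that refines a detection's velocity with an attention-weighted average over associated radar returns. All grid sizes, shapes, and the softmax attention form are assumptions, not the paper's implementation.

```python
import numpy as np

def voxelize(points, grid=(100, 100), extent=50.0):
    """Rasterize (x, y) point coordinates into a BEV occupancy grid."""
    bev = np.zeros(grid, dtype=np.float32)
    for x, y in points[:, :2]:
        i = int((x + extent) / (2 * extent) * grid[0])
        j = int((y + extent) / (2 * extent) * grid[1])
        if 0 <= i < grid[0] and 0 <= j < grid[1]:
            bev[i, j] = 1.0
    return bev

def early_fusion(lidar_points, radar_points):
    """Early fusion: stack per-sensor BEV grids as channels of one input tensor."""
    return np.stack([voxelize(lidar_points), voxelize(radar_points)], axis=0)

def late_attention_fusion(det_velocity, radar_velocities, scores):
    """Late fusion: refine a detection's velocity estimate with an
    attention-weighted average over radar returns (softmax over scores)."""
    weights = np.exp(scores) / np.exp(scores).sum()
    return det_velocity + weights @ (np.asarray(radar_velocities) - det_velocity)

if __name__ == "__main__":
    lidar = np.random.uniform(-50, 50, size=(200, 3))
    radar = np.random.uniform(-50, 50, size=(30, 2))
    print("early-fused BEV tensor:", early_fusion(lidar, radar).shape)  # (2, 100, 100)
    print("refined velocity:",
          late_attention_fusion(5.0, [4.5, 6.0, 5.2], np.array([2.0, 0.5, 1.0])))
```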