Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics
- URL: http://arxiv.org/abs/2303.04302v1
- Date: Wed, 8 Mar 2023 00:48:32 GMT
- Authors: Felipe Manfio Barbosa, Fernando Santos Osório
- Abstract summary: This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main paths towards the reduction of traffic accidents is the
increase in vehicle safety through driver assistance systems or even systems
with a complete level of autonomy. In these types of systems, tasks such as
obstacle detection and segmentation, especially the Deep Learning-based ones,
play a fundamental role in scene understanding for correct and safe navigation.
Besides that, the wide variety of sensors in vehicles nowadays provides a rich
set of alternatives for improvement in the robustness of perception in
challenging situations, such as navigation under adverse lighting and weather
conditions. Despite the current focus on the subject, the literature
lacks studies on radar-based and radar-camera fusion-based perception. Hence,
this work aims to carry out a study on the current scenario of camera and
radar-based perception for ADAS and autonomous vehicles. Concepts and
characteristics related to both sensors, as well as to their fusion, are
presented. Additionally, we give an overview of the Deep Learning-based
detection and segmentation tasks, and the main datasets, metrics, challenges,
and open questions in vehicle perception.
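To make the metrics discussion concrete, the following is a minimal, illustrative sketch (not code from the paper) of 2D Intersection-over-Union, the overlap measure on which detection mAP and segmentation mIoU are built:

```python
def iou_2d(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2). IoU underlies both detection mAP matching and
    segmentation mIoU, two of the standard metrics surveys cover."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is usually counted as a true positive when its IoU with a
# ground-truth box exceeds a threshold (0.5 and 0.7 are common choices).
print(iou_2d((0, 0, 10, 10), (5, 5, 15, 15)))  # 0.1428...
```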
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z) - Exploring Radar Data Representations in Autonomous Driving: A Comprehensive Review [9.68427762815025]
The review focuses on the different radar data representations utilized in autonomous driving systems.
We introduce the capabilities and limitations of the radar sensor.
For each radar representation, we examine the related datasets, methods, advantages and limitations.
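As an illustration of one common radar representation covered by such reviews, the sketch below rasterizes radar point detections into a bird's-eye-view occupancy grid; the grid extents and cell size are arbitrary assumptions, not values taken from the review:

```python
import numpy as np

def radar_points_to_bev(points_xy, x_range=(0.0, 100.0), y_range=(-50.0, 50.0),
                        cell_size=0.5):
    """Rasterize 2D radar detections (N x 2 array of x, y in meters, ego frame)
    into a bird's-eye-view occupancy grid, one common radar representation."""
    nx = int((x_range[1] - x_range[0]) / cell_size)
    ny = int((y_range[1] - y_range[0]) / cell_size)
    grid = np.zeros((nx, ny), dtype=np.float32)
    ix = ((points_xy[:, 0] - x_range[0]) / cell_size).astype(int)
    iy = ((points_xy[:, 1] - y_range[0]) / cell_size).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[valid], iy[valid]] = 1.0
    return grid

points = np.array([[12.3, -4.1], [47.8, 10.5], [80.2, 0.3]])
bev = radar_points_to_bev(points)
print(bev.shape, bev.sum())  # (200, 200) 3.0
```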
arXiv Detail & Related papers (2023-12-08T06:31:19Z) - Radars for Autonomous Driving: A Review of Deep Learning Methods and
Challenges [0.021665899581403605]
Radar is a key component of the suite of perception sensors used for autonomous vehicles.
It is characterized by low resolution, sparsity, clutter, high uncertainty, and lack of good datasets.
Current radar models are often influenced by lidar and vision models, which are focused on optical features that are relatively weak in radar data.
arXiv Detail & Related papers (2023-06-15T17:37:52Z) - Radar-Camera Fusion for Object Detection and Semantic Segmentation in
Autonomous Driving: A Comprehensive Review [7.835577409160127]
This review focuses on perception tasks related to object detection and semantic segmentation.
In the review, we address interrogative questions, including "why to fuse", "what to fuse", "where to fuse", "when to fuse", and "how to fuse".
arXiv Detail & Related papers (2023-04-20T15:48:50Z) - 3D Object Detection for Autonomous Driving: A Comprehensive Survey [48.30753402458884]
3D object detection, which intelligently predicts the locations, sizes, and categories of the critical 3D objects near an autonomous vehicle, is an important part of a perception system.
This paper reviews the advances in 3D object detection for autonomous driving.
arXiv Detail & Related papers (2022-06-19T19:43:11Z) - Object Detection in Autonomous Vehicles: Status and Open Challenges [4.226118870861363]
Object detection is a computer vision task that has become an integral part of many consumer applications today.
Deep learning-based object detectors play a vital role in finding and localizing these objects in real-time.
This article discusses the state-of-the-art in object detectors and open challenges for their integration into autonomous vehicles.
arXiv Detail & Related papers (2022-01-19T16:45:16Z) - Complex-valued Convolutional Neural Networks for Enhanced Radar Signal
Denoising and Interference Mitigation [73.0103413636673]
We propose the use of Complex-Valued Convolutional Neural Networks (CVCNNs) to address the issue of mutual interference between radar sensors.
CVCNNs increase data efficiency, speed up network training, and substantially improve the conservation of phase information during interference removal.
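The core building block of a CVCNN can be written as two coupled real-valued convolutions. The snippet below is a generic sketch of that idea, not the authors' implementation; layer sizes and input shapes are arbitrary:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution via two real-valued convolutions:
    (Wr + i*Wi) * (xr + i*xi) = (Wr*xr - Wi*xi) + i*(Wr*xi + Wi*xr)."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_real, x_imag):
        out_real = self.conv_r(x_real) - self.conv_i(x_imag)
        out_imag = self.conv_r(x_imag) + self.conv_i(x_real)
        return out_real, out_imag

# Example: a complex-valued range-Doppler map with one channel.
layer = ComplexConv2d(1, 8, kernel_size=3, padding=1)
rd_real = torch.randn(1, 1, 64, 64)
rd_imag = torch.randn(1, 1, 64, 64)
y_real, y_imag = layer(rd_real, rd_imag)
print(y_real.shape)  # torch.Size([1, 8, 64, 64])
```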
arXiv Detail & Related papers (2021-04-29T10:06:29Z) - RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
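The attention-based late fusion mentioned in the summary can be illustrated with a generic sketch like the one below; it is only a schematic of the idea, and the tensor shapes, layer sizes, and gating scheme are assumptions rather than RadarNet's actual architecture:

```python
import torch
import torch.nn as nn

class AttentionLateFusion(nn.Module):
    """Fuse LiDAR and radar feature maps with learned per-location weights.
    A schematic of attention-based late fusion, not RadarNet's real design."""
    def __init__(self, channels):
        super().__init__()
        # Predict two attention logits (one per sensor) at every location.
        self.gate = nn.Conv2d(2 * channels, 2, kernel_size=1)

    def forward(self, lidar_feat, radar_feat):
        logits = self.gate(torch.cat([lidar_feat, radar_feat], dim=1))
        weights = torch.softmax(logits, dim=1)   # (B, 2, H, W)
        w_lidar = weights[:, 0:1]                # (B, 1, H, W)
        w_radar = weights[:, 1:2]
        return w_lidar * lidar_feat + w_radar * radar_feat

fusion = AttentionLateFusion(channels=64)
lidar = torch.randn(1, 64, 100, 100)
radar = torch.randn(1, 64, 100, 100)
fused = fusion(lidar, radar)
print(fused.shape)  # torch.Size([1, 64, 100, 100])
```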
arXiv Detail & Related papers (2020-07-28T17:15:02Z) - Towards robust sensing for Autonomous Vehicles: An adversarial
perspective [82.83630604517249]
It is of primary importance that the driving decisions derived from sensory measurements are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
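As a concrete example of a purposefully crafted alteration of sensory measurements, the snippet below sketches the classic fast gradient sign method (FGSM) applied to a camera image. It is a generic illustration, not a method from the paper, and the classifier and input in the usage comment are hypothetical:

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Craft an adversarial version of `image` by stepping along the sign of
    the loss gradient (fast gradient sign method). Generic illustration only."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage (hypothetical classifier and normalized RGB input batch):
# adv_img = fgsm_perturb(classifier, img.unsqueeze(0), torch.tensor([3]))
```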
arXiv Detail & Related papers (2020-07-14T05:25:15Z) - Road obstacles positional and dynamic features extraction combining
object detection, stereo disparity maps and optical flow data [0.0]
A visual perception system used for navigation must reliably identify obstacles.
We present an approach for the identification of obstacles and extraction of class, position, depth and motion information.
arXiv Detail & Related papers (2020-06-24T19:29:06Z)