Object Detection Under Rainy Conditions for Autonomous Vehicles: A
Review of State-of-the-Art and Emerging Techniques
- URL: http://arxiv.org/abs/2006.16471v4
- Date: Fri, 12 Feb 2021 02:16:15 GMT
- Authors: Mazin Hnewa and Hayder Radha
- Abstract summary: This paper presents a tutorial on state-of-the-art techniques for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects.
Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions.
- Score: 5.33024001730262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advanced automotive active-safety systems, in general, and autonomous
vehicles, in particular, rely heavily on visual data to classify and localize
objects such as pedestrians, traffic signs and lights, and other nearby cars,
to help the corresponding vehicles maneuver safely in their environments.
However, the performance of object detection methods can degrade
significantly under challenging weather scenarios, including rainy conditions.
Despite major advancements in the development of deraining approaches, the
impact of rain on object detection has largely been understudied, especially in
the context of autonomous driving. The main objective of this paper is to
present a tutorial on state-of-the-art and emerging techniques that represent
leading candidates for mitigating the influence of rainy conditions on an
autonomous vehicle's ability to detect objects. Our goal includes surveying and
analyzing the performance of object detection methods trained and tested using
visual data captured under clear and rainy conditions. Moreover, we survey and
evaluate the efficacy and limitations of leading deraining approaches,
deep-learning based domain adaptation, and image translation frameworks that
are being considered for addressing the problem of object detection under rainy
conditions. Experimental results of a variety of the surveyed techniques are
presented as part of this tutorial.
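The rain models and detection pipelines surveyed in the paper are far more sophisticated, but as a minimal, self-contained illustration of why rain corrupts detector inputs, the toy function below (its name and parameters are mine, not taken from any surveyed method) overlays random bright vertical streaks on a flat grayscale image and measures the resulting intensity shift:

```python
import numpy as np

def add_rain(img, num_streaks=200, length=8, intensity=0.8, seed=0):
    """Overlay simple synthetic rain streaks (short vertical bright
    segments) on a grayscale image with values in [0, 1]. This is a
    crude stand-in for the physics-based rain models the paper surveys."""
    rng = np.random.default_rng(seed)
    rainy = img.copy()
    h, w = img.shape
    for _ in range(num_streaks):
        x = rng.integers(0, w)                      # streak column
        y = rng.integers(0, max(1, h - length))     # streak start row
        rainy[y:y + length, x] = np.clip(
            rainy[y:y + length, x] + intensity * rng.uniform(0.5, 1.0),
            0.0, 1.0)
    return rainy

clear = np.full((120, 160), 0.3)   # flat "clear" scene
rainy = add_rain(clear)
# Rain streaks shift the intensity statistics a detector was trained on.
print("mean shift:", float(rainy.mean() - clear.mean()))
```

In practice, the surveyed deraining and domain-adaptation approaches try to undo or absorb exactly this kind of input distribution shift before (or inside) the detector.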
Related papers
- Fairness in Autonomous Driving: Towards Understanding Confounding Factors in Object Detection under Challenging Weather [7.736445799116692]
This study provides an empirical analysis of fairness in detecting pedestrians in a state-of-the-art transformer-based object detector.
In addition to classical metrics, we introduce novel probability-based metrics to measure various intricate properties of object detection.
Our quantitative analysis reveals how the previously overlooked yet intuitive factors, such as the distribution of demographic groups in the scene, the severity of weather, the pedestrians' proximity to the AV, among others, affect object detection performance.
arXiv Detail & Related papers (2024-05-31T22:35:10Z)
- Challenges of YOLO Series for Object Detection in Extremely Heavy Rain: CALRA Simulator based Synthetic Evaluation Dataset [0.0]
For autonomous vehicles, object detection by diverse sensors (e.g., LiDAR, radar, and camera) should be prioritized.
These sensors must detect objects accurately and quickly in diverse weather conditions, but they tend to struggle to detect objects consistently in bad weather with rain, snow, or fog.
In this study, based on experimentally obtained raindrop data from precipitation conditions, we constructed a novel dataset that can test diverse network models under various precipitation conditions.
arXiv Detail & Related papers (2023-12-13T08:45:57Z)
- DRUformer: Enhancing the driving scene Important object detection with driving relationship self-understanding [50.81809690183755]
Traffic accidents frequently lead to fatal injuries, contributing to more than 50 million deaths by 2023.
Previous research primarily assessed the importance of individual participants, treating them as independent entities.
We introduce Driving scene Relationship self-Understanding transformer (DRUformer) to enhance the important object detection task.
arXiv Detail & Related papers (2023-11-11T07:26:47Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no solution to the problem of adapting MOT to domain shift under test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Domain Adaptation based Object Detection for Autonomous Driving in Foggy and Rainy Weather [44.711384869027775]
Due to the domain gap, a detection model trained under clear weather may not perform well in foggy and rainy conditions.
To bridge the domain gap and improve the performance of object detection in foggy and rainy weather, this paper presents a novel framework for domain-adaptive object detection.
arXiv Detail & Related papers (2023-07-18T23:06:47Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Studying Person-Specific Pointing and Gaze Behavior for Multimodal Referencing of Outside Objects from a Moving Vehicle [58.720142291102135]
Hand pointing and eye gaze have been extensively investigated in automotive applications for object selection and referencing.
Existing outside-the-vehicle referencing methods focus on a static situation, whereas the situation in a moving vehicle is highly dynamic and subject to safety-critical constraints.
We investigate the specific characteristics of each modality and the interaction between them when used in the task of referencing outside objects.
arXiv Detail & Related papers (2020-09-23T14:56:19Z)
- Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion [16.018962965273495]
All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability that utilizes information from the camera, lidar, and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
arXiv Detail & Related papers (2020-05-05T03:48:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.