Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing
- URL: http://arxiv.org/abs/2201.03246v2
- Date: Tue, 11 Jan 2022 22:41:09 GMT
- Title: Vision in adverse weather: Augmentation using CycleGANs with various
object detectors for robust perception in autonomous racing
- Authors: Izzeddin Teeti, Valentina Musat, Salman Khan, Alexander Rast, Fabio
Cuzzolin, Andrew Bradley
- Abstract summary: In autonomous racing, the weather can change abruptly, causing significant degradation in perception, resulting in ineffective manoeuvres.
In order to improve detection in adverse weather, deep-learning-based models typically require extensive datasets captured in such conditions.
We introduce an approach that uses synthesised adverse-condition datasets (generated using CycleGAN) to improve the performance of four out of five state-of-the-art detectors in autonomous racing.
- Score: 70.16043883381677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In an autonomous driving system, perception - identification of features and
objects from the environment - is crucial. In autonomous racing, high speeds
and small margins demand rapid and accurate detection systems. During the race,
the weather can change abruptly, causing significant degradation in perception,
resulting in ineffective manoeuvres. In order to improve detection in adverse
weather, deep-learning-based models typically require extensive datasets
captured in such conditions - the collection of which is a tedious, laborious,
and costly process. However, recent developments in CycleGAN architectures
allow the synthesis of highly realistic scenes in multiple weather conditions.
To this end, we introduce an approach that uses synthesised adverse-condition
datasets in autonomous racing (generated using CycleGAN) to improve the
performance of four out of five state-of-the-art detectors by an average of
42.7 and 4.4 mAP percentage points in the presence of night-time conditions and
droplets, respectively. Furthermore, we present a comparative analysis of five
object detectors - identifying the optimal pairing of detector and training
data for use during autonomous racing in challenging conditions.
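The augmentation recipe above lends itself to a short sketch. Below is a minimal, hypothetical offline pipeline in PyTorch: it assumes a pretrained clear-to-night CycleGAN generator exported as TorchScript; the checkpoint path, folder layout, and 256x256 working resolution are illustrative assumptions, not artefacts released with the paper.

```python
# Hypothetical offline augmentation: translate clear-weather frames to
# night-time with a pretrained CycleGAN generator, then train a detector
# on the union of real and synthesised images. Checkpoint path and
# folders are placeholders, not artefacts from the paper.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder: a TorchScript export of the clear -> night generator G.
G = torch.jit.load("checkpoints/clear2night_G.pt").to(device).eval()

# CycleGAN generators are usually trained on [-1, 1] normalised RGB.
to_tensor = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

src, dst = Path("data/clear"), Path("data/synth_night")
dst.mkdir(parents=True, exist_ok=True)

with torch.no_grad():
    for img_path in sorted(src.glob("*.png")):
        x = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0).to(device)
        fake_night = G(x)                      # domain translation: clear -> night
        fake_night = (fake_night + 1.0) / 2.0  # back to [0, 1] for saving
        save_image(fake_night, dst / img_path.name)
```

Because image-to-image translation leaves object positions unchanged, the original bounding-box annotations can typically be reused for the synthesised frames, which is what makes this form of augmentation cheap compared with collecting and labelling real adverse-weather data.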
Related papers
- Digital twins to alleviate the need for real field data in vision-based vehicle speed detection systems [0.9899633398596672]
Accurate vision-based speed estimation is more cost-effective than traditional methods based on radar or LiDAR.
Deep learning approaches are very limited in this context due to the lack of available data.
In this work, we propose using digital twins, built with the CARLA simulator, to generate a large dataset representative of a specific real-world camera; a minimal capture sketch follows this entry.
arXiv Detail & Related papers (2024-07-11T10:41:20Z)
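As a rough illustration of the digital-twin idea in the entry above, the following sketch uses the CARLA Python API to spawn a fixed roadside RGB camera and dump frames to disk. The pose, resolution, and field-of-view values are invented for illustration; the paper calibrates its twin against a specific real-world camera.

```python
# Illustrative CARLA capture loop: spawn an RGB camera that roughly
# mimics a fixed roadside speed camera and save frames to disk.
# Intrinsics and pose are made-up values, not the paper's calibration.
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

bp = world.get_blueprint_library().find("sensor.camera.rgb")
bp.set_attribute("image_size_x", "1920")
bp.set_attribute("image_size_y", "1080")
bp.set_attribute("fov", "60")

# Fixed pose above the road, pitched down like a surveillance camera.
pose = carla.Transform(
    carla.Location(x=10.0, y=0.0, z=6.0),
    carla.Rotation(pitch=-15.0),
)
camera = world.spawn_actor(bp, pose)
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```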
- Enhancing Lidar-based Object Detection in Adverse Weather using Offset Sequences in Time [1.1725016312484975]
Lidar-based object detection is significantly affected by adverse weather conditions such as rain and fog.
Our research provides a comprehensive study of effective methods for mitigating the effects of adverse weather on the reliability of lidar-based object detection.
arXiv Detail & Related papers (2024-01-17T08:31:58Z)
- Challenges of YOLO Series for Object Detection in Extremely Heavy Rain: CALRA Simulator based Synthetic Evaluation Dataset [0.0]
Object detection with diverse sensors (e.g., LiDAR, radar, and camera) is a priority for autonomous vehicles.
These sensors must detect objects accurately and quickly across diverse weather conditions, yet they struggle to do so consistently in rain, snow, or fog.
In this study, based on experimentally obtained raindrop data from precipitation conditions, we constructed a novel dataset for testing diverse network models under various precipitation intensities; a minimal evaluation sketch follows this entry.
arXiv Detail & Related papers (2023-12-13T08:45:57Z)
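A hypothetical way to run the kind of per-intensity evaluation the entry above describes, using the ultralytics package as a stand-in for the YOLO series. The dataset YAML files are placeholders for splits of a synthetic rain dataset; they are not released with the paper.

```python
# Hypothetical comparison of a YOLO detector across precipitation levels.
# The per-intensity dataset YAMLs are placeholders for splits of a
# synthetic rain dataset such as the one this paper constructs.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained COCO weights

for split in ["rain_0mmh.yaml", "rain_20mmh.yaml", "rain_50mmh.yaml"]:
    metrics = model.val(data=split, verbose=False)
    # mAP@0.5:0.95 typically drops as rain intensity increases.
    print(f"{split}: mAP50-95 = {metrics.box.map:.3f}")
```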
- 4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions [54.59279160621111]
We present a novel visual SLAM and long-term localization benchmark for autonomous driving in challenging conditions based on the large-scale 4Seasons dataset.
The proposed benchmark provides drastic appearance variations caused by seasonal changes and diverse weather and illumination conditions.
We introduce a new unified benchmark for jointly evaluating visual odometry, global place recognition, and map-based visual localization performance.
arXiv Detail & Related papers (2022-12-31T13:52:36Z)
- Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z)
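The entry above centres on a recurrent transformer backbone. The toy PyTorch stage below only conveys the general pattern (attention mixes spatial tokens within an event frame, an LSTM carries state across frames); it is not the authors' RVT architecture, and all dimensions are arbitrary.

```python
# Schematic of one recurrent backbone stage in the spirit of RVTs:
# attention mixes spatial features within a frame, and an LSTM carries
# temporal state across frames. A toy sketch, not the authors' model.
import torch
import torch.nn as nn

class RecurrentStage(nn.Module):
    def __init__(self, in_ch: int, dim: int):
        super().__init__()
        self.down = nn.Conv2d(in_ch, dim, kernel_size=3, stride=2, padding=1)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.lstm = nn.LSTMCell(dim, dim)  # weights shared across positions

    def forward(self, x, state=None):
        f = self.down(x)                       # (B, D, H, W)
        b, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W, D) spatial tokens
        tokens = self.norm(tokens + self.attn(tokens, tokens, tokens)[0])
        flat = tokens.reshape(b * h * w, d)    # one LSTM step per position
        hx, cx = self.lstm(flat, state)
        out = hx.view(b, h, w, d).permute(0, 3, 1, 2)
        return out, (hx, cx)

# Temporal state persists across the event-frame sequence:
stage = RecurrentStage(in_ch=2, dim=64)
state = None
for t in range(10):                    # ten consecutive event frames
    frame = torch.randn(1, 2, 64, 64)  # e.g. a 2-channel event histogram
    feats, state = stage(frame, state)
```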
- SHIFT: A Synthetic Driving Dataset for Continuous Multi-Task Domain Adaptation [152.60469768559878]
SHIFT is the largest multi-task synthetic dataset for autonomous driving.
It presents discrete and continuous shifts in cloudiness, rain and fog intensity, time of day, and vehicle and pedestrian density.
Our dataset and benchmark toolkit are publicly available at www.vis.xyz/shift.
arXiv Detail & Related papers (2022-06-16T17:59:52Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
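In the spirit of the physics-based simulation described above, here is a deliberately crude NumPy stand-in: a single extinction coefficient attenuates return intensity and randomly drops points with range. LISA's actual model is far more detailed (droplet size distributions, backscattered rain returns, and so on), so treat this only as the shape of the idea.

```python
# Crude stand-in for physics-based lidar weather augmentation: attenuate
# return intensity and randomly drop far points using a simple
# exponential extinction model. Not LISA's model.
import numpy as np

def augment_rain(points: np.ndarray, alpha: float = 0.02, rng=None):
    """points: (N, 4) array of x, y, z, intensity; alpha: extinction (1/m)."""
    rng = rng or np.random.default_rng()
    r = np.linalg.norm(points[:, :3], axis=1)    # range of each return
    transmittance = np.exp(-2.0 * alpha * r)     # two-way attenuation
    out = points.copy()
    out[:, 3] *= transmittance                   # weaker returns
    keep = rng.random(len(out)) < transmittance  # lost detections
    return out[keep]

cloud = np.random.rand(100_000, 4) * [80, 80, 4, 1]  # synthetic point cloud
rainy = augment_rain(cloud, alpha=0.02)
print(f"kept {len(rainy)}/{len(cloud)} points")
```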
- Worsening Perception: Real-time Degradation of Autonomous Vehicle Perception Performance for Simulation of Adverse Weather Conditions [47.529411576737644]
This study explores the potential of using a simple, lightweight image augmentation system in an autonomous racing vehicle.
With minimal adjustment, the prototype system can replicate the effects of both water droplets on the camera lens and fading light conditions.
arXiv Detail & Related papers (2021-03-03T23:49:02Z)
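A lightweight augmentation of the kind this prototype describes can be sketched with OpenCV and NumPy: a gamma ramp imitates fading light, and blurred translucent blobs imitate droplets on the lens. All parameter values and the input path below are illustrative guesses, not the prototype's settings.

```python
# Lightweight image corruptions in the spirit of this prototype: a gamma
# ramp for fading light and blurred translucent blobs for lens droplets.
import cv2
import numpy as np

def fade_light(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
    """gamma > 1 darkens the scene, mimicking dusk."""
    x = (img.astype(np.float32) / 255.0) ** gamma
    return (x * 255.0).astype(np.uint8)

def add_droplets(img: np.ndarray, n: int = 30, rng=None) -> np.ndarray:
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    blurred = cv2.GaussianBlur(img, (31, 31), 0)  # what a droplet "sees"
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(n):
        cx, cy = rng.integers(0, w), rng.integers(0, h)
        cv2.circle(mask, (int(cx), int(cy)), int(rng.integers(5, 20)), 255, -1)
    mask = cv2.GaussianBlur(mask, (15, 15), 0)[..., None] / 255.0
    return (img * (1 - mask) + blurred * mask).astype(np.uint8)

frame = cv2.imread("frame.png")             # placeholder camera frame
degraded = add_droplets(fade_light(frame))  # fading light + droplets
cv2.imwrite("frame_degraded.png", degraded)
```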
- Probabilistic End-to-End Vehicle Navigation in Complex Dynamic Environments with Multimodal Sensor Fusion [16.018962965273495]
All-day and all-weather navigation is a critical capability for autonomous driving.
We propose a probabilistic driving model with multi-perception capability, utilizing information from the camera, lidar and radar.
The results suggest that our proposed model outperforms baselines and achieves excellent generalization performance in unseen environments.
arXiv Detail & Related papers (2020-05-05T03:48:10Z)