Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather
- URL: http://arxiv.org/abs/2407.02581v1
- Date: Tue, 2 Jul 2024 18:03:52 GMT
- Title: Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather
- Authors: Muhammad Zaeem Shahzad, Muhammad Abdullah Hanif, Muhammad Shafique
- Abstract summary: This paper employs a Denoising Deep Neural Network as a preprocessing step to transform adverse weather images into clear weather images.
It improves driver visualization, which is critical for safe navigation in adverse weather conditions.
- Score: 5.383130566626935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of deploying Machine Learning-based Advanced Driver Assistance Systems (ML-ADAS) into real-world scenarios, adverse weather conditions pose a significant challenge. Conventional ML models trained on clear weather data falter when faced with scenarios like extreme fog or heavy rain, potentially leading to accidents and safety hazards. This paper addresses this issue by proposing a novel approach: employing a Denoising Deep Neural Network as a preprocessing step to transform adverse weather images into clear weather images, thereby enhancing the robustness of ML-ADAS. The proposed method eliminates the need for retraining all subsequent Deep Neural Networks (DNNs) in the ML-ADAS pipeline, thus saving computational resources and time. Moreover, it improves driver visualization, which is critical for safe navigation in adverse weather conditions. By leveraging the UNet architecture trained on an augmented KITTI dataset with synthetic adverse weather images, we develop the Weather UNet (WUNet) DNN to remove weather artifacts. Our study demonstrates substantial performance improvements in object detection with WUNet preprocessing under adverse weather conditions. Notably, in scenarios involving extreme fog, our proposed solution improves the mean Average Precision (mAP) score of YOLOv8n from 4% to 70%.
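The two-stage idea in the abstract (synthesize adverse-weather training images, then run a denoiser in front of frozen downstream DNNs) can be sketched as follows. The fog synthesis uses the standard atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·depth); the paper does not spell out its exact augmentation recipe or detector interface, so the function names and the toy detector below are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def add_synthetic_fog(clear, depth, beta=1.0, airlight=0.9):
    """Atmospheric scattering model I = J*t + A*(1 - t), t = exp(-beta*depth).
    A common way to build synthetic foggy KITTI images like those the paper
    trains WUNet on (the paper's exact recipe may differ)."""
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over RGB
    return clear * t + airlight * (1.0 - t)

def detect(image):
    """Stand-in for a frozen, clear-weather-trained detector such as YOLOv8n.
    Its weights are never retrained; only its input changes."""
    return image.mean()  # toy score in lieu of real bounding boxes

def robust_pipeline(image, denoise):
    """Key idea of the paper: a denoising DNN (WUNet) runs as a preprocessing
    step, so every downstream DNN in the ML-ADAS stack stays unchanged."""
    return detect(denoise(image))
```

Because the denoiser is the only new component, swapping it in or out requires no change to the detector, which is the source of the claimed savings in retraining cost.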
Related papers
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- Snowy Scenes, Clear Detections: A Robust Model for Traffic Light Detection in Adverse Weather Conditions [5.208045772970408]
Adverse weather presents major challenges for current detection systems, often resulting in failures and potential safety risks.
This paper introduces a novel framework and pipeline designed to improve object detection under such conditions.
Results show a 40.8% improvement in average IoU and F1 scores compared to naive fine-tuning.
arXiv Detail & Related papers (2024-06-19T11:52:12Z)
- FogGuard: guarding YOLO against fog using perceptual loss [5.868532677577194]
FogGuard is a fog-aware object detection network designed to address the challenges posed by foggy weather conditions.
FogGuard compensates for foggy conditions in the scene by incorporating YOLOv3 as the baseline algorithm.
Our network significantly improves performance, achieving a 69.43% mAP compared to YOLOv3's 57.78% on the RTTS dataset.
arXiv Detail & Related papers (2024-03-13T20:13:25Z)
- Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal [53.15046196592023]
We introduce test-time adaptation into adverse weather removal in videos.
We propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process.
arXiv Detail & Related papers (2024-03-12T14:21:30Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample specific weather prior extracted by CLIP image encoder together with the distribution specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Unsupervised Restoration of Weather-affected Images using Deep Gaussian Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks like de-raining, de-hazing and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z)
- How Do We Fail? Stress Testing Perception in Autonomous Vehicles [40.19326157052966]
This paper presents a method for characterizing failures of LiDAR-based perception systems for autonomous vehicles in adverse weather conditions.
We develop a methodology based in reinforcement learning to find likely failures in object tracking and trajectory prediction due to sequences of disturbances.
arXiv Detail & Related papers (2022-03-26T20:48:09Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
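The SNR loss summarized here follows from two-way Beer-Lambert attenuation of the laser pulse through scattering media. A minimal sketch of that effect is below; LISA's full physics model (angle-dependent scattering, spurious droplet returns) is richer than this, and the attenuation coefficient `alpha` is an illustrative value.

```python
import numpy as np

def attenuate_lidar_returns(intensity, ranges, alpha=0.05):
    """Two-way Beer-Lambert attenuation of lidar return intensity in fog:
    I_fog = I_clear * exp(-2 * alpha * R), where R is range in meters and
    alpha the extinction coefficient (1/m). Shows why SNR degrades fastest
    for distant objects; not LISA's complete scattering model."""
    return intensity * np.exp(-2.0 * alpha * np.asarray(ranges))
```

Doubling the range more than squares the attenuation factor, which is why far objects vanish from fog-corrupted point clouds first.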
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- TRU-NET: A Deep Learning Approach to High Resolution Prediction of Rainfall [21.399707529966474]
We present TRU-NET, an encoder-decoder model featuring a novel 2D cross attention mechanism between contiguous convolutional-recurrent layers.
We use a conditional-continuous loss function to capture the zero-skewed, extreme-event patterns of rainfall.
Experiments show that our model consistently attains lower RMSE and MAE scores than a DL model prevalent in short term precipitation prediction.
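One way to read "conditional-continuous loss" for zero-skewed rainfall is a Bernoulli rain/no-rain term combined with a regression term evaluated only where rain actually occurred. The sketch below is a hedged illustration of that decomposition, not TRU-NET's published formulation; the equal weighting of the two terms is an assumption.

```python
import numpy as np

def conditional_continuous_loss(pred_prob, pred_amount, target):
    """Illustrative conditional-continuous loss: binary cross-entropy on the
    rain/no-rain indicator, plus mean squared error on rainfall amount
    restricted to wet pixels. Handles the mass of exact zeros that a plain
    MSE over all pixels would be dominated by."""
    rained = (target > 0).astype(float)
    eps = 1e-7  # numerical floor inside the logs
    bce = -(rained * np.log(pred_prob + eps)
            + (1.0 - rained) * np.log(1.0 - pred_prob + eps)).mean()
    mse = (rained * (pred_amount - target) ** 2).sum() / max(rained.sum(), 1.0)
    return bce + mse
```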
arXiv Detail & Related papers (2020-08-20T17:27:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.