AllWeatherNet: Unified Image Enhancement for Autonomous Driving Under Adverse Weather and Low-Light Conditions
- URL: http://arxiv.org/abs/2409.02045v1
- Date: Tue, 3 Sep 2024 16:47:01 GMT
- Title: AllWeatherNet: Unified Image Enhancement for Autonomous Driving Under Adverse Weather and Low-Light Conditions
- Authors: Chenghao Qian, Mahdi Rezaei, Saeed Anwar, Wenjing Li, Tanveer Hussain, Mohsen Azarmi, Wei Wang
- Abstract summary: We propose a method to improve the visual quality and clarity degraded by adverse conditions.
Our method, AllWeather-Net, utilizes a novel hierarchical architecture to enhance images across all adverse conditions.
We show our model's generalization ability by applying it to unseen domains without re-training, achieving up to 3.9% mIoU improvement.
- Score: 24.36482818960804
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adverse conditions like snow, rain, nighttime, and fog, pose challenges for autonomous driving perception systems. Existing methods have limited effectiveness in improving essential computer vision tasks, such as semantic segmentation, and often focus on only one specific condition, such as removing rain or translating nighttime images into daytime ones. To address these limitations, we propose a method to improve the visual quality and clarity degraded by such adverse conditions. Our method, AllWeather-Net, utilizes a novel hierarchical architecture to enhance images across all adverse conditions. This architecture incorporates information at three semantic levels: scene, object, and texture, by discriminating patches at each level. Furthermore, we introduce a Scaled Illumination-aware Attention Mechanism (SIAM) that guides the learning towards road elements critical for autonomous driving perception. SIAM exhibits robustness, remaining unaffected by changes in weather conditions or environmental scenes. AllWeather-Net effectively transforms images into normal weather and daytime scenes, demonstrating superior image enhancement results and subsequently enhancing the performance of semantic segmentation, with up to a 5.3% improvement in mIoU in the trained domain. We also show our model's generalization ability by applying it to unseen domains without re-training, achieving up to 3.9% mIoU improvement. Code can be accessed at: https://github.com/Jumponthemoon/AllWeatherNet.
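The abstract describes SIAM only at a high level: an attention map that emphasizes road elements based on illumination, robust to weather changes. As a rough illustration of the general idea, here is a minimal NumPy sketch of an illumination-aware attention map that up-weights poorly lit regions; the function name, the channel-mean illumination estimate, and the sigmoid weighting are all illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def illumination_aware_attention(image, scale=4.0):
    """Toy sketch of illumination-aware attention (hypothetical, not SIAM itself).

    image: float array of shape (H, W, 3) with values in [0, 1].
    scale: sharpens the contrast of the attention map.
    Returns the attention map (H, W) and the re-weighted image (H, W, 3).
    """
    # Approximate per-pixel illumination as the mean over color channels.
    illumination = image.mean(axis=-1)  # shape (H, W)
    # Sigmoid weighting: dark pixels (low illumination) receive
    # higher attention, bright pixels lower attention.
    attention = 1.0 / (1.0 + np.exp(scale * (illumination - 0.5)))
    # Modulate the input by the attention map (broadcast over channels).
    weighted = image * attention[..., None]
    return attention, weighted
```

In the paper this signal is scaled and injected at multiple levels of the hierarchy to steer learning toward road elements; the sketch above only shows the single-map weighting step.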
Related papers
- Real-Time Multi-Scene Visibility Enhancement for Promoting Navigational Safety of Vessels Under Complex Weather Conditions [48.529493393948435]
The visible-light camera has emerged as an essential imaging sensor for marine surface vessels in intelligent waterborne transportation systems.
The visual imaging quality inevitably suffers from several kinds of degradations under complex weather conditions.
We develop a general-purpose multi-scene visibility enhancement method to restore degraded images captured under different weather conditions.
arXiv Detail & Related papers (2024-09-02T23:46:27Z)
- Robust ADAS: Enhancing Robustness of Machine Learning-based Advanced Driver Assistance Systems for Adverse Weather [5.383130566626935]
This paper employs a Denoising Deep Neural Network as a preprocessing step to transform adverse weather images into clear weather images.
It improves driver visualization, which is critical for safe navigation in adverse weather conditions.
arXiv Detail & Related papers (2024-07-02T18:03:52Z)
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser samplings at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with distribution-specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Generating Clear Images From Images With Distortions Caused by Adverse Weather Using Generative Adversarial Networks [0.0]
We presented a method for improving computer vision tasks on images affected by adverse weather conditions, including distortions caused by adherent raindrops.
We trained an appropriate generative adversarial network and showed that it was effective at removing the effect of the distortions.
arXiv Detail & Related papers (2022-11-01T05:02:44Z)
- TransWeather: Transformer-based Restoration of Images Degraded by Adverse Weather Conditions [77.20136060506906]
We propose TransWeather, a transformer-based end-to-end model with just a single encoder and a decoder.
TransWeather achieves significant improvements over the All-in-One network across multiple test datasets.
It is validated on real world test images and found to be more effective than previous methods.
arXiv Detail & Related papers (2021-11-29T18:57:09Z)
- Task-Driven Deep Image Enhancement Network for Autonomous Driving in Bad Weather [5.416049433853457]
In bad weather, visual perception is greatly affected by several degrading effects.
We introduce a new task-driven training strategy to guide the high-level task model suitable for both high-quality restoration of images and highly accurate perception.
Experimental results demonstrate that the proposed method substantially improves performance on lane detection, 2D object detection, and depth estimation under adverse weather.
arXiv Detail & Related papers (2021-10-14T08:03:33Z)
- Weather and Light Level Classification for Autonomous Driving: Dataset, Baseline and Active Learning [0.6445605125467573]
We build a new dataset for weather (fog, rain, and snow) classification and light level (bright, moderate, and low) classification.
Each image has three labels corresponding to weather, light level, and street type.
We implement an active learning framework to reduce the dataset's redundancy and find the optimal set of frames for training a model.
arXiv Detail & Related papers (2021-04-28T22:53:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.