Generating Clear Images From Images With Distortions Caused by Adverse
Weather Using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2211.05234v1
- Date: Tue, 1 Nov 2022 05:02:44 GMT
- Title: Generating Clear Images From Images With Distortions Caused by Adverse
Weather Using Generative Adversarial Networks
- Authors: Nuriel Shalom Mor
- Abstract summary: We presented a method for improving computer vision tasks on images affected by adverse weather conditions, including distortions caused by adherent raindrops.
We trained an appropriate generative adversarial network and showed that it was effective at removing the effect of the distortions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We presented a method for improving computer vision tasks on images affected
by adverse weather conditions, including distortions caused by adherent
raindrops. Overcoming the challenge of applying computer vision to images
affected by adverse weather conditions is essential for autonomous vehicles
utilizing RGB cameras. For this purpose, we trained an appropriate generative
adversarial network and showed that it was effective at removing the effect of
the distortions, in the context of image reconstruction and computer vision
tasks. We showed that object recognition, a vital task for autonomous driving
vehicles, is completely impaired by the distortions and occlusions caused by
adherent raindrops and that performance can be restored by our de-raining
model. The approach described in this paper could be applied to all adverse
weather conditions.
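The abstract gives no architectural or training details, so the following is only a rough, hypothetical sketch of how a paired de-raining GAN of this kind is typically trained; the pix2pix-style conditional discriminator, the loss weights, and the module names are assumptions, not the authors' code.

    # Minimal sketch of a paired de-raining GAN training step (assumed setup).
    import torch
    import torch.nn.functional as F

    def train_step(generator, discriminator, g_opt, d_opt, rainy, clear, l1_weight=100.0):
        """One adversarial training step on a (rainy, clear) image pair."""
        fake_clear = generator(rainy)

        # Discriminator: real (rainy, clear) pairs vs. generated pairs.
        d_opt.zero_grad()
        real_logits = discriminator(torch.cat([rainy, clear], dim=1))
        fake_logits = discriminator(torch.cat([rainy, fake_clear.detach()], dim=1))
        d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
                  + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
        d_loss.backward()
        d_opt.step()

        # Generator: fool the discriminator while staying close to the clear target.
        g_opt.zero_grad()
        fake_logits = discriminator(torch.cat([rainy, fake_clear], dim=1))
        g_loss = (F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
                  + l1_weight * F.l1_loss(fake_clear, clear))
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()
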
Related papers
- Enhancing autonomous vehicle safety in rain: a data-centric approach for clear vision [0.0]
We developed a vision model that processes live vehicle camera feeds to eliminate rain-induced visual hindrances.
We employed a classic encoder-decoder architecture with skip connections and concatenation operations.
The results demonstrated notable improvements in steering accuracy, underscoring the model's potential to enhance navigation safety and reliability in rainy weather conditions.
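As a rough illustration of the "classic encoder-decoder architecture with skip connections and concatenation operations" mentioned above, here is a minimal U-Net-style sketch; the layer widths, depth, and output activation are assumptions, not the paper's exact configuration.

    # Minimal U-Net-style encoder-decoder with skip connections (assumed layout).
    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    class DerainUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
            self.bottleneck = conv_block(64, 128)
            self.pool = nn.MaxPool2d(2)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = conv_block(128, 64)          # 64 (skip) + 64 (upsampled)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = conv_block(64, 32)           # 32 (skip) + 32 (upsampled)
            self.out = nn.Conv2d(32, 3, 1)

        def forward(self, x):
            e1 = self.enc1(x)                        # full-resolution features
            e2 = self.enc2(self.pool(e1))            # 1/2 resolution
            b = self.bottleneck(self.pool(e2))       # 1/4 resolution
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip concatenation
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip concatenation
            return torch.sigmoid(self.out(d1))       # de-rained RGB estimate
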
arXiv Detail & Related papers (2024-12-29T20:27:12Z)
- AllWeatherNet: Unified Image Enhancement for Autonomous Driving under Adverse Weather and Low-light Conditions [24.36482818960804]
We propose a method to improve the visual quality and clarity degraded by adverse conditions.
Our method, AllWeather-Net, utilizes a novel hierarchical architecture to enhance images across all adverse conditions.
We show our model's generalization ability by applying it to unseen domains without re-training, achieving up to 3.9% mIoU improvement.
arXiv Detail & Related papers (2024-09-03T16:47:01Z)
- SUSTechGAN: Image Generation for Object Detection in Adverse Conditions of Autonomous Driving [22.985889862182642]
Generative adversarial networks (GANs) have been applied to augment data for autonomous driving.
We propose a novel framework, SUSTechGAN, with customized dual attention modules, multi-scale generators, and a novel loss function.
We test the SUSTechGAN and the well-known GANs to generate driving images in adverse conditions of rain and night and apply the generated images to retrain object detection networks.
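The abstract only names "customized dual attention modules" without details, so the block below is a generic channel-plus-spatial attention sketch, offered purely as an illustration of the idea rather than SUSTechGAN's actual module.

    # Illustrative dual (channel + spatial) attention block; SUSTechGAN's
    # customized modules may differ from this generic sketch.
    import torch
    import torch.nn as nn

    class DualAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            # Channel attention: squeeze spatial dims, re-weight each feature map.
            self.channel_mlp = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
            # Spatial attention: weight each location from pooled channel statistics.
            self.spatial_conv = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

        def forward(self, x):
            x = x * self.channel_mlp(x)                       # channel re-weighting
            avg_map = x.mean(dim=1, keepdim=True)             # per-location mean
            max_map = x.amax(dim=1, keepdim=True)             # per-location max
            return x * self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
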
arXiv Detail & Related papers (2024-07-18T15:32:25Z)
- Continual All-in-One Adverse Weather Removal with Knowledge Replay on a Unified Network Structure [92.8834309803903]
In real-world applications, image degeneration caused by adverse weather is complex and changes with weather conditions across days and seasons.
We develop a novel continual learning framework with effective knowledge replay (KR) on a unified network structure.
It considers the characteristics of the image restoration task with multiple degenerations in continual learning, so that knowledge for different degenerations can be shared and accumulated.
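The knowledge-replay mechanism itself is not detailed in this summary; the sketch below shows only a generic experience-replay loop for sequentially arriving weather degradations, with the buffer size, sampling scheme, and L1 objective as assumptions.

    # Generic experience-replay sketch for sequential adverse-weather tasks; the
    # paper's knowledge replay on a unified network structure is more elaborate.
    import random
    import torch
    import torch.nn.functional as F

    class ReplayBuffer:
        def __init__(self, capacity=1000):
            self.capacity, self.items = capacity, []

        def add(self, degraded, clean):
            if len(self.items) >= self.capacity:
                self.items.pop(random.randrange(len(self.items)))
            self.items.append((degraded.detach().cpu(), clean.detach().cpu()))

        def sample(self, k):
            return random.sample(self.items, min(k, len(self.items)))

    def continual_step(model, opt, buffer, degraded, clean, replay_k=4, device="cpu"):
        """Train on the current weather type while replaying stored examples."""
        batches = [(degraded, clean)] + [(d.to(device), c.to(device))
                                         for d, c in buffer.sample(replay_k)]
        opt.zero_grad()
        loss = sum(F.l1_loss(model(d), c) for d, c in batches) / len(batches)
        loss.backward()
        opt.step()
        buffer.add(degraded, clean)   # store current examples for future replay
        return loss.item()
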
arXiv Detail & Related papers (2024-03-12T03:50:57Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degradation of image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
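As background for the fog/scene decomposition, the snippet below implements only the standard single-scattering (Koschmieder) image-formation model; ScatterNeRF's full volumetric rendering and physics-inspired losses go well beyond this simplified form, and the coefficients here are arbitrary.

    # Simplified single-scattering image-formation model for fog compositing.
    import torch

    def composite_fog(clear_rgb, depth, beta=0.05, airlight=0.8):
        """clear_rgb: (B,3,H,W) fog-free radiance, depth: (B,1,H,W) in meters."""
        transmittance = torch.exp(-beta * depth)          # light surviving the medium
        return clear_rgb * transmittance + airlight * (1.0 - transmittance)
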
arXiv Detail & Related papers (2023-05-03T13:24:06Z)
- Video Waterdrop Removal via Spatio-Temporal Fusion in Driving Scenes [53.16726447796844]
The waterdrops on windshields during driving can cause severe visual obstructions, which may lead to car accidents.
We propose an attention-based framework that fuses the representations from multiple frames to restore visual information occluded by waterdrops.
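The exact fusion design is not given in this summary; the module below is a simplified temporal-attention fusion over neighbouring frames, with the feature shape, embedding layer, and reference-frame choice as assumptions.

    # Simplified temporal attention fusion across frames; the paper's
    # spatio-temporal framework is more involved than this sketch.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TemporalFusion(nn.Module):
        def __init__(self, channels=64):
            super().__init__()
            self.embed = nn.Conv2d(channels, channels, 3, padding=1)

        def forward(self, frame_feats):
            """frame_feats: (B, T, C, H, W) features; frame T//2 is the reference."""
            b, t, c, h, w = frame_feats.shape
            ref = self.embed(frame_feats[:, t // 2])                 # (B, C, H, W)
            weights = []
            for i in range(t):
                emb = self.embed(frame_feats[:, i])
                # Per-pixel similarity between each frame and the reference frame.
                weights.append((emb * ref).sum(dim=1, keepdim=True))
            attn = F.softmax(torch.stack(weights, dim=1), dim=1)     # (B, T, 1, H, W)
            return (attn * frame_feats).sum(dim=1)                   # fused (B, C, H, W)
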
arXiv Detail & Related papers (2023-02-12T13:47:26Z)
- See Blue Sky: Deep Image Dehaze Using Paired and Unpaired Training Images [73.23687409870656]
We propose a cycle generative adversarial network to construct a novel end-to-end image dehaze model.
We adopt outdoor image datasets to train our model, including a real-world unpaired image dataset and a paired image dataset.
Based on the cycle structure, our model combines four kinds of loss functions to constrain the result: adversarial loss, cycle consistency loss, photorealism loss, and paired L1 loss.
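A minimal sketch of how the four named losses could be combined is shown below; the weights, the photorealism term (approximated here as a feature-space distance with a stand-in feature extractor), and the function interface are assumptions rather than the paper's implementation.

    # Hedged sketch of combining the four named generator losses.
    import torch
    import torch.nn.functional as F

    def generator_loss(fake_logits, hazy, dehazed, rehazed, clear_gt, feat_extractor,
                       w_cyc=10.0, w_photo=1.0, w_l1=10.0):
        adv = F.binary_cross_entropy_with_logits(fake_logits,
                                                 torch.ones_like(fake_logits))
        cyc = F.l1_loss(rehazed, hazy)                       # cycle consistency (hazy -> dehazed -> re-hazed)
        photo = F.l1_loss(feat_extractor(dehazed),           # photorealism proxy:
                          feat_extractor(clear_gt))          # feature-space distance
        paired = F.l1_loss(dehazed, clear_gt)                # paired L1 (when a clear target exists)
        return adv + w_cyc * cyc + w_photo * photo + w_l1 * paired
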
arXiv Detail & Related papers (2022-10-14T07:45:33Z)
- Task-Driven Deep Image Enhancement Network for Autonomous Driving in Bad Weather [5.416049433853457]
In bad weather, visual perception is greatly affected by several degrading effects.
We introduce a new task-driven training strategy that guides the high-level task model toward both high-quality image restoration and highly accurate perception.
Experimental results demonstrate that the proposed method largely improves performance on lane detection, 2D object detection, and depth estimation under adverse weather.
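A generic task-driven training step of this kind might look as follows; the weighting, the detector interface, and the L1 restoration term are assumptions, not the paper's strategy in detail.

    # Generic task-driven step: the enhancer is trained with a restoration loss
    # plus the downstream task loss computed on its restored output.
    import torch
    import torch.nn.functional as F

    def task_driven_step(enhancer, task_model, opt, degraded, clean, targets,
                         task_loss_fn, w_task=1.0):
        opt.zero_grad()
        restored = enhancer(degraded)
        restoration_loss = F.l1_loss(restored, clean)
        # Downstream perception loss (e.g. detection) evaluated on the restored image.
        task_loss = task_loss_fn(task_model(restored), targets)
        loss = restoration_loss + w_task * task_loss
        loss.backward()
        opt.step()
        return loss.item()
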
arXiv Detail & Related papers (2021-10-14T08:03:33Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
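As a toy version of such physics-based degradation, the function below applies two-way Beer-Lambert attenuation to lidar returns and drops points that fall below a detection threshold; LISA's actual model (scattering cross-sections, backscatter, range-dependent noise) is considerably richer, and the coefficients here are placeholders.

    # Toy lidar degradation: two-way Beer-Lambert attenuation plus dropout of
    # returns below a detection threshold.
    import numpy as np

    def attenuate_point_cloud(points, intensity, alpha=0.02, noise_floor=0.05):
        """points: (N,3) xyz in meters, intensity: (N,) reflected signal strength."""
        ranges = np.linalg.norm(points, axis=1)
        attenuated = intensity * np.exp(-2.0 * alpha * ranges)  # out-and-back path
        keep = attenuated > noise_floor                          # detectable returns only
        return points[keep], attenuated[keep]
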
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- A Little Fog for a Large Turn [26.556198529742122]
We look at the field of autonomous navigation, wherein adverse weather conditions such as fog have a drastic effect on the predictions of these systems.
These weather conditions are capable of acting like natural adversaries that can help in testing models.
Our work also presents a more natural and general definition of adversarial perturbations based on perceptual similarity.
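One simple way to realize a perceptually constrained perturbation is sketched below: gradient steps that increase the task loss while penalizing a feature-space distance to the original image. The optimizer, weights, and feat_extractor are stand-ins and do not reproduce the paper's formulation.

    # Illustrative perceptually constrained adversarial perturbation search.
    import torch

    def perceptual_attack(image, model, task_loss_fn, target, feat_extractor,
                          steps=10, lr=0.01, w_percep=1.0):
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            perturbed = (image + delta).clamp(0, 1)
            task_loss = task_loss_fn(model(perturbed), target)       # harm the task...
            percep = torch.norm(feat_extractor(perturbed) - feat_extractor(image))
            (-task_loss + w_percep * percep).backward()              # ...stay perceptually close
            opt.step()
        return (image + delta.detach()).clamp(0, 1)
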
arXiv Detail & Related papers (2020-01-16T15:09:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.