Why current rain denoising models fail on CycleGAN created rain images in autonomous driving
- URL: http://arxiv.org/abs/2305.12983v1
- Date: Mon, 22 May 2023 12:42:32 GMT
- Title: Why current rain denoising models fail on CycleGAN created rain images in autonomous driving
- Authors: Michael Kranl, Hubert Ramsauer and Bernhard Knapp
- Abstract summary: Rain is artificially added to a set of clear-weather images using a Generative Adversarial Network (GAN). This artificial rain generation is sufficiently realistic: in 7 out of 10 cases, human test subjects believed the generated rain images to be real.
In a second step, this paired good/bad weather image data is used to train two rain denoising models, one based primarily on a Convolutional Neural Network (CNN) and the other using a Vision Transformer.
- Score: 1.4831974871130875
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the main tasks of an autonomous agent in a vehicle is to correctly
perceive its environment. Much of the data that needs to be processed is
collected by optical sensors such as cameras. Unfortunately, the data collected
in this way can be affected by a variety of factors, including environmental
influences such as inclement weather conditions (e.g., rain). Such noisy data
can cause autonomous agents to make wrong decisions with potentially fatal
outcomes. This paper addresses the rain image challenge in two steps: First,
rain is artificially added to a set of clear-weather condition images using a
Generative Adversarial Network (GAN). This yields good/bad weather image pairs
for training de-raining models. This artificial rain generation is
sufficiently realistic: in 7 out of 10 cases, human test subjects believed
the generated rain images to be real. In a second step, this paired good/bad
weather image data is used to train two rain denoising models, one based
primarily on a Convolutional Neural Network (CNN) and the other using a Vision
Transformer. This rain denoising step showed limited performance, as the
quality gain was only about 15%. This lack of performance on realistic rain
images, such as those used in our study, is likely because current rain
denoising models are developed for simplistic rain-overlay data. Our study
shows that there is ample room for improving de-raining models in autonomous
driving.
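The paper does not state how the roughly 15% quality gain is computed. A minimal sketch, assuming the gain is a relative PSNR improvement of the derained output over the rainy input (a common choice in deraining work; all names and the toy data below are illustrative, not from the paper):

```python
# Hypothetical sketch: measure a deraining "quality gain" as the relative
# PSNR improvement of the derained image over the rainy input, both
# compared against the clean ground-truth image.
import numpy as np

def psnr(reference: np.ndarray, image: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - image.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def quality_gain(clean: np.ndarray, rainy: np.ndarray, derained: np.ndarray) -> float:
    """Relative PSNR improvement of the derained image over the rainy input."""
    psnr_rainy = psnr(clean, rainy)
    psnr_derained = psnr(clean, derained)
    return (psnr_derained - psnr_rainy) / psnr_rainy

# Toy example: a clean image, additive "rain" noise, and a derained
# result in which 60% of the noise has been removed.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.normal(0.0, 25.0, size=clean.shape)
rainy = np.clip(clean + noise, 0, 255).astype(np.uint8)
derained = np.clip(clean + 0.4 * noise, 0, 255).astype(np.uint8)

gain = quality_gain(clean, rainy, derained)
print(f"quality gain: {gain:.1%}")
```

Other definitions (e.g., SSIM-based gains) would follow the same pattern with a different image-quality metric.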
Related papers
- TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning based rain generator, which fully takes the physical generation mechanism underlying rains into consideration.
Its significance lies in that the generator not only elaborately designs essential elements of rain to simulate expected rains, but also finely adapts to complicated and diverse practical rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z) - TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain [23.050711662981655]
We propose an unpaired image-to-image translation framework for generating realistic rainy images.
We first introduce a Triangular Probability Similarity constraint to guide the generated images toward clear and rainy images in the discriminator manifold.
Experiments demonstrate realistic rain generation with minimal artifacts and distortions, which benefits image deraining and object detection in rain.
arXiv Detail & Related papers (2023-11-01T17:08:26Z) - Rethinking Real-world Image Deraining via An Unpaired Degradation-Conditioned Diffusion Model [51.49854435403139]
We propose RainDiff, the first real-world image deraining paradigm based on diffusion models.
We introduce a stable and non-adversarial unpaired cycle-consistent architecture that can be trained, end-to-end, with only unpaired data for supervision.
We also propose a degradation-conditioned diffusion model that refines the desired output via a diffusive generative process conditioned by learned priors of multiple rain degradations.
arXiv Detail & Related papers (2023-01-23T13:34:01Z) - Not Just Streaks: Towards Ground Truth for Single Image Deraining [42.15398478201746]
We propose a large-scale dataset of real-world rainy and clean image pairs.
We propose a deep neural network that reconstructs the underlying scene by minimizing a rain-robust loss between rainy and clean images.
arXiv Detail & Related papers (2022-06-22T00:10:06Z) - Deep Single Image Deraining using An Asymetric Cycle Generative and Adversarial Framework [16.59494337699748]
We propose a novel Asymetric Cycle Generative and Adversarial Framework (ACGF) for single image deraining.
ACGF trains on both synthetic and real rainy images while simultaneously capturing both rain streaks and fog features.
Experiments on benchmark rain-fog and rain datasets show that ACGF outperforms state-of-the-art deraining methods.
arXiv Detail & Related papers (2022-02-19T16:14:10Z) - M2GAN: A Multi-Stage Self-Attention Network for Image Rain Removal on Autonomous Vehicles [8.642603456626391]
We propose a new multi-stage multi-task recurrent generative adversarial network (M2GAN) to deal with the challenging problem of raindrops hitting the car's windshield.
M2GAN is considered the first method to address real-world rain under unconstrained environments such as autonomous vehicles.
arXiv Detail & Related papers (2021-10-12T16:58:33Z) - UnfairGAN: An Enhanced Generative Adversarial Network for Raindrop Removal from A Single Image [8.642603456626391]
UnfairGAN is an enhanced generative adversarial network that can utilize prior high-level information, such as edges and rain estimation, to boost deraining performance.
We show that our proposed method is superior to other state-of-the-art raindrop-removal approaches in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-10-11T18:02:43Z) - RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called the rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, such a dynamic generator consists of one emission model and one transition model that simultaneously encode the spatial structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network finely complies with prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.