M2GAN: A Multi-Stage Self-Attention Network for Image Rain Removal on
Autonomous Vehicles
- URL: http://arxiv.org/abs/2110.06164v1
- Date: Tue, 12 Oct 2021 16:58:33 GMT
- Title: M2GAN: A Multi-Stage Self-Attention Network for Image Rain Removal on
Autonomous Vehicles
- Authors: Duc Manh Nguyen, Sang-Woong Lee
- Abstract summary: We propose a new multi-stage multi-task recurrent generative adversarial network (M2GAN) to deal with the challenging problem of raindrops hitting the car's windshield.
M2GAN is considered the first method to handle real-world rain in unconstrained environments such as autonomous vehicles.
- Score: 8.642603456626391
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image deraining is a challenging new problem in applications of autonomous
vehicles. In heavy rainfall, raindrops hitting the vehicle's windshield can
significantly reduce observation ability, even though the windshield wipers may
remove some of them. Moreover, rain flowing over the windshield causes
refraction, which seriously impedes the sightline and undermines the machine
learning systems equipped in the vehicle. In this paper, we propose a new
multi-stage multi-task recurrent generative adversarial network (M2GAN) to deal
with the challenging problem of raindrops hitting the car's windshield. The
method is also applicable to removing raindrops on a glass window or lens.
M2GAN is a multi-stage multi-task generative adversarial network that can
utilize prior high-level information, such as semantic segmentation, to boost
deraining performance. To demonstrate M2GAN, we introduce the first real-world
dataset for rain removal on autonomous vehicles. The experimental results show
that our proposed method is superior to other state-of-the-art raindrop
deraining approaches in terms of quantitative metrics and visual quality. M2GAN
is considered the first method to handle real-world rain in unconstrained
environments such as autonomous vehicles.
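The central idea above is to feed prior high-level information (here, a semantic segmentation map) into the deraining generator alongside the rainy image. The following is a minimal PyTorch sketch of one segmentation-guided deraining stage, given only for illustration: the class name, channel sizes, and residual formulation are assumptions and do not reproduce the M2GAN architecture.

```python
# Minimal sketch (assumption, not the authors' code): a single generator stage
# that fuses a semantic-segmentation prior with the rainy image, in the spirit
# of "use prior high-level information to boost deraining performance".
import torch
import torch.nn as nn

class SegGuidedDerainStage(nn.Module):
    def __init__(self, img_channels=3, seg_classes=19, feat=64):
        super().__init__()
        # Encode the rainy image and the segmentation prior separately.
        self.img_enc = nn.Sequential(
            nn.Conv2d(img_channels, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.seg_enc = nn.Sequential(
            nn.Conv2d(seg_classes, feat, 3, padding=1), nn.ReLU(inplace=True))
        # Fuse both streams and predict a residual (rain layer) to subtract.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, img_channels, 3, padding=1))

    def forward(self, rainy, seg_prob):
        # rainy: (N, 3, H, W) rainy image; seg_prob: (N, C, H, W) class probabilities
        fused = torch.cat([self.img_enc(rainy), self.seg_enc(seg_prob)], dim=1)
        rain_residual = self.fuse(fused)
        return rainy - rain_residual  # coarse derained estimate for this stage

# A multi-stage recurrent model would apply such a stage repeatedly, feeding each
# stage's output (and an updated segmentation estimate) into the next stage.
rainy = torch.rand(1, 3, 128, 128)
seg_prob = torch.softmax(torch.rand(1, 19, 128, 128), dim=1)
derained = SegGuidedDerainStage()(rainy, seg_prob)
print(derained.shape)  # torch.Size([1, 3, 128, 128])
```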
Related papers
- TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep-learning-based rain generator that fully takes into consideration the physical mechanism underlying rain generation.
Its significance lies in that the generator not only elaborately designs the essential elements of rain to simulate expected rain, but also finely adapts to complicated and diverse practical rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Why current rain denoising models fail on CycleGAN created rain images in autonomous driving [1.4831974871130875]
Rain is artificially added to a set of clear-weather images using a Generative Adversarial Network (GAN).
This artificial rain generation is sufficiently realistic: in 7 out of 10 cases, human test subjects believed the generated rain images to be real.
In a second step, this paired good/bad weather image data is used to train two rain denoising models, one based primarily on a Convolutional Neural Network (CNN) and the other using a Vision Transformer.
arXiv Detail & Related papers (2023-05-22T12:42:32Z)
- Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond [85.06231315901505]
Rain removal aims to remove rain streaks from images/videos and reduce the disruptive effects caused by rain.
This paper makes the first attempt to conduct a comprehensive study on the robustness of deep learning-based rain removal methods against adversarial attacks.
arXiv Detail & Related papers (2022-03-31T10:22:24Z)
- UnfairGAN: An Enhanced Generative Adversarial Network for Raindrop Removal from A Single Image [8.642603456626391]
UnfairGAN is an enhanced generative adversarial network that can utilize prior high-level information, such as edges and rain estimation, to boost deraining performance.
We show that our proposed method is superior to other state-of-the-art approaches of deraining raindrops regarding quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-10-11T18:02:43Z)
- RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called the rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z)
- Beyond Monocular Deraining: Parallel Stereo Deraining Network Via Semantic Prior [103.49307603952144]
Most existing de-rain algorithms use only a single input image and aim to recover a clean image.
We present a Paired Rain Removal Network (PRRNet), which exploits both stereo images and semantic information.
Experiments on both monocular and the newly proposed stereo rainy datasets demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-09T04:15:10Z)
- Dual Attention-in-Attention Model for Joint Rain Streak and Raindrop Removal [103.4067418083549]
We propose a Dual Attention-in-Attention Model (DAiAM) which includes two DAMs for removing both rain streaks and raindrops simultaneously.
The proposed method is not only capable of removing rain streaks and raindrops simultaneously, but also achieves state-of-the-art performance on both tasks.
arXiv Detail & Related papers (2021-03-12T03:00:33Z)
- MBA-RainGAN: Multi-branch Attention Generative Adversarial Network for Mixture of Rain Removal from Single Images [24.60495609529114]
Rain severely hampers the visibility of scene objects when images are captured through glass on heavily rainy days.
We observe three intriguing phenomena: 1) rain is a mixture of raindrops, rain streaks and rainy haze; 2) the depth from the camera determines the degree of object visibility; and 3) raindrops on the glass randomly affect object visibility across the whole image space.
arXiv Detail & Related papers (2020-05-21T11:44:21Z)
- Physical Model Guided Deep Image Deraining [10.14977592107907]
Single image deraining is an urgent task because degraded rainy images cause many computer vision systems to fail.
We propose a novel network based on physical model guided learning for single image deraining.
arXiv Detail & Related papers (2020-03-30T07:08:13Z)
- Multi-Task Learning Enhanced Single Image De-Raining [9.207797392774465]
Rain removal in images is an important task in the computer vision field and is attracting increasing attention.
In this paper, we address a non-trivial issue of removing visual effect of rain streak from a single image.
Our method combines various semantic constraint tasks in a proposed multi-task regression model for rain removal.
arXiv Detail & Related papers (2020-03-21T16:19:56Z)