Deep Single Image Deraining using An Asymmetric Cycle Generative and
Adversarial Framework
- URL: http://arxiv.org/abs/2202.09635v2
- Date: Fri, 19 May 2023 01:59:07 GMT
- Title: Deep Single Image Deraining using An Asymmetric Cycle Generative and
Adversarial Framework
- Authors: Wei Liu, Rui Jiang, Cheng Chen, Tao Lu and Zixiang Xiong
- Abstract summary: We propose a novel Asymmetric Cycle Generative and Adversarial Framework (ACGF) for single image deraining.
ACGF trains on both synthetic and real rainy images while simultaneously capturing both rain streaks and fog features.
Experiments on benchmark rain-fog and rain datasets show that ACGF outperforms state-of-the-art deraining methods.
- Score: 16.59494337699748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In reality, rain and fog are often present at the same time, which can
greatly reduce the clarity and quality of the scene image. However, most
unsupervised single image deraining methods focus mainly on rain streak removal
and disregard the fog, which leads to low-quality deraining performance. In
addition, the samples generated by these methods are rather homogeneous and
lack diversity, resulting in poor results on complex rain scenes.
To address the above issues, we propose a novel Asymmetric Cycle Generative and
Adversarial framework (ACGF) for single image deraining that trains on both
synthetic and real rainy images while simultaneously capturing both rain
streaks and fog features. ACGF consists of a Rain-fog2Clean (R2C)
transformation block and a Clean2Rain-fog (C2R) transformation block. The
former comprises a parallel rain removal path, realized by the rain and
derain-fog network, and a rain-fog feature extraction path, realized by the
attention rain-fog feature extraction network (ARFE), while the latter
contains only a synthetic rain transformation path. In the rain-fog feature
extraction path, to better
characterize the rain-fog fusion feature, we employ an ARFE to exploit the
self-similarity of global and local rain-fog information by learning the
spatial feature correlations. Moreover, to improve the translation capacity
of C2R and the diversity of generated samples, we design a rain-fog feature
decoupling and reorganization network (RFDR) by embedding a rainy image
degradation model and a mixed discriminator to preserve richer texture details
in the synthetic rain conversion path. Extensive experiments on benchmark rain-fog and rain datasets
show that ACGF outperforms state-of-the-art deraining methods. We also conduct
defogging performance evaluation experiments to further demonstrate the
effectiveness of ACGF.
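The R2C and C2R blocks form a translation cycle in the style of CycleGAN: a rainy image mapped to a clean one and back should reconstruct the input. The minimal sketch below illustrates only that cycle-consistency term; the `r2c` and `c2r` functions are toy stand-ins (simple offsets), not the paper's networks, and the loss form is the generic L1 cycle loss rather than ACGF's full objective.

```python
import numpy as np

# Toy stand-ins for ACGF's two transformation blocks. In the paper these
# are deep networks (R2C: Rain-fog2Clean, C2R: Clean2Rain-fog); here they
# are fixed offsets chosen to be mutual inverses, purely for illustration.
def r2c(rainy):
    """Toy Rain-fog2Clean mapping: remove a constant 'rain' offset."""
    return rainy - 0.1

def c2r(clean):
    """Toy Clean2Rain-fog mapping: add the offset back."""
    return clean + 0.1

def cycle_consistency_loss(rainy):
    """Generic L1 cycle loss: translating a rainy image to the clean
    domain and back should reconstruct the original input."""
    reconstructed = c2r(r2c(rainy))
    return np.abs(reconstructed - rainy).mean()

rainy = np.random.rand(8, 8)   # dummy single-channel "rainy image"
loss = cycle_consistency_loss(rainy)
```

Because the toy mappings invert each other, the loss is near zero; for real networks this term is minimized during training alongside adversarial losses.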
Related papers
- TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning based rain generator that fully takes the physical generation mechanism underlying rain into consideration.
Its significance lies in that the generator not only elaborately designs the essential elements of rain to simulate expected rains, but also finely adapts to complicated and diverse practical rainy images.
Our unpaired generation experiments demonstrate that the rain produced by the proposed generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z) - Sparse Sampling Transformer with Uncertainty-Driven Ranking for Unified
Removal of Raindrops and Rain Streaks [17.00078021737863]
In the real world, image degradations caused by rain often exhibit a combination of rain streaks and raindrops, thereby increasing the challenges of recovering the underlying clean image.
This paper aims to present an efficient and flexible mechanism to learn and model degradation relationships in a global view.
arXiv Detail & Related papers (2023-08-27T16:33:11Z) - Dual Degradation Representation for Joint Deraining and Low-Light Enhancement in the Dark [57.85378202032541]
Rain in the dark poses a significant challenge to deploying real-world applications such as autonomous driving, surveillance systems, and night photography.
Existing low-light enhancement or deraining methods struggle to brighten low-light conditions and remove rain simultaneously.
We introduce an end-to-end model called L$^2$RIRNet, designed to manage both low-light enhancement and deraining in real-world settings.
arXiv Detail & Related papers (2023-05-06T10:17:42Z) - Semi-MoreGAN: A New Semi-supervised Generative Adversarial Network for
Mixture of Rain Removal [18.04268933542476]
We propose a new SEMI-supervised Mixture Of rain REmoval Generative Adversarial Network (Semi-MoreGAN).
Semi-MoreGAN consists of four key modules: (i) a novel attentional depth prediction network to provide precise depth estimation; (ii) a context feature prediction network composed of several well-designed detailed residual blocks to produce detailed image context features; (iii) a pyramid depth-guided non-local network to effectively integrate the image context with the depth information and produce the final rain-free images; and (iv) a comprehensive semi-supervised loss function to make the model not limited
arXiv Detail & Related papers (2022-04-28T11:35:26Z) - UnfairGAN: An Enhanced Generative Adversarial Network for Raindrop
Removal from A Single Image [8.642603456626391]
UnfairGAN is an enhanced generative adversarial network that can utilize prior high-level information, such as edges and rain estimation, to boost deraining performance.
We show that our proposed method is superior to other state-of-the-art approaches to raindrop removal in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-10-11T18:02:43Z) - Closing the Loop: Joint Rain Generation and Removal via Disentangled
Image Translation [12.639320247831181]
We argue that the rain generation and removal are the two sides of the same coin and should be tightly coupled.
We propose a bidirectional disentangled translation network, in which each unidirectional network contains two loops of joint rain generation and removal.
Experiments on synthetic and real-world rain datasets show the superiority of the proposed method compared to the state of the art.
arXiv Detail & Related papers (2021-03-25T08:21:43Z) - Dual Attention-in-Attention Model for Joint Rain Streak and Raindrop
Removal [103.4067418083549]
We propose a Dual Attention-in-Attention Model (DAiAM) which includes two DAMs for removing both rain streaks and raindrops simultaneously.
The proposed method is not only capable of removing rain streaks and raindrops simultaneously, but also achieves state-of-the-art performance on both tasks.
arXiv Detail & Related papers (2021-03-12T03:00:33Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network finely complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z) - Multi-Scale Progressive Fusion Network for Single Image Deraining [84.0466298828417]
Rain streaks appear with various degrees of blurring and at various resolutions due to their different distances from the camera.
Similar rain patterns are visible in a rain image as well as its multi-scale (or multi-resolution) versions.
In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features.
arXiv Detail & Related papers (2020-03-24T17:22:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.