From Rain Generation to Rain Removal
- URL: http://arxiv.org/abs/2008.03580v2
- Date: Fri, 4 Dec 2020 06:57:25 GMT
- Title: From Rain Generation to Rain Removal
- Authors: Hong Wang, Zongsheng Yue, Qi Xie, Qian Zhao, Yefeng Zheng, Deyu Meng
- Abstract summary: We build a full Bayesian generative model for rainy image where the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy image.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
- Score: 67.71728610434698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the single image rain removal (SIRR) task, the performance of deep
learning (DL)-based methods is mainly affected by the designed deraining models
and training datasets. Most current state-of-the-art methods focus on
constructing powerful deep models to obtain better deraining results. In this
paper, to further improve deraining performance, we instead handle the SIRR
task from the perspective of training datasets, exploring a more efficient way
to synthesize rainy images. Specifically, we build a full Bayesian generative
model for rainy images, in which the rain layer is parameterized as a
generator whose input is a set of latent variables representing physical
structural rain factors, e.g., direction, scale, and thickness. To solve this
model, we employ the variational inference framework to approximate the
expected statistical distribution of rainy images in a data-driven manner.
With the learned generator, we can automatically generate abundant, diverse,
and non-repetitive training pairs, efficiently enriching and augmenting the
existing benchmark datasets. A user study qualitatively and quantitatively
evaluates the realism of the generated rainy images. Comprehensive experiments
substantiate that the proposed model can faithfully extract the complex rain
distribution, which not only significantly improves the deraining performance
of current deep single-image derainers, but also largely loosens the
requirement of pre-collecting large training samples for the SIRR task.
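The abstract's idea of a rain layer driven by physical factors (direction, scale, thickness), composed additively with a clean background, can be sketched with a hand-crafted stand-in for the learned generator. This is a minimal illustration under stated assumptions: the function names, the droplet-smearing heuristic, and the additive model O = B + R are illustrative choices, not the paper's actual deep generator.

```python
import numpy as np

def rain_layer(h, w, direction_deg=70.0, scale=12, thickness=0.5,
               density=0.01, seed=0):
    """Hand-crafted stand-in for a learned rain generator (illustrative only).

    direction_deg: streak angle; scale: streak length in pixels;
    thickness: streak intensity; density: fraction of seed droplets.
    """
    rng = np.random.default_rng(seed)
    seeds = (rng.random((h, w)) < density).astype(np.float32)
    theta = np.deg2rad(direction_deg)
    dy, dx = np.sin(theta), np.cos(theta)
    layer = np.zeros((h, w), dtype=np.float32)
    for t in range(scale):  # smear each droplet along the streak direction
        sy, sx = int(round(t * dy)), int(round(t * dx))
        layer += np.roll(np.roll(seeds, sy, axis=0), sx, axis=1)
    return np.clip(layer * thickness, 0.0, 1.0)

def compose_rainy(clean, rain):
    """Additive composition O = B + R, clipped to the valid intensity range."""
    return np.clip(clean + rain, 0.0, 1.0)

# Sampling different latent factors yields diverse, non-repetitive rain layers.
clean = np.full((64, 64), 0.3, dtype=np.float32)
rainy = compose_rainy(clean, rain_layer(64, 64, direction_deg=60.0, scale=16))
```

In the paper's setting, this hand-designed mapping from factors to streaks is replaced by a neural generator whose parameters are learned via variational inference, so the streak appearance matches the data rather than a fixed heuristic.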
Related papers
- Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains [9.6606245317525]
We develop a Context-based Instance-level Modulation mechanism adept at efficiently modulating CNN- or Transformer-based models.
We also devise a rain-/detail-aware contrastive learning strategy to help extract joint rain-/detail-aware representations.
By integrating CoI-M with the rain-/detail-aware contrastive learning strategy, we develop CoIC, an innovative and potent algorithm tailored for training models on mixed datasets.
arXiv Detail & Related papers (2024-04-18T11:20:53Z) - TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning-based rain generator, which fully takes the physical generation mechanism underlying rain into consideration.
Its significance lies in that the generator not only elaborately designs essential elements of rain to simulate expected rains, but also finely adapts to complicated and diverse real rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z) - Contrastive Learning Based Recursive Dynamic Multi-Scale Network for Image Deraining [47.764883957379745]
Rain streaks significantly decrease the visibility of captured images.
Existing deep learning-based image deraining methods employ manually crafted networks and learn a straightforward projection from rainy images to clear images.
We propose a contrastive learning-based image deraining method that investigates the correlation between rainy and clear images.
arXiv Detail & Related papers (2023-05-29T13:51:41Z) - RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet)
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By training such an interpretable network end-to-end, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, such a dynamic generator consists of one emission model and one transition model, which simultaneously encode the spatially physical structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
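As a rough illustration of the emission/transition idea (not the paper's learned models), one can sketch a state-space generator in which a latent streak field drifts between frames (transition) and is rendered into a visible rain layer at each frame (emission). The drift-and-threshold scheme below is a hypothetical stand-in for exposition.

```python
import numpy as np

def dynamic_rain(h, w, frames=4, drift=2, density=0.02, seed=0):
    """State-space sketch: the transition drifts the latent field downward;
    the emission thresholds it into a visible (binary) rain layer per frame."""
    rng = np.random.default_rng(seed)
    z = rng.random((h, w)).astype(np.float32)  # latent spatial state
    layers = []
    for _ in range(frames):
        layers.append((z > 1.0 - density).astype(np.float32))  # emission
        z = np.roll(z, drift, axis=0)  # transition: streaks fall between frames
    return np.stack(layers)
```

Because consecutive frames share one latent state, the generated streaks move coherently over time, which is the property a learned transition model would capture from real video.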
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network finely complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z) - Rainy screens: Collecting rainy datasets, indoors [19.71705192452036]
We present a simple method for generating diverse rainy images from existing clear ground-truth images.
This setup allows us to leverage the diversity of existing datasets with auxiliary task ground-truth data.
We generate rainy images with real adherent droplets and rain streaks based on Cityscapes and BDD, and train a de-raining model.
arXiv Detail & Related papers (2020-03-10T13:57:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.