RCDNet: An Interpretable Rain Convolutional Dictionary Network for
Single Image Deraining
- URL: http://arxiv.org/abs/2107.06808v1
- Date: Wed, 14 Jul 2021 16:08:11 GMT
- Title: RCDNet: An Interpretable Rain Convolutional Dictionary Network for
Single Image Deraining
- Authors: Hong Wang, Qi Xie, Qian Zhao, Yong Liang, Deyu Meng
- Abstract summary: We build a novel deep architecture called the rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
- Score: 49.99207211126791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a common weather phenomenon, rain streaks adversely degrade image
quality, so removing rain from an image has become an important problem in the
field. To handle this ill-posed single image deraining task, in this paper we
build a novel deep architecture, called the rain convolutional dictionary
network (RCDNet), which embeds the intrinsic priors of rain streaks and has
clear interpretability. Specifically, we first establish an RCD model for
representing rain streaks and use the proximal gradient descent technique to
design an iterative algorithm, containing only simple operators, for solving
the model. By unfolding this algorithm, we then build the RCDNet, in which
every network module has a clear physical meaning and corresponds to an
operation of the algorithm. This interpretability makes it easy to visualize
and analyze what happens inside the network and why it works well at inference
time. Moreover, to address the domain gap issue in real scenarios, we further
design a novel dynamic RCDNet, in which the rain kernels are dynamically
inferred from the input rainy image and then help shrink the space for rain
layer estimation to only a few rain maps, ensuring good generalization when
the rain types of the training and testing data are inconsistent. By training
such an interpretable network end to end, all involved rain kernels and
proximal operators can be automatically extracted, faithfully characterizing
the features of both the rain and clean background layers, which naturally
leads to better deraining performance. Comprehensive experiments substantiate
the superiority of our method, especially its strong generalization to diverse
testing scenarios and the good interpretability of all its modules. Code is
available at https://github.com/hongwang01/DRCDNet.
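For readers who want to see the unfolding idea in code, below is a minimal sketch of one proximal-gradient stage, written in PyTorch and assuming the rain model O = B + sum_k C_k * M_k described above (rainy image = clean background plus rain kernels convolved with rain maps). The class name RCDStage, the tensor sizes, and the small convolutional blocks standing in for the learned proximal operators are illustrative assumptions, not the authors' implementation; see the official repository linked above for the real code.

# Minimal sketch of one unfolded RCD stage, assuming O = B + sum_k C_k * M_k:
# O is the rainy image, B the clean background, C_k learnable rain kernels,
# and M_k the rain maps. Names, sizes, and the conv blocks used as learned
# proximal operators are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RCDStage(nn.Module):
    """One proximal-gradient stage: a gradient step plus a learned proximal
    operator for the rain maps M, then the same for the background B."""

    def __init__(self, num_maps=32, kernel_size=9, channels=3):
        super().__init__()
        # Rain kernels C_k (defined per stage here for simplicity).
        self.kernels = nn.Parameter(
            0.01 * torch.randn(channels, num_maps, kernel_size, kernel_size))
        self.step_m = nn.Parameter(torch.tensor(0.1))  # step size for the M-update
        self.step_b = nn.Parameter(torch.tensor(0.1))  # step size for the B-update
        # Small conv blocks standing in for the learned proximal operators.
        self.prox_m = nn.Sequential(
            nn.Conv2d(num_maps, num_maps, 3, padding=1), nn.ReLU(),
            nn.Conv2d(num_maps, num_maps, 3, padding=1))
        self.prox_b = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def rain_layer(self, m):
        # R = sum_k C_k * M_k, i.e. the rain maps convolved with the rain kernels.
        return F.conv2d(m, self.kernels, padding=self.kernels.shape[-1] // 2)

    def forward(self, o, b, m):
        # Gradient step on ||O - B - R(M)||^2 w.r.t. M (a transposed convolution),
        # followed by the learned proximal operator.
        residual = o - b - self.rain_layer(m)
        grad_m = -F.conv_transpose2d(residual, self.kernels,
                                     padding=self.kernels.shape[-1] // 2)
        m = self.prox_m(m - self.step_m * grad_m)
        # Gradient step w.r.t. B, followed by its learned proximal operator.
        b = self.prox_b(b + self.step_b * (o - b - self.rain_layer(m)))
        return b, m


# Hypothetical usage: initialize B with the rainy image, M with zeros, and
# run a few stacked stages; the final B is the derained estimate.
o = torch.rand(1, 3, 64, 64)
b, m = o.clone(), torch.zeros(1, 32, 64, 64)
for stage in nn.ModuleList([RCDStage() for _ in range(3)]):
    b, m = stage(o, b, m)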
Related papers
- Single Image Deraining Network with Rain Embedding Consistency and
Layered LSTM [14.310541943673181]
We introduce the idea of "Rain Embedding Consistency" by regarding the embedding encoded by the autoencoder as an ideal rain embedding.
A Rain Embedding Loss is applied to directly supervise the encoding process, with a Rectified Local Contrast Normalization as the guide.
We also propose a Layered LSTM for recurrent deraining and fine-grained encoder feature refinement across different scales.
arXiv Detail & Related papers (2021-11-05T17:03:08Z) - Structure-Preserving Deraining with Residue Channel Prior Guidance [33.41254475191555]
Single image deraining is important for many high-level computer vision tasks.
We propose a Structure-Preserving Deraining Network (SPDNet) guided by the residue channel prior (RCP).
SPDNet directly generates high-quality rain-free images with clear and accurate structures under RCP guidance.
arXiv Detail & Related papers (2021-08-20T09:09:56Z) - SDNet: mutil-branch for single image deraining using swin [14.574622548559269]
We introduce Swin-transformer into the field of image deraining for the first time.
Specifically, we improve the basic module of Swin-transformer and design a three-branch model to implement single-image rain removal.
Our proposed method has advantages in both performance and inference speed over current mainstream single-image rain streak removal models.
arXiv Detail & Related papers (2021-05-31T16:06:02Z) - Beyond Monocular Deraining: Parallel Stereo Deraining Network Via
Semantic Prior [103.49307603952144]
Most existing deraining algorithms use only a single input image and aim to recover a clean image.
We present a Paired Rain Removal Network (PRRNet), which exploits both stereo images and semantic information.
Experiments on both monocular and the newly proposed stereo rainy datasets demonstrate that the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-05-09T04:15:10Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized by a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z) - A Model-driven Deep Neural Network for Single Image Rain Removal [52.787356046951494]
We propose a model-driven deep neural network for the task, with fully interpretable network structures.
Based on the convolutional dictionary learning mechanism for representing rain, we propose a novel single image deraining model.
All the rain kernels and operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers.
arXiv Detail & Related papers (2020-05-04T09:13:25Z) - Conditional Variational Image Deraining [158.76814157115223]
We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
arXiv Detail & Related papers (2020-04-23T11:51:38Z) - Physical Model Guided Deep Image Deraining [10.14977592107907]
Single image deraining is an urgent task because degraded rainy images cause many computer vision systems to fail.
We propose a novel network based on physical model guided learning for single image deraining.
arXiv Detail & Related papers (2020-03-30T07:08:13Z)