Single Image Deraining Network with Rain Embedding Consistency and
Layered LSTM
- URL: http://arxiv.org/abs/2111.03615v1
- Date: Fri, 5 Nov 2021 17:03:08 GMT
- Title: Single Image Deraining Network with Rain Embedding Consistency and
Layered LSTM
- Authors: Yizhou Li and Yusuke Monno and Masatoshi Okutomi
- Abstract summary: We introduce the idea of "Rain Embedding Consistency" by regarding the embedding encoded by a rain-to-rain autoencoder as an ideal rain embedding.
A Rain Embedding Loss is applied to directly supervise the encoding process, with a Rectified Local Contrast Normalization as the guide.
We also propose Layered LSTM for recurrent deraining and fine-grained encoder feature refinement considering different scales.
- Score: 14.310541943673181
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image deraining is typically addressed as residual learning to predict
the rain layer from an input rainy image. For this purpose, an encoder-decoder
network draws wide attention, where the encoder is required to encode a
high-quality rain embedding which determines the performance of the subsequent
decoding stage to reconstruct the rain layer. However, most existing studies
ignore the significance of rain embedding quality, thus leading to limited
performance with over- or under-deraining. In this paper, motivated by our
observation that a rain-to-rain autoencoder achieves high rain layer
reconstruction performance, we introduce the idea of "Rain Embedding
Consistency": we regard the embedding encoded by the autoencoder as an ideal
rain embedding and aim to enhance
the deraining performance by improving the consistency between the ideal rain
embedding and the rain embedding derived by the encoder of the deraining
network. To achieve this, a Rain Embedding Loss is applied to directly
supervise the encoding process, with a Rectified Local Contrast Normalization
(RLCN) as the guide that effectively extracts the candidate rain pixels. We
also propose Layered LSTM for recurrent deraining and fine-grained encoder
feature refinement considering different scales. Qualitative and quantitative
experiments demonstrate that our proposed method outperforms previous
state-of-the-art methods, particularly on a real-world dataset. Our source code
is available at http://www.ok.sc.e.titech.ac.jp/res/SIR/.
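A minimal PyTorch sketch of how the two key ideas might be wired together is given below. The RLCN window size, the rectification rule, and the L1 form of the Rain Embedding Loss are assumptions for illustration, not the authors' implementation; helper names such as derain_encoder and rain_autoencoder_encoder are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

def rlcn(img, kernel_size=9, eps=1e-6):
    # Rectified Local Contrast Normalization (assumed form): subtract the
    # local mean, divide by the local standard deviation, and keep only
    # positive responses, since rain streaks tend to be brighter than
    # their surroundings.
    b, c, h, w = img.shape
    weight = torch.ones(1, c, kernel_size, kernel_size, device=img.device) / (c * kernel_size ** 2)
    pad = kernel_size // 2
    local_mean = F.conv2d(img, weight, padding=pad)
    local_sq_mean = F.conv2d(img ** 2, weight, padding=pad)
    local_std = (local_sq_mean - local_mean ** 2).clamp(min=0).sqrt()
    luminance = img.mean(dim=1, keepdim=True)
    return F.relu((luminance - local_mean) / (local_std + eps))

def rain_embedding_loss(z_derain, z_ideal):
    # Rain Embedding Loss: pull the deraining encoder's embedding toward the
    # "ideal" embedding produced by a pretrained rain-to-rain autoencoder.
    return F.l1_loss(z_derain, z_ideal.detach())

# Assumed usage inside one training step (derain_encoder and
# rain_autoencoder_encoder are hypothetical module names):
#   guide = rlcn(rainy)                                  # candidate rain pixels
#   z_derain = derain_encoder(torch.cat([rainy, guide], dim=1))
#   z_ideal = rain_autoencoder_encoder(gt_rain_layer)    # ideal rain embedding
#   loss = recon_loss + lambda_emb * rain_embedding_loss(z_derain, z_ideal)

For the Layered LSTM, one plausible reading is a convolutional LSTM cell attached to each encoder scale, with hidden states carried across the recurrent deraining stages so that features at every scale are refined stage by stage. The sketch below is an assumed realization under that reading, not the authors' architecture.

class ConvLSTMCell(nn.Module):
    # Standard convolutional LSTM cell: gates are computed by a single
    # convolution over the concatenated input and hidden state.
    def __init__(self, in_ch, hidden_ch, k=3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.conv = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
            c = x.new_zeros(x.size(0), self.hidden_ch, x.size(2), x.size(3))
        else:
            h, c = state
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class LayeredLSTM(nn.Module):
    # One ConvLSTM cell per encoder scale; states persist across stages.
    def __init__(self, channels_per_scale):
        super().__init__()
        self.cells = nn.ModuleList(ConvLSTMCell(c, c) for c in channels_per_scale)

    def forward(self, feats, states=None):
        # feats:  list of encoder features, one per scale
        # states: list of (h, c) tuples from the previous deraining stage
        states = states or [None] * len(feats)
        refined, new_states = [], []
        for cell, f, s in zip(self.cells, feats, states):
            h, c = cell(f, s)
            refined.append(h)
            new_states.append((h, c))
        return refined, new_states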
Related papers
- MDeRainNet: An Efficient Neural Network for Rain Streak Removal from Macro-pixel Images [44.83349966064718]
We propose an efficient network, called MDeRainNet, for rain streak removal from LF images.
The proposed network adopts a multi-scale encoder-decoder architecture, which directly works on Macro-pixel images (MPIs) to improve the rain removal performance.
To improve the performance of our network on real-world rainy scenes, we propose a novel semi-supervised learning framework for our MDeRainNet.
arXiv Detail & Related papers (2024-06-15T14:47:02Z) - RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering [50.14860376758962]
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images.
Based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation.
We jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss.
arXiv Detail & Related papers (2024-04-17T14:07:22Z) - TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning based rain generator that fully takes the physical generation mechanism underlying rain into consideration.
Its significance lies in that the generator not only elaborately designs the essential elements of rain to simulate the expected rain, but also finely adapts to complicated and diverse practical rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z) - Deep Single Image Deraining using An Asymetric Cycle Generative and
Adversarial Framework [16.59494337699748]
We propose a novel Asymmetric Cycle Generative and Adversarial Framework (ACGF) for single image deraining.
ACGF trains on both synthetic and real rainy images while simultaneously capturing both rain streaks and fog features.
Experiments on benchmark rain-fog and rain datasets show that ACGF outperforms state-of-the-art deraining methods.
arXiv Detail & Related papers (2022-02-19T16:14:10Z) - Rain Removal and Illumination Enhancement Done in One Go [1.0323063834827415]
We propose a novel entangled network, namely EMNet, which can remove the rain and enhance illumination in one go.
We present a new synthetic dataset, namely DarkRain, to boost the development of rain image restoration algorithms.
EMNet is extensively evaluated on the proposed benchmark and achieves state-of-the-art results.
arXiv Detail & Related papers (2021-08-09T08:46:15Z) - RCDNet: An Interpretable Rain Convolutional Dictionary Network for
Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, such a dynamic generator consists of one emission model and one transition model that simultaneously encode the spatial physical structure and the temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images, where the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture by enforcing the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network finely complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z)