Conditional Variational Image Deraining
- URL: http://arxiv.org/abs/2004.11373v2
- Date: Fri, 8 May 2020 15:29:51 GMT
- Title: Conditional Variational Image Deraining
- Authors: Ying-Jun Du, Jun Xu, Xian-Tong Zhen, Ming-Ming Cheng, Ling Shao
- Abstract summary: We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image deraining is an important yet challenging image processing task. Though
deterministic image deraining methods have been developed with encouraging
performance, they cannot learn flexible representations for
probabilistic inference and diverse predictions. Besides, rain intensity varies
both in spatial locations and across color channels, making this task more
difficult. In this paper, we propose a Conditional Variational Image Deraining
(CVID) network for better deraining performance, leveraging the exclusive
generative ability of Conditional Variational Auto-Encoder (CVAE) on providing
diverse predictions for the rainy image. To perform spatially adaptive
deraining, we propose a spatial density estimation (SDE) module to estimate a
rain density map for each image. Since rain density varies across different
color channels, we also propose a channel-wise (CW) deraining scheme.
Experiments on synthesized and real-world datasets show that the proposed CVID
network achieves much better performance than previous deterministic methods on
image deraining. Extensive ablation studies validate the effectiveness of the
proposed SDE module and CW scheme in our CVID network. The code is available at
https://github.com/Yingjun-Du/VID.
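The CVAE machinery the abstract relies on can be illustrated with a minimal NumPy sketch of the reparameterization trick and the two terms of the negative ELBO (a reconstruction error plus a KL regularizer toward the prior). This is a toy illustration under assumed shapes and names, not the authors' CVID architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def negative_elbo_terms(recon, target, mu, logvar):
    """Negative ELBO = reconstruction error + KL regularizer (unweighted here)."""
    rec = np.mean((recon - target) ** 2)
    kl = kl_to_standard_normal(mu, logvar)
    return rec, kl

# Toy latent statistics from a hypothetical encoder q(z | rainy image)
mu = np.zeros(8)
logvar = np.zeros(8)          # unit variance
z = reparameterize(mu, logvar, rng)
rec, kl = negative_elbo_terms(np.zeros((4, 4)), np.zeros((4, 4)), mu, logvar)
# kl is 0 here because q(z|x) already equals the standard normal prior
```

At test time, sampling different `z` from the prior is what yields the diverse deraining predictions the abstract refers to.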
Related papers
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Single Image Deraining via Feature-based Deep Convolutional Neural Network [13.39233717329633]
A single image deraining algorithm based on the combination of data-driven and model-based approaches is proposed.
Experiments show that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both qualitative and quantitative measures.
arXiv Detail & Related papers (2023-05-03T13:12:51Z)
- Adaptive Uncertainty Distribution in Deep Learning for Unsupervised Underwater Image Enhancement [1.9249287163937976]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z)
- SAPNet: Segmentation-Aware Progressive Network for Perceptual Contrastive Deraining [2.615176171489612]
We present a segmentation-aware progressive network (SAPNet) based upon contrastive learning for single image deraining.
Our model surpasses top-performing methods and aids object detection and semantic segmentation with considerable efficacy.
arXiv Detail & Related papers (2021-11-17T03:57:11Z)
- RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet)
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
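The convolutional dictionary idea behind RCDNet can be sketched in a few lines of NumPy: the rain layer is modeled as a sum of rain kernels correlated with sparse feature maps, R = sum_k K_k (*) M_k, and the clean background is recovered as the input minus R. The kernel and map below are hypothetical toys, not the learned quantities from the paper:

```python
import numpy as np

def conv2d_valid(feature_map, kernel):
    """Naive 'valid'-mode 2-D correlation of a feature map with a rain kernel."""
    H, W = feature_map.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(feature_map[i:i + kh, j:j + kw] * kernel)
    return out

def rain_layer(feature_maps, kernels):
    """Rain layer as a sum of per-kernel correlations: R = sum_k K_k (*) M_k."""
    return sum(conv2d_valid(m, k) for m, k in zip(feature_maps, kernels))

# Toy example: one diagonal 'streak' kernel and a sparse map with one activation
kernel = np.eye(3)            # a 3x3 diagonal streak
fmap = np.zeros((6, 6))
fmap[2, 2] = 1.0              # a single rain-streak location
R = rain_layer([fmap], [kernel])
# R reproduces the diagonal streak around the activated location
```

In the actual network, both the kernels and the proximal operators that sparsify the feature maps are learned end to end, which is what gives the model its interpretability.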
arXiv Detail & Related papers (2021-07-14T16:08:11Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for the rainy image, where the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of the rainy image.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
- A Model-driven Deep Neural Network for Single Image Rain Removal [52.787356046951494]
We propose a model-driven deep neural network for the task, with fully interpretable network structures.
Based on the convolutional dictionary learning mechanism for representing rain, we propose a novel single image deraining model.
All the rain kernels and operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers.
arXiv Detail & Related papers (2020-05-04T09:13:25Z)
- Multi-Scale Progressive Fusion Network for Single Image Deraining [84.0466298828417]
Rain streaks appear with various degrees of blurring and at various resolutions, depending on their distance from the camera.
Similar rain patterns are visible in a rain image as well as its multi-scale (or multi-resolution) versions.
In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features.
arXiv Detail & Related papers (2020-03-24T17:22:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.