Joint Depth Estimation and Mixture of Rain Removal From a Single Image
- URL: http://arxiv.org/abs/2303.17766v1
- Date: Fri, 31 Mar 2023 02:05:45 GMT
- Title: Joint Depth Estimation and Mixture of Rain Removal From a Single Image
- Authors: Yongzhen Wang, Xuefeng Yan, Yanbiao Niu, Lina Gong, Yanwen Guo,
Mingqiang Wei
- Abstract summary: We propose an effective image deraining paradigm for Mixture of rain REmoval, called DEMore-Net.
This study explores normalization approaches in image deraining tasks and introduces a new Hybrid Normalization Block (HNB) to enhance the deraining performance of DEMore-Net.
- Score: 24.009353523566162
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rainy weather significantly deteriorates the visibility of scene objects,
particularly when images are captured through outdoor camera lenses or
windshields. Through careful observation of numerous rainy photos, we have
found that the images are generally affected by various rainwater artifacts
such as raindrops, rain streaks, and rainy haze, which impact the image quality
from both near and far distances, resulting in a complex and intertwined
process of image degradation. However, current deraining techniques can
typically address only one or two types of rainwater, which makes removing the
mixture of rain (MOR) challenging. In this study, we propose an
effective image deraining paradigm for Mixture of rain REmoval, called
DEMore-Net, which takes full account of the MOR effect. Going beyond the
existing deraining wisdom, DEMore-Net is a joint learning paradigm that
integrates depth estimation and MOR removal tasks to achieve superior rain
removal. The depth information offers additional distance-based guidance,
better helping DEMore-Net remove different types of rainwater. Moreover, this
study explores normalization approaches in
image deraining tasks and introduces a new Hybrid Normalization Block (HNB) to
enhance the deraining performance of DEMore-Net. Extensive experiments
conducted on synthetic datasets and real-world MOR photos fully validate the
superiority of the proposed DEMore-Net. Code is available at
https://github.com/yz-wang/DEMore-Net.
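The abstract describes a joint learning paradigm that couples depth estimation with rain removal but gives no implementation details. As a hedged illustration only, a multi-task objective of the kind such a description suggests could look like the sketch below; the function name, loss terms, and weighting factor are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def joint_deraining_loss(pred_clean, gt_clean, pred_depth, gt_depth,
                         depth_weight=0.5):
    """Hypothetical multi-task objective: a deraining reconstruction
    term plus a depth-estimation term, so distance cues can guide the
    removal of near (raindrop) and far (haze) artifacts.
    All names and weights are illustrative."""
    derain_term = np.mean((pred_clean - gt_clean) ** 2)  # image reconstruction (MSE)
    depth_term = np.mean(np.abs(pred_depth - gt_depth))  # depth regression (L1)
    return derain_term + depth_weight * depth_term

# Toy example on random arrays standing in for image and depth tensors.
rng = np.random.default_rng(0)
img_pred, img_gt = rng.random((3, 8, 8)), rng.random((3, 8, 8))
d_pred, d_gt = rng.random((8, 8)), rng.random((8, 8))
loss = joint_deraining_loss(img_pred, img_gt, d_pred, d_gt)
print(loss)
```

In a real network both terms would be backpropagated jointly, so gradients from the depth branch shape the shared features used for deraining; the published code at the repository above is the authoritative reference.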
Related papers
- Image Deraining via Self-supervised Reinforcement Learning [15.41116945679692]
The work aims to recover clean images from rainy ones by removing rain streaks via Self-supervised Reinforcement Learning (RL).
We locate rain streak pixels from the input rain image via dictionary learning and use pixel-wise RL agents to take multiple inpainting actions to remove rain progressively.
Experimental results on several benchmark image-deraining datasets show that the proposed SRL-Derain performs favorably against state-of-the-art few-shot and self-supervised deraining and denoising methods.
arXiv Detail & Related papers (2024-03-27T05:52:39Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - Contrastive Learning Based Recursive Dynamic Multi-Scale Network for
Image Deraining [47.764883957379745]
Rain streaks significantly decrease the visibility of captured images.
Existing deep learning-based image deraining methods employ manually crafted networks and learn a straightforward projection from rainy images to clear images.
We propose a contrastive learning-based image deraining method that investigates the correlation between rainy and clear images.
arXiv Detail & Related papers (2023-05-29T13:51:41Z) - Dual Degradation Representation for Joint Deraining and Low-Light Enhancement in the Dark [57.85378202032541]
Rain in the dark poses a significant challenge to deploying real-world applications such as autonomous driving, surveillance systems, and night photography.
Existing low-light enhancement or deraining methods struggle to brighten low-light conditions and remove rain simultaneously.
We introduce an end-to-end model called L$2$RIRNet, designed to manage both low-light enhancement and deraining in real-world settings.
arXiv Detail & Related papers (2023-05-06T10:17:42Z) - Single Image Deraining via Rain-Steaks Aware Deep Convolutional Neural
Network [16.866000078306815]
An improved weighted guided image filter (iWGIF) is proposed to extract high frequency information from a rainy image.
The high frequency information mainly includes rain streaks and noise, and it can guide the rain streak-aware deep convolutional neural network (RSADCNN) to pay more attention to rain streaks.
arXiv Detail & Related papers (2022-09-16T09:16:03Z) - Semi-MoreGAN: A New Semi-supervised Generative Adversarial Network for
Mixture of Rain Removal [18.04268933542476]
We propose a new SEMI-supervised Mixture Of rain REmoval Generative Adversarial Network (Semi-MoreGAN)
Semi-MoreGAN consists of four key modules: (i) a novel attentional depth prediction network to provide precise depth estimation; (ii) a context feature prediction network composed of several well-designed detailed residual blocks to produce detailed image context features; (iii) a pyramid depth-guided non-local network to effectively integrate the image context with the depth information and produce the final rain-free images; and (iv) a comprehensive semi-supervised loss function to make the model not limited
arXiv Detail & Related papers (2022-04-28T11:35:26Z) - UnfairGAN: An Enhanced Generative Adversarial Network for Raindrop
Removal from A Single Image [8.642603456626391]
UnfairGAN is an enhanced generative adversarial network that can utilize prior high-level information, such as edges and rain estimation, to boost deraining performance.
We show that the proposed method is superior to other state-of-the-art raindrop removal approaches in both quantitative metrics and visual quality.
arXiv Detail & Related papers (2021-10-11T18:02:43Z) - Dual Attention-in-Attention Model for Joint Rain Streak and Raindrop
Removal [103.4067418083549]
We propose a Dual Attention-in-Attention Model (DAiAM) which includes two DAMs for removing both rain streaks and raindrops simultaneously.
The proposed method not only removes rain streaks and raindrops simultaneously, but also achieves state-of-the-art performance on both tasks.
arXiv Detail & Related papers (2021-03-12T03:00:33Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy image where the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - MBA-RainGAN: Multi-branch Attention Generative Adversarial Network for
Mixture of Rain Removal from Single Images [24.60495609529114]
Rain severely hampers the visibility of scene objects when images are captured through glass on heavily rainy days.
We observe three intriguing phenomena: 1) rain is a mixture of raindrops, rain streaks, and rainy haze; 2) the depth from the camera determines the degree of object visibility; and 3) raindrops on the glass randomly affect the object visibility of the whole image space.
arXiv Detail & Related papers (2020-05-21T11:44:21Z) - Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture by enforcing the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.