TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain
- URL: http://arxiv.org/abs/2311.00660v3
- Date: Wed, 8 Nov 2023 02:46:34 GMT
- Title: TPSeNCE: Towards Artifact-Free Realistic Rain Generation for Deraining and Object Detection in Rain
- Authors: Shen Zheng, Changjie Lu, Srinivasa G. Narasimhan
- Abstract summary: We propose an unpaired image-to-image translation framework for generating realistic rainy images.
We first introduce a Triangular Probability Similarity constraint to guide the generated images toward clear and rainy images in the discriminator manifold.
Experiments demonstrate realistic rain generation with minimal artifacts and distortions, which benefits image deraining and object detection in rain.
- Score: 23.050711662981655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rain generation algorithms have the potential to improve the generalization
of deraining methods and scene understanding in rainy conditions. However, in
practice, they produce artifacts and distortions and struggle to control the
amount of rain generated due to a lack of proper constraints. In this paper, we
propose an unpaired image-to-image translation framework for generating
realistic rainy images. We first introduce a Triangular Probability Similarity
(TPS) constraint to guide the generated images toward clear and rainy images in
the discriminator manifold, thereby minimizing artifacts and distortions during
rain generation. Unlike conventional contrastive learning approaches, which
indiscriminately push negative samples away from the anchors, we propose a
Semantic Noise Contrastive Estimation (SeNCE) strategy and reassess the pushing
force of negative samples based on the semantic similarity between the clear
and the rainy images and the feature similarity between the anchor and the
negative samples. Experiments demonstrate realistic rain generation with
minimal artifacts and distortions, which benefits image deraining and object
detection in rain. Furthermore, the method can be used to generate realistic
snowy and night images, underscoring its potential for broader applicability.
Code is available at https://github.com/ShenZheng2000/TPSeNCE.
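The two constraints named in the abstract can be summarized with a short loss sketch. The following is a minimal PyTorch illustration written from the abstract alone, not from the official repository; the function names, the use of L2 distances for the TPS term, and the way per-negative weights enter the contrastive denominator are assumptions made for illustration only.

```python
# Minimal sketch of the two constraints described in the abstract. This is an
# illustration based only on the abstract, NOT the official TPSeNCE code;
# function names, the L2 distances, and the weighting scheme are assumptions.
import torch
import torch.nn.functional as F


def tps_loss(feat_gen, feat_clear, feat_rainy):
    """Triangular Probability Similarity (TPS), sketched.

    Assumption: the generated image's discriminator features are pulled toward
    the segment between the real clear and real rainy features, i.e. the
    triangle formed by the three points is encouraged to collapse.
    """
    d_gc = torch.norm(feat_gen - feat_clear, dim=-1)    # generated <-> clear
    d_gr = torch.norm(feat_gen - feat_rainy, dim=-1)    # generated <-> rainy
    d_cr = torch.norm(feat_clear - feat_rainy, dim=-1)  # clear <-> rainy
    return (d_gc + d_gr - d_cr).mean()  # >= 0 by the triangle inequality


def sence_loss(anchor, positive, negatives, neg_weights, tau=0.07):
    """Semantic Noise Contrastive Estimation (SeNCE), sketched.

    Unlike plain InfoNCE, each negative's contribution to the denominator is
    scaled by a weight in [0, 1] derived from semantic/feature similarity, so
    the pushing force differs per negative. The exact weight definition is an
    assumption here.
    """
    anchor = F.normalize(anchor, dim=-1)        # (B, C)
    positive = F.normalize(positive, dim=-1)    # (B, C)
    negatives = F.normalize(negatives, dim=-1)  # (B, N, C)

    pos = torch.exp((anchor * positive).sum(-1) / tau)                             # (B,)
    neg = torch.exp(torch.bmm(negatives, anchor.unsqueeze(-1)).squeeze(-1) / tau)  # (B, N)
    denom = pos + (neg_weights * neg).sum(-1)   # reweighted negatives
    return -torch.log(pos / denom).mean()
```

In the full framework these terms would be combined with the adversarial objective of the unpaired image-to-image translation backbone; the repository linked above contains the authors' actual implementation.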
Related papers
- TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning-based rain generator that fully takes the physical generation mechanism underlying rain into consideration.
Its significance is that the generator not only elaborately designs the essential elements of rain to simulate expected rain, but also adapts finely to complicated and diverse real rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z)
- Contrastive Learning Based Recursive Dynamic Multi-Scale Network for Image Deraining [47.764883957379745]
Rain streaks significantly decrease the visibility of captured images.
Existing deep learning-based image deraining methods employ manually crafted networks and learn a straightforward projection from rainy images to clear images.
We propose a contrastive learning-based image deraining method that investigates the correlation between rainy and clear images.
arXiv Detail & Related papers (2023-05-29T13:51:41Z)
- Single Image Deraining via Feature-based Deep Convolutional Neural Network [13.39233717329633]
A single image deraining algorithm based on the combination of data-driven and model-based approaches is proposed.
Experiments show that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both qualitative and quantitative measures.
arXiv Detail & Related papers (2023-05-03T13:12:51Z)
- Not Just Streaks: Towards Ground Truth for Single Image Deraining [42.15398478201746]
We propose a large-scale dataset of real-world rainy and clean image pairs.
We propose a deep neural network that reconstructs the underlying scene by minimizing a rain-robust loss between rainy and clean images.
arXiv Detail & Related papers (2022-06-22T00:10:06Z)
- Unsupervised Deraining: Where Contrastive Learning Meets Self-similarity [0.0]
We propose a novel non-local contrastive learning (NLCL) method for unsupervised image deraining.
The proposed method obtains state-of-the-art performance in real deraining.
arXiv Detail & Related papers (2022-03-22T07:37:08Z)
- Deep Single Image Deraining using An Asymetric Cycle Generative and Adversarial Framework [16.59494337699748]
We propose a novel Asymetric Cycle Generative and Adversarial Framework (ACGF) for single image deraining.
ACGF trains on both synthetic and real rainy images while simultaneously capturing both rain streaks and fog features.
Experiments on benchmark rain-fog and rain datasets show that ACGF outperforms state-of-the-art deraining methods.
arXiv Detail & Related papers (2022-02-19T16:14:10Z)
- RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called the rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z)
- Closing the Loop: Joint Rain Generation and Removal via Disentangled Image Translation [12.639320247831181]
We argue that the rain generation and removal are the two sides of the same coin and should be tightly coupled.
We propose a bidirectional disentangled translation network, in which each unidirectional network contains two loops of joint rain generation and removal.
Experiments on synthetic and real-world rain datasets show the superiority of the proposed method compared to the state of the art.
arXiv Detail & Related papers (2021-03-25T08:21:43Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized by a generator.
We employ a variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
- Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network complies with the prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z)
- Conditional Variational Image Deraining [158.76814157115223]
We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
arXiv Detail & Related papers (2020-04-23T11:51:38Z)