Hierarchical-level rain image generative model based on GAN
- URL: http://arxiv.org/abs/2309.02964v1
- Date: Wed, 6 Sep 2023 12:59:52 GMT
- Title: Hierarchical-level rain image generative model based on GAN
- Authors: Zhenyuan Liu, Tong Jia, Xingyu Xing, Jianfeng Wu, Junyi Chen
- Abstract summary: A hierarchical-level rain image generative model, rain conditional CycleGAN, is constructed.
Different rain intensities are introduced as labels in the conditional GAN.
The model structure is optimized and the training strategy is adjusted to alleviate mode collapse.
- Score: 4.956959291938016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles are exposed to various weather conditions during
operation, which can trigger performance limitations of the perception system
and lead to safety of the intended functionality (SOTIF) problems. To efficiently
generate data for testing the performance of visual perception algorithms under
various weather conditions, a hierarchical-level rain image generative model,
rain conditional CycleGAN (RCCycleGAN), is constructed. RCCycleGAN is based on
the generative adversarial network (GAN) and can generate images of light,
medium, and heavy rain. Different rain intensities are introduced as labels in
the conditional GAN (CGAN). Meanwhile, the model structure is optimized and the
training strategy is adjusted to alleviate the problem of mode collapse. In
addition, natural rain images of different intensities are collected and
processed for model training and validation. Compared with the two baseline
models, CycleGAN and DerainCycleGAN, the peak signal-to-noise ratio (PSNR) of
RCCycleGAN on the test dataset is improved by 2.58 dB and 0.74 dB, and the
structural similarity (SSIM) is improved by 18% and 8%, respectively. Ablation
experiments are also carried out to validate the effectiveness of the model
tuning.
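The abstract does not include implementation details, but the core idea of injecting rain-intensity labels into a CycleGAN-style clean-to-rain generator can be illustrated with a short sketch. The snippet below is a minimal, hedged example assuming PyTorch; the class name ConditionalRainGenerator, the layer sizes, and the one-hot label-map conditioning are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (not the authors' released code) of conditioning a
# CycleGAN-style clean-to-rain generator on a rain-intensity label.
# Layer sizes, label encoding, and names are illustrative assumptions.
import torch
import torch.nn as nn


class ConditionalRainGenerator(nn.Module):
    """Toy clean-to-rain generator conditioned on light/medium/heavy labels."""

    def __init__(self, num_intensities: int = 3, img_channels: int = 3,
                 base_channels: int = 64):
        super().__init__()
        self.num_intensities = num_intensities
        # The label is broadcast to one-hot feature maps and concatenated
        # with the input image along the channel dimension.
        in_channels = img_channels + num_intensities
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, kernel_size=7, padding=3),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, base_channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(base_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(base_channels, img_channels, kernel_size=7, padding=3),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized inputs
        )

    def forward(self, x: torch.Tensor, intensity: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W) clean images; intensity: (B,) long tensor in {0, 1, 2}
        b, _, h, w = x.shape
        onehot = torch.zeros(b, self.num_intensities, h, w, device=x.device)
        onehot.scatter_(1, intensity.view(b, 1, 1, 1).expand(b, 1, h, w), 1.0)
        return self.net(torch.cat([x, onehot], dim=1))


# Usage: synthesize a batch of "heavy rain" images from clean inputs.
generator = ConditionalRainGenerator()
clean = torch.randn(2, 3, 256, 256)             # stand-in clean images
labels = torch.full((2,), 2, dtype=torch.long)  # 2 = heavy rain (assumed coding)
rainy = generator(clean, labels)                # shape (2, 3, 256, 256)
```

In a full CycleGAN-style setup this conditional generator would be paired with an inverse (rain-to-clean) generator, discriminators, and cycle-consistency losses; those pieces are omitted here for brevity.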
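The reported PSNR and SSIM gains (2.58 dB / 0.74 dB and 18% / 8% over the two baselines) can be computed with off-the-shelf metrics. Below is a small sketch using scikit-image; the evaluate_pair helper and the pairing of reference versus generated images are assumptions, since the abstract does not specify the evaluation protocol.

```python
# Sketch of a PSNR/SSIM evaluation like the comparison quoted in the abstract,
# using scikit-image (>= 0.19 for the channel_axis argument). The helper name
# and the choice of image pairs to compare are assumptions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(reference: np.ndarray, generated: np.ndarray):
    """Return (PSNR in dB, SSIM) for one pair of uint8 RGB images."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated,
                                 channel_axis=-1, data_range=255)
    return psnr, ssim


# Random arrays stand in for a real rain image and a generated counterpart.
reference = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
generated = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(evaluate_pair(reference, generated))  # low values expected on pure noise
```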
Related papers
- Exploring Physics-Informed Neural Networks for Crop Yield Loss Forecasting [4.707950656037167]
In response to climate change, assessing crop productivity under extreme weather conditions is essential to enhance food security.
We propose a novel method that combines the strengths of both approaches by estimating the water use and the crop sensitivity to water scarcity at the pixel level.
Our model demonstrates high accuracy, achieving an R2 of up to 0.77, matching or surpassing state-of-the-art models like RNNs and Transformers.
arXiv Detail & Related papers (2024-12-31T15:21:50Z) - SeaDAG: Semi-autoregressive Diffusion for Conditional Directed Acyclic Graph Generation [83.52157311471693]
We introduce SeaDAG, a semi-autoregressive diffusion model for conditional generation of Directed Acyclic Graphs (DAGs).
Unlike conventional autoregressive generation that lacks a global graph structure view, our method maintains a complete graph structure at each diffusion step.
We explicitly train the model to learn graph conditioning with a condition loss, which enhances the diffusion model's capacity to generate realistic DAGs.
arXiv Detail & Related papers (2024-10-21T15:47:03Z) - Harnessing Joint Rain-/Detail-aware Representations to Eliminate Intricate Rains [9.6606245317525]
We develop a Context-based Instance-level Modulation (CoI-M) mechanism adept at efficiently modulating CNN- or Transformer-based models.
We also devise a rain-/detail-aware contrastive learning strategy to help extract joint rain-/detail-aware representations.
By integrating CoI-M with the rain-/detail-aware contrastive learning strategy, we develop CoIC, an innovative and potent algorithm tailored for training models on mixed datasets.
arXiv Detail & Related papers (2024-04-18T11:20:53Z) - TRG-Net: An Interpretable and Controllable Rain Generator [61.2760968459789]
This study proposes a novel deep learning-based rain generator that fully takes the physical generation mechanism underlying rain into consideration.
Its significance lies in that the generator not only elaborately designs the essential elements of rain to simulate expected rains, but also adapts well to complicated and diverse practical rainy images.
Our unpaired generation experiments demonstrate that the rain generated by the proposed rain generator is not only of higher quality, but also more effective for deraining and downstream tasks.
arXiv Detail & Related papers (2024-03-15T03:27:39Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - DuDGAN: Improving Class-Conditional GANs via Dual-Diffusion [2.458437232470188]
Class-conditional image generation using generative adversarial networks (GANs) has been investigated through various techniques.
We propose a novel approach for class-conditional image generation using GANs called DuDGAN, which incorporates a dual diffusion-based noise injection process.
Our method outperforms state-of-the-art conditional GAN models for image generation.
arXiv Detail & Related papers (2023-05-24T07:59:44Z) - A Generative Deep Learning Approach to Stochastic Downscaling of
Precipitation Forecasts [0.5906031288935515]
Generative adversarial networks (GANs) have been demonstrated by the computer vision community to be successful at super-resolution problems.
We show that GANs and VAE-GANs can match the statistical properties of state-of-the-art pointwise post-processing methods whilst creating high-resolution, spatially coherent precipitation maps.
arXiv Detail & Related papers (2022-04-05T07:19:42Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, such a dynamic generator consists of an emission model and a transition model that simultaneously encode the spatial physical structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.