DestripeCycleGAN: Stripe Simulation CycleGAN for Unsupervised Infrared
Image Destriping
- URL: http://arxiv.org/abs/2402.09101v1
- Date: Wed, 14 Feb 2024 11:22:20 GMT
- Title: DestripeCycleGAN: Stripe Simulation CycleGAN for Unsupervised Infrared
Image Destriping
- Authors: Shiqi Yang, Hanlin Qin, Shuai Yuan, Xiang Yan, Hossein Rahmani
- Abstract summary: CycleGAN has proven to be an advanced approach for unsupervised image restoration.
We present a novel framework for single-frame infrared image destriping, named DestripeCycleGAN.
- Score: 15.797480466799222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CycleGAN has proven to be an advanced approach for unsupervised image
restoration. This framework consists of two generators: a denoising one for
inference and an auxiliary one for modeling noise to fulfill cycle-consistency
constraints. However, when applied to the infrared destriping task, it becomes
challenging for the vanilla auxiliary generator to consistently produce
vertical noise under unsupervised constraints. This undermines the
effectiveness of the cycle-consistency loss, leaving residual stripe noise
in the denoised image. To address this issue, we present a novel framework
for single-frame infrared image destriping, named DestripeCycleGAN. In this
model, the conventional auxiliary generator is replaced with an a priori
stripe generation model (SGM) that introduces vertical stripe noise into the
clean data, and the gradient map is employed to re-establish
cycle consistency. Meanwhile,
a Haar wavelet background guidance module (HBGM) has been designed to minimize
the divergence of background details between the different domains. To preserve
vertical edges, a multi-level wavelet U-Net (MWUNet) is proposed as the
denoising generator, which uses the Haar wavelet transform as its sampling
operator to reduce directional information loss. Moreover, it incorporates a
group fusion block (GFB) into the skip connections to fuse multi-scale
features and model long-distance dependencies. Extensive experiments on real
and synthetic data demonstrate that our DestripeCycleGAN surpasses the
state-of-the-art methods in terms of visual quality and quantitative
evaluation. Our code will be made public at
https://github.com/0wuji/DestripeCycleGAN.
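To make the abstract's key ideas concrete, the sketch below illustrates, in plain PyTorch and under our own simplifying assumptions rather than the authors' released code, (1) how column-wise stripe noise can be synthesized in the spirit of the SGM, (2) why a vertical gradient map is invariant to such stripes and can therefore anchor cycle consistency, and (3) how a one-level Haar transform can replace strided downsampling. All function names (add_vertical_stripes, vertical_gradient, haar_downsample) are hypothetical.

```python
# Illustrative sketch only -- not the authors' released code. Assumptions:
# stripe noise is a per-column additive offset; the Haar step assumes even
# spatial dimensions and uses one common sign convention for the sub-bands.
import torch

def add_vertical_stripes(clean: torch.Tensor, sigma: float = 0.05) -> torch.Tensor:
    """Simulate vertical (column-wise) stripe noise on a (B, C, H, W) batch:
    draw one random offset per column and broadcast it down every row."""
    b, c, h, w = clean.shape
    offsets = sigma * torch.randn(b, c, 1, w, device=clean.device)
    return clean + offsets  # broadcasts over the H dimension

def vertical_gradient(x: torch.Tensor) -> torch.Tensor:
    """Finite difference along the vertical axis. Column-constant stripes
    cancel under this operator, so striped and clean images share the same
    gradient map -- the property used to re-establish cycle consistency."""
    return x[..., 1:, :] - x[..., :-1, :]

def haar_downsample(x: torch.Tensor):
    """One level of the 2-D Haar transform on a (B, C, H, W) tensor with
    even H and W. Returns four half-resolution sub-bands; unlike strided
    pooling, the input is exactly recoverable from them."""
    a = x[..., 0::2, 0::2]  # top-left of each 2x2 block
    b = x[..., 1::2, 0::2]  # bottom-left
    c = x[..., 0::2, 1::2]  # top-right
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a - b + c - d) / 2  # differences between rows
    hl = (a + b - c - d) / 2  # differences between columns
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

if __name__ == "__main__":
    clean = torch.rand(1, 1, 128, 128)
    striped = add_vertical_stripes(clean)
    # Gradient maps of clean and striped images agree up to float error:
    print((vertical_gradient(clean) - vertical_gradient(striped)).abs().max())
```

With stripes synthesized analytically like this, an unpaired training loop needs only the forward denoising generator plus a fixed stripe model, which is the structural change the abstract describes.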
Related papers
- Diffusion-Aided Joint Source Channel Coding For High Realism Wireless Image Transmission [24.372996233209854]
DiffJSCC is a novel framework that produces high-realism images via the conditional diffusion denoising process.
It can achieve highly realistic reconstructions for 768x512 pixel Kodak images with only 3072 symbols.
arXiv Detail & Related papers (2024-04-27T00:12:13Z)
- SGDFormer: One-stage Transformer-based Architecture for Cross-Spectral Stereo Image Guided Denoising [11.776198596143931]
We propose a one-stage transformer-based architecture, named SGDFormer, for cross-spectral Stereo image Guided Denoising.
Our transformer block contains a noise-robust cross-attention (NRCA) module and a spatially variant feature fusion (SVFF) module.
Thanks to the above design, our SGDFormer can restore artifact-free images with fine structures, and achieves state-of-the-art performance on various datasets.
arXiv Detail & Related papers (2024-03-30T12:55:19Z) - Unsupervised Denoising for Signal-Dependent and Row-Correlated Imaging Noise [54.0185721303932]
We present the first fully unsupervised deep learning-based denoiser capable of handling imaging noise that is row-correlated.
Our approach uses a Variational Autoencoder with a specially designed autoregressive decoder.
Our method does not require a pre-trained noise model and can be trained from scratch using unpaired noisy data.
arXiv Detail & Related papers (2023-10-11T20:48:20Z)
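To make "row-correlated" concrete, the toy sketch below draws noise with an AR(1) dependence along each row, so horizontally adjacent pixels share correlated offsets while rows remain independent. This is our own illustration of the noise structure, not the cited paper's learned noise model.

```python
# Toy illustration of row-correlated noise (assumed AR(1) along each row),
# not the cited paper's noise model.
import numpy as np

def row_correlated_noise(h, w, rho=0.9, sigma=0.05, rng=None):
    """Each row follows n[i, j] = rho * n[i, j-1] + e[i, j], giving
    correlation ~rho along rows and ~0 between rows."""
    rng = np.random.default_rng() if rng is None else rng
    # Innovation variance chosen so the process is stationary with std sigma.
    e = sigma * np.sqrt(1.0 - rho ** 2) * rng.standard_normal((h, w))
    noise = np.empty((h, w))
    noise[:, 0] = sigma * rng.standard_normal(h)
    for j in range(1, w):
        noise[:, j] = rho * noise[:, j - 1] + e[:, j]
    return noise

n = row_correlated_noise(64, 64)
print(np.corrcoef(n[:, :-1].ravel(), n[:, 1:].ravel())[0, 1])  # ~0.9
print(np.corrcoef(n[:-1, :].ravel(), n[1:, :].ravel())[0, 1])  # ~0.0
```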
- Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
arXiv Detail & Related papers (2023-09-19T16:01:20Z)
- DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior [70.46245698746874]
We present DiffBIR, a general restoration pipeline that can handle different blind image restoration tasks.
DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal, which removes image-independent content, and 2) information regeneration, which generates the lost image content.
In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results.
For the second stage, we propose IRControlNet that leverages the generative ability of latent diffusion models to generate realistic details.
arXiv Detail & Related papers (2023-08-29T07:11:52Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
However, these approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Degradation-Noise-Aware Deep Unfolding Transformer for Hyperspectral Image Denoising [9.119226249676501]
Hyperspectral images (HSIs) are often quite noisy because of narrow-band spectral filtering.
To reduce the noise in HSI data cubes, both model-driven and learning-based denoising algorithms have been proposed.
This paper proposes a Degradation-Noise-Aware Unfolding Network (DNA-Net) that addresses these issues.
arXiv Detail & Related papers (2023-05-06T13:28:20Z)
- CVF-SID: Cyclic multi-Variate Function for Self-Supervised Image Denoising by Disentangling Noise from Image [53.76319163746699]
We propose a novel and powerful self-supervised denoising method called CVF-SID.
CVF-SID can disentangle a clean image and noise maps from the input by leveraging various self-supervised loss terms.
It achieves state-of-the-art self-supervised image denoising performance and is comparable to other existing approaches.
arXiv Detail & Related papers (2022-03-24T11:59:28Z)
- Cycle-free CycleGAN using Invertible Generator for Unsupervised Low-Dose CT Denoising [33.79188588182528]
CycleGAN provides high-performance, ultra-fast denoising for low-dose X-ray computed tomography (CT) images.
However, it requires two generators and two discriminators to enforce cycle consistency.
We present a novel cycle-free CycleGAN architecture, which consists of a single generator and a discriminator but still guarantees cycle consistency.
arXiv Detail & Related papers (2021-04-17T13:23:36Z)
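As a generic illustration of why an invertible generator makes the backward cycle "free" (the general principle, not the cited paper's specific architecture), consider an additive coupling layer: its inverse exists in closed form, so G^-1(G(x)) = x holds by construction and no second generator is needed. The class name AdditiveCoupling is our own.

```python
# Generic sketch of the invertibility idea, not the cited paper's network.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Splits channels into (x1, x2); y1 = x1, y2 = x2 + t(x1).
    The inverse is exact: x2 = y2 - t(y1), so the backward mapping for
    cycle consistency comes for free."""
    def __init__(self, half_channels: int, hidden: int = 32):
        super().__init__()
        self.t = nn.Sequential(
            nn.Conv2d(half_channels, hidden, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, half_channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)

if __name__ == "__main__":
    g = AdditiveCoupling(half_channels=1)
    x = torch.rand(1, 2, 64, 64)
    print(torch.allclose(g.inverse(g(x)), x, atol=1e-6))  # True
```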
- Conditioning Trick for Training Stable GANs [70.15099665710336]
We propose a conditioning trick, called difference departure from normality, applied to the generator network in response to instability issues during GAN training.
We force the generator to move closer to the departure-from-normality function of real samples, computed in the spectral domain of the Schur decomposition.
arXiv Detail & Related papers (2020-10-12T16:50:22Z)
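For readers unfamiliar with the quantity named above, the snippet below computes one standard measure of departure from normality (Henrici's: the Frobenius norm of the strictly upper-triangular part of the Schur form, which vanishes exactly when the matrix is normal). This is a minimal sketch of the underlying mathematical quantity, not the paper's training procedure.

```python
# Henrici's departure from normality via the complex Schur decomposition:
# A = Z T Z*, with T upper triangular; the strict upper part of T is zero
# iff A is normal, and its Frobenius norm is uniquely determined by A.
import numpy as np
from scipy.linalg import schur

def departure_from_normality(a: np.ndarray) -> float:
    t, _ = schur(a, output="complex")
    return float(np.linalg.norm(np.triu(t, k=1)))  # ||strict upper of T||_F

# A symmetric (hence normal) matrix has zero departure up to rounding:
print(departure_from_normality(np.array([[2.0, 1.0], [1.0, 3.0]])))   # ~0
print(departure_from_normality(np.array([[1.0, 5.0], [0.0, 1.0]])))   # 5.0
```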
- Blur, Noise, and Compression Robust Generative Adversarial Networks [85.68632778835253]
We propose blur, noise, and compression robust GAN (BNCR-GAN) to learn a clean image generator directly from degraded images.
Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators.
We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ.
arXiv Detail & Related papers (2020-03-17T17:56:22Z)