IDF-CR: Iterative Diffusion Process for Divide-and-Conquer Cloud Removal in Remote-sensing Images
- URL: http://arxiv.org/abs/2403.11870v1
- Date: Mon, 18 Mar 2024 15:23:48 GMT
- Title: IDF-CR: Iterative Diffusion Process for Divide-and-Conquer Cloud Removal in Remote-sensing Images
- Authors: Meilin Wang, Yexing Song, Pengxu Wei, Xiaoyu Xian, Yukai Shi, Liang Lin
- Abstract summary: We present an iterative diffusion process for cloud removal (IDF-CR)
IDF-CR is divided into two-stage models that address pixel space and latent space.
In the latent space stage, the diffusion model transforms low-quality cloud removal into high-quality clean output.
- Score: 55.40601468843028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning technologies have demonstrated their effectiveness in removing cloud cover from optical remote-sensing images. Convolutional Neural Networks (CNNs) dominate cloud removal tasks. However, constrained by the inherent limitations of convolutional operations, CNNs can address only a modest fraction of cloud occlusion. In recent years, diffusion models have achieved state-of-the-art (SOTA) proficiency in image generation and reconstruction due to their formidable generative capabilities. Inspired by the rapid development of diffusion models, we first present an iterative diffusion process for cloud removal (IDF-CR), which exhibits strong generative capabilities to achieve component-wise divide-and-conquer cloud removal. IDF-CR consists of a pixel space cloud removal module (Pixel-CR) and a latent space iterative noise diffusion network (IND). Specifically, IDF-CR is divided into two-stage models that address pixel space and latent space. The two-stage model facilitates a strategic transition from preliminary cloud reduction to meticulous detail refinement. In the pixel space stage, Pixel-CR initiates the processing of cloudy images, yielding a suboptimal cloud-removal result that provides the diffusion model with prior cloud-removal knowledge. In the latent space stage, the diffusion model transforms the low-quality cloud-removal result into a high-quality clean output. We refine Stable Diffusion by incorporating ControlNet. In addition, an unsupervised iterative noise refinement (INR) module is introduced for the diffusion model to optimize the distribution of the predicted noise, thereby enhancing advanced detail recovery. Our model outperforms other SOTA methods on image reconstruction and cloud removal on optical remote-sensing datasets.
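The two-stage data flow the abstract describes can be sketched as code. This is a minimal structural sketch only, assuming nothing beyond the abstract: the class and function names are hypothetical stand-ins using toy numpy operations, not the paper's actual networks, and the "latent" encode/decode steps are identity maps here.

```python
import numpy as np


class PixelCR:
    """Hypothetical stand-in for the pixel-space cloud removal module (Pixel-CR)."""

    def remove_clouds(self, cloudy: np.ndarray) -> np.ndarray:
        # Placeholder: a real model would predict a coarse cloud-free image.
        return np.clip(cloudy - 0.1 * cloudy.mean(), 0.0, 1.0)


class IterativeNoiseDiffusion:
    """Hypothetical stand-in for the latent-space stage (IND) with iterative
    noise refinement (INR): the predicted noise is re-estimated over several
    refinement rounds before producing the final denoised output."""

    def __init__(self, refinement_steps: int = 3):
        self.refinement_steps = refinement_steps

    def refine(self, coarse: np.ndarray) -> np.ndarray:
        rng = np.random.default_rng(1)
        # Perturb the coarse result with noise (encode/decode are identity here).
        latent = coarse + 0.1 * rng.standard_normal(coarse.shape)
        for _ in range(self.refinement_steps):
            predicted_noise = latent - coarse        # toy noise estimate
            latent = latent - 0.5 * predicted_noise  # partial denoising step
        return np.clip(latent, 0.0, 1.0)


def idf_cr_pipeline(cloudy: np.ndarray) -> np.ndarray:
    """Stage 1 (pixel space) hands a cloud-removal prior to
    stage 2 (latent space), which refines fine detail."""
    coarse = PixelCR().remove_clouds(cloudy)
    return IterativeNoiseDiffusion().refine(coarse)


cloudy = np.random.default_rng(0).random((8, 8))
clean = idf_cr_pipeline(cloudy)
```

The point of the sketch is the ordering: the pixel-space module supplies a cloud-removal prior, and the diffusion stage only refines detail on top of it rather than removing clouds from scratch.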
Related papers
- Point Cloud Resampling with Learnable Heat Diffusion [58.050130177241186]
We propose a learnable heat diffusion framework for point cloud resampling.
Unlike previous diffusion models with a fixed prior, the adaptive conditional prior selectively preserves geometric features of the point cloud.
arXiv Detail & Related papers (2024-11-21T13:44:18Z)
- Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements at the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z)
- Iterative Token Evaluation and Refinement for Real-World Super-Resolution [77.74289677520508]
Real-world image super-resolution (RWSR) is a long-standing problem as low-quality (LQ) images often have complex and unidentified degradations.
We propose an Iterative Token Evaluation and Refinement framework for RWSR.
We show that ITER is easier to train than Generative Adversarial Networks (GANs) and more efficient than continuous diffusion models.
arXiv Detail & Related papers (2023-12-09T17:07:32Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- UnCRtainTS: Uncertainty Quantification for Cloud Removal in Optical Satellite Time Series [19.32220113046804]
We introduce UnCRtainTS, a method for multi-temporal cloud removal built on a novel attention-based architecture.
We show how the well-calibrated predicted uncertainties enable a precise control of the reconstruction quality.
arXiv Detail & Related papers (2023-04-11T19:27:18Z)
- ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow Removal [74.86415440438051]
We propose a unified diffusion framework that integrates both the image and degradation priors for highly effective shadow removal.
Our model achieves a significant improvement in PSNR, increasing from 31.69 dB to 34.73 dB on the SRD dataset.
arXiv Detail & Related papers (2022-12-09T07:48:30Z)
- Exploring the Potential of SAR Data for Cloud Removal in Optical Satellite Imagery [41.40522618945897]
We propose a novel global-local fusion based cloud removal (GLF-CR) algorithm to leverage the complementary information embedded in SAR images.
The proposed algorithm can yield high-quality cloud-free images and performs favorably against state-of-the-art cloud removal algorithms.
arXiv Detail & Related papers (2022-06-06T18:53:19Z)
- Cloud removal in remote sensing images using generative adversarial networks and SAR-to-optical image translation [0.618778092044887]
Cloud removal has received much attention due to the wide range of satellite image applications.
In this study, we attempt to solve the problem using two generative adversarial networks (GANs).
The first translates SAR images into optical images, and the second removes clouds using the images translated by the first GAN.
arXiv Detail & Related papers (2020-12-22T17:19:14Z)
- Multi-Head Linear Attention Generative Adversarial Network for Thin Cloud Removal [5.753245638190626]
Thin cloud removal is an indispensable procedure to enhance the utilization of remote sensing images.
We propose a Multi-Head Linear Attention Generative Adversarial Network (MLAGAN) for Thin Cloud Removal.
Compared with six deep learning-based thin cloud removal benchmarks, the experimental results on the RICE1 and RICE2 datasets demonstrate that the proposed MLA-GAN framework has clear advantages in thin cloud removal.
arXiv Detail & Related papers (2020-12-20T11:50:54Z)
- Thick Cloud Removal of Remote Sensing Images Using Temporal Smoothness and Sparsity-Regularized Tensor Optimization [3.65794756599491]
In remote sensing images, thick cloud accompanied by cloud shadow is highly likely to occur.
A novel thick cloud removal method for remote sensing images based on temporal smoothness and sparsity-regularized tensor optimization is proposed.
arXiv Detail & Related papers (2020-08-11T05:59:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.