Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote
Sensing Imagery
- URL: http://arxiv.org/abs/2401.15105v1
- Date: Thu, 25 Jan 2024 13:14:17 GMT
- Title: Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote
Sensing Imagery
- Authors: Jialu Sui, Yiyang Ma, Wenhan Yang, Xiaokang Zhang, Man-On Pun and
Jiaying Liu
- Abstract summary: Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements on both the data and methodology fronts to tackle this challenge.
- Score: 48.14610248492785
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The presence of cloud layers severely compromises the quality and
effectiveness of optical remote sensing (RS) images. However, existing
deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties
in accurately reconstructing the original visual authenticity and detailed
semantic content of the images. To tackle this challenge, this work proposes
enhancements on both the data and methodology fronts. On the data side, an
ultra-resolution benchmark named CUHK Cloud Removal (CUHK-CR) of 0.5m spatial
resolution is established. This benchmark incorporates rich detailed textures
and diverse cloud coverage, serving as a robust foundation for designing and
assessing CR models. From the methodology perspective, a novel diffusion-based
framework for CR called Diffusion Enhancement (DE) is proposed to perform
progressive texture detail recovery, reducing training difficulty while
improving inference accuracy. Additionally, a Weight Allocation (WA)
network is developed to dynamically adjust the weights for feature fusion,
thereby further improving performance, particularly in the context of
ultra-resolution image generation. Furthermore, a coarse-to-fine training
strategy is applied to effectively expedite training convergence while reducing
the computational complexity required to handle ultra-resolution images.
Extensive experiments on the newly established CUHK-CR and existing datasets
such as RICE confirm that the proposed DE framework outperforms existing
DL-based methods in terms of both perceptual quality and signal fidelity.
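The Weight Allocation idea, predicting per-pixel weights that blend the diffusion branch with a reference branch, can be sketched as follows. This is a minimal illustrative assumption about how such a fusion might look, not the authors' actual network; in the real model the fusion logits would come from a small learned sub-network rather than being passed in directly.

```python
import numpy as np

def weight_allocation_fuse(feat_diffusion, feat_reference, logits):
    """Fuse two feature maps with per-pixel softmax weights.

    feat_diffusion, feat_reference : (H, W, C) arrays from the two branches.
    logits : (H, W, 2) unnormalized scores; hypothetically produced by a
             learned WA network, here supplied as a plain input.
    """
    # Softmax over the last axis yields two weights summing to 1 per pixel.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)            # (H, W, 2)
    # Broadcast each weight channel over the C feature channels.
    return w[..., :1] * feat_diffusion + w[..., 1:] * feat_reference

# Toy usage: equal logits give an even 50/50 blend of the two branches.
a = np.zeros((2, 2, 3))
b = np.ones((2, 2, 3))
fused = weight_allocation_fuse(a, b, np.zeros((2, 2, 2)))  # all values 0.5
```

Because the weights are predicted per pixel, cloud-free regions can lean on one branch while heavily occluded regions lean on the other, which is the intuition behind dynamic feature fusion.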
Related papers
- Realistic Extreme Image Rescaling via Generative Latent Space Learning [51.85790402171696]
We propose a novel framework called Latent Space Based Image Rescaling (LSBIR) for extreme image rescaling tasks.
LSBIR effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model to generate realistic HR images.
In the first stage, a pseudo-invertible encoder-decoder models the bidirectional mapping between the latent features of the HR image and the target-sized LR image.
In the second stage, the reconstructed features from the first stage are refined by a pre-trained diffusion model to generate more faithful and visually pleasing details.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Sewer Image Super-Resolution with Depth Priors and Its Lightweight Network [11.13549330516683]
The quick-view (QV) technique serves as a primary method for detecting defects within sewerage systems.
Super-resolution is an effective way to improve image quality and has been applied in a variety of scenarios.
This study introduces a novel Depth-guided, Reference-based Super-Resolution framework denoted as DSRNet.
arXiv Detail & Related papers (2024-07-27T14:45:34Z)
- Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It combines low-quality image generation in latent space with high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z)
- Reti-Diff: Illumination Degradation Image Restoration with Retinex-based Latent Diffusion Model [59.08821399652483]
Illumination degradation image restoration (IDIR) techniques aim to improve the visibility of degraded images and mitigate the adverse effects of deteriorated illumination.
Among these algorithms, diffusion model (DM)-based methods have shown promising performance but are often burdened by heavy computational demands and pixel misalignment issues when predicting the image-level distribution.
We propose to leverage DM within a compact latent space to generate concise guidance priors and introduce a novel solution called Reti-Diff for the IDIR task.
Reti-Diff comprises two key components: the Retinex-based latent DM (RLDM) and a Retinex-guided transformer.
arXiv Detail & Related papers (2023-11-20T09:55:06Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from long inference times, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- Multi-Modal and Multi-Resolution Data Fusion for High-Resolution Cloud Removal: A Novel Baseline and Benchmark [21.255966041023083]
We introduce M3R-CR, a benchmark dataset for high-resolution Cloud Removal with Multi-Modal and Multi-Resolution data fusion.
We consider the problem of cloud removal in high-resolution optical remote sensing imagery by integrating multi-modal and multi-resolution information.
We design a new baseline named Align-CR to perform the low-resolution SAR image guided high-resolution optical image cloud removal.
arXiv Detail & Related papers (2023-01-09T15:31:28Z)
- Single Image Internal Distribution Measurement Using Non-Local Variational Autoencoder [11.985083962982909]
This paper proposes a novel image-specific solution, namely the non-local variational autoencoder (NLVAE).
NLVAE is introduced as a self-supervised strategy that reconstructs high-resolution images using disentangled information from the non-local neighbourhood.
Experimental results from seven benchmark datasets demonstrate the effectiveness of the NLVAE model.
arXiv Detail & Related papers (2022-04-02T18:43:55Z)
- Deep Generative Adversarial Residual Convolutional Networks for Real-World Super-Resolution [31.934084942626257]
We propose a deep Super-Resolution Residual Convolutional Generative Adversarial Network (SRResCGAN).
It follows real-world degradation settings by adversarially training the model with pixel-wise supervision in the HR domain from its generated LR counterpart.
The proposed network exploits the residual learning by minimizing the energy-based objective function with powerful image regularization and convex optimization techniques.
arXiv Detail & Related papers (2020-05-03T00:12:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.