SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening
- URL: http://arxiv.org/abs/2404.11537v1
- Date: Wed, 17 Apr 2024 16:30:56 GMT
- Title: SSDiff: Spatial-spectral Integrated Diffusion Model for Remote Sensing Pansharpening
- Authors: Yu Zhong, Xiao Wu, Liang-Jian Deng, Zihan Cao
- Abstract summary: We introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff.
SSDiff considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition.
- Score: 14.293042131263924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pansharpening is a significant image fusion technique that merges the spatial content and spectral characteristics of remote sensing images to generate high-resolution multispectral images. Recently, denoising diffusion probabilistic models have been gradually applied to visual tasks, enhancing controllable image generation through low-rank adaptation (LoRA). In this paper, we introduce a spatial-spectral integrated diffusion model for the remote sensing pansharpening task, called SSDiff, which considers the pansharpening process as the fusion process of spatial and spectral components from the perspective of subspace decomposition. Specifically, SSDiff utilizes spatial and spectral branches to learn spatial details and spectral features separately, then employs a designed alternating projection fusion module (APFM) to accomplish the fusion. Furthermore, we propose a frequency modulation inter-branch module (FMIM) to modulate the frequency distribution between branches. The two components of SSDiff pair favorably with the APFM when a LoRA-like branch-wise alternative fine-tuning method is applied, which refines SSDiff to capture component-discriminating features more thoroughly. Finally, extensive experiments on four commonly used datasets, i.e., WorldView-3, WorldView-2, GaoFen-2, and QuickBird, demonstrate the superiority of SSDiff both visually and quantitatively. The code will be made open source after possible acceptance.
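The abstract names two mechanisms that lend themselves to a concrete illustration: an alternating projection fusion module (APFM) that exchanges information between the spatial and spectral branches, and a LoRA-like adapter used for branch-wise fine-tuning. The SSDiff code has not been released here, so the PyTorch sketch below only illustrates these two ideas in generic form; the module structure, channel sizes, and projection operators are assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the released SSDiff code. It shows, in generic
# form, (a) fusing spatial/spectral branch features by alternating projections
# and (b) a LoRA-style low-rank adapter of the kind used for branch-wise
# fine-tuning. All shapes and operators are assumptions.
import torch
import torch.nn as nn

class AlternatingProjectionFusion(nn.Module):
    """Hypothetical stand-in for the APFM: project information back and forth
    between the spatial and spectral features a few times, then merge."""
    def __init__(self, channels: int, iters: int = 2):
        super().__init__()
        self.to_spatial = nn.Conv2d(channels, channels, 1)
        self.to_spectral = nn.Conv2d(channels, channels, 1)
        self.merge = nn.Conv2d(2 * channels, channels, 1)
        self.iters = iters

    def forward(self, f_spa, f_spe):
        for _ in range(self.iters):
            f_spa = f_spa + self.to_spatial(f_spe)   # inject spectral info into the spatial branch
            f_spe = f_spe + self.to_spectral(f_spa)  # inject spatial info into the spectral branch
        return self.merge(torch.cat([f_spa, f_spe], dim=1))

class LoRALinear(nn.Module):
    """LoRA-style adapter: freeze the base layer, learn a low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # the adapter starts as a zero update

    def forward(self, x):
        return self.base(x) + self.up(self.down(x))

# toy usage
fusion = AlternatingProjectionFusion(channels=32)
fused = fusion(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))
adapted = LoRALinear(nn.Linear(64, 64))(torch.randn(8, 64))
print(fused.shape, adapted.shape)  # torch.Size([1, 32, 64, 64]) torch.Size([8, 64])
```

Zero-initializing the low-rank path is the usual LoRA convention: fine-tuning starts from the frozen branch's behaviour and only gradually introduces the adaptation.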
Related papers
- Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning [15.86617273658407]
We propose an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial-spectral manifold learning (DTDNML).
We show that this method enhances the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets.
arXiv Detail & Related papers (2024-09-15T08:58:26Z)
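The DTDNML entry above builds on Tucker decomposition. As a quick reminder of what that factorization does to a hyperspectral cube, here is a minimal NumPy sketch of a truncated higher-order SVD (one standard way to compute a Tucker decomposition); the deep, learned decomposition used in the paper is not reproduced.

```python
# Minimal truncated HOSVD (a Tucker decomposition) of a toy hyperspectral cube.
# This only illustrates the factorization DTDNML builds on; the paper itself
# learns the factors with a deep network, which is not reproduced here.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a 3-way tensor."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: returns the core tensor and factor matrices."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, U in enumerate(factors):
        out = np.moveaxis(np.tensordot(U, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out

X = np.random.rand(32, 32, 100)                 # height x width x spectral bands
core, factors = hosvd(X, ranks=(16, 16, 8))
X_hat = reconstruct(core, factors)
print(core.shape, np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```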
- A Hybrid Transformer-Mamba Network for Single Image Deraining [70.64069487982916]
Existing deraining Transformers employ self-attention mechanisms with fixed-range windows or along channel dimensions.
We introduce a novel dual-branch hybrid Transformer-Mamba network, denoted as TransMamba, aimed at effectively capturing long-range rain-related dependencies.
arXiv Detail & Related papers (2024-08-31T10:03:19Z)
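The TransMamba entry above contrasts window-limited self-attention with long-range sequence modelling in a dual-branch design. The sketch below shows only the dual-branch pattern; a GRU stands in for the Mamba state-space layer because the actual SSM block is not reproduced here, so every module should be read as an illustrative assumption.

```python
# Conceptual sketch of a dual-branch block in the spirit of TransMamba: one
# branch applies self-attention, the other models long-range dependencies over
# the flattened token sequence. A GRU stands in for the Mamba/SSM layer purely
# for illustration; it is NOT the paper's module.
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    def __init__(self, dim: int = 32, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.seq = nn.GRU(dim, dim, batch_first=True)   # stand-in for an SSM/Mamba layer
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, x):                 # x: (B, H*W, dim) token sequence
        a, _ = self.attn(x, x, x)         # attention branch
        s, _ = self.seq(x)                # long-range sequential branch
        return x + self.fuse(torch.cat([a, s], dim=-1))

tokens = torch.randn(2, 16 * 16, 32)      # a 16x16 feature map flattened to tokens
print(DualBranchBlock()(tokens).shape)    # torch.Size([2, 256, 32])
```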
- A Dual Domain Multi-exposure Image Fusion Network based on the Spatial-Frequency Integration [57.14745782076976]
Multi-exposure image fusion aims to generate a single high-dynamic-range image by integrating images with different exposures.
We propose a novel perspective on multi-exposure image fusion via the Spatial-Frequency Integration Framework, named MEF-SFI.
Our method achieves visually appealing fusion results compared with state-of-the-art multi-exposure image fusion approaches.
arXiv Detail & Related papers (2023-12-17T04:45:15Z)
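The MEF-SFI entry above fuses exposures in both the spatial and the frequency domain. The toy function below mixes two exposures spatially and in the Fourier amplitude spectrum to make that idea concrete; the weighting scheme and the use of rfft2 are assumptions for illustration, not the paper's network.

```python
# Toy sketch of spatial-frequency integration for exposure fusion: combine a
# naive spatial average of two exposures with an amplitude-spectrum mix in the
# Fourier domain. Purely illustrative; MEF-SFI itself is a learned network.
import torch

def fuse_spatial_frequency(under, over, alpha=0.5):
    """under/over: (B, C, H, W) under- and over-exposed images in [0, 1]."""
    spatial = 0.5 * (under + over)                       # naive spatial fusion
    F_u = torch.fft.rfft2(under)
    F_o = torch.fft.rfft2(over)
    amp = alpha * F_u.abs() + (1 - alpha) * F_o.abs()    # mix amplitude spectra
    phase = torch.angle(F_u + F_o)                       # shared phase estimate
    freq = torch.fft.irfft2(torch.polar(amp, phase), s=under.shape[-2:])
    return 0.5 * (spatial + freq)

a = torch.rand(1, 3, 64, 64)
b = torch.rand(1, 3, 64, 64)
print(fuse_spatial_frequency(a, b).shape)  # torch.Size([1, 3, 64, 64])
```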
- SS-MAE: Spatial-Spectral Masked Auto-Encoder for Multi-Source Remote Sensing Image Classification [35.52272615695294]
We propose a spatial-spectral masked auto-encoder (SS-MAE) for joint classification of HSI and LiDAR/SAR data.
Our SS-MAE fully exploits the spatial and spectral representations of the input data.
To complement local features in the training stage, we add two lightweight CNNs for feature extraction.
arXiv Detail & Related papers (2023-11-08T03:54:44Z)
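The SS-MAE entry above relies on masking the input both spatially and spectrally before reconstruction. The snippet below sketches only that masking step (random patches and random bands); patch size and masking ratios are placeholder values, and the paper's encoder/decoder are omitted.

```python
# Minimal illustration of spatial and spectral masking as used by masked
# auto-encoders such as SS-MAE: randomly hide image patches (spatial) and
# whole bands (spectral); a network would then be trained to reconstruct them.
import torch

def mask_spatial(x, patch=8, ratio=0.75):
    """x: (B, C, H, W). Zero out a random subset of non-overlapping patches."""
    B, C, H, W = x.shape
    gh, gw = H // patch, W // patch
    keep = (torch.rand(B, 1, gh, gw) > ratio).float()        # 1 = keep patch
    mask = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x * mask

def mask_spectral(x, ratio=0.5):
    """Zero out a random subset of spectral bands (channels)."""
    B, C, _, _ = x.shape
    keep = (torch.rand(B, C, 1, 1) > ratio).float()
    return x * keep

hsi = torch.rand(2, 30, 64, 64)        # toy hyperspectral patch, 30 bands
print(mask_spatial(hsi).shape, mask_spectral(hsi).shape)
```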
- Unified Frequency-Assisted Transformer Framework for Detecting and Grounding Multi-Modal Manipulation [109.1912721224697]
We present the Unified Frequency-Assisted transFormer framework, named UFAFormer, to address the DGM4 (detecting and grounding multi-modal manipulation) problem.
By leveraging the discrete wavelet transform, we decompose images into several frequency sub-bands, capturing rich face forgery artifacts.
Our proposed frequency encoder, incorporating intra-band and inter-band self-attentions, explicitly aggregates forgery features within and across diverse sub-bands.
arXiv Detail & Related papers (2023-09-18T11:06:42Z)
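The UFAFormer entry above decomposes images into wavelet sub-bands before applying intra- and inter-band self-attention. The snippet below shows the single-level 2D DWT that produces those sub-bands (using PyWavelets); the attention-based frequency encoder itself is not reproduced.

```python
# Small example of the wavelet decomposition UFAFormer builds on: a single-level
# 2D discrete wavelet transform splits an image into LL/LH/HL/HH sub-bands,
# which the paper's frequency encoder then processes with intra-/inter-band
# self-attention (not shown here).
import numpy as np
import pywt

img = np.random.rand(128, 128).astype(np.float32)     # stand-in grayscale face crop
LL, (LH, HL, HH) = pywt.dwt2(img, 'haar')             # approximation + 3 detail bands
print(LL.shape, LH.shape, HL.shape, HH.shape)         # each (64, 64)

# High-frequency bands are where subtle forgery artifacts tend to concentrate,
# so even a simple per-band energy statistic hints at why they are useful:
for name, band in [('LH', LH), ('HL', HL), ('HH', HH)]:
    print(name, float(np.mean(band ** 2)))
```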
- SpectralDiff: A Generative Framework for Hyperspectral Image Classification with Diffusion Models [18.391049303136715]
We propose a generative framework for HSI classification with diffusion models (SpectralDiff).
SpectralDiff effectively mines the distribution information of high-dimensional and highly redundant data.
Experiments on three public HSI datasets demonstrate that the proposed method can achieve better performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-12T16:32:34Z)
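The SpectralDiff entry above builds a diffusion model over hyperspectral data. The snippet below shows only the generic DDPM forward (noising) process q(x_t | x_0) on toy HSI patches; the schedule and shapes are standard defaults, not the paper's configuration.

```python
# The forward (noising) process that diffusion models such as SpectralDiff are
# built on: q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I).
# This is the generic DDPM formulation, not the paper's classification pipeline.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # standard linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of alphas

def q_sample(x0, t):
    """Sample x_t from q(x_t | x_0) for a batch of timesteps t."""
    noise = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise

spectra = torch.rand(4, 200, 9, 9)             # toy HSI patches: 200 bands, 9x9 window
t = torch.randint(0, T, (4,))
print(q_sample(spectra, t).shape)              # torch.Size([4, 200, 9, 9])
```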
- CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion [138.40422469153145]
We propose a novel Correlation-Driven feature Decomposition Fusion (CDDFuse) network.
We show that CDDFuse achieves promising results in multiple fusion tasks, including infrared-visible image fusion and medical image fusion.
arXiv Detail & Related papers (2022-11-26T02:40:28Z)
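The CDDFuse entry above describes a correlation-driven decomposition of features into parts shared across modalities and parts specific to each. The sketch below expresses that intuition as a simple Pearson-correlation objective; the actual CDDFuse encoders and loss formulation are not reproduced, so the terms and weights here are assumptions.

```python
# Hedged sketch of the idea behind correlation-driven feature decomposition:
# features of two modalities are split into "base" parts that should be highly
# correlated across modalities and "detail" parts that should not be. Only the
# correlation objective is shown; encoders/decoders are omitted.
import torch

def corr(a, b, eps=1e-6):
    """Pearson correlation between two flattened feature maps (per sample)."""
    a = a.flatten(1) - a.flatten(1).mean(dim=1, keepdim=True)
    b = b.flatten(1) - b.flatten(1).mean(dim=1, keepdim=True)
    return (a * b).sum(1) / (a.norm(dim=1) * b.norm(dim=1) + eps)

def decomposition_loss(base_ir, base_vis, det_ir, det_vis):
    # pull base features together, keep detail features modality-specific
    return (1.0 - corr(base_ir, base_vis)).mean() + corr(det_ir, det_vis).abs().mean()

f = lambda: torch.randn(2, 64, 32, 32)
print(float(decomposition_loss(f(), f(), f(), f())))
```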
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
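The PC-GANs entry above couples three GANs through a joint compensation loss so they can be trained simultaneously. The sketch below shows one plausible way to combine per-stage reconstruction and adversarial terms into a single objective; the weights and loss terms are placeholders, not the paper's formulation.

```python
# Illustrative sketch of a joint loss that couples several GAN stages, in the
# spirit of a "joint compensation loss": each stage contributes an adversarial
# term and a reconstruction term, and all stages are optimized together.
# Weights and terms are placeholders, not PC-GANs' actual values.
import torch
import torch.nn.functional as F

def joint_compensation_loss(stages, weights=(1.0, 1.0, 1.0), adv_weight=1e-3):
    """stages: list of (fake, target, disc_logits_on_fake), one per GAN."""
    total = 0.0
    for (fake, target, d_fake), w in zip(stages, weights):
        rec = F.l1_loss(fake, target)                       # content fidelity
        adv = F.binary_cross_entropy_with_logits(           # fool the critic
            d_fake, torch.ones_like(d_fake))
        total = total + w * (rec + adv_weight * adv)
    return total

make = lambda: (torch.rand(1, 4, 64, 64), torch.rand(1, 4, 64, 64), torch.randn(1, 1))
print(float(joint_compensation_loss([make(), make(), make()])))
```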
- Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net that enforces material consistency to enhance the detailed appearance of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z)
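The DC-Net entry above decouples each sensor's input into a common (cross-sensor) component and a sensor-specific component. The toy encoders below make that split explicit and add a simple consistency term on the common components; the architecture and loss are illustrative stand-ins, not the paper's CSU net.

```python
# Toy sketch of the decouple-then-couple idea: each sensor's input is encoded
# into a "common" (cross-sensor) code and a "sensor-specific" code, and the
# common codes of the two sensors are encouraged to agree. Everything here is
# an illustrative stand-in, not the DC-Net implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupleEncoder(nn.Module):
    def __init__(self, in_ch, feat=32):
        super().__init__()
        self.shared = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.to_common = nn.Conv2d(feat, feat, 1)     # cross-sensor component
        self.to_specific = nn.Conv2d(feat, feat, 1)   # sensor-specific component

    def forward(self, x):
        h = F.relu(self.shared(x))
        return self.to_common(h), self.to_specific(h)

enc_hs = DecoupleEncoder(in_ch=31)   # hyperspectral branch (31 bands, toy)
enc_ms = DecoupleEncoder(in_ch=4)    # multispectral branch (4 bands, toy)
c_hs, s_hs = enc_hs(torch.rand(1, 31, 16, 16))
c_ms, s_ms = enc_ms(torch.rand(1, 4, 16, 16))
consistency = F.l1_loss(c_hs, c_ms)  # pull the common components together
print(c_hs.shape, float(consistency))
```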
- MESSFN: a Multi-level and Enhanced Spectral-Spatial Fusion Network for Pan-sharpening [17.129956512200454]
We propose a Multi-level and Enhanced Spectral-Spatial Fusion Network (MESSFN) with the following innovations.
A novel Spectral-Spatial stream is established to hierarchically derive and fuse the multi-level prior spectral and spatial expertise from the MS stream and the PAN stream.
Experiments on two datasets demonstrate that the network is competitive with or better than state-of-the-art methods.
arXiv Detail & Related papers (2021-09-21T03:38:52Z)
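The MESSFN entry above fuses MS and PAN features hierarchically across several levels. The sketch below shows a minimal multi-level two-stream fusion with a residual connection back to the upsampled MS input; depths, widths, and the fusion rule are assumptions, not the paper's architecture.

```python
# Minimal sketch of hierarchical two-stream fusion for pan-sharpening: MS and
# PAN streams are encoded at several levels and fused level by level. Layer
# counts, widths, and the fusion rule are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiLevelFusion(nn.Module):
    def __init__(self, ms_ch=4, pan_ch=1, feat=32, levels=3):
        super().__init__()
        self.ms_layers, self.pan_layers, self.fuse_layers = (
            nn.ModuleList(), nn.ModuleList(), nn.ModuleList())
        for i in range(levels):
            self.ms_layers.append(nn.Conv2d(ms_ch if i == 0 else feat, feat, 3, padding=1))
            self.pan_layers.append(nn.Conv2d(pan_ch if i == 0 else feat, feat, 3, padding=1))
            self.fuse_layers.append(nn.Conv2d(2 * feat, feat, 1))
        self.head = nn.Conv2d(feat, ms_ch, 3, padding=1)

    def forward(self, ms_up, pan):          # ms_up: MS image upsampled to PAN size
        fused, m, p = None, ms_up, pan
        for ml, pl, fl in zip(self.ms_layers, self.pan_layers, self.fuse_layers):
            m, p = torch.relu(ml(m)), torch.relu(pl(p))
            level = fl(torch.cat([m, p], dim=1))
            fused = level if fused is None else fused + level   # accumulate levels
        return self.head(fused) + ms_up                         # residual to the MS input

net = MultiLevelFusion()
out = net(torch.rand(1, 4, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 4, 64, 64])
```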
This list is automatically generated from the titles and abstracts of the papers in this site.