Unsupervised Hyperspectral Pansharpening via Low-rank Diffusion Model
- URL: http://arxiv.org/abs/2305.10925v2
- Date: Sun, 19 Nov 2023 14:09:45 GMT
- Title: Unsupervised Hyperspectral Pansharpening via Low-rank Diffusion Model
- Authors: Xiangyu Rui, Xiangyong Cao, Li Pang, Zeyu Zhu, Zongsheng Yue, and Deyu
Meng
- Abstract summary: Hyperspectral pansharpening is a process of merging a high-resolution panchromatic (PAN) image and a low-resolution hyperspectral (LRHS) image to create a single high-resolution hyperspectral (HRHS) image.
Existing Bayesian-based HS pansharpening methods require designing handcrafted image priors to characterize the image features.
We propose a low-rank diffusion model for hyperspectral pansharpening that simultaneously leverages the power of a pre-trained deep diffusion model and the better generalization ability of Bayesian methods.
- Score: 43.71116483554516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral pansharpening is a process of merging a high-resolution
panchromatic (PAN) image and a low-resolution hyperspectral (LRHS) image to
create a single high-resolution hyperspectral (HRHS) image. Existing
Bayesian-based HS pansharpening methods require designing handcrafted image priors
to characterize the image features, and deep learning-based HS pansharpening
methods usually require a large number of paired training data and suffer from
poor generalization ability. To address these issues, in this work, we propose
a low-rank diffusion model for hyperspectral pansharpening by simultaneously
leveraging the power of the pre-trained deep diffusion model and the better
generalization ability of Bayesian methods. Specifically, we assume that the
HRHS image can be recovered from the product of two low-rank tensors, i.e., the
base tensor and the coefficient matrix. The base tensor lies on the image field
and has a low spectral dimension. Thus, we can conveniently utilize a
pre-trained remote sensing diffusion model to capture its image structures.
Additionally, we derive a simple yet quite effective way to pre-estimate the
coefficient matrix from the observed LRHS image, which preserves the spectral
information of the HRHS. Experimental results demonstrate that the proposed
method performs better than several popular traditional approaches and achieves
better generalization than some DL-based methods. The code is released
at https://github.com/xyrui/PLRDiff.
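The low-rank decomposition described in the abstract can be sketched as follows. This is an illustrative toy example, not the authors' released code: the helper names, the use of a truncated SVD of the spectral unfolding to pre-estimate the coefficient matrix, and the random stand-in data are all assumptions. It only shows the algebra of writing the HRHS image X (H x W x S bands) as the mode-3 product of a base tensor B (H x W x k, with small spectral dimension k) and a coefficient matrix E (S x k).

```python
import numpy as np

def estimate_coefficients(lrhs, k):
    """Pre-estimate the S x k coefficient matrix from the observed LRHS image
    via a truncated SVD of its mode-3 (spectral) unfolding."""
    h, w, s = lrhs.shape
    unfolded = lrhs.reshape(h * w, s).T          # mode-3 unfolding: S x (h*w)
    u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
    return u[:, :k]                              # leading spectral subspace

def reconstruct_hrhs(base, coeff):
    """Mode-3 product: combine the base tensor (H x W x k) with the
    coefficient matrix (S x k) to get the HRHS image (H x W x S)."""
    return np.einsum('hwk,sk->hws', base, coeff)

# Toy data standing in for real hyperspectral imagery.
rng = np.random.default_rng(0)
lrhs = rng.random((16, 16, 31))                  # low-res HS: 16x16, 31 bands
E = estimate_coefficients(lrhs, k=5)             # 31 x 5 coefficient matrix
base = rng.random((64, 64, 5))                   # high-res base tensor; in the
                                                 # paper this is sampled with a
                                                 # pre-trained diffusion prior
hrhs = reconstruct_hrhs(base, E)
print(hrhs.shape)                                # (64, 64, 31)
```

Because the SVD yields orthonormal columns, E captures the dominant spectral subspace of the LRHS image, which is why this pre-estimation preserves the spectral information of the HRHS.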
Related papers
- Multi-Head Attention Residual Unfolded Network for Model-Based Pansharpening [2.874893537471256]
Unfolding fusion methods integrate the powerful representation capabilities of deep learning with the robustness of model-based approaches.
In this paper, we propose a model-based deep unfolded method for satellite image fusion.
Experimental results on PRISMA, Quickbird, and WorldView2 datasets demonstrate the superior performance of our method.
arXiv Detail & Related papers (2024-09-04T13:05:00Z)
- Variational Zero-shot Multispectral Pansharpening [43.881891055500496]
Pansharpening aims to generate a high spatial resolution multispectral image (HRMS) by fusing a low spatial resolution multispectral image (LRMS) and a panchromatic image (PAN).
Existing deep learning-based methods are unsuitable since they rely on many training pairs.
We propose a zero-shot pansharpening method by introducing a neural network into the optimization objective.
arXiv Detail & Related papers (2024-07-09T07:59:34Z)
- CrossDiff: Exploring Self-Supervised Representation of Pansharpening via Cross-Predictive Diffusion Model [42.39485365164292]
Fusion of a panchromatic (PAN) image and corresponding multispectral (MS) image is also known as pansharpening.
Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution.
We propose to explore the self-supervised representation of pansharpening by designing a cross-predictive diffusion model, named CrossDiff.
arXiv Detail & Related papers (2024-01-10T13:32:47Z)
- Band-wise Hyperspectral Image Pansharpening using CNN Model Propagation [4.246657212475299]
We propose a new deep learning method for hyperspectral pansharpening.
It inherits a simple single-band unsupervised pansharpening model nested in a sequential band-wise adaptive scheme.
The proposed method achieves very good results on our datasets, outperforming both traditional and deep learning reference methods.
arXiv Detail & Related papers (2023-11-11T08:53:54Z)
- Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
- Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction [64.10636296274168]
Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution.
Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets).
We propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers.
arXiv Detail & Related papers (2021-07-06T14:11:03Z)
- PGMAN: An Unsupervised Generative Multi-adversarial Network for Pan-sharpening [46.84573725116611]
We propose an unsupervised framework that learns directly from the full-resolution images without any preprocessing.
We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual-discriminator to preserve the spectral and spatial information of the inputs when performing fusion.
arXiv Detail & Related papers (2020-12-16T16:21:03Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
- PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models [77.32079593577821]
PULSE (Photo Upsampling via Latent Space Exploration) generates high-resolution, realistic images at resolutions previously unseen in the literature.
Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.
arXiv Detail & Related papers (2020-03-08T16:44:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.