Physics-Inspired Degradation Models for Hyperspectral Image Fusion
- URL: http://arxiv.org/abs/2402.02411v1
- Date: Sun, 4 Feb 2024 09:07:28 GMT
- Title: Physics-Inspired Degradation Models for Hyperspectral Image Fusion
- Authors: Jie Lian, Lizhi Wang, Lin Zhu, Renwei Dian, Zhiwei Xiong, and Hua Huang
- Abstract summary: Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
- Score: 61.743696362028246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fusion of a low-spatial-resolution hyperspectral image (LR-HSI) with a
high-spatial-resolution multispectral image (HR-MSI) has garnered increasing
research interest. However, most fusion methods solely focus on the fusion
algorithm itself and overlook the degradation models, which results in
unsatisfactory performance in practical scenarios. To fill this gap, we propose
physics-inspired degradation models (PIDM) to model the degradation of LR-HSI
and HR-MSI, which comprises a spatial degradation network (SpaDN) and a
spectral degradation network (SpeDN). SpaDN and SpeDN are designed based on two
insights. First, we employ spatial warping and spectral modulation operations
to simulate lens aberrations, thereby introducing non-uniformity into the
spatial and spectral degradation processes. Second, we utilize asymmetric
downsampling and parallel downsampling operations to separately reduce the
spatial and spectral resolutions of the images, thereby ensuring that the
spatial and spectral degradation processes match the physical characteristics
of real imaging systems. Once SpaDN and SpeDN are established, we adopt a
self-supervised training strategy to optimize the network parameters and
provide a plug-and-play solution for fusion methods. Comprehensive experiments
demonstrate that our proposed PIDM can boost the fusion performance of existing
fusion methods in practical scenarios.
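SpaDN and SpeDN are learned networks, but the degradation pipeline they refine can be illustrated with the classic fixed operators they generalize: a uniform blur plus downsampling for the spatial path, and a spectral response matrix for the spectral path. The sketch below is illustrative only, not the paper's implementation; all shapes and parameter names are assumptions.

```python
import numpy as np

def spatial_degrade(hr_hsi, scale=4, ksize=5, sigma=1.0):
    """Uniform Gaussian blur + downsampling: the classic spatial model
    that a learned SpaDN would replace with non-uniform warping."""
    bands, h, w = hr_hsi.shape
    ax = np.arange(ksize) - ksize // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel = np.outer(g, g)
    kernel /= kernel.sum()           # normalize so intensities are preserved
    pad = ksize // 2
    padded = np.pad(hr_hsi, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    blurred = np.zeros_like(hr_hsi)
    for i in range(ksize):           # shift-and-add 2D convolution per band
        for j in range(ksize):
            blurred += kernel[i, j] * padded[:, i:i + h, j:j + w]
    return blurred[:, ::scale, ::scale]   # (bands, h//scale, w//scale)

def spectral_degrade(hr_hsi, srf):
    """Project hyperspectral bands to multispectral bands with a
    spectral response matrix srf of shape (msi_bands, hsi_bands)."""
    bands, h, w = hr_hsi.shape
    return (srf @ hr_hsi.reshape(bands, -1)).reshape(srf.shape[0], h, w)
```

A self-supervised strategy in this setting would fit the degradation operators so that degrading the fused estimate reproduces the observed LR-HSI and HR-MSI, which is what makes the approach plug-and-play for existing fusion methods.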
Related papers
- Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning [15.86617273658407]
We propose an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial-spectral manifold learning (DTDNML).
We show that this method enhances the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets.
arXiv Detail & Related papers (2024-09-15T08:58:26Z)
- Empowering Snapshot Compressive Imaging: Spatial-Spectral State Space Model with Across-Scanning and Local Enhancement [51.557804095896174]
We introduce a State Space Model with Across-Scanning and Local Enhancement, named ASLE-SSM, that employs a Spatial-Spectral SSM for balanced global-local context encoding and promotes cross-channel interaction.
Experimental results illustrate ASLE-SSM's superiority over existing state-of-the-art methods, with an inference speed 2.4 times faster than the Transformer-based MST while saving 0.12 M parameters.
arXiv Detail & Related papers (2024-08-01T15:14:10Z)
- A Spectral Diffusion Prior for Hyperspectral Image Super-Resolution [14.405562058304074]
Fusion-based hyperspectral image (HSI) super-resolution aims to produce a high-spatial-resolution HSI by fusing a low-spatial-resolution HSI and a high-spatial-resolution multispectral image.
Motivated by the success of diffusion models, we propose a novel spectral diffusion prior for fusion-based HSI super-resolution.
arXiv Detail & Related papers (2023-11-15T13:40:58Z)
- SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and unseen spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
arXiv Detail & Related papers (2023-09-30T15:23:30Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-Resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA-attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model [18.915369996829984]
We propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus.
Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model compared with other advanced deep-learning-based fusion methods.
arXiv Detail & Related papers (2023-07-07T07:08:52Z)
- Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency [21.233354336608205]
We propose an unsupervised HSI and MSI fusion model based on the cycle consistency, called CycFusion.
CycFusion learns the domain transformation between low-spatial-resolution HSI (LrHSI) and high-spatial-resolution MSI (HrMSI).
Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods.
arXiv Detail & Related papers (2023-07-07T06:47:15Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous, fine pixel-level features.
On the other hand, frequency-domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency-domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- A Latent Encoder Coupled Generative Adversarial Network (LE-GAN) for Efficient Hyperspectral Image Super-resolution [3.1023808510465627]
The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution.
To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN).
LE-GAN maps the generated spectral-spatial features from the image space to the latent space and produces a coupling component to regularize the generated samples.
arXiv Detail & Related papers (2021-11-16T18:40:19Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on several hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.