Unsupervised Hyperspectral and Multispectral Images Fusion Based on the
Cycle Consistency
- URL: http://arxiv.org/abs/2307.03413v1
- Date: Fri, 7 Jul 2023 06:47:15 GMT
- Title: Unsupervised Hyperspectral and Multispectral Images Fusion Based on the
Cycle Consistency
- Authors: Shuaikai Shi, Lijun Zhang, Yoann Altmann, Jie Chen
- Abstract summary: We propose an unsupervised HSI and MSI fusion model based on the cycle consistency, called CycFusion.
The CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI).
Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods.
- Score: 21.233354336608205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral images (HSI), whose abundant spectral information reflects material properties, usually have low spatial resolution due to hardware limits. Meanwhile, multispectral images (MSI), e.g., RGB images, have high spatial resolution but deficient spectral signatures. Hyperspectral and multispectral image fusion is therefore a cost-effective and efficient way to acquire images with both high spatial resolution and high spectral resolution. Many conventional HSI and MSI fusion algorithms rely on known spatial degradation parameters, i.e., the point spread function (PSF), known spectral degradation parameters, i.e., the spectral response function (SRF), or both. Another class of deep learning-based models relies on ground-truth high spatial resolution HSI and needs large amounts of paired training images when working in a supervised manner. Both kinds of models are limited in practical fusion scenarios. In this
paper, we propose an unsupervised HSI and MSI fusion model based on the cycle
consistency, called CycFusion. The CycFusion learns the domain transformation
between low spatial resolution HSI (LrHSI) and high spatial resolution MSI
(HrMSI), and the desired high spatial resolution HSI (HrHSI) is considered to be an intermediate feature map in the transformation networks. The CycFusion can be trained with objective functions that enforce marginal matching in single transforms and cycle consistency in double transforms. Moreover, the estimated PSF and SRF are embedded in the model as pre-training weights, which further enhances its practicality. Experiments conducted
on several datasets show that our proposed model outperforms all compared
unsupervised fusion methods. The code for this paper will be available at https://github.com/shuaikaishi/CycFusion for reproducibility.
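The abstract describes two fixed degradation operators (spatial via the PSF, spectral via the SRF) and an objective in which the estimated HrHSI must reproduce both observations. Below is a minimal NumPy sketch of that idea; the actual CycFusion transforms are learned networks, and all function names and shapes here are illustrative assumptions, not the authors' code.

```python
import numpy as np

def spatial_degrade(hrhsi, psf, ratio):
    """LrHSI model: blur each band of HrHSI with the PSF, then subsample by `ratio`."""
    H, W, B = hrhsi.shape
    k = psf.shape[0]
    pad = k // 2
    padded = np.pad(hrhsi, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.empty_like(hrhsi)
    for i in range(H):
        for j in range(W):
            # Weighted sum of a k x k spatial patch, applied to every band at once.
            patch = padded[i:i + k, j:j + k, :]
            blurred[i, j, :] = np.tensordot(psf, patch, axes=([0, 1], [0, 1]))
    return blurred[::ratio, ::ratio, :]

def spectral_degrade(hrhsi, srf):
    """HrMSI model: project the B HSI bands onto C MSI channels with the SRF (B x C)."""
    return hrhsi @ srf

def fusion_loss(lrhsi, hrmsi, hrhsi_est, psf, srf, ratio):
    """The estimated HrHSI should degrade back to both observed images."""
    spatial_term = np.mean((spatial_degrade(hrhsi_est, psf, ratio) - lrhsi) ** 2)
    spectral_term = np.mean((spectral_degrade(hrhsi_est, srf) - hrmsi) ** 2)
    return spatial_term + spectral_term
```

With a simulated HrHSI cube, degrading it yields a consistent (LrHSI, HrMSI) pair, and the loss is zero exactly when the estimate matches the ground truth under both degradations; in CycFusion these operators are initialized from the estimated PSF and SRF as pre-training weights.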
Related papers
- Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning [15.86617273658407]
We propose an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial spectral manifold learning (DTDNML)
We show that this method enhances the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets.
arXiv Detail & Related papers (2024-09-15T08:58:26Z)
- Empowering Snapshot Compressive Imaging: Spatial-Spectral State Space Model with Across-Scanning and Local Enhancement [51.557804095896174]
We introduce a State Space Model with Across-Scanning and Local Enhancement, named ASLE-SSM, that employs a Spatial-Spectral SSM for globally and locally balanced context encoding and promotes cross-channel interaction.
Experimental results illustrate ASLE-SSM's superiority over existing state-of-the-art methods, with an inference speed 2.4 times faster than the Transformer-based MST while saving 0.12 M parameters.
arXiv Detail & Related papers (2024-08-01T15:14:10Z)
- HyperSIGMA: Hyperspectral Intelligence Comprehension Foundation Model [88.13261547704444]
HyperSIGMA is a vision transformer-based foundation model for HSI interpretation.
It integrates spatial and spectral features using a specially designed spectral enhancement module.
It shows significant advantages in scalability, robustness, cross-modal transferring capability, and real-world applicability.
arXiv Detail & Related papers (2024-06-17T13:22:58Z)
- Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
arXiv Detail & Related papers (2024-02-04T09:07:28Z)
- SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
arXiv Detail & Related papers (2023-09-30T15:23:30Z)
- Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model [18.915369996829984]
We propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus.
Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model when compared with other advanced deep learning-based fusion methods.
arXiv Detail & Related papers (2023-07-07T07:08:52Z)
- Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution [6.3814314790000415]
This paper proposes a novel HSI super-resolution algorithm, termed dual-domain network based on hybrid convolution (SRDNet).
To capture inter-spectral self-similarity, a self-attention learning mechanism (HSL) is devised in the spatial domain.
To further improve the perceptual quality of HSI, a frequency loss (HFL) is introduced to optimize the model in the frequency domain.
arXiv Detail & Related papers (2023-04-10T13:51:28Z)
- Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net, enforcing material consistency to enhance the detailed appearance of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- A Latent Encoder Coupled Generative Adversarial Network (LE-GAN) for Efficient Hyperspectral Image Super-resolution [3.1023808510465627]
The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution.
To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN).
LE-GAN can map the generated spectral-spatial features from the image space to the latent space and produce a coupling component to regularise the generated samples.
arXiv Detail & Related papers (2021-11-16T18:40:19Z)
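Several entries above (e.g., the DTDNML paper) cast fusion in terms of Tucker decomposition. As a minimal, hypothetical NumPy sketch (assumed shapes and names, not the authors' code): the LrHSI and HrMSI observations can be viewed as the same Tucker core combined with spatially subsampled or SRF-projected factor matrices, which is what makes the shared-core formulation useful for blind fusion.

```python
import numpy as np

def tucker_reconstruct(core, U_h, U_w, U_s):
    """Rebuild an (H, W, B) image cube from a Tucker core and three mode factors."""
    return np.einsum("abc,ha,wb,sc->hws", core, U_h, U_w, U_s)

def degraded_views(core, U_h, U_w, U_s, ratio, srf):
    """Shared-core view of fusion: both observations come from one core.

    LrHSI uses spatially subsampled factors; HrMSI uses an SRF-projected
    spectral factor (srf is a B x C band-to-channel response matrix).
    """
    lrhsi = tucker_reconstruct(core, U_h[::ratio], U_w[::ratio], U_s)
    hrmsi = tucker_reconstruct(core, U_h, U_w, srf.T @ U_s)
    return lrhsi, hrmsi
```

Because the degradations act on the factor matrices alone, degrading the factors and then reconstructing gives exactly the same result as degrading the full HrHSI cube, so the core and factors can be estimated jointly from the two observations.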
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.