HSRMamba: Contextual Spatial-Spectral State Space Model for Single Hyperspectral Super-Resolution
- URL: http://arxiv.org/abs/2501.18500v1
- Date: Thu, 30 Jan 2025 17:10:53 GMT
- Title: HSRMamba: Contextual Spatial-Spectral State Space Model for Single Hyperspectral Super-Resolution
- Authors: Shi Chen, Lefei Zhang, Liangpei Zhang
- Abstract summary: Mamba has demonstrated exceptional performance in visual tasks due to its powerful global modeling capabilities and linear computational complexity.
In HSISR, however, Mamba faces challenges because transforming images into 1D sequences neglects the spatial-spectral structural relationships between locally adjacent pixels.
We propose HSRMamba, a contextual spatial-spectral modeling state space model for HSISR, to address these issues both locally and globally.
- Score: 41.93421212397078
- Abstract: Mamba has demonstrated exceptional performance in visual tasks due to its powerful global modeling capabilities and linear computational complexity, offering considerable potential in hyperspectral image super-resolution (HSISR). However, in HSISR, Mamba faces challenges as transforming images into 1D sequences neglects the spatial-spectral structural relationships between locally adjacent pixels, and its performance is highly sensitive to input order, which affects the restoration of both spatial and spectral details. In this paper, we propose HSRMamba, a contextual spatial-spectral modeling state space model for HSISR, to address these issues both locally and globally. Specifically, a local spatial-spectral partitioning mechanism is designed to establish patch-wise causal relationships among adjacent pixels in 3D features, mitigating the local forgetting issue. Furthermore, a global spectral reordering strategy based on spectral similarity is employed to enhance the causal representation of similar pixels across both spatial and spectral dimensions. Finally, experimental results demonstrate our HSRMamba outperforms the state-of-the-art methods in quantitative quality and visual results. Code will be available soon.
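The authors' code is not yet available; as a rough illustration of the two ideas in the abstract, the sketch below partitions a 3D spatial-spectral feature into small spatial patches for causal scanning and reorders spectral bands by similarity. The function names, tensor shapes, and the cosine-similarity criterion are our assumptions, not the paper's implementation.

```python
# Hypothetical sketch only: patch-wise local partitioning and similarity-based
# spectral reordering for a (B, C, H, W) spatial-spectral feature.
import torch
import torch.nn.functional as F


def local_partition(feat: torch.Tensor, p: int) -> torch.Tensor:
    """Split features into non-overlapping p x p patches and flatten each patch,
    so spatially adjacent pixels stay adjacent in the 1D scan order."""
    b, c, h, w = feat.shape
    assert h % p == 0 and w % p == 0, "H and W must be divisible by p"
    x = feat.reshape(b, c, h // p, p, w // p, p)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(b, (h // p) * (w // p), p * p, c)


def spectral_reorder(feat: torch.Tensor):
    """Sort the C bands by cosine similarity to the mean band so that spectrally
    similar bands become neighbours in the scan order (assumed criterion)."""
    bands = feat.flatten(2)                           # (B, C, H*W)
    sim = F.cosine_similarity(bands, bands.mean(dim=1, keepdim=True), dim=-1)
    order = sim.argsort(dim=1, descending=True)       # (B, C)
    return torch.gather(feat, 1, order[:, :, None, None].expand_as(feat)), order


if __name__ == "__main__":
    x = torch.randn(2, 31, 64, 64)                    # toy 31-band feature
    print(local_partition(x, p=8).shape)              # (2, 64, 64, 31)
    print(spectral_reorder(x)[0].shape)               # (2, 31, 64, 64)
```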
Related papers
- S$^2$Mamba: A Spatial-spectral State Space Model for Hyperspectral Image Classification [44.99672241508994]
Land cover analysis using hyperspectral images (HSI) remains an open problem due to their low spatial resolution and complex spectral information.
We propose S$^2$Mamba, a spatial-spectral state space model for hyperspectral image classification, to excavate spatial-spectral contextual features.
arXiv Detail & Related papers (2024-04-28T15:12:56Z)
- Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
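For context, the sketch below shows the generic degradation model that LR-HSI/HR-MSI fusion methods build on: the LR-HSI as a blurred, subsampled view of the latent HR-HSI, and the HR-MSI as a spectrally downsampled view. The learnable, physics-inspired parameterization that PIDM adds on top is not reproduced here; the kernel, spectral response matrix, and shapes are assumptions.

```python
# Generic LR-HSI / HR-MSI degradation sketch (not PIDM's learned version):
# spatial blur + subsampling produces the LR-HSI, a spectral response matrix
# produces the HR-MSI.
import torch
import torch.nn.functional as F


def degrade_spatial(hr_hsi: torch.Tensor, blur: torch.Tensor, scale: int) -> torch.Tensor:
    """hr_hsi: (B, C, H, W); blur: (k, k) kernel shared across all bands."""
    c = hr_hsi.shape[1]
    kernel = blur[None, None].repeat(c, 1, 1, 1)            # depthwise (C, 1, k, k)
    blurred = F.conv2d(hr_hsi, kernel, padding=blur.shape[-1] // 2, groups=c)
    return blurred[:, :, ::scale, ::scale]                   # LR-HSI


def degrade_spectral(hr_hsi: torch.Tensor, srf: torch.Tensor) -> torch.Tensor:
    """srf: (M, C) spectral response mapping C bands to M multispectral bands."""
    return torch.einsum("mc,bchw->bmhw", srf, hr_hsi)        # HR-MSI


if __name__ == "__main__":
    hr = torch.rand(1, 31, 128, 128)
    blur = torch.ones(5, 5) / 25.0                           # box blur placeholder
    srf = torch.rand(3, 31)
    srf = srf / srf.sum(dim=1, keepdim=True)                 # rows sum to 1
    print(degrade_spatial(hr, blur, scale=4).shape, degrade_spectral(hr, srf).shape)
```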
arXiv Detail & Related papers (2024-02-04T09:07:28Z)
- Cross-Scope Spatial-Spectral Information Aggregation for Hyperspectral Image Super-Resolution [47.12985199570964]
We propose a novel cross-scope spatial-spectral Transformer (CST) to investigate long-range spatial and spectral similarities for single hyperspectral image super-resolution.
Specifically, we devise cross-attention mechanisms in spatial and spectral dimensions to comprehensively model the long-range spatial-spectral characteristics.
Experiments over three hyperspectral datasets demonstrate that the proposed CST is superior to other state-of-the-art methods both quantitatively and visually.
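As a hedged illustration of attention along the spectral dimension, one common building block of such spatial-spectral Transformers, the sketch below lets every band attend to every other band. It is a generic transposed-attention-style module, not the paper's cross-scope cross-attention.

```python
# Generic band-to-band ("transposed") attention sketch; every spectral band
# attends to every other band.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpectralAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.qkv = nn.Conv2d(channels, channels * 3, kernel_size=1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)     # each (B, C, H*W)
        q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
        attn = (q @ k.transpose(1, 2)).softmax(dim=-1)        # (B, C, C)
        out = (attn @ v).reshape(b, c, h, w)
        return self.proj(out) + x                              # residual connection


if __name__ == "__main__":
    x = torch.randn(1, 31, 32, 32)
    print(SpectralAttention(31)(x).shape)                      # (1, 31, 32, 32)
```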
arXiv Detail & Related papers (2023-11-29T03:38:56Z)
- SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
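A minimal sketch of the interface such an implicit model exposes: a small MLP maps a continuous spatial coordinate plus a continuous wavelength to an intensity value. The real SSIF also conditions on encoder features; the layer sizes below are placeholders.

```python
# Query-side sketch of a spatial-spectral implicit function: an MLP maps
# (x, y, wavelength) to an intensity value.
import torch
import torch.nn as nn


class CoordinateSpectralField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy: torch.Tensor, wavelength: torch.Tensor) -> torch.Tensor:
        # xy: (N, 2) spatial coordinates in [-1, 1]; wavelength: (N, 1), normalized
        return self.net(torch.cat([xy, wavelength], dim=-1))


if __name__ == "__main__":
    xy = torch.rand(1024, 2) * 2 - 1
    lam = torch.rand(1024, 1)
    print(CoordinateSpectralField()(xy, lam).shape)            # (1024, 1)
```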
arXiv Detail & Related papers (2023-09-30T15:23:30Z)
- Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency [21.233354336608205]
We propose CycFusion, an unsupervised HSI and MSI fusion model based on cycle consistency.
CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI).
Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods.
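The cycle-consistency idea can be sketched as two mappings between the LrHSI and HrMSI domains that should approximately invert each other; the toy transforms, band counts, and scale factor below are placeholders, and the paper's full objective contains more terms.

```python
# Toy cycle-consistency sketch between the LrHSI and HrMSI domains.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyLr2Hr(nn.Module):
    """Placeholder LrHSI -> HrMSI transform (31 bands -> 3 bands, 4x upsampling)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(31, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(F.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False))


class ToyHr2Lr(nn.Module):
    """Placeholder HrMSI -> LrHSI transform (3 bands -> 31 bands, 4x downsampling)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 31, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(F.interpolate(x, scale_factor=0.25, mode="bilinear", align_corners=False))


def cycle_loss(g, f, lr_hsi, hr_msi):
    """Each domain should be recovered after a round trip through both transforms."""
    l1 = nn.L1Loss()
    return l1(f(g(lr_hsi)), lr_hsi) + l1(g(f(hr_msi)), hr_msi)


if __name__ == "__main__":
    lr_hsi, hr_msi = torch.rand(1, 31, 16, 16), torch.rand(1, 3, 64, 64)
    print(cycle_loss(ToyLr2Hr(), ToyHr2Lr(), lr_hsi, hr_msi).item())
```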
arXiv Detail & Related papers (2023-07-07T06:47:15Z)
- Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising [64.11157141177208]
We propose a spectral enhanced rectangle Transformer to model the spatial and spectral correlation in hyperspectral images.
For the former, we exploit rectangle self-attention horizontally and vertically to capture non-local similarity in the spatial domain.
For the latter, we design a spectral enhancement module capable of extracting the global underlying low-rank property of spatial-spectral cubes to suppress noise.
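As a hedged illustration of that low-rank spectral prior, the snippet below unfolds a hyperspectral cube into a bands-by-pixels matrix and truncates its SVD to suppress band-wise noise; the paper's spectral enhancement module learns such a projection rather than computing a plain SVD.

```python
# Plain truncated-SVD illustration of the low-rank spectral prior.
import torch


def low_rank_denoise(cube: torch.Tensor, rank: int) -> torch.Tensor:
    """cube: (C, H, W) noisy hyperspectral cube; keep the top-`rank` components
    of the bands-by-pixels unfolding."""
    c, h, w = cube.shape
    u, s, vh = torch.linalg.svd(cube.reshape(c, h * w), full_matrices=False)
    return ((u[:, :rank] * s[:rank]) @ vh[:rank]).reshape(c, h, w)


if __name__ == "__main__":
    basis = torch.rand(31, 3)                                  # 3 spectral endmembers
    abundance = torch.rand(3, 64 * 64)
    clean = (basis @ abundance).reshape(31, 64, 64)            # rank-3 cube
    noisy = clean + 0.05 * torch.randn_like(clean)
    denoised = low_rank_denoise(noisy, rank=3)
    print((noisy - clean).abs().mean().item(), (denoised - clean).abs().mean().item())
```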
arXiv Detail & Related papers (2023-04-03T09:42:13Z)
- Implicit Neural Representation Learning for Hyperspectral Image Super-Resolution [0.0]
Implicit Neural Representations (INRs) are making strides as a novel and effective representation.
We propose a novel HSI reconstruction model based on INR which represents HSI by a continuous function mapping a spatial coordinate to its corresponding spectral radiance values.
arXiv Detail & Related papers (2021-12-20T14:07:54Z)
- A Latent Encoder Coupled Generative Adversarial Network (LE-GAN) for Efficient Hyperspectral Image Super-resolution [3.1023808510465627]
The generative adversarial network (GAN) has proven to be an effective deep learning framework for image super-resolution.
To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN).
LE-GAN can map the generated spectral-spatial features from the image space to the latent space and produce a coupling component to regularise the generated samples.
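A hedged sketch of the latent-encoder coupling described above: an encoder maps a generated sample back into the latent space, and a penalty ties that code to the latent vector the generator was fed, a common recipe against mode collapse. The toy architectures and loss below are placeholders, not the LE-GAN design.

```python
# Toy latent-encoder coupling sketch: penalize the distance between the latent
# code recovered from a generated sample and the code the generator was fed.
import torch
import torch.nn as nn


class ToyGenerator(nn.Module):
    def __init__(self, z_dim: int = 64, bands: int = 31):
        super().__init__()
        self.bands = bands
        self.fc = nn.Linear(z_dim, bands * 8 * 8)

    def forward(self, z):
        return self.fc(z).reshape(-1, self.bands, 8, 8)


class LatentEncoder(nn.Module):
    def __init__(self, z_dim: int = 64, bands: int = 31):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(bands * 8 * 8, z_dim))

    def forward(self, x):
        return self.net(x)


def coupling_loss(g, e, z):
    return nn.functional.mse_loss(e(g(z)), z)


if __name__ == "__main__":
    g, e = ToyGenerator(), LatentEncoder()
    z = torch.randn(4, 64)
    print(coupling_loss(g, e, z).item())
```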
arXiv Detail & Related papers (2021-11-16T18:40:19Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to single hyperspectral image super-resolution.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
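A very rough sketch of a spatial-spectral residual block in the spirit of the SSPN: a per-band spatial convolution followed by a 1x1 convolution that mixes the spectra, wrapped in a residual connection. The actual SSPSR design (band grouping, progressive upsampling) is richer, and the channel counts here are placeholders.

```python
# Rough spatial-spectral residual block sketch: per-band spatial convolution
# followed by a 1x1 spectral-mixing convolution, with a residual connection.
import torch
import torch.nn as nn


class SpatialSpectralBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.spectral = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.spectral(self.act(self.spatial(x)))


if __name__ == "__main__":
    x = torch.randn(1, 31, 32, 32)
    print(SpatialSpectralBlock(31)(x).shape)                   # (1, 31, 32, 32)
```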
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.