HSSDCT: Factorized Spatial-Spectral Correlation for Hyperspectral Image Fusion
- URL: http://arxiv.org/abs/2602.00490v1
- Date: Sat, 31 Jan 2026 03:24:03 GMT
- Title: HSSDCT: Factorized Spatial-Spectral Correlation for Hyperspectral Image Fusion
- Authors: Chia-Ming Lee, Yu-Hao Ho, Yu-Fan Lin, Jen-Wei Lee, Li-Wei Kang, Chih-Chung Hsu
- Abstract summary: Hyperspectral image (HSI) fusion aims to reconstruct a high-resolution HSI (HR-HSI) by combining the rich spectral information of a low-resolution HSI with the fine details of a high-resolution multispectral image (HR-MSI). Recent deep learning methods have achieved notable progress, but they still suffer from limited receptive fields, redundant spectral bands, and the quadratic complexity of self-attention. We propose the Hierarchical Spatial-Spectral Dense Correlation Network (HSSDCT) to overcome these challenges.
- Score: 11.994592153994482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral image (HSI) fusion aims to reconstruct a high-resolution HSI (HR-HSI) by combining the rich spectral information of a low-resolution HSI (LR-HSI) with the fine spatial details of a high-resolution multispectral image (HR-MSI). Although recent deep learning methods have achieved notable progress, they still suffer from limited receptive fields, redundant spectral bands, and the quadratic complexity of self-attention, which restrict both efficiency and robustness. To overcome these challenges, we propose the Hierarchical Spatial-Spectral Dense Correlation Network (HSSDCT). The framework introduces two key modules: (i) a Hierarchical Dense-Residue Transformer Block (HDRTB) that progressively enlarges windows and employs dense-residue connections for multi-scale feature aggregation, and (ii) a Spatial-Spectral Correlation Layer (SSCL) that explicitly factorizes spatial and spectral dependencies, reducing self-attention to linear complexity while mitigating spectral redundancy. Extensive experiments on benchmark datasets demonstrate that HSSDCT delivers superior reconstruction quality with significantly lower computational costs, achieving new state-of-the-art performance in HSI fusion. Our code is available at https://github.com/jemmyleee/HSSDCT.
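The abstract's complexity claim can be made concrete with a minimal NumPy sketch. This is only an illustration of the general factorization idea, not the paper's actual SSCL or HDRTB implementation (all function names here are hypothetical): attending over the C spectral bands costs O(N·C²), which is linear in the number of pixels N, and restricting spatial attention to fixed-size windows costs O(N·w·C), also linear in N, whereas full spatial self-attention would cost O(N²·C).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_attention(x):
    """Channel-wise (spectral) attention, linear in the number of pixels.

    x: (C, N) array -- C spectral bands, N = H*W spatial positions.
    The attention matrix is C x C, so the cost is O(N * C^2): linear
    in N, unlike spatial self-attention, whose N x N attention matrix
    costs O(N^2 * C).
    """
    C, N = x.shape
    attn = softmax((x @ x.T) / np.sqrt(N), axis=-1)  # (C, C)
    return attn @ x  # (C, N)

def windowed_spatial_attention(x, win):
    """Spatial attention restricted to non-overlapping 1-D windows of
    size `win`; cost O(N * win * C), also linear in N for a fixed
    window size."""
    C, N = x.shape
    out = np.empty_like(x)
    for s in range(0, N, win):
        blk = x[:, s:s + win]                                # (C, w)
        attn = softmax((blk.T @ blk) / np.sqrt(C), axis=-1)  # (w, w)
        out[:, s:s + win] = blk @ attn.T
    return out
```

In this reading, factorization means the two small attention passes (spatial within windows, spectral across bands) replace one joint attention over all N·C entries, which is how quadratic cost is avoided while spatial and spectral dependencies are still modeled explicitly.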
Related papers
- SR$^{2}$-Net: A General Plug-and-Play Model for Spectral Refinement in Hyperspectral Image Super-Resolution [3.4888894498274747]
HSI-SR aims to enhance spatial resolution while preserving spectrally faithful and physically plausible characteristics. Existing methods often neglect spectral consistency across bands, leading to spurious oscillations and physically implausible artifacts. We propose SR$^{2}$-Net, a lightweight, plug-and-play Spectral Rectification Super-Resolution Network guided by physical priors, to address this issue.
arXiv Detail & Related papers (2026-01-29T07:00:00Z)
- HyFusion: Enhanced Reception Field Transformer for Hyperspectral Image Fusion [8.701181531082781]
Hyperspectral image (HSI) fusion addresses the challenge of reconstructing High-Resolution HSIs (HR-HSIs) from High-Resolution Multispectral images (HR-MSIs) and Low-Resolution HSIs (LR-HSIs).
arXiv Detail & Related papers (2025-01-08T18:22:44Z)
- Unleashing Correlation and Continuity for Hyperspectral Reconstruction from RGB Images [64.80875911446937]
We propose a Correlation and Continuity Network (CCNet) for HSI reconstruction from RGB images. For the correlation of the local spectrum, we introduce the Group-wise Spectral Correlation Modeling (GrSCM) module. For the continuity of the global spectrum, we design the Neighborhood-wise Spectral Continuity Modeling (NeSCM) module.
arXiv Detail & Related papers (2025-01-02T15:14:40Z)
- CSAKD: Knowledge Distillation with Cross Self-Attention for Hyperspectral and Multispectral Image Fusion [9.3350274016294]
This paper introduces a novel knowledge distillation (KD) framework for HR-MSI/LR-HSI fusion to achieve SR of LR-HSI.
To fully exploit the spatial and spectral feature representations of LR-HSI and HR-MSI, we propose a novel Cross Self-Attention (CSA) fusion module.
Our experimental results demonstrate that the student model achieves comparable or superior LR-HSI SR performance.
arXiv Detail & Related papers (2024-06-28T05:25:57Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution [6.3814314790000415]
This paper proposes a novel HSI super-resolution algorithm, termed dual-domain network based on hybrid convolution (SRDNet).
To capture inter-spectral self-similarity, a self-attention learning mechanism (HSL) is devised in the spatial domain.
To further improve the perceptual quality of HSI, a frequency loss (HFL) is introduced to optimize the model in the frequency domain.
arXiv Detail & Related papers (2023-04-10T13:51:28Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- Cross-Attention in Coupled Unmixing Nets for Unsupervised Hyperspectral Super-Resolution [79.97180849505294]
We propose a novel coupled unmixing network with a cross-attention mechanism, CUCaNet, to enhance the spatial resolution of HSI.
Experiments are conducted on three widely-used HS-MS datasets in comparison with state-of-the-art HSI-SR models.
arXiv Detail & Related papers (2020-07-10T08:08:20Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel *lightweight* deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and *zero-centric* residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.