DCT-Mamba3D: Spectral Decorrelation and Spatial-Spectral Feature Extraction for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2502.01986v1
- Date: Tue, 04 Feb 2025 04:00:08 GMT
- Title: DCT-Mamba3D: Spectral Decorrelation and Spatial-Spectral Feature Extraction for Hyperspectral Image Classification
- Authors: Weijia Cao, Xiaofei Yang, Yicong Zhou, Zheng Zhang
- Abstract summary: Hyperspectral image classification presents challenges due to spectral redundancy and complex spatial-spectral dependencies.
This paper proposes a novel framework, DCT-Mamba3D, for hyperspectral image classification.
- Score: 38.538268270711534
- Abstract: Hyperspectral image classification presents challenges due to spectral redundancy and complex spatial-spectral dependencies. This paper proposes a novel framework, DCT-Mamba3D, for hyperspectral image classification. DCT-Mamba3D incorporates: (1) a 3D spectral-spatial decorrelation module that applies 3D discrete cosine transform basis functions to reduce both spectral and spatial redundancy, enhancing feature clarity across dimensions; (2) a 3D-Mamba module that leverages a bidirectional state-space model to capture intricate spatial-spectral dependencies; and (3) a global residual enhancement module that stabilizes feature representation, improving robustness and convergence. Extensive experiments on benchmark datasets show that our DCT-Mamba3D outperforms the state-of-the-art methods in challenging scenarios such as the same object in different spectra and different objects in the same spectra.
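The 3D spectral-spatial decorrelation idea from module (1) can be sketched with a standard 3D DCT. This is a minimal toy illustration, not the paper's module: the patch size, the retained low-frequency block, and the hard truncation mask are all assumptions for demonstration.

```python
# Hedged sketch: apply a 3D DCT to a small hyperspectral patch
# (H x W x Bands), keep only a low-frequency block, and invert.
# The truncation pattern here is an assumption, not the paper's design.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
patch = rng.standard_normal((8, 8, 32))   # toy H x W x spectral cube

coeffs = dctn(patch, norm="ortho")        # 3D DCT-II decorrelates all three axes
mask = np.zeros_like(coeffs)
mask[:4, :4, :16] = 1.0                   # retain a low-frequency block only
compressed = idctn(coeffs * mask, norm="ortho")

# Fraction of signal energy kept by the retained block
ratio = float(np.sum((coeffs * mask) ** 2) / np.sum(coeffs ** 2))
print(compressed.shape, round(ratio, 3))
```

With `norm="ortho"` the transform is exactly invertible, so the mask alone controls how much spectral and spatial redundancy is discarded.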
Related papers
- HSRMamba: Contextual Spatial-Spectral State Space Model for Single Hyperspectral Super-Resolution [41.93421212397078]
Mamba has demonstrated exceptional performance in visual tasks due to its powerful global modeling capabilities and linear computational complexity.
In HSISR, Mamba faces challenges, as transforming images into 1D sequences neglects the spatial-spectral structural relationships between locally adjacent pixels.
We propose HSRMamba, a contextual spatial-spectral modeling state space model for HSISR, to address these issues both locally and globally.
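The linear-time state-space scan that Mamba-style models rely on can be sketched as a simple recurrence, run forward and backward over the token sequence. This is a minimal toy, not HSRMamba's actual implementation; the scalar input, diagonal transition matrix, and additive fusion of the two directions are assumptions.

```python
# Hedged sketch of a bidirectional linear state-space scan:
# h_t = A h_{t-1} + B x_t,  y_t = C h_t, run in both directions and summed.
# A, B, C and the fusion rule are illustrative assumptions.
import numpy as np

def ssm_scan(x, A, B, C):
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:                      # sequential, linear-time recurrence
        h = A @ h + B * xt
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(2)
d = 4
A = 0.9 * np.eye(d)                   # stable state transition (assumption)
B = rng.standard_normal(d)
C = rng.standard_normal(d)
x = rng.standard_normal(16)           # flattened spatial-spectral tokens

y = ssm_scan(x, A, B, C) + ssm_scan(x[::-1], A, B, C)[::-1]  # bidirectional
print(y.shape)
```

The backward pass is what lets every output position see context from both sides of the flattened sequence, at still-linear cost in sequence length.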
arXiv Detail & Related papers (2025-01-30T17:10:53Z)
- DiffFormer: A Differential Spatial-Spectral Transformer for Hyperspectral Image Classification [3.271106943956333]
Hyperspectral image classification (HSIC) has gained significant attention because of its potential in analyzing high-dimensional data with rich spectral and spatial information.
We propose the Differential Spatial-Spectral Transformer (DiffFormer) to address the inherent challenges of HSIC, such as spectral redundancy and spatial discontinuity.
Experiments on benchmark hyperspectral datasets demonstrate the superiority of DiffFormer in terms of classification accuracy, computational efficiency, and generalizability.
arXiv Detail & Related papers (2024-12-23T07:21:41Z)
- 3DSS-Mamba: 3D-Spectral-Spatial Mamba for Hyperspectral Image Classification [14.341510793163138]
We propose a novel 3D-Spectral-Spatial Mamba framework for HSI classification.
A 3D-Spectral-Spatial Selective Scanning mechanism is introduced, which performs pixel-wise selective scanning on 3D hyperspectral tokens.
Experimental results and analysis demonstrate that the proposed method outperforms the state-of-the-art methods on HSI classification benchmarks.
arXiv Detail & Related papers (2024-05-21T04:10:26Z)
- Learning Exhaustive Correlation for Spectral Super-Resolution: Where Spatial-Spectral Attention Meets Linear Dependence [26.1694389791047]
Spectral super-resolution aims to recover a hyperspectral image (HSI) from an easily obtainable RGB image.
Two types of bottlenecks in existing Transformers limit performance improvement and practical applications.
We propose a novel Exhaustive Correlation Transformer (ECT) for spectral super-resolution.
arXiv Detail & Related papers (2023-12-20T08:30:07Z)
- Cross-Scope Spatial-Spectral Information Aggregation for Hyperspectral Image Super-Resolution [47.12985199570964]
We propose a novel cross-scope spatial-spectral Transformer (CST) to investigate long-range spatial and spectral similarities for single hyperspectral image super-resolution.
Specifically, we devise cross-attention mechanisms in spatial and spectral dimensions to comprehensively model the long-range spatial-spectral characteristics.
Experiments over three hyperspectral datasets demonstrate that the proposed CST is superior to other state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2023-11-29T03:38:56Z)
- Unsupervised Spectral Demosaicing with Lightweight Spectral Attention Networks [6.7433262627741914]
This paper presents a deep learning-based spectral demosaicing technique trained in an unsupervised manner.
The proposed method outperforms conventional unsupervised methods in terms of spatial distortion suppression, spectral fidelity, robustness, and computational cost.
arXiv Detail & Related papers (2023-07-05T02:45:44Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction [127.20208645280438]
Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement.
Modeling the inter-spectra interactions is beneficial for HSI reconstruction.
Mask-guided Spectral-wise Transformer (MST) proposes a novel framework for HSI reconstruction.
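The spectral-wise idea referred to above can be sketched as self-attention where each spectral band, rather than each pixel, is a token, so the attention map captures inter-band interactions. This is an illustrative single-head toy under assumed shapes, not MST's actual (multi-head, mask-guided) formulation.

```python
# Hedged sketch of spectral-wise self-attention: one token per band,
# so the attention matrix is (bands x bands) and models inter-spectra
# interactions. All shapes and projections are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
H, W, B, d = 8, 8, 16, 32                  # spatial size, bands, embed dim

tokens = rng.standard_normal((B, H * W))   # each band flattened to a token
Wq = rng.standard_normal((H * W, d))
Wk = rng.standard_normal((H * W, d))
Wv = rng.standard_normal((H * W, d))

Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
scores = Q @ K.T / np.sqrt(d)              # (B, B) band-to-band affinities
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
out = attn @ V                             # (B, d) inter-band mixed features
print(out.shape)
```

Because the token count equals the number of bands, attention cost scales with spectral depth rather than with the number of pixels.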
arXiv Detail & Related papers (2021-11-15T16:59:48Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to single hyperspectral image super-resolution.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
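A back-of-envelope parameter count shows why the separable factorization above is cheaper: a full k x k x k kernel is replaced by a 1 x k x k spatial stage plus a k x 1 x 1 spectral stage. The kernel size and channel counts below are illustrative assumptions, not SSRNet's configuration.

```python
# Hedged sketch: weight counts for a standard 3D conv vs a
# spatial/spectral separable pair (channel counts are assumptions).
k, c_in, c_out = 3, 64, 64

full_3d = c_in * c_out * k * k * k       # k x k x k kernel
separable = c_in * c_out * (k * k + k)   # 1 x k x k spatial + k x 1 x 1 spectral

print(full_3d, separable, round(full_3d / separable, 2))
# For k=3 the separable pair uses 12 weights per channel pair vs 27,
# a 2.25x reduction, and the gap widens with larger k.
```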
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.