WaveMamba: Spatial-Spectral Wavelet Mamba for Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2408.01231v1
- Date: Fri, 2 Aug 2024 12:44:07 GMT
- Title: WaveMamba: Spatial-Spectral Wavelet Mamba for Hyperspectral Image Classification
- Authors: Muhammad Ahmad, Muhammad Usama, Manuel Mazzara,
- Abstract summary: This paper introduces WaveMamba, a novel approach that integrates wavelet transformation with the Spatial-Spectral Mamba architecture to enhance HSI classification.
WaveMamba surpasses existing models, achieving an accuracy improvement of 4.5% on the University of Houston dataset and a 2.0% increase on the Pavia University dataset.
- Score: 1.2074785551319294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral Imaging (HSI) has proven to be a powerful tool for capturing detailed spectral and spatial information across diverse applications. Despite the advancements in Deep Learning (DL) and Transformer architectures for HSI Classification (HSIC), challenges such as computational efficiency and the need for extensive labeled data persist. This paper introduces WaveMamba, a novel approach that integrates wavelet transformation with the Spatial-Spectral Mamba architecture to enhance HSIC. WaveMamba captures both local texture patterns and global contextual relationships in an end-to-end trainable model. The wavelet-enhanced features are then processed through the state-space architecture to model spatial-spectral relationships and temporal dependencies. The experimental results indicate that WaveMamba surpasses existing models, achieving an accuracy improvement of 4.5% on the University of Houston dataset and a 2.0% increase on the Pavia University dataset. These findings validate its effectiveness in addressing the complex data interactions inherent in HSIs.
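As a rough, non-authoritative illustration of the pipeline the abstract describes (wavelet-enhanced features fed into a state-space sequence model), the sketch below pairs a single-level Haar wavelet decomposition with a GRU standing in for the Mamba block; all module names, layer sizes, and the GRU stand-in are assumptions rather than the authors' implementation.

```python
# Minimal sketch, not the authors' code: Haar wavelet features + a stand-in
# sequence model over spatial tokens of an HSI patch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarDWT(nn.Module):
    """Single-level 2D Haar wavelet transform applied per spectral band."""
    def __init__(self):
        super().__init__()
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        self.register_buffer("filt", torch.stack([ll, lh, hl, hh]).unsqueeze(1))

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        sub = F.conv2d(x.reshape(b * c, 1, h, w), self.filt, stride=2)
        return sub.reshape(b, c * 4, h // 2, w // 2)   # LL/LH/HL/HH per band

class WaveMambaSketch(nn.Module):
    def __init__(self, bands, hidden=64, classes=15):
        super().__init__()
        self.dwt = HaarDWT()
        self.proj = nn.Linear(bands * 4, hidden)
        # A GRU is used here purely as a stand-in for the selective state-space
        # (Mamba) block described in the paper.
        self.ssm = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, patch):                      # patch: (B, bands, H, W)
        feats = self.dwt(patch)                    # wavelet-enhanced features
        seq = feats.flatten(2).transpose(1, 2)     # (B, H'*W', bands*4) spatial tokens
        out, _ = self.ssm(self.proj(seq))          # model spatial-spectral dependencies
        return self.head(out.mean(dim=1))          # patch-level class logits

logits = WaveMambaSketch(bands=103)(torch.randn(2, 103, 8, 8))
print(logits.shape)   # torch.Size([2, 15])
```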
Related papers
- TransMamba: Fast Universal Architecture Adaption from Transformers to Mamba [88.31117598044725]
We explore cross-architecture training to transfer the ready knowledge in existing Transformer models to alternative architecture Mamba, termed TransMamba.
Our approach employs a two-stage strategy to expedite training new Mamba models, ensuring effectiveness across both uni-modal and cross-modal tasks.
For cross-modal learning, we propose a cross-Mamba module that integrates language awareness into Mamba's visual features, enhancing the cross-modal interaction capabilities of Mamba architecture.
arXiv Detail & Related papers (2025-02-21T01:22:01Z) - WMamba: Wavelet-based Mamba for Face Forgery Detection [34.216401304665816]
Wavelet analysis can uncover subtle forgery artifacts that remain imperceptible in the spatial domain.
We introduce WMamba, a novel wavelet-based feature extractor built upon the Mamba architecture.
We show that WMamba achieves state-of-the-art (SOTA) performance, highlighting its effectiveness and superiority in face forgery detection.
arXiv Detail & Related papers (2025-01-16T15:44:24Z) - MambaHSI: Spatial-Spectral Mamba for Hyperspectral Image Classification [46.111607032455225]
We propose a novel HSI classification model based on a Mamba model, named MambaHSI.
Specifically, we design a spatial Mamba block (SpaMB) to model long-range interactions across the whole image at the pixel level.
We propose a spectral Mamba block (SpeMB) to split the spectral vector into multiple groups, mine the relations across different spectral groups, and extract spectral features.
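A minimal sketch of the spectral-grouping idea summarized above (split the per-pixel spectral vector into groups, then mix information across the groups); the group count, layer sizes, and the GRU used in place of the spectral Mamba block are assumptions for illustration only.

```python
# Illustrative only: spectral vectors are split into groups and a stand-in
# sequence model mixes information across the groups.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralGroupBlock(nn.Module):
    def __init__(self, bands=103, groups=8, hidden=32):
        super().__init__()
        self.groups = groups
        self.group_len = -(-bands // groups)        # ceil division
        self.pad = groups * self.group_len - bands  # zero-pad the last group
        self.embed = nn.Linear(self.group_len, hidden)
        self.mixer = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for the spectral Mamba block
        self.out = nn.Linear(groups * hidden, hidden)

    def forward(self, spectra):                     # spectra: (B, bands) per-pixel vectors
        x = F.pad(spectra, (0, self.pad))
        x = x.view(x.size(0), self.groups, self.group_len)   # (B, groups, group_len)
        x = self.embed(x)                            # embed each spectral group
        x, _ = self.mixer(x)                         # mine relations across groups
        return self.out(x.flatten(1))                # fused spectral feature

feat = SpectralGroupBlock()(torch.randn(4, 103))
print(feat.shape)   # torch.Size([4, 32])
```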
arXiv Detail & Related papers (2025-01-09T03:27:47Z) - Mamba-SEUNet: Mamba UNet for Monaural Speech Enhancement [54.427965535613886]
Mamba, as a novel state-space model (SSM), has gained widespread application in natural language processing and computer vision.
In this work, we introduce Mamba-SEUNet, an innovative architecture that integrates Mamba with U-Net for SE tasks.
arXiv Detail & Related papers (2024-12-21T13:43:51Z) - MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNNs and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% Top-1 accuracy, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z) - DiMSUM: Diffusion Mamba -- A Scalable and Unified Spatial-Frequency Method for Image Generation [4.391439322050918]
We introduce a novel state-space architecture for diffusion models.
We harness spatial and frequency information to enhance the inductive bias towards local features in input images.
arXiv Detail & Related papers (2024-11-06T18:59:17Z) - Spatial-Mamba: Effective Visual State Space Models via Structure-Aware State Fusion [46.82975707531064]
Selective state space models (SSMs) excel at capturing long-range dependencies in 1D sequential data.
We propose Spatial-Mamba, a novel approach that establishes neighborhood connectivity directly in the state space.
We show that Spatial-Mamba, even with a single scan, attains or surpasses the state-of-the-art SSM-based models in image classification, detection and segmentation.
arXiv Detail & Related papers (2024-10-19T12:56:58Z) - MambaVT: Spatio-Temporal Contextual Modeling for robust RGB-T Tracking [51.28485682954006]
We propose a pure Mamba-based framework (MambaVT) to fully exploit spatio-temporal contextual modeling for robust visible-thermal tracking.
Specifically, we devise the long-range cross-frame integration component to globally adapt to target appearance variations.
Experiments show the significant potential of vision Mamba for RGB-T tracking, with MambaVT achieving state-of-the-art performance on four mainstream benchmarks.
arXiv Detail & Related papers (2024-08-15T02:29:00Z) - Spatial-Spectral Morphological Mamba for Hyperspectral Image Classification [27.04370747400184]
This paper introduces the Spatial-Spectral Morphological Mamba (MorpMamba) model in which, a token generation module first converts the hyperspectral image patch into spatial-spectral tokens.
These tokens are processed by morphological operations, which compute structural and shape information using depthwise separable convolutional operations.
Experiments on widely used HSI datasets demonstrate that the MorpMamba model outperforms both CNN and Transformer models while being more parameter-efficient.
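A rough sketch of how morphology-style spatial-spectral tokens could be built from depthwise separable convolutions, as this summary describes; the max/min-pooling approximation of grayscale dilation/erosion, the kernel sizes, and all names are illustrative assumptions, not the MorpMamba implementation.

```python
# Illustrative only: morphology-inspired token generation with a depthwise
# separable convolution (per-band spatial filter + 1x1 channel mixing).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MorphTokenizer(nn.Module):
    def __init__(self, bands=103, dim=64, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(2 * bands, 2 * bands, k, padding=k // 2, groups=2 * bands)
        self.pointwise = nn.Conv2d(2 * bands, dim, 1)

    def forward(self, patch):                                  # patch: (B, bands, H, W)
        dilated = F.max_pool2d(patch, 3, stride=1, padding=1)  # grayscale dilation
        eroded = -F.max_pool2d(-patch, 3, stride=1, padding=1) # grayscale erosion
        morph = torch.cat([dilated, eroded], dim=1)            # structural/shape cues
        tokens = self.pointwise(self.depthwise(morph))         # (B, dim, H, W)
        return tokens.flatten(2).transpose(1, 2)               # (B, H*W, dim) tokens

tokens = MorphTokenizer()(torch.randn(2, 103, 8, 8))
print(tokens.shape)   # torch.Size([2, 64, 64])
```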
arXiv Detail & Related papers (2024-08-02T16:28:51Z) - Multi-head Spatial-Spectral Mamba for Hyperspectral Image Classification [3.105394345970172]
Spatial-Spectral Mamba (SSM) improves computational efficiency and captures long-range dependencies.
We propose the SSM with multi-head self-attention and token enhancement (MHSSMamba).
MHSSMamba achieved remarkable classification accuracies of 97.62% on Pavia University, 96.92% on the University of Houston, 96.85% on Salinas, and 99.49% on the Wuhan-LongKou dataset.
arXiv Detail & Related papers (2024-08-02T12:27:15Z) - Empowering Snapshot Compressive Imaging: Spatial-Spectral State Space Model with Across-Scanning and Local Enhancement [51.557804095896174]
We introduce a State Space Model with Across-Scanning and Local Enhancement, named ASLE-SSM, that employs a Spatial-Spectral SSM for balanced global-local context encoding and improved cross-channel interaction.
Experimental results illustrate ASLE-SSM's superiority over existing state-of-the-art methods, with an inference speed 2.4 times faster than the Transformer-based MST while saving 0.12M parameters.
arXiv Detail & Related papers (2024-08-01T15:14:10Z) - Wavelet-based Bi-dimensional Aggregation Network for SAR Image Change Detection [53.842568573251214]
Experimental results on three SAR datasets demonstrate that our WBANet significantly outperforms contemporary state-of-the-art methods.
Our WBANet achieves 98.33%, 96.65%, and 96.62% of percentage of correct classification (PCC) on the respective datasets.
arXiv Detail & Related papers (2024-07-18T04:36:10Z) - GraphMamba: An Efficient Graph Structure Learning Vision Mamba for Hyperspectral Image Classification [19.740333867168108]
GraphMamba is an efficient graph-structure-learning vision Mamba classification framework designed for deep spatial-spectral information mining.
The core components of GraphMamba include the HyperMamba module for improving computational efficiency and the SpectralGCN module for adaptive spatial context awareness.
arXiv Detail & Related papers (2024-07-11T07:56:08Z) - HSIMamba: Hyperspectral Imaging Efficient Feature Learning with Bidirectional State Space for Classification [16.742768644585684]
HSIMamba is a novel framework that uses bidirectional reversed convolutional neural network pathways to extract spectral features more efficiently.
Our approach combines the operational efficiency of CNNs with the dynamic feature extraction capability of attention mechanisms found in Transformers.
This approach improves classification accuracy beyond current benchmarks and addresses computational inefficiencies encountered with advanced models like Transformers.
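A minimal sketch of bidirectional spectral feature extraction (one pathway over the original band order, one over the reversed order, then fusion); the Conv1d pathways, pooling, and layer sizes here are assumptions, and the actual HSIMamba design differs in detail.

```python
# Illustrative only: forward and band-reversed spectral pathways whose
# outputs are fused into a single per-pixel feature.
import torch
import torch.nn as nn

class BiSpectralPathways(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.fwd = nn.Conv1d(1, hidden, kernel_size=7, padding=3)  # original band order
        self.bwd = nn.Conv1d(1, hidden, kernel_size=7, padding=3)  # reversed band order
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, spectra):                   # spectra: (B, bands)
        x = spectra.unsqueeze(1)                  # (B, 1, bands)
        f = self.fwd(x).mean(dim=2)               # pool over bands -> (B, hidden)
        b = self.bwd(torch.flip(x, dims=[2])).mean(dim=2)
        return self.fuse(torch.cat([f, b], dim=1))

feat = BiSpectralPathways()(torch.randn(4, 103))
print(feat.shape)   # torch.Size([4, 32])
```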
arXiv Detail & Related papers (2024-03-30T07:27:36Z) - Point Cloud Mamba: Point Cloud Learning via State Space Model [73.7454734756626]
We show that Mamba-based point cloud methods can outperform previous methods based on Transformers or multi-layer perceptrons (MLPs).
Point Cloud Mamba surpasses the state-of-the-art (SOTA) point-based method PointNeXt and achieves new SOTA performance on the ScanObjectNN, ModelNet40, ShapeNetPart, and S3DIS datasets.
arXiv Detail & Related papers (2024-03-01T18:59:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.