SaaFormer: Spectral-spatial Axial Aggregation Transformer for
Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2306.16759v2
- Date: Tue, 4 Jul 2023 05:51:58 GMT
- Title: SaaFormer: Spectral-spatial Axial Aggregation Transformer for
Hyperspectral Image Classification
- Authors: Enzhe Zhao, Zhichang Guo, Yao Li, Dazhi Zhang
- Abstract summary: Hyperspectral images (HSI) captured from earth-observing satellites and aircraft are becoming increasingly important for applications in agriculture, environmental monitoring, mining, etc.
Due to the limited available hyperspectral datasets, pixel-wise random sampling is the most commonly used training-test dataset partition approach.
We propose a block-wise sampling method to minimize the potential for data leakage.
- Score: 2.4723464787484812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral images (HSI) captured from earth-observing satellites and
aircraft are becoming increasingly important for applications in agriculture,
environmental monitoring, mining, etc. Due to the limited available
hyperspectral datasets, pixel-wise random sampling is the most commonly
used training-test dataset partition approach, which causes significant overlap
between samples in training and test datasets. Furthermore, our experimental
observations indicate that regions with larger overlap often exhibit higher
classification accuracy. Consequently, the pixel-wise random sampling approach
poses a risk of data leakage. Thus, we propose a block-wise sampling method to
minimize the potential for data leakage. Our experimental findings also confirm
the presence of data leakage in models such as 2DCNN. Furthermore, we propose a
spectral-spatial axial aggregation transformer model, namely SaaFormer, to
address the challenges of hyperspectral image classification, which treats
HSI as long sequential three-dimensional data. The model comprises
two primary components: axial aggregation attention and multi-level
spectral-spatial extraction. The axial aggregation attention mechanism
effectively exploits the continuity and correlation among spectral bands at
each pixel position in hyperspectral images, while aggregating spatial
dimension features. This enables SaaFormer to maintain high precision even
under block-wise sampling. The multi-level spectral-spatial extraction
structure is designed to capture the sensitivity of different material
components to specific spectral bands, allowing the model to focus on a broader
range of spectral details. The results on six publicly available datasets
demonstrate that our model exhibits comparable performance when using random
sampling, while significantly outperforming other methods when employing
block-wise sampling partition.
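The block-wise partition described in the abstract can be illustrated with a short sketch. This is a minimal illustration of the general idea, not the authors' exact protocol: the labeled pixel grid is tiled into non-overlapping spatial blocks, and each block is assigned wholly to either the training or the test split, so that patches drawn around training pixels do not spatially overlap test pixels the way pixel-wise random sampling allows. The block size and test fraction here are illustrative choices.

```python
import numpy as np

def block_wise_split(labels, block_size=16, test_fraction=0.3, seed=0):
    """Partition labeled pixel coordinates into train/test by spatial block.

    Assigning whole blocks to one split keeps training and test pixels
    spatially separated, unlike pixel-wise random sampling, where patches
    around neighbouring pixels can leak across the two splits.
    labels: 2D integer array; 0 marks unlabeled background pixels.
    """
    rng = np.random.default_rng(seed)
    h, w = labels.shape
    n_bh = int(np.ceil(h / block_size))   # blocks along height
    n_bw = int(np.ceil(w / block_size))   # blocks along width
    block_ids = np.arange(n_bh * n_bw)
    n_test = int(test_fraction * block_ids.size)
    test_blocks = set(rng.choice(block_ids, size=n_test, replace=False))

    train_idx, test_idx = [], []
    for r in range(h):
        for c in range(w):
            if labels[r, c] == 0:         # skip unlabeled background
                continue
            b = (r // block_size) * n_bw + (c // block_size)
            (test_idx if b in test_blocks else train_idx).append((r, c))
    return train_idx, test_idx
```

By construction, every labeled pixel lands in exactly one split, and all pixels of a given block share the same split, which is the property that limits the train/test overlap discussed above.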
Related papers
- Spectral-Spatial Self-Supervised Learning for Few-Shot Hyperspectral Image Classification [3.5876461566779]
Few-shot classification of hyperspectral images (HSI) faces the challenge of scarce labeled samples.
We propose a method, Spectral-Spatial Self-Supervised Learning for Few-Shot Hyperspectral Image Classification (S4L-FSC).
arXiv Detail & Related papers (2025-05-18T15:56:35Z) - CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding.
However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies.
We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - DiffFormer: a Differential Spatial-Spectral Transformer for Hyperspectral Image Classification [3.271106943956333]
Hyperspectral image classification (HSIC) has gained significant attention because of its potential in analyzing high-dimensional data with rich spectral and spatial information.
We propose the Differential Spatial-Spectral Transformer (DiffFormer) to address the inherent challenges of HSIC, such as spectral redundancy and spatial discontinuity.
Experiments on benchmark hyperspectral datasets demonstrate the superiority of DiffFormer in terms of classification accuracy, computational efficiency, and generalizability.
arXiv Detail & Related papers (2024-12-23T07:21:41Z) - Point-Calibrated Spectral Neural Operators [54.13671100638092]
We introduce Point-Calibrated Spectral Transform, which learns operator mappings by approximating functions with the point-level adaptive spectral basis.
arXiv Detail & Related papers (2024-10-15T08:19:39Z) - GLADformer: A Mixed Perspective for Graph-level Anomaly Detection [24.961973151394826]
We propose a multi-perspective hybrid graph-level anomaly detector namely GLADformer.
Specifically, we first design a Graph Transformer module with global spectrum enhancement.
To uncover local anomalous attributes, we customize a band-pass spectral GNN message passing module.
arXiv Detail & Related papers (2024-06-02T12:51:48Z) - Datacube segmentation via Deep Spectral Clustering [76.48544221010424]
Extended Vision techniques often pose a challenge in their interpretation.
The huge dimensionality of data cube spectra poses a complex task in its statistical interpretation.
In this paper, we explore the possibility of applying unsupervised clustering methods in encoded space.
A statistical dimensional reduction is performed by an ad hoc trained (Variational) AutoEncoder, while the clustering process is performed by a (learnable) iterative K-Means clustering algorithm.
arXiv Detail & Related papers (2024-01-31T09:31:28Z) - Multifractal-spectral features enhance classification of anomalous
diffusion [0.0]
Anomalous diffusion processes pose a unique challenge in classification and characterization.
The present study delves into the potential of multifractal spectral features for effectively distinguishing anomalous diffusion trajectories.
Our findings underscore the diverse and potent efficacy of multifractal spectral features in enhancing classification of anomalous diffusion.
arXiv Detail & Related papers (2024-01-15T12:42:15Z) - SpectralGPT: Spectral Remote Sensing Foundation Model [60.023956954916414]
A universal RS foundation model, named SpectralGPT, is purpose-built to handle spectral RS images using a novel 3D generative pretrained transformer (GPT).
Compared to existing foundation models, SpectralGPT accommodates input images with varying sizes, resolutions, time series, and regions in a progressive training fashion, enabling full utilization of extensive RS big data.
Our evaluation highlights significant performance improvements with pretrained SpectralGPT models, signifying substantial potential in advancing spectral RS big data applications within the field of geoscience.
arXiv Detail & Related papers (2023-11-13T07:09:30Z) - DiffSpectralNet : Unveiling the Potential of Diffusion Models for
Hyperspectral Image Classification [6.521187080027966]
We propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques.
First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features.
The diffusion method is capable of extracting diverse and meaningful spectral-spatial features, leading to improvement in HSI classification.
arXiv Detail & Related papers (2023-10-29T15:26:37Z) - Hodge-Aware Contrastive Learning [101.56637264703058]
Simplicial complexes prove effective in modeling data with multiway dependencies.
We develop a contrastive self-supervised learning approach for processing simplicial data.
arXiv Detail & Related papers (2023-09-14T00:40:07Z) - ESSAformer: Efficient Transformer for Hyperspectral Image
Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z) - SpectralDiff: A Generative Framework for Hyperspectral Image
Classification with Diffusion Models [18.391049303136715]
We propose a generative framework for HSI classification with diffusion models (SpectralDiff).
SpectralDiff effectively mines the distribution information of high-dimensional and highly redundant data.
Experiments on three public HSI datasets demonstrate that the proposed method can achieve better performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-12T16:32:34Z) - Spectrum-BERT: Pre-training of Deep Bidirectional Transformers for
Spectral Classification of Chinese Liquors [0.0]
We propose a pre-training method of deep bidirectional transformers for spectral classification of Chinese liquors, abbreviated as Spectrum-BERT.
We elaborately design two pre-training tasks, Next Curve Prediction (NCP) and Masked Curve Model (MCM), so that the model can effectively utilize unlabeled samples.
In the comparative experiments, the proposed Spectrum-BERT significantly outperforms the baselines in multiple metrics.
arXiv Detail & Related papers (2022-10-22T13:11:25Z) - Spectral Splitting and Aggregation Network for Hyperspectral Face
Super-Resolution [82.59267937569213]
High-resolution (HR) hyperspectral face image plays an important role in face related computer vision tasks under uncontrolled conditions.
In this paper, we investigate how to adapt the deep learning techniques to hyperspectral face image super-resolution.
We present a spectral splitting and aggregation network (SSANet) for HFSR with limited training samples.
arXiv Detail & Related papers (2021-08-31T02:13:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.