Boosting the Generalization Ability for Hyperspectral Image Classification using Spectral-spatial Axial Aggregation Transformer
- URL: http://arxiv.org/abs/2306.16759v3
- Date: Tue, 22 Oct 2024 07:29:07 GMT
- Title: Boosting the Generalization Ability for Hyperspectral Image Classification using Spectral-spatial Axial Aggregation Transformer
- Authors: Enzhe Zhao, Zhichang Guo, Shengzhu Shi, Yao Li, Jia Li, Dazhi Zhang
- Abstract summary: In the hyperspectral image classification (HSIC) task, the most commonly used model validation paradigm is partitioning the training-test dataset through pixel-wise random sampling.
In our experiments, we found that the resulting high accuracy is reached largely because the training and test datasets share a great deal of information.
We propose a spectral-spatial axial aggregation transformer model, namely SaaFormer, that preserves generalization across dataset partitions.
- Score: 14.594398447576188
- License:
- Abstract: In the hyperspectral image classification (HSIC) task, the most commonly used model validation paradigm is to partition the training-test dataset through pixel-wise random sampling. By training on a small amount of data, a deep learning model can achieve almost perfect accuracy. However, in our experiments, we found that this high accuracy is reached because the training and test datasets share a large amount of information. On non-overlapping dataset partitions, well-performing models suffer significant performance degradation. To this end, we propose a spectral-spatial axial aggregation transformer model, namely SaaFormer, that preserves generalization across dataset partitions. SaaFormer applies a multi-level spectral extraction structure to segment the spectrum into multiple spectrum clips, such that the wavelength continuity of the spectrum across channels is preserved. For each spectrum clip, an axial aggregation attention mechanism, which integrates spatial features along multiple spectral axes, is applied to mine the spectral characteristics. The multi-level spectral extraction and the axial aggregation attention emphasize spectral characteristics to improve model generalization. The experimental results on five publicly available datasets demonstrate that our model exhibits comparable performance on the random partition while significantly outperforming other methods on non-overlapping partitions. Moreover, SaaFormer shows excellent performance on background classification.
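To make the evaluation issue concrete, the sketch below contrasts a pixel-wise random train/test split with a spatially non-overlapping one on a hyperspectral label map. It is a minimal illustration under our own assumptions (synthetic labels, a simple block-wise partition); the paper's exact non-overlapping protocol may differ.

```python
# Hypothetical illustration: pixel-wise random vs. spatially non-overlapping
# train/test partitions of a hyperspectral label map. The block-wise split and
# all sizes are illustrative assumptions, not the paper's exact protocol.
import numpy as np

rng = np.random.default_rng(0)
H, W = 145, 145                                  # an Indian-Pines-sized scene
labels = rng.integers(0, 16, size=(H, W))        # stand-in ground-truth map

# (a) Pixel-wise random sampling: train and test pixels are interleaved, so
#     neighbouring (highly correlated) pixels end up in both sets.
coords = np.argwhere(labels >= 0)                # all pixel coordinates
rng.shuffle(coords)
n_train = int(0.1 * len(coords))
train_px, test_px = coords[:n_train], coords[n_train:]

# (b) Non-overlapping partition: assign whole spatial blocks to either train or
#     test, so no test pixel shares a neighbourhood with a training pixel.
block = 29
block_id = (np.arange(H)[:, None] // block) * ((W + block - 1) // block) \
         + (np.arange(W)[None, :] // block)
train_blocks = rng.choice(np.unique(block_id), size=3, replace=False)
train_mask = np.isin(block_id, train_blocks)
train_px_b, test_px_b = np.argwhere(train_mask), np.argwhere(~train_mask)

print(len(train_px), len(test_px), len(train_px_b), len(test_px_b))
```

Under split (a) a test pixel can sit directly next to a training pixel, which is the information sharing the abstract points to; under split (b) that adjacency is largely removed.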
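The following is a minimal, hypothetical sketch of the idea as the abstract describes it: split the bands into contiguous spectrum clips, then attend along the spectral axis within each clip before aggregating. The module name, shapes, and hyper-parameters (`SpectralClipAxialAttention`, `n_clips=4`, mean pooling) are our own assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a per-pixel spectral tokenisation; not the authors' code.
import torch
import torch.nn as nn


class SpectralClipAxialAttention(nn.Module):
    def __init__(self, n_bands=200, n_clips=4, embed_dim=64, n_heads=4, n_classes=16):
        super().__init__()
        assert n_bands % n_clips == 0
        self.n_clips, self.clip_len = n_clips, n_bands // n_clips
        self.embed = nn.Linear(1, embed_dim)             # per-band scalar -> token
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.head = nn.Linear(embed_dim * n_clips, n_classes)

    def forward(self, x):
        # x: (B, n_bands, H, W) hyperspectral patch
        B, C, H, W = x.shape
        # treat every spatial position independently: (B*H*W, n_clips, clip_len, 1)
        x = x.permute(0, 2, 3, 1).reshape(B * H * W, self.n_clips, self.clip_len, 1)
        tokens = self.embed(x)                           # (B*H*W, n_clips, clip_len, D)
        clip_feats = []
        for i in range(self.n_clips):
            t = tokens[:, i]                             # (B*H*W, clip_len, D)
            a, _ = self.attn(t, t, t)                    # attention along the spectral axis
            clip_feats.append(a.mean(dim=1))             # pool each clip to one vector
        feats = torch.cat(clip_feats, dim=-1)            # (B*H*W, n_clips*D)
        logits = self.head(feats).reshape(B, H, W, -1)
        return logits.permute(0, 3, 1, 2)                # (B, n_classes, H, W)


# usage: a batch of two 200-band 9x9 patches
model = SpectralClipAxialAttention()
print(model(torch.randn(2, 200, 9, 9)).shape)            # torch.Size([2, 16, 9, 9])
```

The per-clip mean pooling here is only a placeholder for the paper's multi-level spectral extraction and aggregation.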
Related papers
- Point-Calibrated Spectral Neural Operators [54.13671100638092]
We introduce Point-Calibrated Spectral Transform, which learns operator mappings by approximating functions with the point-level adaptive spectral basis.
arXiv Detail & Related papers (2024-10-15T08:19:39Z)
- GLADformer: A Mixed Perspective for Graph-level Anomaly Detection [24.961973151394826]
We propose a multi-perspective hybrid graph-level anomaly detector, namely GLADformer.
Specifically, we first design a Graph Transformer module with global spectrum enhancement.
To uncover local anomalous attributes, we customize a band-pass spectral GNN message passing module.
arXiv Detail & Related papers (2024-06-02T12:51:48Z)
- Datacube segmentation via Deep Spectral Clustering [76.48544221010424]
Extended Vision techniques often pose a challenge for interpretation.
The huge dimensionality of data cube spectra makes their statistical interpretation a complex task.
In this paper, we explore the possibility of applying unsupervised clustering methods in encoded space.
A statistical dimensional reduction is performed by an ad hoc trained (Variational) AutoEncoder, while the clustering process is performed by a (learnable) iterative K-Means clustering algorithm.
arXiv Detail & Related papers (2024-01-31T09:31:28Z)
- Multifractal-spectral features enhance classification of anomalous diffusion [0.0]
Anomalous diffusion processes pose a unique challenge in classification and characterization.
The present study delves into the potential of multifractal spectral features for effectively distinguishing anomalous diffusion trajectories.
Our findings underscore the diverse and potent efficacy of multifractal spectral features in enhancing classification of anomalous diffusion.
arXiv Detail & Related papers (2024-01-15T12:42:15Z)
- SpectralGPT: Spectral Remote Sensing Foundation Model [60.023956954916414]
A universal RS foundation model, named SpectralGPT, is purpose-built to handle spectral RS images using a novel 3D generative pretrained transformer (GPT).
Compared to existing foundation models, SpectralGPT accommodates input images with varying sizes, resolutions, time series, and regions in a progressive training fashion, enabling full utilization of extensive RS big data.
Our evaluation highlights significant performance improvements with pretrained SpectralGPT models, signifying substantial potential in advancing spectral RS big data applications within the field of geoscience.
arXiv Detail & Related papers (2023-11-13T07:09:30Z)
- DiffSpectralNet: Unveiling the Potential of Diffusion Models for Hyperspectral Image Classification [6.521187080027966]
We propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques.
First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features.
The diffusion method is capable of extracting diverse and meaningful spectral-spatial features, leading to improvement in HSI classification.
arXiv Detail & Related papers (2023-10-29T15:26:37Z)
- Hodge-Aware Contrastive Learning [101.56637264703058]
Simplicial complexes prove effective in modeling data with multiway dependencies.
We develop a contrastive self-supervised learning approach for processing simplicial data.
arXiv Detail & Related papers (2023-09-14T00:40:07Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- SpectralDiff: A Generative Framework for Hyperspectral Image Classification with Diffusion Models [18.391049303136715]
We propose a generative framework for HSI classification with diffusion models (SpectralDiff).
SpectralDiff effectively mines the distribution information of high-dimensional and highly redundant data.
Experiments on three public HSI datasets demonstrate that the proposed method can achieve better performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-12T16:32:34Z)
- Spectrum-BERT: Pre-training of Deep Bidirectional Transformers for Spectral Classification of Chinese Liquors [0.0]
We propose a pre-training method of deep bidirectional transformers for spectral classification of Chinese liquors, abbreviated as Spectrum-BERT.
We elaborately design two pre-training tasks, Next Curve Prediction (NCP) and Masked Curve Model (MCM), so that the model can effectively utilize unlabeled samples.
In the comparative experiments, the proposed Spectrum-BERT significantly outperforms the baselines in multiple metrics.
arXiv Detail & Related papers (2022-10-22T13:11:25Z)
- Spectral Splitting and Aggregation Network for Hyperspectral Face Super-Resolution [82.59267937569213]
High-resolution (HR) hyperspectral face images play an important role in face-related computer vision tasks under uncontrolled conditions.
In this paper, we investigate how to adapt the deep learning techniques to hyperspectral face image super-resolution.
We present a spectral splitting and aggregation network (SSANet) for hyperspectral face super-resolution (HFSR) with limited training samples.
arXiv Detail & Related papers (2021-08-31T02:13:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.