Learning A 3D-CNN and Transformer Prior for Hyperspectral Image
Super-Resolution
- URL: http://arxiv.org/abs/2111.13923v1
- Date: Sat, 27 Nov 2021 15:38:57 GMT
- Title: Learning A 3D-CNN and Transformer Prior for Hyperspectral Image
Super-Resolution
- Authors: Qing Ma and Junjun Jiang and Xianming Liu and Jiayi Ma
- Abstract summary: We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the proximal gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution process.
- Score: 80.93870349019332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To solve the ill-posed problem of hyperspectral image super-resolution
(HSISR), a common approach is to use the prior information of the hyperspectral
images (HSIs) as a regularization term to constrain the objective function.
Model-based methods using hand-crafted priors cannot fully characterize the
properties of HSIs. Learning-based methods usually use a convolutional neural
network (CNN) to learn the implicit priors of HSIs. However, the learning
ability of CNNs is limited: they consider only the spatial characteristics of
HSIs while ignoring the spectral characteristics, and convolution is
ineffective at modeling long-range dependencies, so there is still considerable
room for improvement. In this paper, we propose a novel HSISR method that uses
a Transformer instead of a CNN to learn the prior of HSIs. Specifically, we
first use the proximal gradient algorithm to solve the HSISR model, and then
use an unfolding network to simulate the iterative solution process. The
self-attention layers of the Transformer give it the ability to model global
spatial interactions. In addition, we add a 3D-CNN behind the Transformer
layers to better exploit the spatio-spectral correlation of HSIs. Both
quantitative and visual results on two widely used HSI datasets and a
real-world dataset demonstrate that the proposed method achieves a considerable
gain over all mainstream algorithms, including the most competitive
conventional methods and recently proposed deep learning-based methods.
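As a rough illustration (not the paper's code), the unrolled proximal-gradient scheme described in the abstract can be sketched in NumPy. The degradation operator, its adjoint, the prior step, and all parameters below are hypothetical stand-ins: in the paper, the prior step is a learned Transformer + 3D-CNN network, whereas here it is a simple smoothing filter so the sketch runs on its own.

```python
import numpy as np

def degrade(x, scale=2):
    """Toy degradation A: average-pool each band by `scale`, a stand-in
    for the blur + downsampling in the observation model y = A x."""
    b, h, w = x.shape
    return x.reshape(b, h // scale, scale, w // scale, scale).mean(axis=(2, 4))

def degrade_T(y, scale=2):
    """Adjoint of average pooling: spread each LR pixel over its patch."""
    return np.repeat(np.repeat(y, scale, axis=1), scale, axis=2) / scale**2

def prior_step(x):
    """Placeholder for the learned Transformer + 3D-CNN prior; here a
    3x3 spatial mean filter so the sketch runs without a trained model."""
    b, h, w = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    return sum(pad[:, i:i + h, j:j + w]
               for i in (0, 1, 2) for j in (0, 1, 2)) / 9.0

def unfolded_hsisr(y, n_stages=5, alpha=0.5, scale=2):
    """Unrolled proximal-gradient iterations:
    x <- prior( x - alpha * A^T (A x - y) )."""
    x = degrade_T(y, scale) * scale**2       # nearest-neighbour upsample init
    for _ in range(n_stages):
        grad = degrade_T(degrade(x, scale) - y, scale)  # data-fidelity gradient
        x = prior_step(x - alpha * grad)                # learned prior as proximal map
    return x
```

In the unfolding network, each of the `n_stages` iterations becomes one network stage with its own learned prior and step size.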
Related papers
- Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction [15.537910100051866]
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI).
We propose the Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN).
Our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
arXiv Detail & Related papers (2024-06-18T15:15:12Z)
- Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral image (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv Detail & Related papers (2024-03-04T07:40:55Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selection. The selected patches are then fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
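To make the coarse-to-fine idea concrete, here is a heavily simplified NumPy sketch: a coarse stage scores patch tokens and keeps the most informative ones, and a fine stage runs self-attention over the survivors only. The scoring rule, single-head attention, and `k` are illustrative assumptions, not the actual SASM or SAH-MSA formulations.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def coarse_to_fine_attention(patches, k=4):
    """Toy coarse-to-fine sparse attention.
    patches: (n_patches, dim) flattened patch tokens.
    Coarse stage (stand-in for SASM): score each patch by its L2 energy
    and keep the top-k. Fine stage (stand-in for SAH-MSA): plain
    single-head self-attention over the selected patches only."""
    scores = np.linalg.norm(patches, axis=1)       # coarse screening scores
    keep = np.argsort(scores)[-k:]                 # indices of kept patches
    x = patches[keep]
    attn = softmax(x @ x.T / np.sqrt(x.shape[1]))  # (k, k) attention weights
    out = patches.copy()
    out[keep] = attn @ x                           # refine kept patches only
    return out, keep
```

The payoff of the sparsity is that attention cost scales with the k selected patches rather than with all n_patches.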
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- Multiscale Convolutional Transformer with Center Mask Pretraining for Hyperspectral Image Classification [14.33259265286265]
We propose a novel multi-scale convolutional embedding module for hyperspectral images (HSI) to realize effective extraction of spatial-spectral information.
Similar to the masked autoencoder, our pre-training method masks only the token corresponding to the central pixel in the encoder, and feeds the remaining tokens into the decoder to reconstruct the spectral information of the central pixel.
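A minimal sketch of that center-mask pretraining objective, with the real encoder/decoder replaced by a trivial mean-of-neighbours predictor so the example is self-contained; the patch layout and loss here are assumptions for illustration only.

```python
import numpy as np

def center_mask_pretrain(patch):
    """Toy center-masked pre-training step for an HSI patch.
    patch: (p, p, bands) spatial patch of per-pixel spectra.
    The encoder sees every token except the central pixel; the training
    target is that central pixel's spectrum. The 'decoder' here is just
    the mean of the visible neighbours, a stand-in for the real network."""
    p, _, bands = patch.shape
    c = p // 2
    tokens = patch.reshape(-1, bands)
    mask = np.ones(p * p, dtype=bool)
    mask[c * p + c] = False                     # hide the central token
    target = patch[c, c]                        # spectrum to reconstruct
    prediction = tokens[mask].mean(axis=0)      # placeholder decoder
    loss = np.mean((prediction - target) ** 2)  # reconstruction loss
    return prediction, target, loss
```

Masking only the central token (rather than a large random fraction as in MAE) keeps every pretraining example aligned with the per-pixel classification task that follows.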
arXiv Detail & Related papers (2022-03-09T14:42:26Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
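As a rough illustration of the frequency-domain learning idea, one can penalize the gap between the 2-D FFT magnitudes of the reconstruction and the ground truth per spectral band. This simplified loss is an assumption for illustration, not HDNet's exact FDL formulation.

```python
import numpy as np

def frequency_domain_loss(pred, target):
    """Toy frequency-domain learning (FDL) objective: compare the 2-D FFT
    magnitudes of prediction and target per spectral band, so errors in
    high-frequency detail are penalized directly.
    pred, target: (bands, h, w) hyperspectral cubes."""
    f_pred = np.fft.fft2(pred, axes=(-2, -1))
    f_tgt = np.fft.fft2(target, axes=(-2, -1))
    return np.mean(np.abs(np.abs(f_pred) - np.abs(f_tgt)))
```

A spatial-domain loss alone tends to be dominated by low-frequency content; adding a term like this pushes the network to also match fine textures and edges.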
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- Deep Gaussian Scale Mixture Prior for Spectral Compressive Imaging [48.34565372026196]
We propose a novel HSI reconstruction method based on the maximum a posteriori (MAP) estimation framework.
We also propose to estimate the local means of the Gaussian scale mixture (GSM) models by a deep convolutional neural network (DCNN).
arXiv Detail & Related papers (2021-03-12T08:57:06Z)
- Hyperspectral Image Classification with Spatial Consistence Using Fully Convolutional Spatial Propagation Network [9.583523548244683]
Deep convolutional neural networks (CNNs) have shown an impressive ability to represent hyperspectral images (HSIs).
We propose a novel end-to-end, pixels-to-pixels fully convolutional spatial propagation network (FCSPN) for HSI classification.
FCSPN consists of a 3D fully convolutional network (3D-FCN) and a convolutional spatial propagation network (CSPN).
arXiv Detail & Related papers (2020-08-04T09:05:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.