Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution
- URL: http://arxiv.org/abs/2001.04609v1
- Date: Tue, 14 Jan 2020 03:34:55 GMT
- Title: Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution
- Authors: Qi Wang, Qiang Li, and Xuelong Li
- Abstract summary: We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
- Score: 82.1739023587565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based hyperspectral image super-resolution (SR) methods have
achieved great success recently. However, most existing models cannot
effectively explore spatial and spectral information between bands
simultaneously, resulting in relatively low performance. To address this issue, in
this paper, we propose a novel spectral-spatial residual network for
hyperspectral image super-resolution (SSRNet). Our method can effectively
explore spatial-spectral information by using 3D convolution instead of 2D
convolution, which enables the network to better extract potential information.
Furthermore, we design a spectral-spatial residual module (SSRM) to adaptively
learn more effective features from all the hierarchical features in units
through local feature fusion, significantly improving the performance of the
algorithm. In each unit, we employ spatial and spectral separable 3D
convolution to extract spatial and spectral information, which not only reduces
unaffordable memory usage and high computational cost, but also makes the
network easier to train. Extensive evaluations and comparisons on three
benchmark datasets demonstrate that the proposed approach achieves superior
performance in comparison to existing state-of-the-art methods.
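The separable 3D convolution described in the abstract can be illustrated with a minimal NumPy/SciPy sketch (my own illustration, not the authors' implementation): a dense k×k×k kernel is factorized into a 1×k×k kernel applied within each band and a k×1×1 kernel applied across bands, cutting the weight count from 27 to 12 for k=3.

```python
import numpy as np
from scipy.ndimage import convolve

def separable_conv3d(cube, spatial_k, spectral_k):
    """Separable 3D convolution over a hyperspectral cube (bands, H, W):
    a 1xkxk kernel within each band, then a kx1x1 kernel across bands."""
    out = convolve(cube, spatial_k, mode="nearest")    # spatial step
    out = convolve(out, spectral_k, mode="nearest")    # spectral step
    return out

cube = np.random.rand(31, 16, 16)          # 31 bands of 16x16 pixels
spatial_k  = np.full((1, 3, 3), 1 / 9)     # 9 weights
spectral_k = np.full((3, 1, 1), 1 / 3)     # 3 weights
out = separable_conv3d(cube, spatial_k, spectral_k)

# A dense 3x3x3 box kernel needs 27 weights; the factorized pair needs
# 9 + 3 = 12, and for this rank-1 kernel the two agree away from borders.
dense = convolve(cube, np.full((3, 3, 3), 1 / 27), mode="nearest")
assert np.allclose(out[2:-2, 2:-2, 2:-2], dense[2:-2, 2:-2, 2:-2])
```

In a learned network the two small kernels are trained independently, so the factorization trades a little expressiveness for the parameter and memory savings the abstract refers to.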
Related papers
- Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction [15.537910100051866]
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI).
We propose a Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN).
Our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
arXiv Detail & Related papers (2024-06-18T15:15:12Z)
- Hyperspectral Image Super-Resolution via Dual-domain Network Based on Hybrid Convolution [6.3814314790000415]
This paper proposes a novel HSI super-resolution algorithm, termed dual-domain network based on hybrid convolution (SRDNet).
To capture inter-spectral self-similarity, a self-attention learning mechanism (HSL) is devised in the spatial domain.
To further improve the perceptual quality of HSI, a frequency loss (HFL) is introduced to optimize the model in the frequency domain.
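As a rough sketch of what a frequency-domain loss of this kind can look like (my own illustration, not the SRDNet code), one common form is the L1 distance between the 2D Fourier magnitude spectra of the reconstructed and reference images:

```python
import numpy as np

def frequency_loss(pred, target):
    """L1 distance between per-band 2D Fourier magnitude spectra.
    pred/target: (bands, H, W) hyperspectral cubes."""
    pred_mag = np.abs(np.fft.fft2(pred, axes=(-2, -1)))
    target_mag = np.abs(np.fft.fft2(target, axes=(-2, -1)))
    return float(np.mean(np.abs(pred_mag - target_mag)))

rng = np.random.default_rng(0)
sr = rng.random((31, 32, 32))            # super-resolved estimate
hr = rng.random((31, 32, 32))            # ground-truth high-resolution cube
loss = frequency_loss(sr, hr)
assert frequency_loss(hr, hr) == 0.0     # identical images incur zero loss
```

Penalizing magnitude differences in the Fourier domain emphasizes high-frequency content that pixel-wise losses tend to smooth away.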
arXiv Detail & Related papers (2023-04-10T13:51:28Z)
- Deep Posterior Distribution-based Embedding for Hyperspectral Image Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z)
- Spatial-Spectral Feedback Network for Super-Resolution of Hyperspectral Imagery [11.76638109321532]
High-dimensional and complex spectral patterns in hyperspectral images make it difficult to explore spatial and spectral information among bands simultaneously.
The number of available hyperspectral training samples is extremely small, which can easily lead to overfitting when training a deep neural network.
We propose a novel Spatial-Spectral Feedback Network (SSFN) to refine low-level representations among local spectral bands with high-level information from global spectral bands.
arXiv Detail & Related papers (2021-03-07T13:28:48Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Hyperspectral Image Super-resolution via Deep Spatio-spectral Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed network architecture achieves the best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
arXiv Detail & Related papers (2020-05-29T05:56:50Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning-based single gray/RGB image super-resolution approaches for hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
- Spatial Information Guided Convolution for Real-Time RGBD Semantic Segmentation [79.78416804260668]
We propose Spatial information guided Convolution (S-Conv), which allows efficient RGB feature and 3D spatial information integration.
S-Conv infers the sampling offsets of the convolution kernel under the guidance of 3D spatial information.
We further embed S-Conv into a semantic segmentation network, called Spatial information Guided convolutional Network (SGNet).
arXiv Detail & Related papers (2020-04-09T13:38:05Z)
- Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves accuracies of 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z)
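For reference, the mIoU metric quoted above averages the per-class intersection-over-union between predicted and ground-truth label maps. A minimal NumPy version (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection over Union across classes present in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

gt   = np.array([[0, 0, 1], [1, 2, 2]])
pred = np.array([[0, 1, 1], [1, 2, 2]])
# class 0: 1/2 = 0.5; class 1: 2/3; class 2: 2/2 = 1.0
print(round(mean_iou(pred, gt, 3), 3))   # → 0.722
```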
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.