Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning
- URL: http://arxiv.org/abs/2006.10300v2
- Date: Sat, 5 Dec 2020 13:15:36 GMT
- Title: Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning
- Authors: Zhiyu Zhu, Junhui Hou, Jie Chen, Huanqiang Zeng, and Jiantao Zhou
- Abstract summary: Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
- Score: 62.52242684874278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the problem of hyperspectral image (HSI) super-resolution
that merges a low resolution HSI (LR-HSI) and a high resolution multispectral
image (HR-MSI). The cross-modality distribution of the spatial and spectral
information makes the problem challenging. Inspired by the classic wavelet
decomposition-based image fusion, we propose a novel \textit{lightweight} deep
neural network-based framework, namely progressive zero-centric residual
network (PZRes-Net), to address this problem efficiently and effectively.
Specifically, PZRes-Net learns a high resolution and \textit{zero-centric}
residual image, which contains high-frequency spatial details of the scene
across all spectral bands, from both inputs in a progressive fashion along the
spectral dimension. The resulting residual image is then superimposed onto
the up-sampled LR-HSI in a \textit{mean-value invariant} manner, leading to a
coarse HR-HSI, which is further refined by exploring the coherence across all
spectral bands simultaneously. To learn the residual image efficiently and
effectively, we employ spectral-spatial separable convolution with dense
connections. In addition, we propose zero-mean normalization implemented on the
feature maps of each layer to realize the zero-mean characteristic of the
residual image. Extensive experiments over both real and synthetic benchmark
datasets demonstrate that our PZRes-Net outperforms state-of-the-art methods to
a \textit{significant} extent in terms of 4 quantitative metrics as well as visual
quality, e.g., our PZRes-Net improves the PSNR by more than 3 dB while using
2.3$\times$ fewer parameters and consuming 15$\times$ fewer FLOPs. The code is
publicly available at https://github.com/zbzhzhy/PZRes-Net .
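The two components that distinguish PZRes-Net in the abstract are the zero-mean constraint on the learned residual and the mean-value invariant way that residual is superimposed onto the up-sampled LR-HSI. The sketch below illustrates how these two ideas could be realized in PyTorch; the module name ZeroMeanNorm, the tensor shapes, and the bicubic up-sampling are illustrative assumptions, not the authors' released implementation (see the linked GitHub repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ZeroMeanNorm(nn.Module):
    """Zero-mean normalization (illustrative): subtract the per-band spatial
    mean from each feature map, so that the features, and ultimately the
    predicted residual, stay zero-centric."""
    def forward(self, x):                        # x: (batch, bands/channels, H, W)
        return x - x.mean(dim=(2, 3), keepdim=True)

def mean_invariant_superposition(lr_hsi, residual, scale=4):
    """Up-sample the LR-HSI and add a zero-centred residual. Because every band
    of the residual has zero spatial mean, the per-band mean of the resulting
    coarse HR-HSI equals that of the up-sampled LR-HSI (mean-value invariance)."""
    up = F.interpolate(lr_hsi, scale_factor=scale,
                       mode='bicubic', align_corners=False)
    return up + ZeroMeanNorm()(residual)

# Toy example: a 31-band LR-HSI at 16x16 and a predicted residual at 64x64.
lr_hsi = torch.rand(1, 31, 16, 16)
residual = torch.randn(1, 31, 64, 64)
coarse_hr = mean_invariant_superposition(lr_hsi, residual, scale=4)
print(coarse_hr.shape)   # torch.Size([1, 31, 64, 64])
```

In the paper's pipeline, this coarse HR-HSI would then pass through a further refinement stage that exploits the coherence across all spectral bands, which the sketch above omits.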
Related papers
- Deep Posterior Distribution-based Embedding for Hyperspectral Image
Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z) - NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z) - Hyperspectral Pansharpening Based on Improved Deep Image Prior and
Residual Reconstruction [64.10636296274168]
Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution.
Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets).
We propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers.
arXiv Detail & Related papers (2021-07-06T14:11:03Z) - Spatial-Spectral Feedback Network for Super-Resolution of Hyperspectral
Imagery [11.76638109321532]
The high-dimensional and complex spectral patterns in hyperspectral images make it difficult to explore spatial and spectral information among bands simultaneously.
The number of available hyperspectral training samples is extremely small, which can easily lead to overfitting when training a deep neural network.
We propose a novel Spatial-Spectral Feedback Network (SSFN) to refine low-level representations among local spectral bands with high-level information from global spectral bands.
arXiv Detail & Related papers (2021-03-07T13:28:48Z) - Hyperspectral Image Super-resolution via Deep Spatio-spectral
Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed network architecture achieves the best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
arXiv Detail & Related papers (2020-05-29T05:56:50Z) - Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral
Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to the super-resolution of hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z) - Spatial-Spectral Residual Network for Hyperspectral Image
Super-Resolution [82.1739023587565]
We propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train (a minimal sketch of this factorization appears after this list).
arXiv Detail & Related papers (2020-01-14T03:34:55Z)