Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion
- URL: http://arxiv.org/abs/2205.03742v1
- Date: Sat, 7 May 2022 23:40:36 GMT
- Title: Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion
- Authors: Danfeng Hong, Jing Yao, Deyu Meng, Naoto Yokoya, Jocelyn Chanussot
- Abstract summary: We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net by guaranteeing the material consistency to enhance the detailed appearances of the restored HS product.
- Score: 67.35540259040806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Enormous efforts have been recently made to super-resolve hyperspectral (HS)
images with the aid of high spatial resolution multispectral (MS) images. Most
prior works usually perform the fusion task by means of multifarious
pixel-level priors. Yet the intrinsic effects of a large distribution gap
between HS-MS data due to differences in the spatial and spectral resolution
are less investigated. The gap might be caused by unknown sensor-specific
properties or highly-mixed spectral information within one pixel (due to low
spatial resolution). To this end, we propose a subpixel-level HS
super-resolution framework by devising a novel decoupled-and-coupled network,
called DC-Net, to progressively fuse HS-MS information from the pixel- to
subpixel-level, from the image- to feature-level. As the name suggests, DC-Net
first decouples the input into common (or cross-sensor) and sensor-specific
components to eliminate the gap between HS-MS images before further fusion, and
then fully blends them by a model-guided coupled spectral unmixing (CSU) net.
More significantly, we append a self-supervised learning module behind the CSU
net by guaranteeing the material consistency to enhance the detailed
appearances of the restored HS product. Extensive experimental results show the
superiority of our method both visually and quantitatively, achieving a
significant improvement over state-of-the-art methods. Furthermore,
the codes and datasets will be available at
https://sites.google.com/view/danfeng-hong for the sake of reproducibility.
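The decouple-then-couple idea behind DC-Net can be illustrated with a toy NumPy sketch: the coupled spectral unmixing step models both sensors' observations with a shared (material-consistent) abundance map and per-sensor endmember spectra, so the fused high-resolution HS product follows from the high-resolution abundances. All shapes and names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch of coupled spectral unmixing (CSU), the fusion core of DC-Net.
# Shared abundances couple the two sensors; endmembers are sensor-specific.
rng = np.random.default_rng(0)
n_px, n_mat = 64, 5          # high-res pixels, number of materials (endmembers)
bands_hs, bands_ms = 100, 4  # spectral bands of the HS and MS sensors

A = rng.random((n_px, n_mat))        # shared (coupled) abundance maps
A /= A.sum(axis=1, keepdims=True)    # sum-to-one constraint per pixel

E_hs = rng.random((n_mat, bands_hs)) # HS endmember spectra (sensor-specific)
E_ms = rng.random((n_mat, bands_ms)) # MS endmember spectra (sensor-specific)

hr_hsi = A @ E_hs  # fused high-resolution HS product
hr_msi = A @ E_ms  # the same abundances must also explain the MS observation
```

Material consistency in the self-supervised module corresponds to both reconstructions sharing the single abundance matrix A; in practice the abundances and endmembers are learned by the network rather than sampled at random.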
Related papers
- Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning [15.86617273658407]
We propose an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial-spectral manifold learning (DTDNML).
We show that this method enhances the accuracy and efficiency of hyperspectral and multispectral fusion on different remote sensing datasets.
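The Tucker decomposition underlying DTDNML factorizes a 3-D HS cube into a small core tensor and per-mode factor matrices. A minimal sketch via higher-order SVD (HOSVD) — a standard way to compute a Tucker factorization, not the paper's deep network — looks as follows:

```python
import numpy as np

def unfold(T, mode):
    # Matricize tensor T along the given mode (mode-n unfolding).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Tucker factorization via HOSVD: factor matrices from mode unfoldings.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: multiply T by each factor transpose along its mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    # Multiply the core back by each factor along its mode.
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(
            np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

With full ranks the reconstruction is exact; choosing smaller ranks gives the compressed spatial-spectral representation that fusion methods exploit.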
arXiv Detail & Related papers (2024-09-15T08:58:26Z)
- QMambaBSR: Burst Image Super-Resolution with Query State Space Model [55.56075874424194]
Burst super-resolution aims to reconstruct high-resolution images with higher quality and richer details by fusing the sub-pixel information from multiple burst low-resolution frames.
In burst SR, the key challenge lies in extracting sub-pixel details complementary to the base frame's content while simultaneously suppressing high-frequency noise.
We introduce a novel Query Mamba Burst Super-Resolution (QMambaBSR) network, which incorporates a Query State Space Model (QSSM) and an Adaptive Up-sampling module (AdaUp).
arXiv Detail & Related papers (2024-08-16T11:15:29Z)
- Hyperspectral and Multispectral Image Fusion Using the Conditional Denoising Diffusion Probabilistic Model [18.915369996829984]
We propose a deep fusion method based on the conditional denoising diffusion probabilistic model, called DDPM-Fus.
Experiments conducted on one indoor and two remote sensing datasets show the superiority of the proposed model when compared with other advanced deep learning-based fusion methods.
arXiv Detail & Related papers (2023-07-07T07:08:52Z)
- Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency [21.233354336608205]
We propose an unsupervised HSI and MSI fusion model based on the cycle consistency, called CycFusion.
The CycFusion learns the domain transformation between low spatial resolution HSI (LrHSI) and high spatial resolution MSI (HrMSI).
Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods.
arXiv Detail & Related papers (2023-07-07T06:47:15Z)
- Deep Posterior Distribution-based Embedding for Hyperspectral Image Super-resolution [75.24345439401166]
This paper focuses on how to embed the high-dimensional spatial-spectral information of hyperspectral (HS) images efficiently and effectively.
We formulate HS embedding as an approximation of the posterior distribution of a set of carefully-defined HS embedding events.
Then, we incorporate the proposed feature embedding scheme into a source-consistent super-resolution framework that is physically-interpretable.
Experiments over three common benchmark datasets demonstrate that PDE-Net achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-30T06:59:01Z)
- Superpixel Segmentation Based on Spatially Constrained Subspace Clustering [57.76302397774641]
We consider each representative region with independent semantic information as a subspace, and formulate superpixel segmentation as a subspace clustering problem.
We show that a simple integration of superpixel segmentation with the conventional subspace clustering does not effectively work due to the spatial correlation of the pixels.
We propose a novel convex locality-constrained subspace clustering model that is able to constrain the spatial adjacent pixels with similar attributes to be clustered into a superpixel.
arXiv Detail & Related papers (2020-12-11T06:18:36Z)
- Spectral Superresolution of Multispectral Imagery with Joint Sparse and Low-Rank Learning [29.834065415830764]
Spectral superresolution (SSR) of MS imagery is challenging and less investigated due to its high ill-posedness in inverse imaging.
We develop a simple but effective method, called joint sparse and low-rank learning (J-SLoL), to spectrally enhance MS images by jointly learning low-rank HS-MS dictionary pairs from overlapped regions.
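The coupled-dictionary idea in J-SLoL can be sketched in a few lines of NumPy: learn a paired dictionary (D_hs, D_ms) with shared codes on the spatially overlapped region, then enhance an MS pixel by solving for its codes against D_ms and decoding with D_hs. The plain least-squares solver below stands in for the paper's joint sparse and low-rank learning; all names and shapes are illustrative assumptions.

```python
import numpy as np

# Toy sketch of a coupled HS-MS dictionary pair with shared codes.
rng = np.random.default_rng(1)
n_atoms, bands_hs, bands_ms = 8, 100, 4
codes = rng.random((500, n_atoms))      # shared representation (overlap region)
D_hs = rng.random((n_atoms, bands_hs))  # HS dictionary
D_ms = rng.random((n_atoms, bands_ms))  # MS dictionary, coupled to D_hs

# Both observations of the overlap region share the same codes.
overlap_hs, overlap_ms = codes @ D_hs, codes @ D_ms

# For a new MS pixel: recover codes (least squares here, sparse/low-rank in
# the paper), then decode an HS spectrum with the coupled HS dictionary.
ms_pixel = codes[0] @ D_ms
c, *_ = np.linalg.lstsq(D_ms.T, ms_pixel, rcond=None)
hs_estimate = c @ D_hs
```

Because the MS system is heavily underdetermined (4 bands, 8 atoms), the minimum-norm least-squares codes fit the MS pixel exactly but need the sparsity and low-rank priors of J-SLoL to pin down a faithful HS spectrum.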
arXiv Detail & Related papers (2020-07-28T06:08:44Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.