SSIF: Learning Continuous Image Representation for Spatial-Spectral
Super-Resolution
- URL: http://arxiv.org/abs/2310.00413v1
- Date: Sat, 30 Sep 2023 15:23:30 GMT
- Title: SSIF: Learning Continuous Image Representation for Spatial-Spectral
Super-Resolution
- Authors: Gengchen Mai, Ni Lao, Weiwei Sun, Yuchi Ma, Jiaming Song, Chenlin
Meng, Hongxu Ma, Jinmeng Rao, Ziyuan Li, Stefano Ermon
- Abstract summary: We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
- Score: 73.46167948298041
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Existing digital sensors capture images at fixed spatial and spectral
resolutions (e.g., RGB, multispectral, and hyperspectral images), and each
combination requires bespoke machine learning models. Neural Implicit Functions
partially overcome the spatial resolution challenge by representing an image in
a resolution-independent way. However, they still operate at fixed, pre-defined
spectral resolutions. To address this challenge, we propose Spatial-Spectral
Implicit Function (SSIF), a neural implicit model that represents an image as a
function of both continuous pixel coordinates in the spatial domain and
continuous wavelengths in the spectral domain. We empirically demonstrate the
effectiveness of SSIF on two challenging spatio-spectral super-resolution
benchmarks. We observe that SSIF consistently outperforms state-of-the-art
baselines even when the baselines are allowed to train separate models at each
spectral resolution. We show that SSIF generalizes well to both unseen spatial
resolutions and spectral resolutions. Moreover, SSIF can generate
high-resolution images that improve the performance of downstream tasks (e.g.,
land use classification) by 1.7%-7%.
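As a rough, non-official sketch of the core idea, the implicit model can be viewed as an MLP conditioned on encoder features and queried with a continuous spatial coordinate plus a continuous wavelength; the class name, feature dimension, and normalization choices below are illustrative assumptions, not the released SSIF code.

```python
# Minimal sketch (not the official SSIF implementation): an implicit decoder that
# predicts intensity as a function of continuous spatial coordinates and a
# continuous wavelength, conditioned on a latent feature from an image encoder.
import torch
import torch.nn as nn

class SpatialSpectralImplicitFunction(nn.Module):  # class name is illustrative
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        # input: latent feature + 2-D spatial coordinate + 1-D wavelength
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted intensity at (x, y, lambda)
        )

    def forward(self, feat, xy, wavelength):
        # feat: (N, feat_dim) local features sampled at the query locations
        # xy: (N, 2) continuous pixel coordinates, e.g. normalized to [-1, 1]
        # wavelength: (N, 1) continuous wavelength, e.g. normalized to [0, 1]
        return self.mlp(torch.cat([feat, xy, wavelength], dim=-1))

# Because the decoder is queried point-wise, the same trained model can be
# evaluated on arbitrary spatial grids and arbitrary sets of band centers.
model = SpatialSpectralImplicitFunction()
feat = torch.randn(4096, 64)
xy = torch.rand(4096, 2) * 2 - 1
lam = torch.rand(4096, 1)
pred = model(feat, xy, lam)  # (4096, 1) intensities
```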
Related papers
- Physics-Inspired Degradation Models for Hyperspectral Image Fusion [61.743696362028246]
Most fusion methods solely focus on the fusion algorithm itself and overlook the degradation models.
We propose physics-inspired degradation models (PIDM) to model the degradation of LR-HSI and HR-MSI.
Our proposed PIDM can boost the fusion performance of existing fusion methods in practical scenarios.
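For context, a minimal sketch of the two degradations such methods model, spatial blurring plus downsampling to produce the LR-HSI and a spectral response function to produce the HR-MSI, is given below; the function names and tensor shapes are assumptions, and PIDM's learned, physics-inspired parameterization is described in the paper itself.

```python
# Generic sketch of the two degradations typically modeled in LR-HSI / HR-MSI fusion;
# PIDM learns physics-inspired versions of these, so this is illustrative only.
import torch
import torch.nn.functional as F

def spatial_degradation(hr_hsi, blur_kernel, scale):
    # hr_hsi: (1, C, H, W) latent high-resolution hyperspectral image
    # blur_kernel: (1, 1, k, k) point spread function, applied per band
    c, k = hr_hsi.shape[1], blur_kernel.shape[-1]
    weight = blur_kernel.repeat(c, 1, 1, 1)                       # one kernel per band
    blurred = F.conv2d(hr_hsi, weight, padding=k // 2, groups=c)  # spatial blur
    return blurred[..., ::scale, ::scale]                         # downsample -> LR-HSI

def spectral_degradation(hr_hsi, srf):
    # srf: (B, C) spectral response functions mapping C narrow bands to B broad bands
    return torch.einsum("bc,nchw->nbhw", srf, hr_hsi)             # -> HR-MSI
```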
arXiv Detail & Related papers (2024-02-04T09:07:28Z)
- Superpixel-based and Spatially-regularized Diffusion Learning for Unsupervised Hyperspectral Image Clustering [4.643572021927615]
This paper introduces a novel unsupervised HSI clustering algorithm, Superpixel-based and Spatially-regularized Diffusion Learning (S2DL).
S2DL incorporates rich spatial information encoded in HSIs into diffusion geometry-based clustering.
S2DL's performance is illustrated with extensive experiments on three publicly available, real-world HSIs.
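A generic sketch of a superpixel-plus-diffusion clustering pipeline of this flavor is given below; the graph construction, parameters, and library calls (skimage SLIC, scikit-learn KMeans) are illustrative assumptions and do not reproduce S2DL exactly.

```python
# Rough sketch of a superpixel + diffusion-geometry clustering pipeline for an HSI cube;
# the specific construction below is an assumption, not the S2DL algorithm itself.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

def cluster_hsi(cube, n_segments=500, n_clusters=6, t=8):
    # cube: (H, W, C) hyperspectral image
    labels = slic(cube, n_segments=n_segments, compactness=0.1, channel_axis=2)
    uniq = np.unique(labels)
    # mean spectrum per superpixel injects spatial regularity into the features
    feats = np.stack([cube[labels == s].mean(axis=0) for s in uniq])
    # diffusion operator: row-normalized k-NN affinity raised to a power t
    d = kneighbors_graph(feats, n_neighbors=10, mode="distance").toarray()
    sigma = np.median(d[d > 0])
    affinity = np.exp(-(d / sigma) ** 2) * (d > 0)
    P = affinity / affinity.sum(axis=1, keepdims=True)
    diffused = np.linalg.matrix_power(P, t) @ feats
    sp_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(diffused)
    # broadcast superpixel labels back to the pixel grid
    return sp_labels[np.searchsorted(uniq, labels)]
```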
arXiv Detail & Related papers (2023-12-24T09:54:40Z)
- Unsupervised Hyperspectral and Multispectral Images Fusion Based on the Cycle Consistency [21.233354336608205]
We propose an unsupervised HSI and MSI fusion model based on cycle consistency, called CycFusion.
CycFusion learns the domain transformation between the low spatial resolution HSI (LrHSI) and the high spatial resolution MSI (HrMSI).
Experiments conducted on several datasets show that our proposed model outperforms all compared unsupervised fusion methods.
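A minimal sketch of a cycle-consistency term between the two observed domains is shown below; G_hsi2msi and G_msi2hsi are placeholder networks assumed to handle the spatial/spectral resolution change, and this term would be only one part of a full unsupervised fusion objective.

```python
# Hedged sketch of a cycle-consistency loss between LrHSI and HrMSI domains;
# the placeholder generators are not CycFusion's actual architecture.
import torch.nn.functional as F

def cycle_consistency_loss(G_hsi2msi, G_msi2hsi, lr_hsi, hr_msi):
    fake_msi = G_hsi2msi(lr_hsi)   # LrHSI -> (pseudo) HrMSI domain
    fake_hsi = G_msi2hsi(hr_msi)   # HrMSI -> (pseudo) LrHSI domain
    # mapping there and back should reproduce the original observation
    cyc_hsi = F.l1_loss(G_msi2hsi(fake_msi), lr_hsi)
    cyc_msi = F.l1_loss(G_hsi2msi(fake_hsi), hr_msi)
    return cyc_hsi + cyc_msi
```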
arXiv Detail & Related papers (2023-07-07T06:47:15Z)
- Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net, guaranteeing material consistency to enhance the detailed appearance of the restored HS product.
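A very rough sketch of the decoupling step alone, two parallel encoders producing a common and a sensor-specific component, is given below; the module names and layer choices are assumptions, not DC-Net's architecture.

```python
# Very rough sketch of the "decouple" step only: each input is split into a component
# shared across sensors and a sensor-specific one (placeholder modules, not DC-Net's code).
import torch.nn as nn

class DecoupleEncoder(nn.Module):
    def __init__(self, in_ch, feat=64):
        super().__init__()
        self.common = nn.Conv2d(in_ch, feat, 3, padding=1)    # cross-sensor component
        self.specific = nn.Conv2d(in_ch, feat, 3, padding=1)  # sensor-specific component

    def forward(self, x):
        return self.common(x), self.specific(x)

# A coupled fusion stage and a self-supervised material-consistency constraint on the
# restored product would follow in the full framework.
```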
arXiv Detail & Related papers (2022-05-07T23:40:36Z)
- HDNet: High-resolution Dual-domain Learning for Spectral Compressive Imaging [138.04956118993934]
We propose a high-resolution dual-domain learning network (HDNet) for HSI reconstruction.
On the one hand, the proposed HR spatial-spectral attention module with its efficient feature fusion provides continuous and fine pixel-level features.
On the other hand, frequency domain learning (FDL) is introduced for HSI reconstruction to narrow the frequency domain discrepancy.
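As an illustration of frequency-domain learning in this setting, a simple FFT-based discrepancy term is sketched below; the exact form of HDNet's FDL loss may differ.

```python
# Simple FFT-based discrepancy term illustrating frequency-domain learning;
# illustrative only, not HDNet's exact FDL formulation.
import torch

def frequency_domain_loss(pred, target):
    # pred, target: (N, C, H, W) reconstructed and reference HSIs
    return (torch.fft.fft2(pred) - torch.fft.fft2(target)).abs().mean()
```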
arXiv Detail & Related papers (2022-03-04T06:37:45Z)
- A Latent Encoder Coupled Generative Adversarial Network (LE-GAN) for Efficient Hyperspectral Image Super-resolution [3.1023808510465627]
Generative adversarial networks (GANs) have proven to be an effective deep learning framework for image super-resolution.
To alleviate the problem of mode collapse, this work proposes a novel GAN model coupled with a latent encoder (LE-GAN).
LE-GAN can map the generated spectral-spatial features from the image space to the latent space and produce a coupling component to regularise the generated samples.
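A hedged sketch of such latent-encoder regularisation is given below; generator and latent_encoder are placeholder modules, and the MSE coupling term captures the general idea rather than LE-GAN's exact formulation.

```python
# Hedged sketch of latent-encoder regularisation: generated outputs are mapped back to
# the latent space and pulled towards the original code (placeholder modules, not LE-GAN).
import torch.nn.functional as F

def latent_coupling_loss(generator, latent_encoder, z, lr_input):
    fake_hr = generator(lr_input, z)       # generated high-resolution HSI
    z_rec = latent_encoder(fake_hr)        # generated features mapped to latent space
    return F.mse_loss(z_rec, z)            # coupling term added to the adversarial loss
```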
arXiv Detail & Related papers (2021-11-16T18:40:19Z)
- Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high resolution and zero-centric residual image, which contains high-frequency spatial details of the scene.
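One simple way to realise a zero-mean ("zero-centric") residual on top of an upsampled low-resolution input is sketched below; the per-band mean subtraction is an illustrative assumption, not necessarily the exact constraint used in PZRes-Net.

```python
# Sketch of residual learning with an explicitly zero-mean residual added to an
# upsampled low-resolution input (residual_net is a placeholder network).
import torch.nn.functional as F

def reconstruct(lr_hsi, residual_net, scale):
    base = F.interpolate(lr_hsi, scale_factor=scale, mode="bicubic", align_corners=False)
    residual = residual_net(base)
    residual = residual - residual.mean(dim=(-2, -1), keepdim=True)  # zero mean per band
    return base + residual  # high-frequency details ride on the smooth upsampled base
```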
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
- Hyperspectral Image Super-resolution via Deep Spatio-spectral Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed network architecture achieves the best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
arXiv Detail & Related papers (2020-05-29T05:56:50Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual learning based single gray/RGB image super-resolution approaches to the super-resolution of hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
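A toy spatial-spectral residual block in this spirit, a grouped spatial convolution followed by a 1x1 spectral convolution, is sketched below; it is an assumption about the general design pattern, not the SSPN architecture itself.

```python
# Toy spatial-spectral residual block (illustrative design pattern, not SSPN):
# a grouped 3x3 convolution models spatial structure within band groups, and a 1x1
# convolution models correlation across the spectral bands.
import torch.nn as nn

class SpatialSpectralBlock(nn.Module):
    def __init__(self, channels, groups=4):   # channels must be divisible by groups
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
        self.spectral = nn.Conv2d(channels, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return x + self.spectral(self.act(self.spatial(x)))
```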
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.