Guided Deep Generative Model-based Spatial Regularization for Multiband
Imaging Inverse Problems
- URL: http://arxiv.org/abs/2306.17197v1
- Date: Thu, 29 Jun 2023 03:48:50 GMT
- Title: Guided Deep Generative Model-based Spatial Regularization for Multiband
Imaging Inverse Problems
- Authors: Min Zhao, Nicolas Dobigeon, Jie Chen
- Abstract summary: We propose a generic framework able to capitalize on an auxiliary acquisition of high spatial resolution to derive tailored data-driven spatial regularizations.
More precisely, the regularization is conceived as a deep generative network able to encode spatial semantic features contained in this auxiliary image of high spatial resolution.
- Score: 14.908906329456842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When adopting a model-based formulation, solving inverse problems encountered
in multiband imaging requires defining spatial and spectral regularizations.
In most works in the literature, spectral information is extracted directly from
the observations to derive data-driven spectral priors. Conversely,
the choice of the spatial regularization often boils down to the use of
conventional penalizations (e.g., total variation) promoting expected features
of the reconstructed image (e.g., piecewise constant). In this work, we propose
a generic framework able to capitalize on an auxiliary acquisition of high
spatial resolution to derive tailored data-driven spatial regularizations. This
approach leverages the ability of deep learning to extract high-level
features. More precisely, the regularization is conceived as a deep generative
network able to encode spatial semantic features contained in this auxiliary
image of high spatial resolution. To illustrate the versatility of this
approach, it is instantiated to conduct two particular tasks, namely multiband
image fusion and multiband image inpainting. Experimental results obtained on
these two tasks demonstrate the benefit of this class of informed
regularizations when compared to more conventional ones.
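To make the formulation concrete, the following is a minimal optimization sketch, assuming a known linear degradation operator (blurring and decimation for fusion, masking for inpainting) and a generator pretrained to encode the spatial semantics of the auxiliary high-resolution image; the joint latent/image optimization and all names are illustrative, not the authors' exact algorithm.

```python
# Minimal sketch (not the authors' implementation) of an inverse problem
# regularized by a guided deep generative network. Assumptions: `forward_op`
# is the known linear degradation (blur + decimation for fusion, a mask for
# inpainting), `adjoint_op` is its adjoint, and `generator` was trained to
# reproduce the spatial semantics of the auxiliary high-resolution image.
import torch

def reconstruct(y, forward_op, adjoint_op, generator, z_shape,
                lam=1e-2, n_iters=500, lr=1e-2):
    """Jointly optimize the image x and a latent code z so that x stays
    close to the range of the guided generator."""
    z = torch.zeros(z_shape, requires_grad=True)
    x = adjoint_op(y).detach().clone().requires_grad_(True)  # crude initialization
    optimizer = torch.optim.Adam([x, z], lr=lr)
    for _ in range(n_iters):
        optimizer.zero_grad()
        data_fidelity = 0.5 * torch.sum((forward_op(x) - y) ** 2)
        spatial_prior = 0.5 * torch.sum((x - generator(z)) ** 2)
        loss = data_fidelity + lam * spatial_prior
        loss.backward()
        optimizer.step()
    return x.detach()
```

The weight `lam` trades fidelity to the multiband observations against closeness to the range of the guided generator; in this sketch, a spectral prior would simply enter as an additional term in the loss.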
Related papers
- A Generalized Tensor Formulation for Hyperspectral Image Super-Resolution Under General Spatial Blurring [9.163087502142107]
Hyperspectral super-resolution is commonly accomplished by fusing a hyperspectral image of low spatial resolution with a multispectral image of high spatial resolution.
It is assumed in such tensor-based methods that the spatial-blurring operation that creates the observed hyperspectral image from the desired super-resolved image is separable into independent horizontal and vertical blurring.
Recent work has argued that such separable spatial degradation is ill-equipped to model the operation of real sensors which may exhibit, for example, anisotropic blurring.
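As an aside on the separability assumption, the toy sketch below (not taken from the paper) relies on the fact that a 2-D blur kernel factors into independent horizontal and vertical 1-D blurs exactly when it is a rank-1 outer product, which an anisotropic Gaussian rotated with respect to the image grid generally is not.

```python
# Toy illustration of separable vs. general spatial blurring: a 2-D kernel is
# separable iff it has (numerical) rank 1, i.e., it is an outer product of two
# 1-D kernels. The rotated anisotropic Gaussian below is not separable.
import numpy as np

def is_separable(kernel, tol=1e-10):
    singular_values = np.linalg.svd(kernel, compute_uv=False)
    return singular_values[1] < tol * singular_values[0]

t = np.arange(-3, 4)
g = np.exp(-0.5 * t**2)
g /= g.sum()
separable_kernel = np.outer(g, g)              # vertical blur times horizontal blur

xx, yy = np.meshgrid(t, t)
theta = np.deg2rad(30)                         # blur axes rotated w.r.t. the grid
u = xx * np.cos(theta) + yy * np.sin(theta)
v = -xx * np.sin(theta) + yy * np.cos(theta)
rotated_kernel = np.exp(-0.5 * (u**2 / 4.0 + v**2 / 0.5))
rotated_kernel /= rotated_kernel.sum()

print(is_separable(separable_kernel))          # True
print(is_separable(rotated_kernel))            # False
```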
arXiv Detail & Related papers (2024-09-27T13:23:17Z) - SpectralMamba: Efficient Mamba for Hyperspectral Image Classification [39.18999103115206]
Recurrent neural networks and Transformers have dominated most applications in hyperspectral (HS) imaging.
We propose SpectralMamba, a novel and efficient state-space-model-based deep learning framework for HS image classification.
We show that SpectralMamba offers promising gains from both the performance and efficiency perspectives.
arXiv Detail & Related papers (2024-04-12T14:12:03Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Hierarchical Normalization for Robust Monocular Depth Estimation [85.2304122536962]
We propose a novel multi-scale depth normalization method that hierarchically normalizes the depth representations based on spatial information and depth.
Our experiments show that the proposed normalization strategy remarkably outperforms previous normalization methods.
arXiv Detail & Related papers (2022-10-18T08:18:29Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for
Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs and, based on this specific architecture, a joint compensation loss function is designed so that the triple GANs can be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of prior on the HR target image.
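For intuition, a generic MAP-style objective for GISR could be written as below; the degradation operator and the two priors (gradient agreement with the guidance image plus a smoothness term) are illustrative stand-ins, not the specific priors proposed in the paper.

```python
# Sketch of a MAP objective for guided image super-resolution: data fidelity
# under a user-supplied degradation model plus two priors on the HR target,
# one coupling its gradients to the HR guidance image and one promoting
# smoothness. Both priors are placeholders chosen for illustration.
import torch

def spatial_gradients(img):
    """Finite-difference gradients of an image tensor of shape (C, H, W)."""
    dx = img[:, :, 1:] - img[:, :, :-1]
    dy = img[:, 1:, :] - img[:, :-1, :]
    return dx, dy

def map_objective(x_hr, y_lr, guide_hr, degrade, lam_guide=0.1, lam_tv=0.01):
    """Negative log-posterior: data term + guidance prior + smoothness prior."""
    data_term = 0.5 * torch.sum((degrade(x_hr) - y_lr) ** 2)
    gx_x, gy_x = spatial_gradients(x_hr)
    gx_g, gy_g = spatial_gradients(guide_hr.mean(dim=0, keepdim=True))
    guide_term = torch.sum((gx_x - gx_g) ** 2) + torch.sum((gy_x - gy_g) ** 2)
    smooth_term = torch.sum(torch.abs(gx_x)) + torch.sum(torch.abs(gy_x))
    return data_term + lam_guide * guide_term + lam_tv * smooth_term
```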
arXiv Detail & Related papers (2022-02-12T15:37:13Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Orthonormal Product Quantization Network for Scalable Face Image
Retrieval [14.583846619121427]
This paper integrates product quantization with orthonormal constraints into an end-to-end deep learning framework to retrieve face images.
A novel scheme that uses predefined orthonormal vectors as codewords is proposed to enhance quantization informativeness and reduce codeword redundancy.
Experiments are conducted on four commonly-used face datasets under both seen and unseen identities retrieval settings.
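The codeword construction can be illustrated with a toy product-quantization sketch in which each subspace uses a fixed orthonormal codebook obtained from a QR decomposition; this is a hypothetical illustration, not the paper's end-to-end network.

```python
# Product quantization with predefined orthonormal codewords: the feature
# vector is split into M subvectors, and each subvector is assigned to its
# nearest codeword from a fixed orthonormal codebook for that subspace.
import numpy as np

rng = np.random.default_rng(0)
M, d_sub, K = 4, 16, 16          # subspaces, subvector dimension, codewords per subspace
assert K <= d_sub                # orthonormal codewords cannot exceed the subspace dimension

# Predefined orthonormal codebooks: K orthonormal vectors per subspace via QR.
codebooks = []
for _ in range(M):
    q, _ = np.linalg.qr(rng.standard_normal((d_sub, K)))
    codebooks.append(q.T)        # shape (K, d_sub), rows are orthonormal

def encode(feature):
    """Return the index of the nearest codeword in each subspace."""
    codes = []
    for m, sub in enumerate(np.split(feature, M)):
        distances = np.linalg.norm(codebooks[m] - sub, axis=1)
        codes.append(int(np.argmin(distances)))
    return codes

print(encode(rng.standard_normal(M * d_sub)))   # four codeword indices, one per subspace
```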
arXiv Detail & Related papers (2021-07-01T09:30:39Z) - Unsupervised Spatial-spectral Network Learning for Hyperspectral
Compressive Snapshot Reconstruction [16.530040002441694]
We propose an unsupervised spatial-spectral network to reconstruct hyperspectral images only from the compressive snapshot measurement.
Our network can achieve better reconstruction results than the state-of-the-art methods.
arXiv Detail & Related papers (2020-12-18T12:29:04Z) - Hyperspectral Image Super-resolution via Deep Spatio-spectral
Convolutional Neural Networks [32.10057746890683]
We propose a simple and efficient architecture for deep convolutional neural networks to fuse a low-resolution hyperspectral image and a high-resolution multispectral image.
The proposed network architecture achieves the best performance compared with recent state-of-the-art hyperspectral image super-resolution approaches.
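A bare-bones fusion network in this spirit might upsample the low-resolution hyperspectral image, concatenate it with the high-resolution multispectral image, and predict a residual with a small CNN; the layer sizes and structure below are illustrative, not the paper's architecture.

```python
# Illustrative (not the paper's) CNN that fuses a low-resolution hyperspectral
# image with a high-resolution multispectral image by upsampling, channel-wise
# concatenation, and residual refinement.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusionCNN(nn.Module):
    def __init__(self, hs_bands=31, ms_bands=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(hs_bands + ms_bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, hs_bands, 3, padding=1),
        )

    def forward(self, lr_hs, hr_ms):
        # Upsample the LR hyperspectral image to the MS spatial size.
        up_hs = F.interpolate(lr_hs, size=hr_ms.shape[-2:], mode="bilinear",
                              align_corners=False)
        residual = self.body(torch.cat([up_hs, hr_ms], dim=1))
        return up_hs + residual   # residual correction over the upsampled HS image

# Example: a 31-band HSI at 16x16 fused with a 3-band MSI at 64x64.
fused = SimpleFusionCNN()(torch.randn(1, 31, 16, 16), torch.randn(1, 3, 64, 64))
print(fused.shape)                # torch.Size([1, 31, 64, 64])
```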
arXiv Detail & Related papers (2020-05-29T05:56:50Z) - Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral
Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to the super-resolution of hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.