LDP-Net: An Unsupervised Pansharpening Network Based on Learnable
Degradation Processes
- URL: http://arxiv.org/abs/2111.12483v1
- Date: Wed, 24 Nov 2021 13:21:22 GMT
- Authors: Jiahui Ni, Zhimin Shao, Zhongzhou Zhang, Mingzheng Hou, Jiliu Zhou,
Leyuan Fang, Yi Zhang
- Abstract summary: We propose a novel unsupervised network based on learnable degradation processes, dubbed LDP-Net.
A reblurring block and a graying block are designed to learn the corresponding degradation processes.
Experiments on WorldView-2 and WorldView-3 images demonstrate that our proposed LDP-Net can fuse PAN and LRMS images effectively without the help of HRMS samples.
- Score: 18.139096037746672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pansharpening in remote sensing aims at acquiring a
high-resolution multispectral (HRMS) image directly by fusing a
low-resolution multispectral (LRMS) image with a panchromatic (PAN) image.
The main concern is how to effectively combine the rich spectral
information of the LRMS image with the abundant spatial information of the
PAN image. Recently, many methods based on deep learning have been proposed
for the pansharpening task. However, these methods usually have two main
drawbacks: 1) requiring HRMS images for supervised learning; and 2) simply
ignoring the latent relation between the MS and PAN images and fusing them
directly. To solve these problems, we propose a novel unsupervised network
based on learnable degradation processes, dubbed LDP-Net. A reblurring
block and a graying block are designed to learn the corresponding
degradation processes. In addition, a novel hybrid loss function is
proposed to constrain both spatial and spectral consistency between the
pansharpened image and the PAN and LRMS images at different resolutions.
Experiments on WorldView-2 and WorldView-3 images demonstrate that our
proposed LDP-Net can fuse PAN and LRMS images effectively without the help
of HRMS samples, achieving promising performance in terms of both
qualitative visual effects and quantitative metrics.
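The degradation-consistency idea behind the hybrid loss can be sketched in a few lines. This is a minimal illustration, not LDP-Net's actual implementation: the function names (`reblur_downsample`, `gray`, `hybrid_loss`) are hypothetical, a fixed box blur stands in for the learnable reblurring block, and a fixed weighted band sum stands in for the learnable graying block. The key point it shows is that the pansharpened output, pushed back through each degradation, should match the observed LRMS and PAN inputs:

```python
import numpy as np

def reblur_downsample(hrms, scale=4):
    """Spectral-consistency path: blur and downsample the pansharpened
    image back toward LRMS resolution (box filter stands in for the
    paper's learnable reblurring block)."""
    c, h, w = hrms.shape
    return hrms.reshape(c, h // scale, scale, w // scale, scale).mean(axis=(2, 4))

def gray(hrms, weights):
    """Spatial-consistency path: a weighted sum over spectral bands
    stands in for the paper's learnable graying block."""
    return np.tensordot(weights, hrms, axes=1)  # (C,H,W) -> (H,W)

def hybrid_loss(pansharpened, lrms, pan, weights, alpha=1.0, beta=1.0):
    """Toy hybrid loss: MSE between each degraded version of the
    pansharpened image and the corresponding observed input."""
    spectral = np.mean((reblur_downsample(pansharpened) - lrms) ** 2)
    spatial = np.mean((gray(pansharpened, weights) - pan) ** 2)
    return alpha * spectral + beta * spatial
```

When the two degradation models are exact, a perfectly consistent pansharpened image drives both terms to zero, which is what makes training possible without HRMS ground truth.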
Related papers
- Variational Zero-shot Multispectral Pansharpening [43.881891055500496]
Pansharpening aims to generate a high spatial resolution multispectral image (HRMS) by fusing a low spatial resolution multispectral image (LRMS) and a panchromatic image (PAN)
Existing deep learning-based methods are unsuitable since they rely on many training pairs.
We propose a zero-shot pansharpening method by introducing a neural network into the optimization objective.
arXiv Detail & Related papers (2024-07-09T07:59:34Z) - Learning from Multi-Perception Features for Real-World Image
Super-resolution [87.71135803794519]
We propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images.
Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information.
We also introduce a contrastive regularization term (CR) that improves the model's learning capability.
arXiv Detail & Related papers (2023-05-26T07:35:49Z) - PanFlowNet: A Flow-Based Deep Network for Pan-sharpening [41.9419544446451]
Pan-sharpening aims to generate a high-resolution multispectral (HRMS) image by integrating the spectral information of a low-resolution multispectral (LRMS) image with the texture details of a high-resolution panchromatic (PAN) image.
Existing deep learning-based methods recover only one HRMS image from the LRMS image and PAN image using a deterministic mapping.
We propose a flow-based pan-sharpening network (PanFlowNet) to directly learn the conditional distribution of HRMS image given LRMS image and PAN image.
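To make the "conditional distribution" framing concrete: a normalizing flow is an invertible map, so different latent samples drawn for the same LRMS/PAN pair yield different plausible HRMS images. The sketch below is a deliberately simplified stand-in for PanFlowNet (one conditional affine step with hand-set scalars in place of learned coupling networks; `conditional_affine` and its conditioning rule are hypothetical):

```python
import numpy as np

def conditional_affine(z, lrms_up, pan, log_scale, shift):
    """One conditional affine flow step: x = z * exp(s(c)) + t(c),
    with scale/shift driven by the conditioning images. Real flows
    learn s and t with neural networks; here they are toy scalars."""
    cond = lrms_up + pan  # toy conditioning signal (broadcast over bands)
    return z * np.exp(log_scale * cond) + shift * cond

def inverse_affine(x, lrms_up, pan, log_scale, shift):
    """Exact inverse of the step above; invertibility is what lets a
    flow define a tractable conditional density over HRMS images."""
    cond = lrms_up + pan
    return (x - shift * cond) * np.exp(-log_scale * cond)
```

Sampling several latent `z` tensors and mapping each through the flow gives multiple HRMS candidates for one input pair, in contrast to the single output of a deterministic network.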
arXiv Detail & Related papers (2023-05-12T21:34:35Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for
Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image.
arXiv Detail & Related papers (2022-02-12T15:37:13Z) - Unsupervised Cycle-consistent Generative Adversarial Networks for
Pan-sharpening [41.68141846006704]
We propose an unsupervised generative adversarial framework that learns from the full-scale images without ground truths, alleviating the reliance on supervised reference images.
We extract the modality-specific features from the PAN and MS images with a two-stream generator, perform fusion in the feature domain, and then reconstruct the pan-sharpened images.
Results demonstrate that the proposed method can greatly improve the pan-sharpening performance on the full-scale images.
arXiv Detail & Related papers (2021-09-20T09:43:24Z) - Fast and High-Quality Blind Multi-Spectral Image Pansharpening [48.68143888901669]
We propose a fast approach to blind pansharpening and achieve state-of-the-art image reconstruction quality.
To achieve fast blind pansharpening, we decouple the solution of the blur kernel and of the HRMS image.
Our algorithm outperforms state-of-the-art model-based counterparts in terms of both computational time and reconstruction quality.
arXiv Detail & Related papers (2021-03-17T23:12:14Z) - PGMAN: An Unsupervised Generative Multi-adversarial Network for
Pan-sharpening [46.84573725116611]
We propose an unsupervised framework that learns directly from the full-resolution images without any preprocessing.
We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual-discriminator to preserve the spectral and spatial information of the inputs when performing fusion.
arXiv Detail & Related papers (2020-12-16T16:21:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.