FGF-GAN: A Lightweight Generative Adversarial Network for Pansharpening
via Fast Guided Filter
- URL: http://arxiv.org/abs/2101.00062v1
- Date: Thu, 31 Dec 2020 20:27:17 GMT
- Title: FGF-GAN: A Lightweight Generative Adversarial Network for Pansharpening
via Fast Guided Filter
- Authors: Zixiang Zhao, Jiangshe Zhang, Shuang Xu, Kai Sun, Lu Huang, Junmin
Liu, Chunxia Zhang
- Abstract summary: We propose a generative adversarial network via the fast guided filter (FGF) for pansharpening.
In the generator, the traditional channel concatenation is replaced by FGF to better retain spatial information.
Our network generates high-quality HRMS images that surpass those of existing methods, while using fewer parameters.
- Score: 20.075225827771774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pansharpening is a widely used image enhancement technique for remote
sensing. Its principle is to fuse the input high-resolution single-channel
panchromatic (PAN) image and low-resolution multi-spectral image and to obtain
a high-resolution multi-spectral (HRMS) image. Existing deep learning
pansharpening methods have two shortcomings. First, features of the two input
images must be concatenated along the channel dimension to reconstruct the HRMS
image, which leaves the importance of the PAN image insufficiently prominent
and also incurs high computational cost. Second, the implicit information in
features is difficult to extract through a manually designed loss function. To this end,
we propose a generative adversarial network via the fast guided filter (FGF)
for pansharpening. In the generator, the traditional channel concatenation is
replaced by FGF to better retain the spatial information while reducing the number of
parameters. Meanwhile, the fusion objects can be highlighted by the spatial
attention module. In addition, the latent information of features can be
preserved effectively through adversarial training. Extensive experiments
show that our network generates high-quality HRMS images that surpass those of
existing methods, while using fewer parameters.
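The core operation the abstract describes — letting the PAN image guide the spatial detail of the fused result via a fast guided filter rather than channel concatenation — can be sketched in plain NumPy. This is a minimal illustration of the fast guided filter itself (He & Sun, 2015), not the authors' network; the function names and the parameters `r`, `eps`, and the subsampling ratio `s` are illustrative choices.

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)x(2r+1) window via integral images."""
    pad = np.pad(img, r, mode="edge")          # replicate borders
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col for subtraction
    h, w = img.shape
    win = 2 * r + 1
    summed = (c[win:win + h, win:win + w] - c[:h, win:win + w]
              - c[win:win + h, :w] + c[:h, :w])
    return summed / win ** 2

def fast_guided_filter(guide, src, r=4, eps=1e-2, s=2):
    """Fast guided filter: fit the local linear model q = a*guide + b on
    subsampled images, then upsample the coefficients (He & Sun, 2015)."""
    g_lr, p_lr = guide[::s, ::s], src[::s, ::s]
    r_lr = max(r // s, 1)
    mean_g = box_filter(g_lr, r_lr)
    mean_p = box_filter(p_lr, r_lr)
    cov_gp = box_filter(g_lr * p_lr, r_lr) - mean_g * mean_p
    var_g = box_filter(g_lr * g_lr, r_lr) - mean_g ** 2
    a = cov_gp / (var_g + eps)                 # eps regularizes flat regions
    b = mean_p - a * mean_g
    mean_a, mean_b = box_filter(a, r_lr), box_filter(b, r_lr)
    # nearest-neighbour upsample of the coefficients back to full resolution
    mean_a = np.kron(mean_a, np.ones((s, s)))[:guide.shape[0], :guide.shape[1]]
    mean_b = np.kron(mean_b, np.ones((s, s)))[:guide.shape[0], :guide.shape[1]]
    return mean_a * guide + mean_b
```

In a pansharpening setting, `guide` would be the high-resolution PAN band and `src` an upsampled multi-spectral band: the output inherits edges from the guide while keeping the local intensity of the source, which is the spatial-preservation property the paper exploits in place of concatenation.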
Related papers
- Panchromatic and Multispectral Image Fusion via Alternating Reverse
Filtering Network [23.74842833472348]
Pan-sharpening refers to super-resolving low-resolution (LR) multi-spectral (MS) images in the spatial domain.
We present a simple yet effective alternating reverse filtering network for pan-sharpening.
arXiv Detail & Related papers (2022-10-15T03:56:05Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for
Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net by guaranteeing the material consistency to enhance the detailed appearances of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z) - Memory-augmented Deep Unfolding Network for Guided Image
Super-resolution [67.83489239124557]
Guided image super-resolution (GISR) aims to obtain a high-resolution (HR) target image by enhancing the spatial resolution of a low-resolution (LR) target image under the guidance of a HR image.
Previous model-based methods mainly take the entire image as a whole and assume a prior distribution between the HR target image and the HR guidance image.
We propose a maximum a posteriori (MAP) estimation model for GISR with two types of priors on the HR target image.
arXiv Detail & Related papers (2022-02-12T15:37:13Z) - LDP-Net: An Unsupervised Pansharpening Network Based on Learnable
Degradation Processes [18.139096037746672]
We propose a novel unsupervised network based on learnable degradation processes, dubbed as LDP-Net.
A reblurring block and a graying block are designed to learn the corresponding degradation processes, respectively.
Experiments on Worldview2 and Worldview3 images demonstrate that our proposed LDP-Net can fuse PAN and LRMS images effectively without the help of HRMS samples.
arXiv Detail & Related papers (2021-11-24T13:21:22Z) - Multi-Attention Generative Adversarial Network for Remote Sensing Image
Super-Resolution [17.04588012373861]
Image super-resolution (SR) methods can generate remote sensing images with high spatial resolution without increasing the cost.
We propose a network based on the generative adversarial network (GAN) to generate high resolution remote sensing images.
arXiv Detail & Related papers (2021-07-14T08:06:19Z) - Hyperspectral Pansharpening Based on Improved Deep Image Prior and
Residual Reconstruction [64.10636296274168]
Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution.
Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets).
We propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers.
arXiv Detail & Related papers (2021-07-06T14:11:03Z) - PGMAN: An Unsupervised Generative Multi-adversarial Network for
Pan-sharpening [46.84573725116611]
We propose an unsupervised framework that learns directly from the full-resolution images without any preprocessing.
We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual-discriminator to preserve the spectral and spatial information of the inputs when performing fusion.
arXiv Detail & Related papers (2020-12-16T16:21:03Z) - Hyperspectral Image Super-resolution via Deep Progressive Zero-centric
Residual Learning [62.52242684874278]
Cross-modality distribution of spatial and spectral information makes the problem challenging.
We propose a novel lightweight deep neural network-based framework, namely PZRes-Net.
Our framework learns a high-resolution, zero-centric residual image, which contains the high-frequency spatial details of the scene.
arXiv Detail & Related papers (2020-06-18T06:32:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.