Unsupervised Cycle-consistent Generative Adversarial Networks for
Pan-sharpening
- URL: http://arxiv.org/abs/2109.09395v2
- Date: Tue, 21 Sep 2021 16:06:55 GMT
- Title: Unsupervised Cycle-consistent Generative Adversarial Networks for
Pan-sharpening
- Authors: Huanyu Zhou, Qingjie Liu, and Yunhong Wang
- Abstract summary: We propose an unsupervised generative adversarial framework that learns from the full-scale images without ground truths to alleviate the scale-gap problem of supervised training.
We extract the modality-specific features from the PAN and MS images with a two-stream generator, perform fusion in the feature domain, and then reconstruct the pan-sharpened images.
Results demonstrate that the proposed method can greatly improve the pan-sharpening performance on the full-scale images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based pan-sharpening has received significant research interest
in recent years. Most existing methods fall into the supervised learning
framework in which they down-sample the multi-spectral (MS) and panchromatic
(PAN) images and regard the original MS images as ground truths to form
training samples. Although impressive performance can be achieved, these methods
have difficulty generalizing to the original full-scale images due to the scale
gap, which limits their practicality. In this paper, we propose an
unsupervised generative adversarial framework that learns from the full-scale
images without the ground truths to alleviate this problem. We extract the
modality-specific features from the PAN and MS images with a two-stream
generator, perform fusion in the feature domain, and then reconstruct the
pan-sharpened images. Furthermore, we introduce a novel hybrid loss based on
the cycle-consistency and adversarial scheme to improve the performance.
Comparison experiments with the state-of-the-art methods are conducted on
GaoFen-2 and WorldView-3 satellites. Results demonstrate that the proposed
method can greatly improve the pan-sharpening performance on the full-scale
images, which clearly shows its practical value. Code and datasets will be made
publicly available.
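The hybrid cycle-consistency loss described above can be sketched as follows: the pan-sharpened output is degraded back into each input domain and compared against the original MS and PAN images. This is a minimal NumPy illustration only; the 4x resolution ratio, average-pool spatial degradation, and uniform band-average PAN response are assumptions for the sketch, and the paper's actual loss also includes adversarial terms that are omitted here.

```python
import numpy as np

def downsample(img, factor=4):
    """Spatial degradation: average-pool an (h, w, c) image by `factor`
    as a proxy for the MS sensor's lower resolution."""
    h, w, c = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def to_gray(img, weights=None):
    """Spectral degradation: weighted band average as a proxy for the
    PAN sensor's spectral response (uniform weights assumed)."""
    c = img.shape[-1]
    w = np.full(c, 1.0 / c) if weights is None else weights
    return img @ w

def cycle_consistency_loss(fused, ms, pan):
    """L1 losses after mapping the fused image back to each input domain."""
    spectral = np.abs(downsample(fused) - ms).mean()  # fused, downsampled, should match MS
    spatial = np.abs(to_gray(fused) - pan).mean()     # fused, grayed, should match PAN
    return spectral + spatial
```

A fused image that is perfectly consistent with both inputs yields zero loss; any deviation in either domain increases it, which is what lets the network train without full-resolution ground truths.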
Related papers
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection [64.29452783056253]
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using the image modality, other modalities such as fine-grained noise and text are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z) - CrossDiff: Exploring Self-Supervised Representation of Pansharpening via
Cross-Predictive Diffusion Model [42.39485365164292]
Fusion of a panchromatic (PAN) image and its corresponding multispectral (MS) image is known as pansharpening.
Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution.
We propose to explore the self-supervised representation of pansharpening by designing a cross-predictive diffusion model, named CrossDiff.
arXiv Detail & Related papers (2024-01-10T13:32:47Z) - Unsupervised Deep Learning-based Pansharpening with Jointly-Enhanced
Spectral and Spatial Fidelity [4.425982186154401]
We propose a new deep learning-based pansharpening model that fully exploits the potential of this approach.
The proposed model features a novel loss function that jointly promotes the spectral and spatial quality of the pansharpened data.
Experiments on a large variety of test images, performed in challenging scenarios, demonstrate that the proposed method compares favorably with the state of the art.
arXiv Detail & Related papers (2023-07-26T17:25:28Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for
Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - LDP-Net: An Unsupervised Pansharpening Network Based on Learnable
Degradation Processes [18.139096037746672]
We propose a novel unsupervised network based on learnable degradation processes, dubbed LDP-Net.
A reblurring block and a graying block are designed to learn the spatial and spectral degradation processes, respectively.
Experiments on Worldview2 and Worldview3 images demonstrate that our proposed LDP-Net can fuse PAN and LRMS images effectively without the help of HRMS samples.
arXiv Detail & Related papers (2021-11-24T13:21:22Z) - More Photos are All You Need: Semi-Supervised Learning for Fine-Grained
Sketch Based Image Retrieval [112.1756171062067]
We introduce a novel semi-supervised framework for cross-modal retrieval.
At the centre of our design is a sequential photo-to-sketch generation model.
We also introduce a discriminator guided mechanism to guide against unfaithful generation.
arXiv Detail & Related papers (2021-03-25T17:27:08Z) - PGMAN: An Unsupervised Generative Multi-adversarial Network for
Pan-sharpening [46.84573725116611]
We propose an unsupervised framework that learns directly from the full-resolution images without any preprocessing.
We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual-discriminator to preserve the spectral and spatial information of the inputs when performing fusion.
arXiv Detail & Related papers (2020-12-16T16:21:03Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
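The two-stream extract-fuse-reconstruct pipeline shared by the main paper and PGMAN above can be illustrated with a minimal detail-injection proxy in NumPy. The real streams are learned CNNs operating in a feature domain; the high-pass PAN stream, nearest-neighbour upsampling, and 4x resolution ratio below are illustrative assumptions, not the papers' architectures.

```python
import numpy as np

def upsample(img, factor=4):
    # Nearest-neighbour upsampling of an (h, w, c) MS image to PAN resolution.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def pan_stream(pan, factor=4):
    # Spatial stream: high-frequency detail = PAN minus its block-averaged
    # low-pass version (zero-mean within each factor x factor block).
    h, w = pan.shape
    low = pan[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return pan - np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)

def fuse(ms, pan, factor=4, gain=1.0):
    # "Fusion in the feature domain": inject PAN detail into every
    # upsampled MS band, then return the reconstructed sharpened image.
    detail = pan_stream(pan, factor)
    return upsample(ms, factor) + gain * detail[..., None]
```

Because the injected detail is zero-mean within each block, average-pooling the fused result recovers the original MS image, so this toy pipeline is spectrally consistent by construction; the learned methods above replace these fixed operators with trained generators and enforce consistency through their losses instead.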
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.