Unsupervised Cycle-consistent Generative Adversarial Networks for
Pan-sharpening
- URL: http://arxiv.org/abs/2109.09395v2
- Date: Tue, 21 Sep 2021 16:06:55 GMT
- Title: Unsupervised Cycle-consistent Generative Adversarial Networks for
Pan-sharpening
- Authors: Huanyu Zhou, Qingjie Liu, and Yunhong Wang
- Abstract summary: We propose an unsupervised generative adversarial framework that learns from the full-scale images without ground truths, alleviating the scale-gap problem of supervised training.
We extract the modality-specific features from the PAN and MS images with a two-stream generator, perform fusion in the feature domain, and then reconstruct the pan-sharpened images.
Results demonstrate that the proposed method can greatly improve the pan-sharpening performance on the full-scale images.
- Score: 41.68141846006704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based pan-sharpening has received significant research interest
in recent years. Most of existing methods fall into the supervised learning
framework in which they down-sample the multi-spectral (MS) and panchromatic
(PAN) images and regard the original MS images as ground truths to form
training samples. Although impressive performance can be achieved, these methods
have difficulty generalizing to the original full-scale images due to the scale
gap, which limits their practicability. In this paper, we propose an
unsupervised generative adversarial framework that learns from the full-scale
images without the ground truths to alleviate this problem. We extract the
modality-specific features from the PAN and MS images with a two-stream
generator, perform fusion in the feature domain, and then reconstruct the
pan-sharpened images. Furthermore, we introduce a novel hybrid loss based on
the cycle-consistency and adversarial scheme to improve the performance.
Comparison experiments with the state-of-the-art methods are conducted on
GaoFen-2 and WorldView-3 satellites. Results demonstrate that the proposed
method can greatly improve the pan-sharpening performance on the full-scale
images, which clearly shows its practical value. Codes and datasets will be made
publicly available.
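The pipeline the abstract describes — modality-specific streams for PAN and MS, fusion in the feature domain, and cycle-style reconstruction constraints — can be sketched at toy scale with numpy. Everything here is an illustrative assumption (patch sizes, 1x1 linear maps standing in for the convolutional streams, mean-based degradations standing in for the learned cycle/adversarial terms), not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def stream(x, w):
    # Placeholder "feature extractor": a 1x1 linear map over channels,
    # standing in for one convolutional stream of the two-stream generator.
    return np.einsum('hwc,cd->hwd', x, w)

# Hypothetical inputs: a 64x64 PAN patch (1 band) and a 16x16 MS patch
# (4 bands), naively upsampled 4x to PAN resolution before fusion.
pan = rng.random((64, 64, 1))
ms = rng.random((16, 16, 4))
ms_up = np.repeat(np.repeat(ms, 4, axis=0), 4, axis=1)

# Two-stream feature extraction: separate weights per modality.
f_pan = stream(pan, rng.standard_normal((1, 8)))
f_ms = stream(ms_up, rng.standard_normal((4, 8)))

# Fusion in the feature domain: concatenate along channels, then project
# back to the MS band count to form the pan-sharpened output.
fused = np.concatenate([f_pan, f_ms], axis=-1)            # (64, 64, 16)
sharpened = np.einsum('hwc,cd->hwd',
                      fused, rng.standard_normal((16, 4)))  # (64, 64, 4)

# Cycle-style consistency terms (a rough stand-in for the hybrid loss):
# the output, degraded back to each input's domain, should match that input.
ms_back = sharpened.reshape(16, 4, 16, 4, 4).mean(axis=(1, 3))  # 4x downsample
spectral_loss = np.mean((ms_back - ms) ** 2)
pan_back = sharpened.mean(axis=-1, keepdims=True)               # gray proxy
spatial_loss = np.mean((pan_back - pan) ** 2)
```

In the actual method these consistency terms are combined with adversarial losses from discriminators on each modality; the sketch only shows where the two cycle constraints attach.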
Related papers
- Can We Generate Images with CoT? Let's Verify and Reinforce Image Generation Step by Step [77.86514804787622]
Chain-of-Thought (CoT) reasoning has been extensively explored in large models to tackle complex understanding tasks.
We provide the first comprehensive investigation of the potential of CoT reasoning to enhance autoregressive image generation.
We propose the Potential Assessment Reward Model (PARM) and PARM++, specialized for autoregressive image generation.
arXiv Detail & Related papers (2025-01-23T18:59:43Z)
- Hipandas: Hyperspectral Image Joint Denoising and Super-Resolution by Image Fusion with the Panchromatic Image [51.333064033152304]
Recently launched satellites can concurrently acquire HSIs and panchromatic (PAN) images.
Hipandas is a novel learning paradigm that reconstructs HRHS images from noisy low-resolution HSIs and high-resolution PAN images.
arXiv Detail & Related papers (2024-12-05T14:39:29Z)
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection [64.29452783056253]
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using image modality, other modalities like fine-grained noises and texts are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z)
- CrossDiff: Exploring Self-Supervised Representation of Pansharpening via Cross-Predictive Diffusion Model [42.39485365164292]
Fusion of a panchromatic (PAN) image and corresponding multispectral (MS) image is also known as pansharpening.
Due to the absence of high-resolution MS images, available deep-learning-based methods usually follow the paradigm of training at reduced resolution and testing at both reduced and full resolution.
We propose to explore the self-supervised representation of pansharpening by designing a cross-predictive diffusion model, named CrossDiff.
arXiv Detail & Related papers (2024-01-10T13:32:47Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
- LDP-Net: An Unsupervised Pansharpening Network Based on Learnable Degradation Processes [18.139096037746672]
We propose a novel unsupervised network based on learnable degradation processes, dubbed as LDP-Net.
A reblurring block and a graying block are designed to learn the corresponding degradation processes, respectively.
Experiments on Worldview2 and Worldview3 images demonstrate that our proposed LDP-Net can fuse PAN and LRMS images effectively without the help of HRMS samples.
arXiv Detail & Related papers (2021-11-24T13:21:22Z)
- PGMAN: An Unsupervised Generative Multi-adversarial Network for Pan-sharpening [46.84573725116611]
We propose an unsupervised framework that learns directly from the full-resolution images without any preprocessing.
We use a two-stream generator to extract the modality-specific features from the PAN and MS images, respectively, and develop a dual-discriminator to preserve the spectral and spatial information of the inputs when performing fusion.
arXiv Detail & Related papers (2020-12-16T16:21:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.