Remote sensing image fusion based on Bayesian GAN
- URL: http://arxiv.org/abs/2009.09465v1
- Date: Sun, 20 Sep 2020 16:15:51 GMT
- Title: Remote sensing image fusion based on Bayesian GAN
- Authors: Junfu Chen, Yue Pan, Yang Chen
- Abstract summary: We build a two-stream generator network with PAN and MS images as input, consisting of three parts: feature extraction, feature fusion, and image reconstruction.
We leverage a Markov discriminator to enhance the generator's ability to reconstruct the fused image, so that the result retains more detail.
Experiments on the QuickBird and WorldView datasets show that the proposed model can effectively fuse PAN and MS images.
- Score: 9.852262451235472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remote sensing image fusion technology (pan-sharpening) is an important means
of improving the information capacity of remote sensing images. Inspired by the
efficient parameter-space posterior sampling of Bayesian neural networks, in
this paper we propose a Bayesian Generative Adversarial Network based on
Preconditioned Stochastic Gradient Langevin Dynamics (PGSLD-BGAN) to improve
pan-sharpening performance. Unlike many traditional generative models that
consider only one optimal solution (which might be locally optimal), the
proposed PGSLD-BGAN performs Bayesian inference on the network parameters and
explores the generator's posterior distribution, which assists in selecting
appropriate generator parameters. First, we build a two-stream generator
network with PAN and MS images as input, consisting of three parts: feature
extraction, feature fusion, and image reconstruction. Then, we leverage a
Markov discriminator to enhance the generator's ability to reconstruct the
fused image, so that the result retains more detail. Finally, by introducing a
Preconditioned Stochastic Gradient Langevin Dynamics policy, we perform
Bayesian inference on the generator network. Experiments on the QuickBird and
WorldView datasets show that the proposed model can effectively fuse PAN and MS
images and is competitive with, or even superior to, the state of the art in
terms of subjective and objective metrics.
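A minimal PyTorch sketch may help make the pipeline above concrete: a two-stream generator (per-input feature extraction, feature fusion, image reconstruction) paired with a patch-level discriminator, since a Markov discriminator is typically realized PatchGAN-style, scoring local patches rather than whole images. Module names, channel counts, and layer choices here are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class TwoStreamGenerator(nn.Module):
    """Two streams extract features from the PAN and (upsampled) MS inputs,
    a fusion block merges them, and a reconstruction head emits the
    pan-sharpened MS image. Channel counts are illustrative."""

    def __init__(self, ms_bands=4, feat=64):
        super().__init__()
        # Feature extraction: one convolutional stream per modality.
        self.pan_stream = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.ms_stream = nn.Sequential(
            nn.Conv2d(ms_bands, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        # Feature fusion: concatenate the two streams and mix them.
        self.fusion = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        # Image reconstruction.
        self.reconstruct = nn.Conv2d(feat, ms_bands, 3, padding=1)

    def forward(self, pan, ms_up):
        # pan: (B, 1, H, W); ms_up: (B, ms_bands, H, W), MS upsampled to PAN size.
        fused = torch.cat([self.pan_stream(pan), self.ms_stream(ms_up)], dim=1)
        return self.reconstruct(self.fusion(fused))


class MarkovDiscriminator(nn.Module):
    """PatchGAN-style critic: emits a grid of real/fake logits, each judging
    one local patch, which pushes the generator toward sharper local detail."""

    def __init__(self, in_ch=5, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, 2 * feat, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(2 * feat, 1, 4, padding=1))  # per-patch logits

    def forward(self, pan, image):
        # Condition on PAN: in_ch must equal 1 + ms_bands.
        return self.net(torch.cat([pan, image], dim=1))
```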
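The Bayesian piece amounts to swapping the generator's point-estimate optimizer for a sampler. Below is a hedged sketch of one Preconditioned SGLD update following the standard pSGLD formulation (an RMSprop-style diagonal preconditioner plus injected Gaussian noise, with the usual small correction term omitted); it sketches the general technique, not the paper's exact procedure.

```python
import torch


@torch.no_grad()
def psgld_step(params, state, lr=1e-3, alpha=0.99, eps=1e-5):
    """One pSGLD update. Each p.grad must already hold a stochastic
    estimate of the negative log-posterior gradient (minibatch loss
    scaled to the full dataset). Hyperparameters are illustrative."""
    for i, p in enumerate(params):
        g = p.grad
        v = state.setdefault(i, torch.zeros_like(p))         # running 2nd moment
        v.mul_(alpha).addcmul_(g, g, value=1.0 - alpha)      # V <- a*V + (1-a)*g^2
        precond = 1.0 / (v.sqrt() + eps)                     # diagonal preconditioner G
        p.add_(-0.5 * lr * precond * g)                      # preconditioned drift
        p.add_(torch.randn_like(p) * (lr * precond).sqrt())  # Langevin noise ~ N(0, lr*G)
```

Used in place of the generator's usual optimizer step, this produces a chain of weight samples; keeping several generators drawn along the chain and averaging their fused outputs, or selecting one by a validation metric, is one way to exploit the posterior the abstract describes.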
Related papers
- D$^3$: Scaling Up Deepfake Detection by Learning from Discrepancy [11.239248133240126]
We seek a step toward a universal deepfake detection system with better generalization and robustness.
We propose our Discrepancy Deepfake Detector framework, whose core idea is to learn the universal artifacts from multiple generators.
Our framework achieves a 5.3% accuracy improvement in the OOD testing compared to the current SOTA methods while maintaining the ID performance.
arXiv Detail & Related papers (2024-04-06T10:45:02Z)
- PC-GANs: Progressive Compensation Generative Adversarial Networks for Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z)
- Structural Prior Guided Generative Adversarial Transformers for Low-Light Image Enhancement [51.22694467126883]
We propose an effective Structural Prior guided Generative Adversarial Transformer (SPGAT) to solve low-light image enhancement.
The generator is based on a U-shaped Transformer that explores non-local information to better restore clear images.
To generate more realistic images, we develop a new structural prior guided adversarial learning method by building skip connections between the generator and the discriminators.
arXiv Detail & Related papers (2022-07-16T04:05:40Z)
- An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z)
- A Novel Generator with Auxiliary Branch for Improving GAN Performance [7.005458308454871]
This brief introduces a novel generator architecture that produces the image by combining features obtained through two different branches.
The goal of the main branch is to produce the image by passing through multiple residual blocks, whereas the auxiliary branch conveys coarse information from the earlier layers to the later ones.
To prove the superiority of the proposed method, this brief provides extensive experiments using various standard datasets.
arXiv Detail & Related papers (2021-12-30T08:38:49Z)
- Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z)
- Raw Bayer Pattern Image Synthesis with Conditional GAN [0.0]
We propose a method to generate Bayer pattern images using generative adversarial networks (GANs).
The Bayer pattern images can be generated by configuring the transformation as demosaicing.
Experiments show that the images generated by our proposed method outperform the original Pix2PixHD model in FID score, PSNR, and SSIM.
arXiv Detail & Related papers (2021-10-25T11:40:36Z)
- StyleGAN-induced data-driven regularization for inverse problems [2.5138572116292686]
Recent advances in generative adversarial networks (GANs) have opened up the possibility of generating high-resolution images that were impossible to produce previously.
We develop a framework that utilizes the full potential of a pre-trained StyleGAN2 generator for constructing the prior distribution on the underlying image.
Considering the inverse problems of image inpainting and super-resolution, we demonstrate that the proposed approach is competitive with, and sometimes superior to, state-of-the-art GAN-based image reconstruction methods.
arXiv Detail & Related papers (2021-10-07T22:25:30Z)
- 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment Feedback Loop [128.07841893637337]
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Minor deviations in parameters may lead to noticeable misalignment between the estimated meshes and the image evidence.
We propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters.
arXiv Detail & Related papers (2021-03-30T17:07:49Z)
- Unpaired Image Enhancement with Quality-Attention Generative Adversarial Network [92.01145655155374]
We propose a quality attention generative adversarial network (QAGAN) trained on unpaired data.
The key novelty of the proposed QAGAN lies in the quality attention module (QAM) injected into the generator.
Our proposed method achieves better performance in both objective and subjective evaluations.
arXiv Detail & Related papers (2020-12-30T05:57:20Z)