Raw Bayer Pattern Image Synthesis with Conditional GAN
- URL: http://arxiv.org/abs/2110.12823v1
- Date: Mon, 25 Oct 2021 11:40:36 GMT
- Title: Raw Bayer Pattern Image Synthesis with Conditional GAN
- Authors: Zhou Wei
- Abstract summary: We propose a method to generate Bayer pattern images with generative adversarial networks (GANs).
Bayer pattern images are generated by configuring the transformation as demosaicing.
Experiments show that the images generated by our proposed method outperform those of the original Pix2PixHD model in FID score, PSNR, and SSIM.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a method to generate Bayer pattern images with
generative adversarial networks (GANs). It is shown theoretically that training
GANs on transformed data improves the generator's learning of the original data
distribution, owing to the invariance of the Jensen-Shannon (JS) divergence
between two distributions under invertible and differentiable transformations.
Bayer pattern images can be generated by configuring the transformation as
demosaicing; by converting existing standard color datasets to the Bayer
domain, the proposed method is promising in applications such as finding the
optimal ISP configuration for computer vision tasks, in-sensor or near-sensor
computing, and even photography. Experiments show that the images generated by
our proposed method outperform those of the original Pix2PixHD model in FID
score, PSNR, and SSIM, and that the training process is more stable. In
scenarios similar to in-sensor or near-sensor computing for object detection,
our method improves model performance without any modification to the image
sensor.
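The key data-preparation step in the abstract, converting a standard RGB dataset to the Bayer domain so the GAN is trained on mosaiced single-channel data, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RGGB channel layout, the `rgb_to_bayer` helper name, and the NumPy-based approach are assumptions for demonstration.

```python
import numpy as np

def rgb_to_bayer(rgb):
    """Mosaic an RGB image into a single-channel Bayer (RGGB) pattern.

    rgb: array of shape (H, W, 3); H and W are assumed even.
    Returns an (H, W) array that keeps one color sample per pixel,
    mimicking what a raw sensor with a Bayer color filter array records.
    """
    h, w, _ = rgb.shape
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    bayer[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    bayer[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    bayer[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    bayer[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return bayer

# Example: mosaic a small synthetic RGB image
img = np.arange(4 * 4 * 3, dtype=np.float32).reshape(4, 4, 3)
mosaic = rgb_to_bayer(img)
```

Demosaicing (the inverse, interpolating the missing two colors at each pixel) is the invertible-in-practice transformation the paper configures inside the GAN pipeline; the mosaicing direction shown here is what converts an existing color dataset into Bayer-domain training data.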
Related papers
- Siamese Meets Diffusion Network: SMDNet for Enhanced Change Detection in
High-Resolution RS Imagery [7.767708235606408]
We propose a new network, the Siam-U2Net Feature Differential Meets Network (SMDNet).
This network combines the Siam-U2Net Feature Differential (SU-FDE) and the denoising diffusion implicit model to improve the accuracy of image edge change detection.
Our method's combination of feature extraction and diffusion models demonstrates effectiveness in change detection in remote sensing images.
arXiv Detail & Related papers (2024-01-17T16:48:55Z) - ESVAE: An Efficient Spiking Variational Autoencoder with Reparameterizable Poisson Spiking Sampling [20.36674120648714]
Variational autoencoders (VAEs) are one of the most popular image generation models.
Current VAE methods implicitly construct the latent space by an elaborated autoregressive network.
We propose an efficient spiking variational autoencoder (ESVAE) that constructs an interpretable latent space distribution.
arXiv Detail & Related papers (2023-10-23T12:01:10Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - DDPM-CD: Denoising Diffusion Probabilistic Models as Feature Extractors
for Change Detection [31.125812018296127]
We introduce a novel approach for change detection by pre-training a Denoising Diffusion Probabilistic Model (DDPM).
DDPM learns the training data distribution by gradually converting training images into a Gaussian distribution using a Markov chain.
During inference (i.e., sampling), they can generate a diverse set of samples closer to the training distribution.
Experiments conducted on the LEVIR-CD, WHU-CD, DSIFN-CD, and CDD datasets demonstrate that the proposed DDPM-CD method significantly outperforms the existing change detection methods in terms of F1 score.
arXiv Detail & Related papers (2022-06-23T17:58:29Z) - Learning Generative Vision Transformer with Energy-Based Latent Space
for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
arXiv Detail & Related papers (2021-12-27T06:04:33Z) - Visual Saliency Transformer [127.33678448761599]
We develop a novel unified model based on a pure transformer, Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD)
It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches.
Experimental results show that our model outperforms existing state-of-the-art results on both RGB and RGB-D SOD benchmark datasets.
arXiv Detail & Related papers (2021-04-25T08:24:06Z) - Remote sensing image fusion based on Bayesian GAN [9.852262451235472]
We build a two-stream generator network with PAN and MS images as input, which consists of three parts: feature extraction, feature fusion and image reconstruction.
We leverage Markov discriminator to enhance the ability of generator to reconstruct the fusion image, so that the result image can retain more details.
Experiments on QuickBird and WorldView datasets show that the model proposed in this paper can effectively fuse PAN and MS images.
arXiv Detail & Related papers (2020-09-20T16:15:51Z) - Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z) - RAIN: A Simple Approach for Robust and Accurate Image Classification
Networks [156.09526491791772]
It has been shown that the majority of existing adversarial defense methods achieve robustness at the cost of sacrificing prediction accuracy.
This paper proposes a novel preprocessing framework, which we term Robust and Accurate Image classificatioN (RAIN).
RAIN applies randomization over inputs to break the ties between the model forward prediction path and the backward gradient path, thus improving the model robustness.
We conduct extensive experiments on the STL10 and ImageNet datasets to verify the effectiveness of RAIN against various types of adversarial attacks.
arXiv Detail & Related papers (2020-04-24T02:03:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.