Pixel-wise Conditioned Generative Adversarial Networks for Image
Synthesis and Completion
- URL: http://arxiv.org/abs/2002.01281v1
- Date: Tue, 4 Feb 2020 13:49:15 GMT
- Title: Pixel-wise Conditioned Generative Adversarial Networks for Image
Synthesis and Completion
- Authors: Cyprien Ruffino and Romain Hérault and Eric Laloy and Gilles Gasso
- Abstract summary: Generative Adversarial Networks (GANs) have proven successful for unsupervised image generation.
We investigate the effectiveness of conditioning GANs when very few pixel values are provided.
We propose a modelling framework which results in adding an explicit cost term to the GAN objective function to enforce pixel-wise conditioning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) have proven successful for
unsupervised image generation. Several works have extended GANs to image
inpainting by conditioning the generation with parts of the image to be
reconstructed. Despite their success, these methods have limitations in
settings where only a small subset of the image pixels is known beforehand. In
this paper we investigate the effectiveness of conditioning GANs when very few
pixel values are provided. We propose a modelling framework which results in
adding an explicit cost term to the GAN objective function to enforce
pixel-wise conditioning. We investigate the influence of this regularization
term on the quality of the generated images and the fulfillment of the given
pixel constraints. Using the recent PacGAN technique, we ensure that we keep
diversity in the generated samples. Conducted experiments on FashionMNIST show
that the regularization term effectively controls the trade-off between quality
of the generated images and the conditioning. Experimental evaluation on the
CIFAR-10 and CelebA datasets evidences that our method achieves accurate
results both visually and quantitatively in terms of Fréchet Inception
Distance, while still enforcing the pixel conditioning. We also evaluate our
method on a texture image generation task using fully-convolutional networks.
As a final contribution, we apply the method to a classical geological
simulation application.
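The explicit cost term described in the abstract can be pictured as a masked reconstruction penalty added to the usual generator loss. The sketch below is illustrative only (the function name, mask convention, and weight `lam` are assumptions, not the authors' code): it measures how far the generated image deviates from the few known pixel values, averaged over the conditioning pixels.

```python
import numpy as np

def pixelwise_conditioning_penalty(generated, known_values, known_mask, lam=10.0):
    """Masked L2 penalty enforcing the few known pixel values.

    generated:    (H, W) array produced by the generator
    known_values: (H, W) array, meaningful only where known_mask is True
    known_mask:   (H, W) boolean array marking the conditioning pixels
    lam:          weight trading off image quality vs. constraint fulfillment
    """
    diff = (generated - known_values) * known_mask  # zero outside known pixels
    n_known = max(int(known_mask.sum()), 1)         # avoid division by zero
    return lam * float((diff ** 2).sum()) / n_known

# Toy example: a 4x4 image with 3 known pixels.
rng = np.random.default_rng(0)
gen = rng.random((4, 4))
known = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = mask[1, 2] = mask[3, 3] = True
known[mask] = gen[mask]  # generator exactly matches the constraints
print(pixelwise_conditioning_penalty(gen, known, mask))  # → 0.0
```

In training, this term would be added to the adversarial generator loss, with `lam` playing the role of the regularization weight whose influence on the quality/conditioning trade-off the paper studies.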
Related papers
- Image Deblurring using GAN [0.0]
This project focuses on the application of Generative Adversarial Networks (GANs) to image deblurring.
The project defines a GAN model and trains it on the GoPro dataset.
The network recovers sharper pixels, achieving an average Peak Signal-to-Noise Ratio (PSNR) of 29.3 dB and a Structural Similarity Index Measure (SSIM) of 0.72.
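For context, the PSNR figure reported in this entry is derived from the mean squared error between the sharp reference and the restored image. A minimal sketch for 8-bit images (illustrative, not the project's code):

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for images with the given peak value."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# An all-128 image restored with a uniform error of 8 gray levels:
ref = np.full((16, 16), 128, dtype=np.uint8)
out = np.full((16, 16), 136, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 30.07
```

Higher PSNR means lower pixel-wise error; SSIM instead compares local structure and is typically computed with a library such as scikit-image.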
arXiv Detail & Related papers (2023-12-15T02:43:30Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - Image Inpainting via Tractable Steering of Diffusion Models [54.13818673257381]
This paper proposes to exploit the ability of Tractable Probabilistic Models (TPMs) to exactly and efficiently compute the constrained posterior.
Specifically, this paper adopts a class of expressive TPMs termed Probabilistic Circuits (PCs)
We show that our approach can consistently improve the overall quality and semantic coherence of inpainted images with only 10% additional computational overhead.
arXiv Detail & Related papers (2023-11-28T21:14:02Z) - Pixel-Inconsistency Modeling for Image Manipulation Localization [59.968362815126326]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z) - InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - Automatic Correction of Internal Units in Generative Neural Networks [15.67941936262584]
Generative Adversarial Networks (GANs) have shown satisfactory performance in synthetic image generation.
However, a number of generated images exhibit defective visual patterns, known as artifacts.
In this work, we devise a method that automatically identifies the internal units generating various types of artifact images.
arXiv Detail & Related papers (2021-04-13T11:46:45Z) - Guiding GANs: How to control non-conditional pre-trained GANs for
conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random input fed to the generator network of a non-conditional GAN.
arXiv Detail & Related papers (2021-01-04T14:03:32Z) - LT-GAN: Self-Supervised GAN with Latent Transformation Detection [10.405721171353195]
We propose a self-supervised approach (LT-GAN) to improve the generation quality and diversity of images.
We experimentally demonstrate that our proposed LT-GAN can be effectively combined with other state-of-the-art training techniques for added benefits.
arXiv Detail & Related papers (2020-10-19T22:09:45Z) - Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent GAN-based (Generative adversarial networks) inpainting methods show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z) - Progressively Unfreezing Perceptual GAN [28.330940021951438]
Generative adversarial networks (GANs) are widely used in image generation tasks, yet the generated images usually lack texture details.
We propose a general framework, called Progressively Unfreezing Perceptual GAN (PUPGAN), which can generate images with fine texture details.
arXiv Detail & Related papers (2020-06-18T03:12:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.