Progressively Unfreezing Perceptual GAN
- URL: http://arxiv.org/abs/2006.10250v1
- Date: Thu, 18 Jun 2020 03:12:41 GMT
- Title: Progressively Unfreezing Perceptual GAN
- Authors: Jinxuan Sun, Yang Chen, Junyu Dong and Guoqiang Zhong
- Abstract summary: Generative adversarial networks (GANs) are widely used in image generation tasks, yet the generated images usually lack texture details.
We propose a general framework, called Progressively Unfreezing Perceptual GAN (PUPGAN), which can generate images with fine texture details.
- Score: 28.330940021951438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are widely used in image generation
tasks, yet the generated images usually lack texture details. In this
paper, we propose a general framework, called Progressively Unfreezing
Perceptual GAN (PUPGAN), which can generate images with fine texture details.
Particularly, we propose an adaptive perceptual discriminator with a
pre-trained perceptual feature extractor, which can efficiently measure the
discrepancy between multi-level features of the generated and real images. In
addition, we propose a progressively unfreezing scheme for the adaptive
perceptual discriminator, which ensures a smooth transfer process from a large
scale classification task to a specified image generation task. The qualitative
and quantitative experiments with comparison to the classical baselines on
three image generation tasks, i.e., single image super-resolution, paired
image-to-image translation, and unpaired image-to-image translation, demonstrate
the superiority of PUPGAN over the compared approaches.
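To make the two components described in the abstract concrete, below is a minimal PyTorch sketch, assuming a torchvision VGG-19 as the pre-trained perceptual feature extractor. The tapped block boundaries, the per-level scoring heads, and the unfreezing milestones are illustrative assumptions rather than the architecture or schedule used by PUPGAN.

```python
import torch
import torch.nn as nn
from torchvision import models


class PerceptualDiscriminator(nn.Module):
    """Scores an image via multi-level features of a pre-trained VGG-19 (a sketch)."""

    def __init__(self):
        super().__init__()
        # Pre-trained perceptual feature extractor (assumes torchvision >= 0.13 weight enums).
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        # Tap the output of each convolutional block; boundaries are illustrative.
        self.blocks = nn.ModuleList([vgg[:5], vgg[5:10], vgg[10:19], vgg[19:28]])
        for p in self.parameters():
            p.requires_grad = False  # the extractor starts fully frozen
        # Small trainable heads map each feature level to a scalar realness score.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(c, 1))
            for c in (64, 128, 256, 512)
        ])

    def forward(self, x):
        scores, feat = [], x
        for block, head in zip(self.blocks, self.heads):
            feat = block(feat)          # multi-level perceptual features
            scores.append(head(feat))   # per-level realness score
        return torch.cat(scores, dim=1).mean(dim=1)  # one score per image

    def unfreeze_last(self, n_blocks):
        """Make the deepest `n_blocks` extractor blocks trainable."""
        for block in self.blocks[len(self.blocks) - n_blocks:]:
            for p in block.parameters():
                p.requires_grad = True


def progressive_unfreeze(disc, epoch, epochs_per_block=10):
    """Illustrative schedule: unfreeze one more block every `epochs_per_block` epochs."""
    disc.unfreeze_last(min(epoch // epochs_per_block, len(disc.blocks)))
```

In a training loop, progressive_unfreeze(disc, epoch) would be called once per epoch, so the discriminator gradually shifts from frozen classification features toward features adapted to the target generation task, in the spirit of the smooth transfer the abstract describes.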
Related papers
- Prompt Recovery for Image Generation Models: A Comparative Study of Discrete Optimizers [58.50071292008407]
We present the first head-to-head comparison of recent discrete optimization techniques for the problem of prompt inversion.
We find that focusing on the CLIP similarity between the inverted prompts and the ground truth image acts as a poor proxy for the similarity between ground truth image and the image generated by the inverted prompts.
arXiv Detail & Related papers (2024-08-12T21:35:59Z) - FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior [50.0535198082903]
We offer a novel approach to image composition, which integrates multiple input images into a single, coherent image.
We showcase the potential of utilizing the powerful generative prior inherent in large-scale pre-trained diffusion models to accomplish generic image composition.
arXiv Detail & Related papers (2024-07-06T03:35:43Z) - Attack Deterministic Conditional Image Generative Models for Diverse and
Controllable Generation [17.035117118768945]
We propose a plug-in projected gradient descent (PGD) like method for diverse and controllable image generation.
The key idea is attacking the pre-trained deterministic generative models by adding a micro perturbation to the input condition.
Our work opens the door to applying adversarial attacks to low-level vision tasks.
arXiv Detail & Related papers (2024-03-13T06:57:23Z) - Image Deblurring using GAN [0.0]
This project focuses on the application of Generative Adversarial Network (GAN) in image deblurring.
The project defines a GAN model and trains it on the GoPro dataset.
The network obtains sharper images, achieving an average Peak Signal-to-Noise Ratio (PSNR) of 29.3 and a Structural Similarity Index Measure (SSIM) of 0.72.
arXiv Detail & Related papers (2023-12-15T02:43:30Z) - Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional
Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks including inpainting, colorization, text-guided semantic editing, and image super-resolution.
arXiv Detail & Related papers (2023-09-30T02:03:22Z) - Wavelet-based Unsupervised Label-to-Image Translation [9.339522647331334]
We propose a new unsupervised paradigm for semantic image synthesis (USIS) that makes use of a self-supervised segmentation loss and whole-image wavelet-based discrimination.
We test our methodology on 3 challenging datasets and demonstrate its ability to bridge the performance gap between paired and unpaired models.
arXiv Detail & Related papers (2023-05-16T17:48:44Z) - Few-shot Image Generation via Masked Discrimination [20.998032566820907]
Few-shot image generation aims to generate images of high quality and great diversity with limited data.
It is difficult for modern GANs to avoid overfitting when trained on only a few images.
This work presents a novel approach to realize few-shot GAN adaptation via masked discrimination.
arXiv Detail & Related papers (2022-10-27T06:02:22Z) - Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z) - IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z) - Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.