GANs Conditioning Methods: A Survey
- URL: http://arxiv.org/abs/2408.15640v3
- Date: Tue, 3 Sep 2024 08:35:15 GMT
- Title: GANs Conditioning Methods: A Survey
- Authors: Anis Bourou, Valérie Mezger, Auguste Genovesio
- Abstract summary: Generative Adversarial Networks (GANs) have seen significant advancements, leading to their widespread adoption across various fields.
Many practical applications require precise control over the generated output, which has led to the development of conditional GANs (cGANs)
In this work, we review the conditioning methods proposed for GANs, exploring the characteristics of each method and highlighting their unique mechanisms and theoretical foundations.
- Score: 0.9558392439655012
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, Generative Adversarial Networks (GANs) have seen significant advancements, leading to their widespread adoption across various fields. The original GAN architecture enables the generation of images without any specific control over the content, making it an unconditional generation process. However, many practical applications require precise control over the generated output, which has led to the development of conditional GANs (cGANs) that incorporate explicit conditioning to guide the generation process. cGANs extend the original framework by incorporating additional information (conditions), enabling the generation of samples that adhere to those specific criteria. Various conditioning methods have been proposed, each differing in how they integrate the conditioning information into both the generator and the discriminator networks. In this work, we review the conditioning methods proposed for GANs, exploring the characteristics of each method and highlighting their unique mechanisms and theoretical foundations. Furthermore, we conduct a comparative analysis of these methods, evaluating their performance on various image datasets. Through these analyses, we aim to provide insights into the strengths and limitations of various conditioning techniques, guiding future research and applications in generative modeling.
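To make the conditioning idea concrete, here is a minimal PyTorch sketch (our illustration, not the paper's code) of two mechanisms commonly found in the cGAN literature: concatenating a learned label embedding to the generator's latent input, and a projection term in the discriminator in the style of Miyato & Koyama (2018). All module names and dimensions (`latent_dim`, `n_classes`, `img_dim`) are illustrative assumptions.

```python
# Sketch of two common cGAN conditioning mechanisms (illustrative only):
# input concatenation in the generator, projection in the discriminator.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=128, n_classes=10, img_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, latent_dim)  # learned label embedding
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition by concatenating the label embedding to the noise vector.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

class ProjectionDiscriminator(nn.Module):
    def __init__(self, n_classes=10, img_dim=784, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(img_dim, feat_dim), nn.ReLU())
        self.linear = nn.Linear(feat_dim, 1)            # unconditional score
        self.embed = nn.Embedding(n_classes, feat_dim)  # class projection vectors

    def forward(self, x, y):
        h = self.features(x)
        # Projection conditioning: add the inner product <embed(y), h>
        # to the unconditional critic output.
        return self.linear(h) + (self.embed(y) * h).sum(dim=1, keepdim=True)
```

The two strategies differ mainly in where the condition enters the network: at the generator's input versus inside the discriminator's scoring function; surveys of this kind typically compare these alongside other variants.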
Related papers
- Personalized Image Generation with Deep Generative Models: A Decade Survey [51.26287478042516]
We present a review of generalized personalized image generation across various generative models.
We first define a unified framework that standardizes the personalization process across different generative models.
We then provide an in-depth analysis of personalization techniques within each generative model, highlighting their unique contributions and innovations.
arXiv Detail & Related papers (2025-02-18T17:34:04Z)
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z)
- Image is All You Need to Empower Large-scale Diffusion Models for In-Domain Generation [7.1629002695210024]
In-domain generation aims to perform a variety of tasks within a specific domain, such as unconditional generation, text-to-image, image editing, 3D generation, and more.
Early research typically required training specialized generators for each unique task and domain, often relying on fully-labeled data.
Motivated by the powerful generative capabilities and broad applications of diffusion models, we are driven to explore leveraging label-free data to empower these models for in-domain generation.
arXiv Detail & Related papers (2023-12-13T14:59:49Z)
- Ten Years of Generative Adversarial Nets (GANs): A survey of the state-of-the-art [0.0]
Generative Adversarial Networks (GANs) have rapidly emerged as powerful tools for generating realistic and diverse data across various domains.
In February 2018, GANs secured the leading spot on the "Top Ten Global Breakthrough Technologies" list.
This survey aims to provide a general overview of GANs, summarizing the latent architecture, validation metrics, and application areas of the most widely recognized variants.
arXiv Detail & Related papers (2023-08-30T20:46:45Z)
- Diffusion-based Visual Counterfactual Explanations -- Towards Systematic Quantitative Evaluation [64.0476282000118]
Latest methods for visual counterfactual explanations (VCE) harness the power of deep generative models to synthesize new examples of high-dimensional images of impressive quality.
It is currently difficult to compare the performance of these VCE methods as the evaluation procedures largely vary and often boil down to visual inspection of individual examples and small scale user studies.
We propose a framework for systematic, quantitative evaluation of the VCE methods and a minimal set of metrics to be used.
arXiv Detail & Related papers (2023-08-11T12:22:37Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models [7.477211792460795]
Deep generative modelling is a class of techniques that train deep neural networks to model the distribution of training samples.
This compendium covers energy-based models, variational autoencoders, generative adversarial networks, autoregressive models, and normalizing flows.
arXiv Detail & Related papers (2021-03-08T17:34:03Z)
- Guiding GANs: How to control non-conditional pre-trained GANs for conditional image generation [69.10717733870575]
We present a novel method for guiding generic non-conditional GANs to behave as conditional GANs.
Our approach adds an encoder network that generates the high-dimensional random inputs fed to the generator network of a non-conditional GAN, as sketched below.
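A minimal sketch of this idea, under our own assumptions rather than the paper's actual architecture: a small encoder maps a class label plus noise to the latent input of a frozen, pre-trained unconditional generator, so the composed pipeline behaves like a conditional GAN. Names like `ConditionEncoder` and the layer sizes are hypothetical.

```python
# Sketch: train only an encoder in front of a frozen unconditional
# generator so the pair behaves conditionally (illustrative code).
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    def __init__(self, n_classes=10, noise_dim=64, latent_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim * 2, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, eps, y):
        # Map (noise, label) to a latent code for the frozen generator.
        return self.net(torch.cat([eps, self.embed(y)], dim=1))

def conditional_sample(encoder, generator, y, noise_dim=64):
    # The pre-trained generator is frozen (requires_grad_(False) on its
    # parameters); gradients still flow through it into the encoder.
    eps = torch.randn(y.size(0), noise_dim)
    return generator(encoder(eps, y))
```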
arXiv Detail & Related papers (2021-01-04T14:03:32Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
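As an illustration of what such a closed-form factorization can look like (a sketch in the spirit of this approach, not the authors' code), the latent directions that most change the generator's output through its first projection layer are the top eigenvectors of A^T A, where A is that layer's weight matrix:

```python
# Sketch: closed-form latent direction discovery from pre-trained weights.
import torch

def closed_form_directions(weight: torch.Tensor, k: int = 5) -> torch.Tensor:
    """weight: (out_features, latent_dim) matrix of the generator's first layer.
    Returns k unit-norm latent directions of shape (k, latent_dim)."""
    # Eigen-decompose A^T A; eigenvectors with the largest eigenvalues
    # maximize ||A n||^2 subject to ||n|| = 1.
    eigvals, eigvecs = torch.linalg.eigh(weight.T @ weight)
    order = torch.argsort(eigvals, descending=True)[:k]
    return eigvecs[:, order].T

# Usage: edit a latent code along a discovered direction.
# z_edited = z + 3.0 * directions[0]
```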
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- Regularization Methods for Generative Adversarial Networks: An Overview of Recent Studies [3.829070379776576]
Generative Adversarial Networks (GANs) have been extensively studied and used for various tasks.
Regularization methods have been proposed to make GAN training stable.
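As one concrete example of such a regularizer (a generic sketch, not tied to any single paper in this overview), the R1 gradient penalty discourages large discriminator gradients on real samples:

```python
# Sketch: R1 gradient penalty, a widely used GAN training regularizer.
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True)
    # Penalize the squared gradient norm per sample, averaged over the batch.
    return (gamma / 2) * grads.pow(2).flatten(1).sum(dim=1).mean()
```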
arXiv Detail & Related papers (2020-05-19T01:59:24Z)
- Information Compensation for Deep Conditional Generative Networks [38.054911004694624]
We propose a novel structure for unsupervised conditional GANs powered by an Information Compensation Connection (IC-Connection).
The proposed IC-Connection enables GANs to compensate for information loss incurred during deconvolution operations.
Our empirical results suggest that our method achieves better disentanglement compared to the state-of-the-art GANs in a conditional generation setting.
arXiv Detail & Related papers (2020-01-23T14:39:53Z)