Self-Conditioned Generative Adversarial Networks for Image Editing
- URL: http://arxiv.org/abs/2202.04040v1
- Date: Tue, 8 Feb 2022 18:08:24 GMT
- Title: Self-Conditioned Generative Adversarial Networks for Image Editing
- Authors: Yunzhe Liu, Rinon Gal, Amit H. Bermano, Baoquan Chen, Daniel Cohen-Or
- Abstract summary: Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when they deviate from the distribution's core.
- Score: 61.50205580051405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse. The networks focus on the core of the data distribution, leaving the tails (the edges of the distribution) behind. We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when they deviate from the distribution's core. Building on this observation, we outline a method for mitigating generative bias through a self-conditioning process, in which distances in the latent space of a pre-trained generator are used to provide initial labels for the data. By fine-tuning the generator on a distribution re-sampled from these self-labeled data, we force the generator to better contend with rare semantic attributes and enable more realistic generation of these properties. We compare our models to a wide range of latent editing methods, and show that by alleviating the bias they achieve finer semantic control and better identity preservation through a wider range of transformations. Our code and models will be available at https://github.com/yzliu567/sc-gan.
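As a rough illustration of the self-conditioning pipeline, the sketch below self-labels latent samples by k-means clustering (one simple way to turn latent-space distances into labels; the authors' actual procedure is in their repository) and re-samples them with inverse-frequency weights. All sizes and hyperparameters here are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for latent codes sampled from a pre-trained generator
# (e.g., 10k vectors from a StyleGAN-style W space).
latent_codes = rng.normal(size=(10_000, 512))

# Step 1: self-label by latent-space distance; k-means is one simple choice.
n_clusters = 100
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(latent_codes)

# Step 2: re-sample with inverse-frequency weights so rare clusters
# (the distribution's tails) appear as often as common ones.
counts = np.bincount(labels, minlength=n_clusters)
weights = 1.0 / counts[labels]
weights /= weights.sum()
balanced_idx = rng.choice(len(latent_codes), size=len(latent_codes), p=weights)

# Step 3 (omitted): fine-tune the generator on the re-balanced samples,
# conditioning on the cluster labels, so rare attributes are generated well.
```

Inverse-frequency re-sampling makes each self-labeled cluster roughly equally likely during fine-tuning, which is what pushes the generator toward the distribution's tails.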
Related papers
- AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation [53.65701943405546]
We learn adaptive inclusive tokens to shift the attribute distribution of the final generative outputs.
Our method requires neither explicit attribute specification nor prior knowledge of the bias distribution.
Our method achieves comparable performance to models that require specific attributes or editing directions for generation.
arXiv Detail & Related papers (2024-06-18T17:22:23Z)
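A hedged sketch of what an adaptive inclusive token could look like, assuming the common recipe of a single trainable embedding prepended to frozen text-encoder outputs; the class, dimensions, and training setup below are illustrative, not AITTI's actual implementation.

```python
# Hypothetical sketch: a single learned token prepended to the prompt
# embeddings of a frozen text-to-image model.
import torch
import torch.nn as nn

class InclusiveToken(nn.Module):
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        # The only trainable parameter; text encoder and generator stay frozen.
        self.token = nn.Parameter(torch.randn(1, 1, embed_dim) * 0.02)

    def forward(self, text_embeds: torch.Tensor) -> torch.Tensor:
        # text_embeds: (batch, seq_len, embed_dim) from the text encoder.
        batch = text_embeds.shape[0]
        return torch.cat([self.token.expand(batch, -1, -1), text_embeds], dim=1)

# The token would be optimized so generated outputs cover attributes evenly,
# without ever naming which attribute is biased.
```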
- Balancing Act: Distribution-Guided Debiasing in Diffusion Models [31.38505986239798]
Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability.
However, DMs reflect the biases present in their training datasets.
We present a method for debiasing DMs without relying on additional data or model retraining.
arXiv Detail & Related papers (2024-02-28T09:53:17Z)
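Since the summary says no retraining is involved, the guidance presumably happens at sampling time. The sketch below is generic attribute-classifier guidance toward a target attribute ratio, offered only as a plausible shape of such a method, not the paper's actual algorithm.

```python
# Illustrative guidance step: bias the predicted noise so the batch's
# attribute ratio drifts toward a target (attr_classifier is assumed
# to be a differentiable per-sample attribute predictor).
import torch

def guided_noise(x_t, eps_pred, attr_classifier, target_ratio=0.5, scale=1.0):
    x = x_t.detach().requires_grad_(True)
    probs = torch.sigmoid(attr_classifier(x))     # per-sample attribute prob
    loss = (probs.mean() - target_ratio) ** 2     # batch-level deviation
    grad = torch.autograd.grad(loss, x)[0]
    return eps_pred + scale * grad                # nudged noise prediction
```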
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance when learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation, which fuses generative models trained on multiple domains to synthesize diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
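One plausible reading of "fusing generative models trained on multiple domains" is parameter interpolation between same-architecture generators; the sketch below assumes that reading, with gen_a and gen_b as hypothetical pre-trained modules.

```python
# Blend two same-architecture generators: theta = (1 - alpha) * A + alpha * B.
import copy

def interpolate_generators(gen_a, gen_b, alpha: float):
    fused = copy.deepcopy(gen_a)
    sd_a, sd_b = gen_a.state_dict(), gen_b.state_dict()
    fused.load_state_dict({k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a})
    return fused

# Sampling with random alpha in (0, 1) yields images "between" the two
# training domains, usable as diverse OoD augmentation for a classifier.
```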
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
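The organized-latent-space idea can be sketched as per-dimension scalar quantization with a straight-through gradient; the sizes below are illustrative assumptions, not the paper's configuration.

```python
# Each latent coordinate snaps to the nearest entry of a small learned
# scalar codebook for that dimension.
import torch
import torch.nn as nn

class LatentQuantizer(nn.Module):
    def __init__(self, latent_dim: int = 16, codes_per_dim: int = 10):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(latent_dim, codes_per_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim); find the nearest code value per dimension.
        dist = (z.unsqueeze(-1) - self.codebooks.unsqueeze(0)) ** 2
        idx = dist.argmin(dim=-1)                          # (batch, latent_dim)
        q = torch.gather(self.codebooks.expand(z.shape[0], -1, -1),
                         2, idx.unsqueeze(-1)).squeeze(-1)
        # Straight-through estimator: quantized forward, identity backward.
        return z + (q - z).detach()
```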
- Certifying Model Accuracy under Distribution Shifts [151.67113334248464]
We present provable robustness guarantees on the accuracy of a model under bounded Wasserstein shifts of the data distribution.
We show that a simple procedure that randomizes the input of the model within a transformation space is provably robust to distributional shifts under the transformation.
arXiv Detail & Related papers (2022-01-28T22:03:50Z)
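The randomization procedure itself is easy to sketch: classify many randomly transformed copies of the input and take a majority vote. Below, the transformation space is assumed to be small rotations; the paper's certified-radius derivation is omitted.

```python
# Majority vote over random rotations of the input (a minimal sketch).
import torch
import torchvision.transforms.functional as TF

def smoothed_predict(model, image, n_samples=100, max_deg=10.0):
    # image: (C, H, W) tensor; model: a classifier over batched images.
    n_classes = model(image.unsqueeze(0)).shape[-1]
    votes = torch.zeros(n_classes)
    for _ in range(n_samples):
        angle = float(torch.empty(1).uniform_(-max_deg, max_deg))
        pred = model(TF.rotate(image, angle).unsqueeze(0)).argmax(dim=-1)
        votes[pred] += 1
    return int(votes.argmax())
```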
- Self-supervised GANs with Label Augmentation [43.78253518292111]
We propose a novel self-supervised GAN framework with label augmentation, i.e., augmenting the GAN labels (real or fake) with self-supervised pseudo-labels.
We demonstrate that the proposed method significantly outperforms competitive baselines on both generative modeling and representation learning.
arXiv Detail & Related papers (2021-06-16T07:58:00Z)
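Label augmentation as summarized combines the real/fake label with a self-supervised pseudo-label into one joint class; assuming the common rotation pretext task, that gives 2 x 4 = 8 discriminator classes. The helper below is an illustrative sketch.

```python
# Build joint (real/fake x rotation) labels for a discriminator with
# 8 output classes: 0-3 = real at 0/90/180/270 deg, 4-7 = fake.
import torch

def augmented_labels(images: torch.Tensor, is_real: bool):
    rotated, joint = [], []
    base = 0 if is_real else 4
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(-2, -1)))
        joint.append(torch.full((images.shape[0],), base + k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(joint)
```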
- Generating Out of Distribution Adversarial Attack using Latent Space Poisoning [5.1314136039587925]
We propose a novel mechanism for generating adversarial examples in which the actual image is not corrupted; instead, its latent-space representation is tampered with to disturb the inherent structure of the image.
As opposed to gradient-based attacks, latent space poisoning exploits the inclination of classifiers to model the independent and identical distribution of the training dataset.
arXiv Detail & Related papers (2020-12-09T13:05:44Z)
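The mechanism can be sketched as optimizing a perturbation of the latent code, rather than the pixels, until a classifier misreads the generated image; the generator, classifier, and all hyperparameters below are assumed stand-ins.

```python
# Targeted latent-space attack sketch: the image itself is never edited.
import torch
import torch.nn.functional as F

def latent_attack(generator, classifier, z, target: int, steps=50, lr=0.05):
    delta = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    tgt = torch.full((z.shape[0],), target, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(classifier(generator(z + delta)), tgt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z + delta).detach()   # clean-looking adversarial images
```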
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
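Very loosely, one round of such self-training might look like the sketch below: sample codes from the current code distribution, train a recognizer to recover them from generated images, then re-estimate the code distribution from the recognizer's predictions. Every component here is an assumption for illustration, not the paper's algorithm.

```python
# One illustrative self-training round for learning a code distribution.
import torch
import torch.nn.functional as F

def self_training_round(gen, recognizer, prior_logits, opt, n=256):
    codes = torch.distributions.Categorical(logits=prior_logits).sample((n,))
    images = gen(codes)
    # Train the recognizer to recover the code that produced each image.
    loss = F.cross_entropy(recognizer(images), codes)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Re-estimate the code distribution from recognizer predictions.
    with torch.no_grad():
        preds = recognizer(gen(codes)).argmax(dim=-1)
        counts = torch.bincount(preds, minlength=prior_logits.numel())
        prior_logits = torch.log(counts.float().clamp(min=1) / n)
    return prior_logits
```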
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.