Teaching a GAN What Not to Learn
- URL: http://arxiv.org/abs/2010.15639v1
- Date: Thu, 29 Oct 2020 14:44:24 GMT
- Title: Teaching a GAN What Not to Learn
- Authors: Siddarth Asokan and Chandra Sekhar Seelamantula
- Abstract summary: Generative adversarial networks (GANs) were originally envisioned as unsupervised generative models that learn to follow a target distribution.
In this paper, we approach the supervised GAN problem from a different perspective, one motivated by the philosophy of the famous Persian poet Rumi.
In the GAN framework, we not only provide the GAN positive data that it must learn to model, but also present it with so-called negative samples that it must learn to avoid.
This formulation allows the discriminator to represent the underlying target distribution better by learning to penalize generated samples that are undesirable.
- Score: 20.03447539784024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) were originally envisioned as
unsupervised generative models that learn to follow a target distribution.
Variants such as conditional GANs and auxiliary-classifier GANs (ACGANs) project
GANs onto supervised and semi-supervised learning frameworks by providing
labelled data and using multi-class discriminators. In this paper, we approach
the supervised GAN problem from a different perspective, one that is motivated
by the philosophy of the famous Persian poet Rumi who said, "The art of knowing
is knowing what to ignore." In the GAN framework, we not only provide the GAN
positive data that it must learn to model, but also present it with so-called
negative samples that it must learn to avoid - we call this "The Rumi
Framework." This formulation allows the discriminator to represent the
underlying target distribution better by learning to penalize generated samples
that are undesirable - we show that this capability accelerates the learning
process of the generator. We present a reformulation of the standard GAN (SGAN)
and least-squares GAN (LSGAN) within the Rumi setting. The advantage of the
reformulation is demonstrated by means of experiments conducted on MNIST,
Fashion MNIST, CelebA, and CIFAR-10 datasets. Finally, we consider an
application of the proposed formulation to address the important problem of
learning an under-represented class in an unbalanced dataset. The Rumi approach
results in substantially lower FID scores than the standard GAN frameworks
while possessing better generalization capability.
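To make the reformulation concrete, here is a minimal PyTorch sketch of a Rumi-style SGAN update. It assumes the simplest reading of the framework: negative (to-be-avoided) real samples are scored alongside generated samples as data the discriminator must reject, and the discriminator outputs sigmoid probabilities. The paper's actual Rumi-SGAN and Rumi-LSGAN losses may weight or structure these terms differently; all names (disc, gen, pos_batch, neg_batch) are illustrative.

```python
import torch
import torch.nn.functional as F

def discriminator_step(disc, gen, pos_batch, neg_batch, z_dim, opt_d):
    # Positive reals should be accepted; negative reals and generated
    # samples should both be rejected, so the discriminator learns a
    # boundary that explicitly penalizes undesirable regions.
    opt_d.zero_grad()
    fake = gen(torch.randn(pos_batch.size(0), z_dim)).detach()
    d_pos, d_neg, d_fake = disc(pos_batch), disc(neg_batch), disc(fake)
    loss_d = (F.binary_cross_entropy(d_pos, torch.ones_like(d_pos))
              + F.binary_cross_entropy(d_neg, torch.zeros_like(d_neg))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()
    return loss_d.item()

def generator_step(disc, gen, batch_size, z_dim, opt_g):
    # The generator is pushed toward the positive class only; the
    # negative data shapes it indirectly through the discriminator.
    opt_g.zero_grad()
    d_out = disc(gen(torch.randn(batch_size, z_dim)))
    loss_g = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    loss_g.backward()
    opt_g.step()
    return loss_g.item()
```

Note the asymmetry: only the discriminator sees the negative data directly, while the generator benefits from the sharper decision boundary it induces.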
Related papers
- ACTRESS: Active Retraining for Semi-supervised Visual Grounding [52.08834188447851]
A previous study, RefTeacher, makes the first attempt to tackle semi-supervised visual grounding by adopting a teacher-student framework that provides pseudo-confidence supervision and attention-based supervision.
However, this approach is incompatible with current state-of-the-art visual grounding models, which follow a Transformer-based pipeline.
Our paper proposes the ACTive REtraining approach for Semi-Supervised Visual Grounding, abbreviated as ACTRESS.
arXiv Detail & Related papers (2024-07-03T16:33:31Z)
- Damage GAN: A Generative Model for Imbalanced Data [1.027461951217988]
This study explores the application of Generative Adversarial Networks (GANs) within the context of imbalanced datasets.
We introduce a novel network architecture known as Damage GAN, building upon the ContraD GAN framework, which seamlessly integrates GANs and contrastive learning.
arXiv Detail & Related papers (2023-12-08T06:36:33Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates the promotion of individual fairness by allowing local nuance to guide the process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
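The self-ensembling step can be pictured as an exponential-moving-average (EMA) teacher, the usual construction in self-ensembling models. The sketch below assumes that reading; it is not taken verbatim from the SE-GAN paper.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    """Move each teacher parameter toward its student counterpart.

    A minimal self-ensembling sketch: the teacher is an exponential
    moving average of the student, so its predictions are smoother and
    can serve as stable targets during adversarial training.
    """
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)
```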
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- MineGAN++: Mining Generative Models for Efficient Knowledge Transfer to Limited Data Domains [77.46963293257912]
We propose a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain.
This is done using a miner network that identifies which part of the generative distribution of each pretrained GAN outputs samples closest to the target domain.
We show that the proposed method, called MineGAN, effectively transfers knowledge to domains with few target images, outperforming existing methods.
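A hedged sketch of the miner idea: a small network reshapes the latent prior in front of a frozen pretrained generator, and only the miner (together with a discriminator on the few target images) is trained. Names and sizes are illustrative; the full MineGAN method also handles multiple pretrained GANs and a subsequent fine-tuning stage.

```python
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Small MLP that steers the latent prior toward the region of the
    pretrained generator's input space yielding target-like samples."""
    def __init__(self, z_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, z_dim), nn.ReLU(),
            nn.Linear(z_dim, z_dim),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Usage sketch: freeze the pretrained generator and train only the miner
# (the adversarial loss against the target images is omitted here).
# pretrained_gen = ...  # generator trained on a large source domain
# miner = Miner(z_dim=128)
# for p in pretrained_gen.parameters():
#     p.requires_grad_(False)
# z = torch.randn(64, 128)
# target_like = pretrained_gen(miner(z))
```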
arXiv Detail & Related papers (2021-04-28T13:10:56Z)
- HGAN: Hybrid Generative Adversarial Network [25.940501417539416]
We propose a hybrid generative adversarial network (HGAN) for which we can enforce data density estimation via an autoregressive model.
A novel deep architecture within the GAN formulation is developed to adversarially distill the autoregressive model's information in addition to the standard GAN training approach.
arXiv Detail & Related papers (2021-02-07T03:54:12Z)
- EC-GAN: Low-Sample Classification using Semi-Supervised Algorithms and GANs [0.0]
Semi-supervised learning has been gaining attention as it allows for performing image analysis tasks such as classification with limited labeled data.
Some popular algorithms using Generative Adversarial Networks (GANs) for semi-supervised classification share a single architecture for classification and discrimination.
This may require the model to converge to a separate data distribution for each task, which may reduce overall performance.
We propose a novel GAN model, EC-GAN, that utilizes GANs and semi-supervised algorithms to improve classification in fully-supervised tasks.
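As a hedged sketch of the external-classifier idea (an illustrative reading of the blurb, not the paper's exact algorithm): the classifier is a separate network from the discriminator, and GAN-generated images augment its training only through self-assigned pseudo-labels that clear a confidence threshold.

```python
import torch
import torch.nn.functional as F

def classifier_step(clf, gen, images, labels, z_dim, opt_c,
                    threshold: float = 0.9, weight: float = 0.1):
    """One update of the external classifier (illustrative names).

    Real labeled data always contributes; generated samples contribute
    a down-weighted loss only when the classifier's own prediction on
    them is confident enough to serve as a pseudo-label.
    """
    opt_c.zero_grad()
    loss = F.cross_entropy(clf(images), labels)          # real labeled data
    fake = gen(torch.randn(images.size(0), z_dim)).detach()
    logits = clf(fake)
    conf, pseudo = logits.softmax(dim=1).max(dim=1)
    keep = conf > threshold
    if keep.any():                                       # confident fakes only
        loss = loss + weight * F.cross_entropy(logits[keep], pseudo[keep])
    loss.backward()
    opt_c.step()
    return loss.item()
```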
arXiv Detail & Related papers (2020-12-26T05:58:00Z)
- Exploring DeshuffleGANs in Self-Supervised Generative Adversarial Networks [0.0]
We study the contribution of the deshuffling self-supervision task of DeshuffleGANs in the context of generalizability.
We show that DeshuffleGAN obtains the best FID results on several datasets compared to other self-supervised GANs.
We design a conditional DeshuffleGAN, called cDeshuffleGAN, to evaluate the quality of the learned representations.
arXiv Detail & Related papers (2020-11-03T14:22:54Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Robust Generative Adversarial Network [37.015223009069175]
We aim to improve the generalization capability of GANs by promoting the local robustness within the small neighborhood of the training samples.
We design a robust optimization framework in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball.
We prove that our robust method obtains a tighter generalization upper bound than traditional GANs under mild assumptions.
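A common way to approximate such a worst-case inner problem is a few steps of projected gradient ascent on a per-sample perturbation. The sketch below uses that approximation with an L2 ball standing in for the Wasserstein ball, and assumes a discriminator that outputs probabilities; this is an assumption on my part, not the paper's exact procedure.

```python
import torch

def worst_case_perturb(disc, real: torch.Tensor, eps: float = 0.05,
                       steps: int = 3, lr: float = 0.01) -> torch.Tensor:
    """Approximate the worst-case neighbor of each real sample.

    Projected gradient ascent on the discriminator's loss within an
    L2 ball of radius eps, a crude stand-in for the small Wasserstein
    ball of the robust formulation.
    """
    delta = torch.zeros_like(real, requires_grad=True)
    for _ in range(steps):
        # Worst case for the discriminator: perturbed reals scored low.
        loss = -torch.log(disc(real + delta) + 1e-8).mean()
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += lr * grad                      # ascend the loss
            norms = delta.flatten(1).norm(dim=1).clamp(min=1e-12)
            scale = (eps / norms).clamp(max=1.0)    # project into the ball
            delta *= scale.view(-1, *([1] * (real.dim() - 1)))
    return (real + delta).detach()
```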
arXiv Detail & Related papers (2020-04-28T07:37:01Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate significantly improved generation on both synthetic data and several real-world image generation benchmarks.
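A hedged sketch of how a discriminator's output can be explored as an energy: a few Langevin-style steps nudge generated samples toward regions the discriminator scores highly. The step sizes and update rule are illustrative, not the paper's precise algorithm.

```python
import torch

def refine_samples(disc, x: torch.Tensor, steps: int = 10,
                   step_size: float = 0.01, noise_scale: float = 0.01):
    """Langevin-style refinement driven by the discriminator's score.

    Treats a (e.g., WGAN-style) discriminator output as a negative
    energy: ascend it with added Gaussian noise so generated samples
    drift toward higher-scoring regions of the data manifold.
    """
    x = x.detach().clone().requires_grad_(True)
    for _ in range(steps):
        score = disc(x).sum()                 # scalar for autograd
        grad, = torch.autograd.grad(score, x)
        with torch.no_grad():
            x += step_size * grad + noise_scale * torch.randn_like(x)
    return x.detach()
```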
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.