RepFair-GAN: Mitigating Representation Bias in GANs Using Gradient
Clipping
- URL: http://arxiv.org/abs/2207.10653v1
- Date: Wed, 13 Jul 2022 14:58:48 GMT
- Title: RepFair-GAN: Mitigating Representation Bias in GANs Using Gradient
Clipping
- Authors: Patrik Joslin Kenfack, Kamil Sabbagh, Adín Ramírez Rivera, Adil Khan
- Abstract summary: We define a new fairness notion for generative models in terms of the distribution of generated samples sharing the same protected attributes.
We show that this fairness notion is violated even when the dataset contains equally represented groups.
We show that controlling the groups' gradient norm by performing group-wise gradient norm clipping in the discriminator leads to fairer data generation.
- Score: 2.580765958706854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness has become an essential problem in many domains of Machine Learning
(ML), such as classification, natural language processing, and Generative
Adversarial Networks (GANs). In this research effort, we study the unfairness
of GANs. We formally define a new fairness notion for generative models in
terms of the distribution of generated samples sharing the same protected
attributes (gender, race, etc.). The defined fairness notion (representational
fairness) requires the distribution of the sensitive attributes at the test
time to be uniform, and, in particular for GAN models, we show that this
fairness notion is violated even when the dataset contains equally represented
groups, i.e., the generator favors generating one group of samples over the
others at the test time. In this work, we shed light on the source of this
representation bias in GANs along with a straightforward method to overcome
this problem. We first show on two widely used datasets (MNIST, SVHN) that when
the norm of the gradient of one group is larger than the other's during
the discriminator's training, the generator favors sampling data from that
group more than the other at test time. We then show that controlling the
groups' gradient norm by performing group-wise gradient norm clipping in the
discriminator during training leads to fairer data generation in terms
of representational fairness compared to existing models while preserving the
quality of generated samples.
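The group-wise clipping step described above can be sketched as follows. This is a minimal NumPy illustration of the idea only, not the authors' implementation: the function name, the `max_norm` threshold, and the averaging of per-group gradients into one update are assumptions.

```python
import numpy as np

def clip_group_gradients(grads_by_group, max_norm):
    """Clip each group's discriminator gradient to a shared norm bound,
    then average. Bounding every group's norm by the same constant keeps
    one group from dominating the discriminator update, which is the
    mechanism RepFair-GAN uses to equalize the groups' influence."""
    clipped = []
    for g in grads_by_group:
        norm = np.linalg.norm(g)
        scale = min(1.0, max_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(g * scale)
    return np.mean(clipped, axis=0)

# Toy example: group 0's gradient is much larger than group 1's,
# mirroring the imbalance the paper observes during training.
rng = np.random.default_rng(0)
grad_g0 = rng.normal(size=8) * 10.0   # dominant group
grad_g1 = rng.normal(size=8) * 0.1    # weaker group
update = clip_group_gradients([grad_g0, grad_g1], max_norm=1.0)
print(np.linalg.norm(update) <= 1.0)  # True by the triangle inequality
```

After clipping, both groups contribute gradients of comparable magnitude, so neither can pull the generator toward over-sampling its region of the data.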
Related papers
- DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Breaking the Spurious Causality of Conditional Generation via Fairness
Intervention with Corrective Sampling [77.15766509677348]
Conditional generative models often inherit spurious correlations from the training dataset.
This can result in label-conditional distributions that are imbalanced with respect to another latent attribute.
We propose a general two-step strategy to mitigate this issue.
arXiv Detail & Related papers (2022-12-05T08:09:33Z) - On the Privacy Properties of GAN-generated Samples [12.765060550622422]
We show that GAN-generated samples inherently satisfy some (weak) privacy guarantees.
We also study the robustness of GAN-generated samples to membership inference attacks.
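A generic distance-based membership inference heuristic of the kind studied in such robustness analyses can be sketched as follows. This is an assumption-laden toy, not the paper's specific attack; the function name and the nearest-neighbor scoring rule are illustrative choices.

```python
import numpy as np

def membership_score(candidate, generated):
    """Score membership by the (negated) distance to the nearest
    generated sample: training members tend to lie closer to the
    generator's output manifold than unseen points, so a higher
    score suggests the candidate was in the training set."""
    dists = np.linalg.norm(generated - candidate, axis=1)
    return -dists.min()

# Toy check: a point near the generated samples scores higher than a
# far-away point, so it would be guessed "member" first.
generated = np.array([[0.0, 0.0], [1.0, 1.0]])
near, far = np.array([0.1, 0.0]), np.array([5.0, 5.0])
print(membership_score(near, generated) > membership_score(far, generated))  # True
```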
arXiv Detail & Related papers (2022-06-03T00:29:35Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - cGANs with Auxiliary Discriminative Classifier [43.78253518292111]
Conditional generative models aim to learn the underlying joint distribution of data and labels.
Auxiliary classifier generative adversarial networks (AC-GAN) have been widely used, but suffer from low intra-class diversity in generated samples.
We propose novel cGANs with an auxiliary discriminative classifier (ADC-GAN) to address this limitation of AC-GAN.
arXiv Detail & Related papers (2021-07-21T13:06:32Z) - Self-supervised GANs with Label Augmentation [43.78253518292111]
We propose a novel self-supervised GANs framework with label augmentation, i.e., augmenting the GAN labels (real or fake) with the self-supervised pseudo-labels.
We demonstrate that the proposed method significantly outperforms competitive baselines on both generative modeling and representation learning.
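The label-augmentation idea can be illustrated with a tiny encoding: the real/fake bit and a self-supervised pseudo-label are combined into one class index. This is an illustrative sketch; the function name and exact class layout are assumptions, not necessarily the paper's scheme.

```python
def augment_label(is_real: bool, pseudo_label: int, num_pseudo: int) -> int:
    """Combine the real/fake bit with a self-supervised pseudo-label
    (e.g., one of K rotation classes) into a single class index in
    [0, 2K): real samples map to [0, K), fake samples to [K, 2K).
    The discriminator is then trained as a 2K-way classifier."""
    return pseudo_label if is_real else num_pseudo + pseudo_label

# With K = 4 rotation pseudo-labels (0, 90, 180, 270 degrees):
print(augment_label(True, 2, 4))   # real, rotation class 2 -> class 2
print(augment_label(False, 2, 4))  # fake, rotation class 2 -> class 6
```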
arXiv Detail & Related papers (2021-06-16T07:58:00Z) - On the Fairness of Generative Adversarial Networks (GANs) [1.061960673667643]
Generative adversarial networks (GANs) are one of the greatest advances in AI in recent years.
In this paper, we analyze and highlight fairness concerns of GAN models.
arXiv Detail & Related papers (2021-03-01T12:25:01Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significant improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern for generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that yields better generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvement on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed papers) and is not responsible for any consequences of its use.