Fair GANs through model rebalancing for extremely imbalanced class
distributions
- URL: http://arxiv.org/abs/2308.08638v2
- Date: Thu, 21 Dec 2023 16:22:44 GMT
- Title: Fair GANs through model rebalancing for extremely imbalanced class
distributions
- Authors: Anubhav Jain, Nasir Memon, Julian Togelius
- Abstract summary: We present an approach to construct an unbiased generative adversarial network (GAN) from an existing biased GAN.
We show results for the StyleGAN2 models while training on the Flickr Faces High Quality (FFHQ) dataset for racial fairness.
We further validate our approach on an imbalanced CIFAR10 dataset, obtaining fairness and image quality comparable to training on a balanced CIFAR10 dataset twice its size.
- Score: 5.463417677777276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep generative models require large amounts of training data. This often
poses a problem, as collecting datasets is expensive and difficult, in particular
datasets that are representative of the appropriate underlying distribution
(e.g. demographic). The biases this introduces into datasets are further
propagated into the models trained on them. We present an approach to construct an
unbiased generative adversarial network (GAN) from an existing biased GAN by
rebalancing the model distribution. We do so by generating balanced data from
an existing imbalanced deep generative model using an evolutionary algorithm
and then using this data to train a balanced generative model. Additionally, we
propose a bias mitigation loss function that minimizes the deviation of the
learned class distribution from being equiprobable. We show results for
StyleGAN2 models trained on the Flickr Faces High Quality (FFHQ) dataset for
racial fairness, and find that the proposed approach improves the fairness
metric by almost 5 times while maintaining image quality. We further validate
our approach on an imbalanced CIFAR10 dataset, where we obtain fairness and
image quality comparable to training on a balanced CIFAR10 dataset that is also
twice as large. Lastly, we argue that traditionally used image quality metrics
such as the Fréchet inception distance (FID) are unsuitable for scenarios where
the class distributions are imbalanced and a balanced reference set is not
available.
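The abstract specifies the bias mitigation loss only as a penalty on deviation from an equiprobable class distribution. Below is a minimal sketch of one such formulation, using the KL divergence between the batch-level class distribution of generated images and the uniform distribution; the frozen attribute classifier `attr_clf` and the way the term enters the generator loss are illustrative assumptions, not the paper's exact construction:

```python
import torch
import torch.nn.functional as F

def bias_mitigation_loss(fake_images, attr_clf, eps=1e-8):
    """Penalize deviation of the generated class distribution from uniform.

    attr_clf is an assumed frozen, pre-trained attribute classifier
    returning logits of shape (batch, num_classes).
    """
    probs = F.softmax(attr_clf(fake_images), dim=1)  # per-image class probabilities
    p_hat = probs.mean(dim=0)                        # batch estimate of the class distribution
    uniform = torch.full_like(p_hat, 1.0 / p_hat.numel())
    # KL(p_hat || uniform) vanishes exactly when the classes are equiprobable
    return torch.sum(p_hat * (torch.log(p_hat + eps) - torch.log(uniform)))
```

In training, this term would be added to the generator objective with a tunable coefficient, e.g. `g_loss = adv_loss + lam * bias_mitigation_loss(fake, attr_clf)`.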
Related papers
- Fair CoVariance Neural Networks [34.68621550644667]
We propose Fair coVariance Neural Networks (FVNNs), which perform graph convolutions on the covariance matrix for both fair and accurate predictions.
We prove that FVNNs are intrinsically fairer than analogous PCA approaches thanks to their stability in low sample regimes.
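As a rough illustration of a graph convolution over a covariance matrix (a sketch under the usual graph-filter definition, not the authors' implementation): the sample covariance acts as the graph shift operator, and a polynomial filter in it is applied to each sample's features.

```python
import numpy as np

def covariance_graph_filter(X, h):
    """Apply a polynomial graph filter H(C) = sum_k h[k] * C^k to each sample.

    X : (n_samples, n_features) data matrix; features play the role of nodes.
    h : filter tap coefficients, e.g. [h0, h1, h2].
    """
    C = np.cov(X, rowvar=False)   # sample covariance as graph shift operator
    out = np.zeros_like(X, dtype=float)
    Ck_X = X.astype(float)        # C^0 applied to the signals
    for hk in h:
        out += hk * Ck_X
        Ck_X = Ck_X @ C           # advance to the next power of C
    return out
```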
arXiv Detail & Related papers (2024-09-13T06:24:18Z) - Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off among objective and constraints.
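Dual training can be sketched generically as a primal-dual loop: descend on the training loss plus a multiplier-weighted constraint violation, then ascend on the multiplier. All names below (`task_loss_fn`, `constraint_fn`) are illustrative placeholders, not the paper's API:

```python
import torch

def primal_dual_step(model, optimizer, batch, task_loss_fn, constraint_fn,
                     lam, dual_lr=0.01):
    """One primal-dual update; constraint_fn returns the (signed) violation
    of the desired-distribution constraint, and lam is the multiplier."""
    optimizer.zero_grad()
    loss = task_loss_fn(model, batch) + lam * constraint_fn(model, batch)
    loss.backward()
    optimizer.step()  # primal descent on the model weights
    with torch.no_grad():
        violation = constraint_fn(model, batch)
    lam = max(0.0, lam + dual_lr * violation.item())  # dual ascent, lam >= 0
    return lam
```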
arXiv Detail & Related papers (2024-08-27T14:25:42Z) - Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
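A minimal sketch of such an overlap penalty, written as a supervised contrastive loss over embeddings of synthetic images (illustrative; the paper's exact loss may differ): same-class embeddings are pulled together and different-class embeddings pushed apart, shrinking the overlap between per-class distributions.

```python
import torch
import torch.nn.functional as F

def overlap_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss: minimizing it reduces the overlap
    between the embedding distributions of different classes."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # pairwise similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    pos = same & ~self_mask                                # same-class, non-self pairs
    # log-softmax over all non-self pairs, keeping same-class pairs as positives
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    return -log_prob[pos].mean()
```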
arXiv Detail & Related papers (2024-02-16T16:47:21Z) - Federated Skewed Label Learning with Logits Fusion [23.062650578266837]
Federated learning (FL) aims to collaboratively train a shared model across multiple clients without transmitting their local data.
We propose FedBalance, which corrects the optimization bias among local models by calibrating their logits.
Our method achieves 13% higher average accuracy than state-of-the-art methods.
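The summary does not give FedBalance's exact calibration rule; a common way to correct class-prior bias in logits is logit adjustment, sketched below as a stand-in (the subtraction of scaled log-priors is an assumption, not necessarily FedBalance's formula):

```python
import torch

def calibrate_logits(logits, class_counts, tau=1.0):
    """Subtract scaled log-priors so that head classes no longer dominate
    the softmax; class_counts are per-class sample counts on a client."""
    prior = class_counts.float() / class_counts.sum()
    return logits - tau * torch.log(prior + 1e-8)
```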
arXiv Detail & Related papers (2023-11-14T14:37:33Z) - Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) counter class imbalance by training with a distribution adjustment regularizer.
We benchmark generation results on the CIFAR100/CIFAR100LT datasets and show outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
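Of the two, reweighing is straightforward to sketch: each example is weighted inversely to the frequency of its population group, so minority groups contribute equally to the training loss (a generic sketch, not the paper's code):

```python
import numpy as np

def reweigh(groups):
    """Per-example weights inversely proportional to group frequency,
    normalized so the weights average to 1."""
    groups = np.asarray(groups)
    _, inverse, counts = np.unique(groups, return_inverse=True,
                                   return_counts=True)
    w = 1.0 / counts[inverse]
    return w * len(w) / w.sum()
```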
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
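Conceptually this resembles sharpness-aware minimization applied to a fairness loss: perturb the weights in the direction that most worsens fairness, take the gradient there, then update from the original weights. A rough sketch under that reading (not the authors' code; assumes every parameter receives a gradient):

```python
import torch

def rfr_step(model, optimizer, fairness_loss_fn, batch, rho=0.05):
    """One robust-fairness update: ascend to the worst-case weights in an
    L2 ball of radius rho, compute the gradient there, then restore."""
    loss = fairness_loss_fn(model, batch)
    model.zero_grad()
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                        # perturb toward worse fairness
        for p, g in zip(model.parameters(), grads):
            p.add_(rho * g / norm)
    model.zero_grad()
    fairness_loss_fn(model, batch).backward()    # gradient at perturbed weights
    with torch.no_grad():                        # restore the original weights
        for p, g in zip(model.parameters(), grads):
            p.sub_(rho * g / norm)
    optimizer.step()                             # apply the robust gradient
    return loss.item()
```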
arXiv Detail & Related papers (2023-03-06T17:19:23Z) - Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer for learning representations of long-tailed distributions, achieving better performance than existing approaches on multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z) - Imbalanced Data Learning by Minority Class Augmentation using Capsule
Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than a convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)