Class Balancing GAN with a Classifier in the Loop
- URL: http://arxiv.org/abs/2106.09402v1
- Date: Thu, 17 Jun 2021 11:41:30 GMT
- Title: Class Balancing GAN with a Classifier in the Loop
- Authors: Harsh Rangwani, Konda Reddy Mopuri, and R. Venkatesh Babu
- Abstract summary: We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions by achieving better performance than existing approaches over multiple datasets.
- Score: 58.29090045399214
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative Adversarial Networks (GANs) have swiftly evolved to imitate
increasingly complex image distributions. However, the majority of developments
focus on the performance of GANs on balanced datasets. We find that existing
GANs and their training regimes, which work well on balanced datasets, fail to be
effective on imbalanced (i.e., long-tailed) datasets. In this work we
introduce a novel theoretically motivated Class Balancing regularizer for
training GANs. Our regularizer makes use of the knowledge from a pre-trained
classifier to ensure balanced learning of all the classes in the dataset. This
is achieved via modelling the effective class frequency based on the
exponential forgetting observed in neural networks and encouraging the GAN to
focus on underrepresented classes. We demonstrate the utility of our
regularizer in learning representations for long-tailed distributions by
achieving better performance than existing approaches over multiple datasets.
Specifically, when applied to an unconditional GAN, it improves the FID from
$13.03$ to $9.01$ on the long-tailed iNaturalist-$2019$ dataset.
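The abstract's mechanism can be sketched in a few lines: maintain an "effective class frequency" as an exponentially decayed count (modelling the forgetting behaviour), then weight classes inversely to that frequency so underrepresented classes are emphasized. All function names, the decay constant, and the inverse-frequency weighting rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_effective_frequency(eff_freq, batch_class_counts, alpha=0.9):
    """Exponentially decay past counts ('forgetting') and mix in new ones.
    alpha and the update rule are illustrative, not the paper's exact choice."""
    return alpha * eff_freq + (1.0 - alpha) * batch_class_counts

def class_balance_weights(eff_freq, eps=1e-8):
    """Encourage underrepresented classes: weight inversely to effective
    frequency, normalized to sum to one."""
    inv = 1.0 / (eff_freq + eps)
    return inv / inv.sum()

# Toy run with 3 classes where class 2 is heavily underrepresented.
eff = np.ones(3)
for counts in [np.array([8.0, 4.0, 1.0]), np.array([9.0, 3.0, 0.0])]:
    eff = update_effective_frequency(eff, counts)
w = class_balance_weights(eff)
print(w.argmax())  # -> 2 (the rarest class receives the largest weight)
```

In a GAN training loop, such weights could scale the per-class generator objective so that rare classes are not forgotten as training progresses.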
Related papers
- Reducing Bias in Federated Class-Incremental Learning with Hierarchical Generative Prototypes [10.532838477096055]
Federated Learning (FL) aims at unburdening the training of deep models by distributing computation across multiple devices.
We shed light on the incremental and federated biases that naturally emerge in Federated Class-Incremental Learning (FCL).
Our proposal constrains both biases in the last layer by efficiently fine-tuning a pre-trained backbone.
arXiv Detail & Related papers (2024-06-04T16:12:27Z)
- Diffusion-based Neural Network Weights Generation [85.6725307453325]
We propose an efficient and adaptive transfer learning scheme through dataset-conditioned pretrained weights sampling.
Specifically, we use a latent diffusion model with a variational autoencoder that can reconstruct the neural network weights.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z)
- Class Instance Balanced Learning for Long-Tailed Classification [0.0]
Long-tailed image classification task deals with large imbalances in the class frequencies of the training data.
Previous approaches have shown that combining cross-entropy and contrastive learning can improve performance on the long-tailed task.
We propose a novel class instance balanced loss (CIBL), which reweights the relative contributions of a cross-entropy and a contrastive loss as a function of the frequency of class instances in the training batch.
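The CIBL summary above describes reweighting a cross-entropy and a contrastive term per sample by the frequency of its class in the batch. A minimal sketch follows; the helper names, the supervised-contrastive form, and the mixing direction (frequent classes lean on cross-entropy, rare ones on the contrastive term) are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def ce_per_sample(logits, labels):
    # numerically stable per-sample cross-entropy
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]

def supcon_per_sample(features, labels, temperature=0.1):
    # per-sample supervised contrastive loss on L2-normalized features
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    n_pos = np.maximum(pos.sum(axis=1), 1)  # samples w/o positives contribute 0
    return -np.where(pos, log_prob, 0.0).sum(axis=1) / n_pos

def cibl_style_loss(logits, features, labels):
    """Mix CE and contrastive terms per sample by batch class frequency
    (illustrative; not necessarily the paper's exact weighting)."""
    counts = np.bincount(labels, minlength=logits.shape[1])
    freq = counts[labels] / len(labels)
    return (freq * ce_per_sample(logits, labels)
            + (1.0 - freq) * supcon_per_sample(features, labels)).mean()

# toy long-tailed batch: class 0 dominates
labels = np.array([0, 0, 0, 0, 1, 2])
logits = rng.normal(size=(6, 3))
features = rng.normal(size=(6, 8))
loss = cibl_style_loss(logits, features, labels)
```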
arXiv Detail & Related papers (2023-07-11T15:09:10Z)
- Inducing Neural Collapse in Deep Long-tailed Learning [13.242721780822848]
We propose two explicit feature regularization terms to learn high-quality representation for class-imbalanced data.
With the proposed regularization, Neural Collapse phenomena will appear under the class-imbalanced distribution.
Our method is easily implemented, highly effective, and can be plugged into most existing methods.
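A common way to make "explicit feature regularization" of this kind concrete is a penalty that pulls each feature toward its class mean, shrinking within-class variability (one hallmark of neural collapse). The sketch below is an assumption in that spirit; the paper's two specific terms are not reproduced here.

```python
import numpy as np

def within_class_pull(features, labels, num_classes):
    """Illustrative regularizer: mean squared distance of each feature to its
    class mean. Zero when every class's features have already collapsed."""
    reg = 0.0
    for c in range(num_classes):
        fc = features[labels == c]
        if len(fc) == 0:
            continue  # class absent from this batch
        reg += ((fc - fc.mean(axis=0)) ** 2).sum()
    return reg / len(features)

# two identical class-0 features and one class-1 feature: already collapsed
feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
labs = np.array([0, 0, 1])
print(within_class_pull(feats, labs, 2))  # -> 0.0
```

Such a term can be added to the task loss with a small coefficient, which matches the summary's claim that the method plugs into most existing pipelines.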
arXiv Detail & Related papers (2023-02-24T05:07:05Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
Data samples usually follow a long-tailed distribution in the real world, and FL on decentralized, long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.