Improving GANs for Long-Tailed Data through Group Spectral Regularization
- URL: http://arxiv.org/abs/2208.09932v1
- Date: Sun, 21 Aug 2022 17:51:05 GMT
- Title: Improving GANs for Long-Tailed Data through Group Spectral Regularization
- Authors: Harsh Rangwani, Naman Jaswani, Tejan Karmali, Varun Jampani, R. Venkatesh Babu
- Abstract summary: We propose a novel group Spectral Regularizer (gSR) that prevents spectral explosion, thereby alleviating mode collapse.
We find that gSR effectively combines with existing augmentation and regularization techniques, leading to state-of-the-art image generation performance on long-tailed data.
- Score: 51.58250647277375
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep long-tailed learning aims to train useful deep networks on practical,
real-world imbalanced distributions, wherein most labels of the tail classes
are associated with a few samples. There has been a large body of work to train
discriminative models for visual recognition on long-tailed distribution. In
contrast, we aim to train conditional Generative Adversarial Networks, a class
of image generation models, on long-tailed distributions. We find that, similar
to recognition, state-of-the-art methods for image generation also suffer from
performance degradation on tail classes. This degradation is mainly due to
class-specific mode collapse for tail classes, which we observe to be
correlated with spectral explosion of the conditioning parameter matrix. We
propose a novel group Spectral Regularizer (gSR) that prevents this spectral
explosion, alleviating mode collapse and yielding diverse and plausible
image generation even for tail classes. We find that gSR effectively combines
with existing augmentation and regularization techniques, leading to
state-of-the-art image generation performance on long-tailed data. Extensive
experiments demonstrate the efficacy of our regularizer on long-tailed datasets
with different degrees of imbalance.
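A minimal PyTorch sketch of the idea follows, assuming the regularized parameters are the per-class gain (gamma) embedding of the generator's class-conditional BatchNorm layers; the group count, the weight `lam`, and the use of a full SVD (where an efficient estimator such as power iteration would typically be preferred) are illustrative choices, not the authors' exact implementation.

```python
import torch

def group_spectral_regularizer(class_params: torch.Tensor,
                               num_groups: int = 16,
                               lam: float = 0.5) -> torch.Tensor:
    """Penalize the spectral norm of grouped class-conditioning parameters.

    class_params: (num_classes, dim) matrix, e.g. the gain embedding of a
    conditional BatchNorm layer. Each class vector is reshaped into a
    (num_groups, dim // num_groups) matrix whose largest singular value is
    penalized, discouraging the spectral explosion associated with
    class-specific mode collapse on tail classes.
    """
    num_classes, dim = class_params.shape
    assert dim % num_groups == 0, "dim must be divisible by num_groups"
    mats = class_params.view(num_classes, num_groups, dim // num_groups)
    # svdvals returns singular values in descending order; take the largest.
    sigma_max = torch.linalg.svdvals(mats)[:, 0]
    return lam * sigma_max.sum()
```

In training, a term like this would simply be added to the GAN loss for each conditional normalization layer, which is why it composes cleanly with existing augmentation and regularization techniques.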
Related papers
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
(2024-02-16)
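A hedged sketch of the contrastive ingredient: a supervised-contrastive loss over features of synthetic images that pulls same-class samples together and pushes different classes apart, reducing distribution overlap. The function name, temperature, and feature extractor are assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def class_overlap_loss(feats: torch.Tensor, labels: torch.Tensor,
                       tau: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss on features of synthetic images."""
    z = F.normalize(feats, dim=1)
    sim = z @ z.T / tau                           # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float("-inf"))  # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability of same-class pairs per anchor
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()          # skip anchors w/o positives
```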
- SuperDisco: Super-Class Discovery Improves Visual Recognition for the Long-Tail [69.50380510879697]
We propose SuperDisco, an algorithm that discovers super-class representations for long-tailed recognition.
We learn to construct a super-class graph that guides representation learning under long-tailed distributions.
(2023-03-31)
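The paper learns the super-class structure end to end; as a loose stand-in, the sketch below discovers super-classes by clustering class prototypes and nudges features toward their super-class center. Everything here (the k-means step, the mixing weight `alpha`) is an assumption for illustration, not SuperDisco's actual graph construction.

```python
import torch

def discover_superclasses(prototypes: torch.Tensor, num_super: int,
                          iters: int = 10):
    """Toy super-class discovery: k-means over class prototypes."""
    centers = prototypes[torch.randperm(prototypes.size(0))[:num_super]].clone()
    for _ in range(iters):
        assign = torch.cdist(prototypes, centers).argmin(dim=1)
        for k in range(num_super):
            members = prototypes[assign == k]
            if members.numel() > 0:
                centers[k] = members.mean(dim=0)
    return centers, assign  # assign: super-class id per class

def superclass_guided(feat, labels, assign, centers, alpha=0.3):
    # pull each sample's feature toward its super-class center, a crude
    # proxy for graph-guided representation learning
    return (1 - alpha) * feat + alpha * centers[assign[labels]]
```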
- Long-tailed Recognition by Learning from Latent Categories [70.6272114218549]
We introduce a Latent Categories based long-tail Recognition (LCReg) method.
Specifically, we learn a set of class-agnostic latent features shared among the head and tail classes.
Then, we implicitly enrich the training sample diversity by applying semantic data augmentation to the latent features.
(2022-06-02)
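A minimal sketch of semantic augmentation on latent features, using a per-class diagonal variance estimate; the diagonal simplification, the EMA update, and `strength` are assumptions rather than LCReg's exact procedure.

```python
import torch

def update_class_var(class_var, latents, labels, momentum=0.9):
    """EMA estimate of per-class (diagonal) latent-feature variance."""
    for c in labels.unique():
        batch = latents[labels == c]
        if batch.size(0) > 1:
            v = batch.var(dim=0, unbiased=False)
            class_var[c] = momentum * class_var[c] + (1 - momentum) * v
    return class_var

def semantic_augment(latents, labels, class_var, strength=0.5):
    """Enrich sample diversity by adding class-conditional Gaussian noise
    to latent features (implicit semantic data augmentation)."""
    noise = torch.randn_like(latents) * class_var[labels].sqrt()
    return latents + strength * noise
```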
- Long-Tailed Classification with Gradual Balanced Loss and Adaptive Feature Generation [19.17617301462919]
We propose a new method, Gradual Balanced Loss and Adaptive Feature Generator (GLAG), to alleviate class imbalance.
State-of-the-art results are achieved on long-tailed datasets such as CIFAR100-LT, ImageNet-LT, and iNaturalist.
(2022-02-28)
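The summary gives little detail on the loss itself; one plausible reading of a "gradual" balanced loss, sketched under that assumption, anneals from plain cross-entropy toward a logit-adjusted (balanced-softmax-style) cross-entropy over training.

```python
import torch
import torch.nn.functional as F

def gradual_balanced_loss(logits, targets, class_counts, progress):
    """progress in [0, 1], e.g. epoch / total_epochs. At progress = 0 this
    is ordinary cross-entropy; at progress = 1 it is fully adjusted by the
    log class prior, compensating for head-class dominance. The linear
    schedule is an illustrative assumption, not GLAG's exact formulation.
    """
    log_prior = torch.log(class_counts.float() / class_counts.sum())
    return F.cross_entropy(logits + progress * log_prior, targets)
```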
- Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images, one from a class-agnostic sampler and one from a class-aware sampler, and trains the model such that the representation of the interpolated image can be used to retrieve the centroids of both source classes.
Our results show a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset, which has a real-world long-tailed distribution.
(2021-10-19)
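A sketch of the interpolation step, assuming mixup-style blending and a cosine-similarity centroid classifier; the Beta(1, 1) prior, the temperature, and how centroids are maintained are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def iccl_step(model, centroids, x_agn, y_agn, x_aware, y_aware, tau=0.1):
    """x_agn/y_agn come from a class-agnostic (instance-uniform) sampler,
    x_aware/y_aware from a class-aware (class-balanced) sampler."""
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    x_mix = lam * x_agn + (1.0 - lam) * x_aware
    z = F.normalize(model(x_mix), dim=1)
    logits = z @ F.normalize(centroids, dim=1).T / tau
    # the interpolated representation should retrieve BOTH source centroids
    return lam * F.cross_entropy(logits, y_agn) \
        + (1.0 - lam) * F.cross_entropy(logits, y_aware)
```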
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on the CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 benchmarks.
(2020-10-05)
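A sketch of the multi-expert structure with a simple diversity term; the routing module is omitted, and the diversity loss here (pushing each expert's prediction away from the ensemble mean) is an illustrative stand-in for RIDE's distribution-aware loss, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExpertHead(nn.Module):
    """Several expert classifiers over a shared backbone; averaging their
    logits at inference reduces variance."""
    def __init__(self, backbone, feat_dim, num_classes, num_experts=3):
        super().__init__()
        self.backbone = backbone
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)])

    def forward(self, x):
        h = self.backbone(x)
        return torch.stack([e(h) for e in self.experts])  # (E, N, C)

def multi_expert_loss(expert_logits, targets, div_weight=0.2):
    # per-expert cross-entropy, plus a bonus for diverging from the mean
    ce = torch.stack([F.cross_entropy(l, targets)
                      for l in expert_logits]).mean()
    mean_p = F.softmax(expert_logits, dim=-1).mean(dim=0)
    kl = torch.stack([F.kl_div(F.log_softmax(l, dim=-1), mean_p,
                               reduction="batchmean")
                      for l in expert_logits]).mean()
    return ce - div_weight * kl
```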