RSG: A Simple but Effective Module for Learning Imbalanced Datasets
- URL: http://arxiv.org/abs/2106.09859v1
- Date: Fri, 18 Jun 2021 01:10:27 GMT
- Title: RSG: A Simple but Effective Module for Learning Imbalanced Datasets
- Authors: Jianfeng Wang, Thomas Lukasiewicz, Xiaolin Hu, Jianfei Cai, Zhenghua Xu
- Abstract summary: We propose a new rare-class sample generator (RSG) to generate new samples for rare classes during training.
RSG is convenient to use and highly versatile, because it can be easily integrated into any kind of convolutional neural network.
We obtain competitive results on Imbalanced CIFAR, ImageNet-LT, and iNaturalist 2018 using RSG.
- Score: 99.77194308426606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imbalanced datasets widely exist in practice and are a great challenge for
training deep neural models with a good generalization on infrequent classes. In
this work, we propose a new rare-class sample generator (RSG) to solve this
problem. RSG aims to generate some new samples for rare classes during training,
and it has in particular the following advantages: (1) it is convenient to use
and highly versatile, because it can be easily integrated into any kind of
convolutional neural network, and it works well when combined with different
loss functions, and (2) it is only used during the training phase, and
therefore, no additional burden is imposed on deep neural networks during the
testing phase. In extensive experimental evaluations, we verify the
effectiveness of RSG. Furthermore, by leveraging RSG, we obtain competitive
results on Imbalanced CIFAR and new state-of-the-art results on Places-LT,
ImageNet-LT, and iNaturalist 2018. The source code is available at
https://github.com/Jianf-Wang/RSG.
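The abstract does not spell out the generator's internals, but the general idea of creating extra rare-class samples during training can be illustrated with a small, hypothetical sketch: rare-class features are augmented by reusing the intra-class variation observed in frequent classes. The function and variable names below (generate_rare_samples, class_centers, rare_classes) are illustrative assumptions, not the authors' code.

```python
# Minimal, hypothetical sketch of the general idea (not the authors' RSG
# implementation): during training, extra rare-class feature vectors are
# created by reusing the intra-class variation observed in frequent classes.
import torch

def generate_rare_samples(feats, labels, rare_classes, class_centers, num_new=8):
    """Append synthetic rare-class features to a mini-batch.

    feats:         (N, D) backbone features of the current mini-batch
    labels:        (N,)   integer class labels
    rare_classes:  set of class ids treated as rare (an assumed split)
    class_centers: (C, D) running per-class feature means
    """
    freq_mask = torch.tensor([int(l) not in rare_classes for l in labels])
    if freq_mask.sum() == 0:
        return feats, labels

    # Displacements of frequent-class samples from their own class centers.
    freq_disp = feats[freq_mask] - class_centers[labels[freq_mask]]

    new_feats, new_labels = [], []
    for c in rare_classes:
        # Reuse frequent-class displacements around the rare-class center.
        idx = torch.randint(0, freq_disp.size(0), (num_new,))
        new_feats.append(class_centers[c] + freq_disp[idx])
        new_labels.append(torch.full((num_new,), c, dtype=labels.dtype))

    return (torch.cat([feats] + new_feats, dim=0),
            torch.cat([labels] + new_labels, dim=0))
```

The augmented features and labels would then be passed to the classification head and loss as usual; since nothing is generated at inference time, the test-time network is unchanged, consistent with the abstract's claim of no additional burden during testing.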
Related papers
- A Coefficient Makes SVRG Effective [55.104068027239656]
Stochastic Variance Reduced Gradient (SVRG) is a theoretically compelling optimization method.
In this work, we demonstrate the potential of SVRG in optimizing real-world neural networks.
Our analysis finds that, for deeper networks, the strength of the variance reduction term in SVRG should be smaller and decrease as training progresses.
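As a rough illustration of this finding (not the authors' code), an SVRG update can carry a coefficient on the variance-reduction correction, with the coefficient kept below one and decayed over training:

```python
# Hedged sketch: a generic SVRG step with a coefficient on the
# variance-reduction term; names and defaults are illustrative.
import torch

def svrg_step(w, w_snap, full_grad_snap, grad_fn, lr=0.1, alpha=0.5):
    """One SVRG update on a flat parameter tensor w.

    grad_fn(v): stochastic gradient at v for the *same* mini-batch, so it is
                evaluated at both w and the snapshot w_snap.
    full_grad_snap: full-batch gradient at w_snap.
    alpha: strength of the variance-reduction correction; alpha=1 recovers
           vanilla SVRG, and the cited work suggests it should shrink over
           training (and be smaller for deeper networks).
    """
    g = grad_fn(w) - alpha * (grad_fn(w_snap) - full_grad_snap)
    return w - lr * g

# Toy usage with f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = torch.ones(3)
w_snap = w.clone()
full_grad_snap = w_snap.clone()
w = svrg_step(w, w_snap, full_grad_snap, grad_fn=lambda v: v)
```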
arXiv Detail & Related papers (2023-11-09T18:47:44Z)
- Hybrid Graph Neural Networks for Few-Shot Learning [85.93495480949079]
Graph neural networks (GNNs) have been used to tackle the few-shot learning problem.
Under the inductive setting, existing GNN-based methods are less competitive.
We propose a novel hybrid GNN model consisting of two GNNs, an instance GNN and a prototype GNN.
arXiv Detail & Related papers (2021-12-13T10:20:15Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions by achieving better performance than existing approaches on multiple datasets.
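The regularizer's exact form is not given in this summary; one common way to use a frozen, pre-trained classifier to encourage balanced generation is to push its average predicted class distribution on generated samples towards uniform. The sketch below shows only that generic idea, with hypothetical names, and is not the paper's regularizer:

```python
# Illustrative only: KL(uniform || mean predicted class distribution) as a
# balance penalty for a generator, computed with a frozen classifier.
import torch
import torch.nn.functional as F

def class_balance_penalty(fake_images, frozen_classifier):
    """frozen_classifier's parameters are not updated; gradients reach the
    generator only through fake_images."""
    probs = F.softmax(frozen_classifier(fake_images), dim=1)  # (B, C)
    avg = probs.mean(dim=0).clamp_min(1e-8)                   # (C,)
    uniform = torch.full_like(avg, 1.0 / avg.numel())
    return torch.sum(uniform * (uniform.log() - avg.log()))
```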
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- Self-Damaging Contrastive Learning [92.34124578823977]
Unlabeled data in reality is commonly imbalanced and shows a long-tail distribution.
This paper proposes a principled framework called Self-Damaging Contrastive Learning (SDCLR) to automatically balance representation learning without knowing the classes.
Our experiments show that SDCLR significantly improves not only overall accuracies but also balancedness.
arXiv Detail & Related papers (2021-06-06T00:04:49Z)
- Constrained Optimization for Training Deep Neural Networks Under Class Imbalance [9.557146081524008]
We introduce a novel constraint that can be used with existing loss functions to enforce maximal area under the ROC curve.
We present experimental results for image-based classification applications using CIFAR10 and an in-house medical imaging dataset.
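The constraint's precise form is not stated here; a standard surrogate for maximizing the area under the ROC curve is a pairwise hinge over positive/negative score pairs, added as a penalty to an existing loss. The sketch below shows that generic surrogate (binary case), not the paper's formulation:

```python
# Generic pairwise hinge surrogate for AUC; names and margin are illustrative.
import torch

def auc_hinge_surrogate(scores, labels, margin=1.0):
    """Penalize positive/negative pairs where the positive score does not
    exceed the negative score by at least `margin` (labels in {0, 1})."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.new_zeros(())
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)       # (P, N) pairwise gaps
    return torch.clamp(margin - diff, min=0).mean()

# e.g., total_loss = base_loss + lam * auc_hinge_surrogate(scores, labels)
```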
arXiv Detail & Related papers (2021-02-21T09:49:36Z)
- Training Sparse Neural Networks using Compressed Sensing [13.84396596420605]
We develop and test a novel method based on compressed sensing which combines the pruning and training into a single step.
Specifically, we utilize an adaptively weighted $\ell_1$ penalty on the weights during training, which we combine with a generalization of the regularized dual averaging (RDA) algorithm in order to train sparse neural networks.
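As a minimal illustration of a per-weight, adaptively weighted $\ell_1$ penalty (this is not the paper's RDA-based algorithm), one can apply a proximal soft-thresholding step after each gradient update, with a per-weight threshold:

```python
# Sketch: soft-threshold each parameter after the optimizer step; `weights`
# carries the adaptive per-weight penalty strengths (an assumed schedule).
import torch

def weighted_l1_prox_step(param, weights, lr, lam):
    """In-place soft-thresholding of `param` with threshold lr * lam * weights."""
    thresh = lr * lam * weights
    with torch.no_grad():
        param.copy_(torch.sign(param) * torch.clamp(param.abs() - thresh, min=0))
```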
arXiv Detail & Related papers (2020-08-21T19:35:54Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalesced capsule-GAN is effective at recognizing highly overlapping classes with far fewer parameters than the convolutional GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
- Equalization Loss for Long-Tailed Object Recognition [109.91045951333835]
State-of-the-art object detection methods still perform poorly on large vocabulary and long-tailed datasets.
We propose a simple but effective loss, named equalization loss, to tackle the problem of long-tailed rare categories.
Our method achieves AP gains of 4.1% and 4.8% for the rare and common categories on the challenging LVIS benchmark.
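The published equalization loss has additional detection-specific details; a simplified, classification-style reading of the core idea is a per-class sigmoid cross-entropy in which the negative terms for rare categories are masked out, so that abundant negatives from frequent classes cannot suppress rare ones. The sketch below is that simplified reading, with illustrative names:

```python
# Simplified equalization-style loss (illustrative, not the paper's exact loss).
import torch
import torch.nn.functional as F

def equalization_style_loss(logits, targets, rare_mask):
    """logits: (B, C); targets: (B, C) float one-hot; rare_mask: (C,) bool."""
    neg = 1.0 - targets                                # 1 where the class is a negative
    w = 1.0 - rare_mask.float().unsqueeze(0) * neg     # drop negatives of rare classes
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return (w * bce).sum() / targets.sum().clamp_min(1.0)
```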
arXiv Detail & Related papers (2020-03-11T09:14:53Z)
- AL2: Progressive Activation Loss for Learning General Representations in Classification Neural Networks [12.14537824884951]
We propose a novel regularization method that progressively penalizes the magnitude of activations during training.
Our method's effect on generalization is analyzed with label randomization tests and cumulative ablations.
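A minimal sketch of the "progressive" idea, assuming a simple linear ramp on the penalty weight (the paper's actual schedule and norm may differ):

```python
# Activation-magnitude penalty whose weight grows linearly over training.
import torch

def progressive_activation_penalty(activations, step, total_steps, max_weight=1e-4):
    weight = max_weight * min(step / max(total_steps, 1), 1.0)
    return weight * activations.pow(2).mean()

# Usage: loss = task_loss + progressive_activation_penalty(feats, step, total_steps)
```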
arXiv Detail & Related papers (2020-03-07T18:38:46Z)