Imbalanced Image Classification with Complement Cross Entropy
- URL: http://arxiv.org/abs/2009.02189v4
- Date: Wed, 4 Aug 2021 10:52:44 GMT
- Title: Imbalanced Image Classification with Complement Cross Entropy
- Authors: Yechan Kim, Younkwan Lee, and Moongu Jeon
- Abstract summary: We concentrate on the study of cross entropy, which mostly ignores output scores on incorrect classes.
This work discovers that neutralizing predicted probabilities on incorrect classes improves prediction accuracy for imbalanced image classification.
The proposed loss makes the ground truth class overwhelm the other classes in terms of softmax probability.
- Score: 10.35173901214638
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, deep learning models have achieved great success in computer vision
applications, relying on large-scale class-balanced datasets. However,
imbalanced class distributions still limit the wide applicability of these
models due to degradation in performance. To solve this problem, in this paper,
we concentrate on the study of cross entropy which mostly ignores output scores
on incorrect classes. This work discovers that neutralizing predicted
probabilities on incorrect classes improves the prediction accuracy for
imbalanced image classification. This paper proposes a simple but effective
loss named complement cross entropy based on this finding. The proposed loss
makes the ground truth class overwhelm the other classes in terms of softmax
probability, by neutralizing probabilities of incorrect classes, without
additional training procedures. In addition, this loss helps models learn key
information, especially from samples in minority classes. It ensures more
accurate and robust classification results on imbalanced
distributions. Extensive experiments on imbalanced datasets demonstrate the
effectiveness of the proposed method.
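As a concrete illustration of the idea, here is a minimal PyTorch sketch of a complement-cross-entropy-style loss, written only from the description above (it is not the authors' reference implementation). It adds to the standard cross entropy a term that maximizes the entropy of the softmax distribution restricted to the incorrect classes, which flattens, i.e. "neutralizes", their probabilities; the 1/(K-1) balancing factor and the default gamma = -1.0 are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def complement_cross_entropy(logits, targets, gamma=-1.0):
    """Sketch of a complement-cross-entropy-style loss.

    Standard cross entropy plus gamma times a balanced entropy computed
    over the incorrect classes. With gamma < 0, minimizing the loss
    maximizes that entropy, flattening ("neutralizing") the softmax
    probabilities of the incorrect classes.
    """
    num_classes = logits.size(1)
    probs = F.softmax(logits, dim=1)                     # (N, K)
    ce = F.cross_entropy(logits, targets)                # standard term

    # Probability assigned to the ground-truth class, per sample.
    p_true = probs.gather(1, targets.unsqueeze(1))       # (N, 1)

    # Renormalize the remaining mass so the incorrect classes sum to 1,
    # then zero out the ground-truth entry.
    p_comp = probs / (1.0 - p_true + 1e-7)               # (N, K)
    p_comp = p_comp.scatter(1, targets.unsqueeze(1), 0.0)

    # Entropy over incorrect classes, balanced by 1 / (K - 1).
    comp_entropy = -(p_comp * torch.log(p_comp + 1e-7)).sum(dim=1)
    comp_entropy = comp_entropy.mean() / (num_classes - 1)

    return ce + gamma * comp_entropy
```

Used as a drop-in replacement for `F.cross_entropy(logits, targets)` in an ordinary training loop, the complement term is computed from the same forward pass, so no additional training procedure is required.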
Related papers
- When resampling/reweighting improves feature learning in imbalanced classification?: A toy-model study [5.5730368125641405]
A toy model of binary classification is studied to clarify how class-wise resampling/reweighting affects feature learning performance in the presence of class imbalance.
The result shows that there exists a case in which applying no resampling or reweighting gives the best feature learning performance, irrespective of the choice of loss or classifier.
arXiv Detail & Related papers (2024-09-09T13:31:00Z)
- Class-Balancing Diffusion Models [57.38599989220613]
Class-Balancing Diffusion Models (CBDM) are trained with a distribution adjustment regularizer as a solution.
Our method is benchmarked on the CIFAR100/CIFAR100LT datasets and shows outstanding performance on the downstream recognition task.
arXiv Detail & Related papers (2023-04-30T20:00:14Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- An Embarrassingly Simple Baseline for Imbalanced Semi-Supervised Learning [103.65758569417702]
Semi-supervised learning (SSL) has shown great promise in leveraging unlabeled data to improve model performance.
We consider a more realistic and challenging setting called imbalanced SSL, where imbalanced class distributions occur in both labeled and unlabeled data.
We study a simple yet overlooked baseline -- SimiS -- which tackles data imbalance by simply supplementing labeled data with pseudo-labels.
arXiv Detail & Related papers (2022-11-20T21:18:41Z)
- Imbalanced Nodes Classification for Graph Neural Networks Based on Valuable Sample Mining [9.156427521259195]
A new loss function, FD-Loss, is constructed based on the traditional algorithm-level approach to the imbalance problem.
Our loss function can effectively solve the sample node imbalance problem and improve the classification accuracy by 4% compared to existing methods in the node classification task.
arXiv Detail & Related papers (2022-09-18T09:22:32Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- Prototype-Anchored Learning for Learning with Imperfect Annotations [83.7763875464011]
It is challenging to learn unbiased classification models from imperfectly annotated datasets.
We propose a prototype-anchored learning (PAL) method, which can be easily incorporated into various learning-based classification schemes.
We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-06-23T10:25:37Z)
- Throwing Away Data Improves Worst-Class Error in Imbalanced Classification [36.91428748713018]
Class imbalances pervade classification problems, yet their treatment differs in theory and practice.
We take on the challenge of developing learning theory able to describe the worst-class error of classifiers over linearly-separable data.
arXiv Detail & Related papers (2022-05-23T23:43:18Z)
- Imbalanced Classification via Explicit Gradient Learning From Augmented Data [0.0]
We propose a novel deep meta-learning technique to augment a given imbalanced dataset with new minority instances.
The advantage of the proposed method is demonstrated on synthetic and real-world datasets with various imbalance ratios.
arXiv Detail & Related papers (2022-02-21T22:16:50Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore the balance in imbalanced images, by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.