Subclass-balancing Contrastive Learning for Long-tailed Recognition
- URL: http://arxiv.org/abs/2306.15925v2
- Date: Sat, 9 Sep 2023 10:40:30 GMT
- Title: Subclass-balancing Contrastive Learning for Long-tailed Recognition
- Authors: Chengkai Hou and Jieyu Zhang and Haonan Wang and Tianyi Zhou
- Abstract summary: Long-tailed recognition with imbalanced class distribution naturally emerges in practical machine learning applications.
We propose a novel "subclass-balancing contrastive learning" (SBCL) approach that clusters each head class into multiple subclasses whose sizes are similar to those of the tail classes.
We evaluate SBCL on a suite of long-tailed benchmark datasets, where it achieves state-of-the-art performance.
- Score: 38.31221755013738
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Long-tailed recognition with imbalanced class distributions naturally
emerges in practical machine learning applications. Existing methods such as data
reweighting, resampling, and supervised contrastive learning enforce class balance
at the price of introducing an imbalance between instances of the head and tail
classes, which may ignore the rich underlying semantic substructures of the former
and exaggerate the biases in the latter. We overcome these drawbacks with a novel
"subclass-balancing contrastive learning (SBCL)" approach that clusters each head
class into multiple subclasses whose sizes are similar to those of the tail
classes, and enforces representations that capture the two-layer class hierarchy
between the original classes and their subclasses. Since the clustering is
conducted in the representation space and updated over the course of training,
the subclass labels preserve the semantic substructures of the head classes.
Meanwhile, the method does not overemphasize tail-class samples, so each
individual instance contributes equally to representation learning. Hence, our
method achieves both instance- and subclass-balance, while the original class
labels are also learned through contrastive learning among subclasses from
different classes. We evaluate SBCL on a suite of long-tailed benchmark datasets,
where it achieves state-of-the-art performance. In addition, we present extensive
analyses and ablation studies of SBCL to verify its advantages.
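
The method as summarized has two moving parts: a clustering step that splits every head class into subclasses roughly the size of a tail class, and a contrastive loss over the resulting two-layer hierarchy (samples sharing a subclass are strong positives, while samples sharing only the original class label are kept closer than samples from other classes). Below is a minimal sketch of the clustering step, assuming scikit-learn k-means on features extracted by the current encoder; `make_subclass_labels` and the toy data are illustrative, not the authors' released code.

```python
import numpy as np
from sklearn.cluster import KMeans

def make_subclass_labels(features, labels, tail_size):
    """Cluster every class into subclasses of roughly `tail_size` samples
    via k-means in the current representation space (illustrative sketch,
    not the authors' released implementation)."""
    subclass = np.empty(len(labels), dtype=np.int64)
    next_id = 0
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, round(len(idx) / tail_size))  # head classes get many subclasses
        if k == 1:
            subclass[idx] = next_id               # tail classes stay intact
        else:
            km = KMeans(n_clusters=k, n_init=10).fit(features[idx])
            subclass[idx] = next_id + km.labels_
        next_id += k
    return subclass

# Toy long-tailed example: class 0 is a head class, class 1 a tail class.
rng = np.random.default_rng(0)
feats = rng.normal(size=(120, 16)).astype(np.float32)
labels = np.array([0] * 100 + [1] * 20)
sub = make_subclass_labels(feats, labels, tail_size=20)
print(len(np.unique(sub[labels == 0])))  # -> 5 head subclasses of ~20 samples
```

Because the abstract notes that the clustering is updated during training, a training loop would recompute these subclass labels every few epochs as the representation improves and feed them to the contrastive loss as the fine-grained positive/negative definitions.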
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552] (2024-11-02)
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
- Simple-Sampling and Hard-Mixup with Prototypes to Rebalance Contrastive Learning for Text Classification [11.072083437769093] (2024-05-19)
We propose a novel model named SharpReCL for imbalanced text classification tasks.
Our model even outperforms popular large language models across several datasets.
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445] (2023-12-08)
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
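TEEN is described as training-free calibration of new-class prototypes; a common form of such calibration, sketched below under the assumption that prototypes are class-mean features, pulls each biased new-class prototype toward base-class prototypes weighted by cosine similarity. `alpha` and `tau` are illustrative hyperparameters, not values from the paper.

```python
import numpy as np

def calibrate_prototypes(new_protos, base_protos, alpha=0.5, tau=16.0):
    """Training-free calibration sketch: shift each new-class prototype
    toward base prototypes weighted by cosine similarity (assumed form,
    loosely following the TEEN idea; alpha/tau are illustrative)."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sim = normalize(new_protos) @ normalize(base_protos).T  # cosine similarities
    w = np.exp(tau * sim)
    w /= w.sum(axis=1, keepdims=True)                       # softmax over base classes
    return alpha * new_protos + (1 - alpha) * (w @ base_protos)
```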
- Class Uncertainty: A Measure to Mitigate Class Imbalance [0.0] (2023-11-23)
We show that considering solely the cardinality of classes does not cover all issues causing class imbalance.
We propose "Class Uncertainty" as the average predictive uncertainty of the training examples.
We also curate SVCI-20 as a novel dataset in which the classes have equal number of training examples but they differ in terms of their hardness.
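Since the entry above defines Class Uncertainty as an average of per-example predictive uncertainty, here is a minimal sketch using predictive entropy as the uncertainty proxy (the paper's exact estimator may differ):

```python
import numpy as np

def class_uncertainty(probs, labels):
    """Per-class average of per-example predictive entropy (entropy is one
    common uncertainty proxy; the paper's exact estimator may differ)."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)  # per-example entropy
    return {c: float(ent[labels == c].mean()) for c in np.unique(labels)}
```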
- DiGeo: Discriminative Geometry-Aware Learning for Generalized Few-Shot Object Detection [39.937724871284665] (2023-03-16)
Generalized few-shot object detection aims to achieve precise detection on both base classes with abundant annotations and novel classes with limited training data.
Existing approaches enhance few-shot generalization at the cost of base-class performance.
We propose a new training framework, DiGeo, to learn Geometry-aware features of inter-class separation and intra-class compactness.
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424] (2022-12-23)
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse, i.e., when within-class variability of the features is small relative to the separation between class means.
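The bound concerns the simple few-shot rule that such collapse enables: classify a test point by its nearest class mean, computed from a handful of labeled examples in the pretrained feature space. A self-contained sketch of that rule follows (illustrative only; the paper is theoretical and ties the error of this kind of rule to feature variability):

```python
import numpy as np

def few_shot_class_mean_error(train_feats, train_labels, test_feats, test_labels):
    """Nearest class-mean classifier on frozen pretrained features -- the
    kind of few-shot rule whose error such bounds control (illustrative)."""
    classes = np.unique(train_labels)
    means = np.stack([train_feats[train_labels == c].mean(0) for c in classes])
    dists = np.linalg.norm(test_feats[:, None] - means[None], axis=-1)
    pred = classes[np.argmin(dists, axis=1)]
    return float((pred != test_labels).mean())
```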
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601] (2021-12-13)
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
- Self-Supervised Class Incremental Learning [51.62542103481908] (2021-11-18)
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated on new class data, they suffer from catastrophic forgetting: the model can no longer clearly discern old-class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646] (2020-11-04)
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
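The two-stage recipe above separates representation learning from the one-class decision rule; the second stage can be as simple as a shallow detector fit on frozen embeddings. A runnable sketch with scikit-learn's OneClassSVM standing in for that detector (`encode` is a stub for the self-supervised encoder, which is an assumption of this sketch):

```python
import numpy as np
from sklearn.svm import OneClassSVM

def encode(x):
    return x  # assumption: replace with the learned self-supervised representation

rng = np.random.default_rng(0)
train = encode(rng.normal(0, 1, size=(200, 8)))            # one-class training data
test = encode(np.vstack([rng.normal(0, 1, size=(5, 8)),    # inliers
                         rng.normal(4, 1, size=(5, 8))]))  # outliers
clf = OneClassSVM(gamma="scale").fit(train)                # stage 2: shallow detector
print(clf.predict(test))  # +1 for inliers, -1 for outliers
```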
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective [98.70226503904402] (2020-03-24)
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions that models are trained on and the balanced performance expected of them across all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.