Class Instance Balanced Learning for Long-Tailed Classification
- URL: http://arxiv.org/abs/2307.05322v1
- Date: Tue, 11 Jul 2023 15:09:10 GMT
- Title: Class Instance Balanced Learning for Long-Tailed Classification
- Authors: Marc-Antoine Lavoie, Steven Waslander
- Abstract summary: The long-tailed image classification task deals with large imbalances in the class frequencies of the training data.
Previous approaches have shown that combining cross-entropy and contrastive learning can improve performance on the long-tailed task.
We propose a novel class instance balanced loss (CIBL), which reweights the relative contributions of a cross-entropy and a contrastive loss as a function of the frequency of class instances in the training batch.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The long-tailed image classification task remains important in the
development of deep neural networks as it explicitly deals with large
imbalances in the class frequencies of the training data. While uncommon in
engineered datasets, this imbalance is almost always present in real-world
data. Previous approaches have shown that combining cross-entropy and
contrastive learning can improve performance on the long-tailed task, but they
do not explore the tradeoff between head and tail classes. We propose a novel
class instance balanced loss (CIBL), which reweights the relative contributions
of a cross-entropy and a contrastive loss as a function of the frequency of
class instances in the training batch. This balancing favours the contrastive
loss for more common classes, leading to a learned classifier with a more
balanced performance across all class frequencies. Furthermore, increasing the
relative weight on the contrastive head shifts performance from common (head)
to rare (tail) classes, allowing the user to skew the performance towards these
classes if desired. We also show that replacing the linear classifier head with
a cosine classifier yields a network that can be trained to similar performance
in substantially fewer epochs. We obtain competitive results on both
CIFAR-100-LT and ImageNet-LT.
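The core mechanism, reweighting a per-sample cross-entropy term against a supervised contrastive term by how frequent each sample's class is in the current batch, can be sketched briefly. The following minimal PyTorch sketch is an illustration only: the abstract does not give the exact weighting function, so the linear weight on batch frequency, the `alpha` skew knob, the scale and temperature values, and all names here are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F


class CosineClassifier(torch.nn.Module):
    # Cosine head: logits are scaled cosine similarities between the
    # normalized feature and normalized per-class weight vectors.
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, features):
        z = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * z @ w.T


def supcon_loss(features, labels, temperature=0.1):
    # Per-sample supervised contrastive loss (Khosla et al., 2020).
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # stability
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    log_prob = sim - torch.log((sim.exp() * not_self).sum(dim=1, keepdim=True))
    n_pos = pos.sum(dim=1).clamp(min=1)  # singleton classes contribute zero
    return -(log_prob * pos).sum(dim=1) / n_pos


def cibl_loss(features, logits, labels, alpha=1.0):
    # Assumed form of the class instance balanced weighting: the weight on
    # the contrastive term grows with the class's frequency in this batch,
    # and raising `alpha` skews performance toward the tail classes.
    ce = F.cross_entropy(logits, labels, reduction="none")   # (B,)
    con = supcon_loss(features, labels)                      # (B,)
    counts = torch.bincount(labels, minlength=logits.size(1)).float()
    freq = counts[labels] / labels.numel()                   # in (0, 1]
    w = (alpha * freq).clamp(max=1.0)
    return ((1.0 - w) * ce + w * con).mean()
```

One plausible reading of the faster convergence with the cosine head is that normalizing both features and class weights removes the per-class weight-norm bias a linear head tends to acquire under imbalance.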
Related papers
- RAHNet: Retrieval Augmented Hybrid Network for Long-tailed Graph
Classification [10.806893809269074]
We propose a novel framework called Retrieval Augmented Hybrid Network (RAHNet) to jointly learn a robust feature extractor and an unbiased classifier.
In the feature extractor training stage, we develop a graph retrieval module to search for relevant graphs that directly enrich the intra-class diversity for the tail classes.
We also optimize a category-centered supervised contrastive loss to obtain discriminative representations.
arXiv Detail & Related papers (2023-08-04T14:06:44Z)
- Towards Calibrated Hyper-Sphere Representation via Distribution Overlap Coefficient for Long-tailed Learning [8.208237033120492]
Long-tailed learning aims to tackle the challenge that head classes dominate the training procedure under severe class imbalance in real-world scenarios.
Motivated by this, we generalize cosine-based classifiers to a von Mises-Fisher (vMF) mixture model.
We measure representation quality on the hypersphere by computing a distribution overlap coefficient.
arXiv Detail & Related papers (2022-08-22T03:53:29Z)
- Balanced Contrastive Learning for Long-Tailed Visual Recognition [32.789465918318925]
Real-world data typically follow a long-tailed distribution, where a few majority categories occupy most of the data.
In this paper, we focus on representation learning for imbalanced data.
We propose a novel loss for balanced contrastive learning (BCL).
arXiv Detail & Related papers (2022-07-19T03:48:59Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and kept fixed during training.
Our experimental results show that our method achieves similar performance on image classification for balanced datasets.
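The fixed-classifier idea is concrete enough to sketch: freeze the classifier weights to a randomly rotated simplex ETF (the maximally separated arrangement of K unit vectors) and train only the feature extractor. The construction below is the standard simplex-ETF formula; treating it as this paper's exact recipe, including the names used here, is an assumption.

```python
import torch


def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    # Random orthonormal basis U of shape (feat_dim, num_classes);
    # requires feat_dim >= num_classes.
    assert feat_dim >= num_classes
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    # M = sqrt(K/(K-1)) * U @ (I - 11^T / K) gives K unit vectors whose
    # pairwise cosine similarity is -1/(K-1): a simplex ETF.
    k = num_classes
    m = (k / (k - 1)) ** 0.5 * u @ (torch.eye(k) - torch.ones(k, k) / k)
    return m.T  # (num_classes, feat_dim), one fixed class vector per row


etf = simplex_etf(num_classes=100, feat_dim=512)
etf.requires_grad_(False)        # classifier stays fixed for all of training
# logits = backbone(x) @ etf.T   # only the backbone receives gradients
```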
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- You Only Need End-to-End Training for Long-Tailed Recognition [8.789819609485225]
Cross-entropy loss tends to produce highly correlated features on imbalanced data.
We propose two novel modules: a Block-based Relatively Balanced Batch Sampler (B3RS) and Batch Embedded Training (BET).
Experimental results on the long-tailed classification benchmarks, CIFAR-LT and ImageNet-LT, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-12-11T11:44:09Z)
- Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images, one from a class-agnostic sampler and one from a class-aware sampler, and trains the model such that the representation of the interpolated image can be used to retrieve the centroids of both source classes.
Our result shows a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset with a real-world long-tailed distribution.
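As a rough sketch of that training step (the mixup-style pixel interpolation, the Beta(1, 1) mixing distribution, the temperature, and every name below are assumptions read off this one-sentence summary, not the authors' code):

```python
import torch
import torch.nn.functional as F


def iccl_step(encoder, x_agn, y_agn, x_awr, y_awr, centroids, temperature=0.1):
    # Mix an image drawn by the class-agnostic sampler with one drawn by
    # the class-aware (tail-favouring) sampler.
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    x_mix = lam * x_agn + (1.0 - lam) * x_awr
    z = F.normalize(encoder(x_mix), dim=1)  # (B, D) mixed representations
    c = F.normalize(centroids, dim=1)       # (num_classes, D) class centroids
    logits = z @ c.T / temperature          # retrieval scores over centroids
    # The mixed representation should retrieve BOTH source centroids,
    # weighted by the mixing coefficient.
    return lam * F.cross_entropy(logits, y_agn) \
        + (1.0 - lam) * F.cross_entropy(logits, y_awr)
```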
arXiv Detail & Related papers (2021-10-19T15:24:48Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer for learning representations of long-tailed distributions, achieving better performance than existing approaches on multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to the state-of-the-art, and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)