Distributional Robustness Loss for Long-tail Learning
- URL: http://arxiv.org/abs/2104.03066v1
- Date: Wed, 7 Apr 2021 11:34:04 GMT
- Title: Distributional Robustness Loss for Long-tail Learning
- Authors: Dvir Samuel and Gal Chechik
- Abstract summary: Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
- Score: 20.800627115140465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-world data is often unbalanced and long-tailed, but deep models struggle
to recognize rare classes in the presence of frequent classes. To address
unbalanced data, most studies try balancing the data, the loss, or the
classifier to reduce classification bias towards head classes. Far less
attention has been given to the latent representations learned with unbalanced
data. We show that the feature extractor part of deep networks suffers greatly
from this bias. We propose a new loss based on robustness theory, which
encourages the model to learn high-quality representations for both head and
tail classes. While the general form of the robustness loss may be hard to
compute, we further derive an easy-to-compute upper bound that can be minimized
efficiently. This procedure reduces representation bias towards head classes in
the feature space and achieves new SOTA results on CIFAR100-LT, ImageNet-LT,
and iNaturalist long-tail benchmarks. We find that training with robustness
increases recognition accuracy of tail classes while largely maintaining the
accuracy of head classes. The new robustness loss can be combined with various
classifier balancing techniques and can be applied to representations at
several layers of the deep model.
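To make the idea concrete, here is a minimal sketch of a robustness-style representation loss, assuming per-class feature centroids and a per-class margin that grows for rare classes. The function name, the inverse-square-root margin schedule, and all hyperparameters are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def robustness_loss_sketch(features, labels, centroids, class_counts, eps_scale=0.5):
    """Illustrative DRO-style representation loss (not the authors' code).

    features:     (B, D) embeddings from the feature extractor
    labels:       (B,)   integer class labels
    centroids:    (C, D) running per-class feature means
    class_counts: (C,)   number of training samples per class
    """
    # Distance from every sample to every class centroid.
    dists = torch.cdist(features, centroids)                  # (B, C)

    # Assumed robustness margin: rarer classes get a larger radius,
    # standing in for the uncertainty set of the worst-case bound.
    eps = eps_scale / class_counts.float().sqrt()             # (C,)

    # Inflate the distance to the true class by its margin, then require
    # each sample to remain closest to its own centroid.
    margin = F.one_hot(labels, centroids.size(0)).float() * eps[labels].unsqueeze(1)
    logits = -(dists + margin)
    return F.cross_entropy(logits, labels)
```

In the paper, the margin comes from a derived upper bound on the worst-case expected loss over an uncertainty set around each class's empirical feature distribution; the schedule above is only a plausible stand-in.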
Related papers
- Orthogonal Uncertainty Representation of Data Manifold for Robust Long-Tailed Learning [52.021899899683675]
In scenarios with long-tailed distributions, the model's ability to identify tail classes is limited due to the under-representation of tail samples.
We propose an Orthogonal Uncertainty Representation (OUR) of feature embeddings and an end-to-end training strategy to mitigate the long-tail problem and improve model robustness.
arXiv Detail & Related papers (2023-10-16T05:50:34Z)
- A dual-branch model with inter- and intra-branch contrastive loss for long-tailed recognition [7.225494453600985]
Models trained on long-tailed datasets adapt poorly to tail classes, and their decision boundaries are ambiguous.
We propose a simple yet effective model, named Dual-Branch Long-Tailed Recognition (DB-LTR), which includes an imbalanced learning branch and a Contrastive Learning Branch (CoLB).
CoLB improves the model's ability to adapt to tail classes and helps the imbalanced learning branch learn a well-represented feature space and a discriminative decision boundary.
arXiv Detail & Related papers (2023-09-28T03:31:11Z)
- Dual Compensation Residual Networks for Class Imbalanced Learning [98.35401757647749]
An important factor causing overfitting is severe feature drift between training and test data on tail classes.
We propose Dual Compensation Residual Networks to better fit both tail and head classes, along with a Residual Balanced Multi-Proxies classifier to alleviate the under-fitting issue.
arXiv Detail & Related papers (2023-08-25T04:06:30Z)
- Class Instance Balanced Learning for Long-Tailed Classification [0.0]
The long-tailed image classification task deals with large imbalances in the class frequencies of the training data.
Previous approaches have shown that combining cross-entropy and contrastive learning can improve performance on the long-tailed task.
We propose a novel class instance balanced loss (CIBL), which reweights the relative contributions of a cross-entropy and a contrastive loss as a function of the frequency of class instances in the training batch.
arXiv Detail & Related papers (2023-07-11T15:09:10Z)
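As a rough illustration of this reweighting idea, here is a minimal sketch; the weighting rule, the simplified supervised-contrastive term, and all names are interpretations of the abstract, not the authors' code:

```python
import torch
import torch.nn.functional as F

def cibl_sketch(logits, features, labels, temperature=0.1):
    """Illustrative frequency-based mix of CE and contrastive terms."""
    # Per-sample frequency of its class within the current batch.
    counts = torch.bincount(labels, minlength=logits.size(1)).float()
    freq = counts[labels] / labels.numel()                    # (B,)

    # Simplified supervised contrastive term over the batch.
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                             # (B, B)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    sim = sim.masked_fill(eye, float('-inf'))                 # drop self-pairs
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    contrast = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)

    ce = F.cross_entropy(logits, labels, reduction='none')

    # Assumed rule: frequent classes lean on CE, rare classes on contrast.
    return (freq * ce + (1 - freq) * contrast).mean()
```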
- Inducing Neural Collapse in Deep Long-tailed Learning [13.242721780822848]
We propose two explicit feature regularization terms to learn high-quality representations for class-imbalanced data.
With the proposed regularization, the Neural Collapse phenomenon emerges even under class-imbalanced distributions.
Our method is easily implemented, highly effective, and can be plugged into most existing methods.
arXiv Detail & Related papers (2023-02-24T05:07:05Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and separates head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
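Normalized (cosine) classifiers, mentioned in the entry above, are a common ingredient in long-tail work. A minimal sketch of the general idea follows; the scale value and initialization are assumptions, and this is not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedClassifier(nn.Module):
    """Cosine classifier: logits are scaled cosine similarities, so head
    classes cannot dominate simply through larger weight norms."""
    def __init__(self, feat_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.kaiming_uniform_(self.weight)
        self.scale = scale  # assumed temperature; typically tuned per dataset

    def forward(self, features):
        w = F.normalize(self.weight, dim=1)   # unit-norm class weights
        f = F.normalize(features, dim=1)      # unit-norm features
        return self.scale * f @ w.t()         # (B, num_classes) logits
```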
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces model variance with multiple experts, reduces model bias with a distribution-aware diversity loss, and reduces computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
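A hedged sketch of the multi-expert pattern described above; the number of experts, the linear heads, and the KL-based diversity term are assumptions, and the dynamic routing module is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExpertHead(nn.Module):
    """Several expert classifiers over a shared backbone feature."""
    def __init__(self, feat_dim, num_classes, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_experts)])

    def forward(self, features):
        return [expert(features) for expert in self.experts]

def diversity_loss(expert_logits):
    # Assumed diversity term: push each expert's prediction away from the
    # ensemble mean (negative KL, so minimizing it encourages diversity).
    mean_p = torch.stack([F.softmax(l, 1) for l in expert_logits]).mean(0)
    loss = 0.0
    for l in expert_logits:
        loss -= F.kl_div(F.log_softmax(l, 1), mean_p, reduction='batchmean')
    return loss / len(expert_logits)
```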
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strengths of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to the state of the art, and an extended ensemble establishes a new state of the art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.