A dual-branch model with inter- and intra-branch contrastive loss for
long-tailed recognition
- URL: http://arxiv.org/abs/2309.16135v1
- Date: Thu, 28 Sep 2023 03:31:11 GMT
- Title: A dual-branch model with inter- and intra-branch contrastive loss for
long-tailed recognition
- Authors: Qiong Chen, Tianlin Huang, Geren Zhu, Enlu Lin
- Abstract summary: Models trained on long-tailed datasets adapt poorly to tail classes, and their decision boundaries are ambiguous.
We propose a simple yet effective model, named Dual-Branch Long-Tailed Recognition (DB-LTR), which includes an imbalanced learning branch and a Contrastive Learning Branch (CoLB).
CoLB improves the model's ability to adapt to tail classes and helps the imbalanced learning branch learn a well-represented feature space and a discriminative decision boundary.
- Score: 7.225494453600985
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Real-world data often exhibits a long-tailed distribution, in which head
classes occupy most of the data, while tail classes only have very few samples.
Models trained on long-tailed datasets adapt poorly to tail classes, and their
decision boundaries are ambiguous. Therefore, in this paper, we propose
a simple yet effective model, named Dual-Branch Long-Tailed Recognition
(DB-LTR), which includes an imbalanced learning branch and a Contrastive
Learning Branch (CoLB). The imbalanced learning branch, which consists of a
shared backbone and a linear classifier, leverages common imbalanced learning
approaches to tackle the data imbalance issue. In CoLB, we learn a prototype
for each tail class, and calculate an inter-branch contrastive loss, an
intra-branch contrastive loss and a metric loss. CoLB improves the model's
ability to adapt to tail classes and helps the imbalanced learning branch learn
a well-represented feature space and a discriminative decision boundary.
Extensive experiments on three long-tailed benchmark
datasets, i.e., CIFAR100-LT, ImageNet-LT and Places-LT, show that our DB-LTR is
competitive with, and superior to, the compared methods.
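To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of a dual-branch setup in the spirit of DB-LTR. The abstract does not give the exact loss formulations, so the inter-branch term (aligning features across the two branches), the intra-branch term (contrasting tail features against learnable tail-class prototypes), and the metric term (distance to the assigned prototype) are generic InfoNCE/Euclidean placeholders rather than the authors' definitions; the names DualBranchLTR and colb_losses are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBranchLTR(nn.Module):
    """Shared backbone with an imbalanced-learning branch (linear classifier)
    and learnable tail-class prototypes for the contrastive branch (CoLB)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 num_tail_classes: int):
        super().__init__()
        self.backbone = backbone                              # shared feature extractor
        self.classifier = nn.Linear(feat_dim, num_classes)    # imbalanced learning branch
        # Assumption: one free prototype parameter per tail class; the paper may
        # instead derive prototypes from running feature statistics.
        self.tail_prototypes = nn.Parameter(torch.randn(num_tail_classes, feat_dim))

    def forward(self, x):
        feats = F.normalize(self.backbone(x), dim=1)
        logits = self.classifier(feats)
        return feats, logits


def colb_losses(feats_a, feats_b, tail_mask, tail_proto_idx, prototypes, t=0.1):
    """Placeholder CoLB terms computed from two views of the same batch.

    feats_a, feats_b : L2-normalized features of two augmented views.
    tail_mask        : bool mask selecting tail-class samples in the batch.
    tail_proto_idx   : prototype index for each tail-class sample.
    """
    protos = F.normalize(prototypes, dim=1)

    # Inter-branch contrastive loss: InfoNCE aligning the two views of each image.
    sim = feats_a @ feats_b.t() / t
    targets = torch.arange(sim.size(0), device=sim.device)
    inter = F.cross_entropy(sim, targets)

    if tail_mask.any():
        tail_feats = feats_b[tail_mask]
        # Intra-branch contrastive loss: pull tail features toward their own
        # prototype and push them away from the other prototypes.
        intra = F.cross_entropy(tail_feats @ protos.t() / t, tail_proto_idx)
        # Metric loss: squared distance to the assigned prototype.
        metric = ((tail_feats - protos[tail_proto_idx]) ** 2).sum(dim=1).mean()
    else:
        zero = feats_a.new_zeros(())
        intra, metric = zero, zero

    return inter, intra, metric
```

In training, the total objective would combine a standard imbalanced-learning loss (e.g., re-weighted or re-sampled cross-entropy) on the classifier logits with a weighted sum of the three CoLB terms; the weights are hyperparameters not specified here.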
Related papers
- Long-Tail Learning with Rebalanced Contrastive Loss [1.4443576276330394]
We present Rebalanced Contrastive Learning (RCL), an efficient means to increase long-tailed classification accuracy.
RCL addresses three main aspects: feature-space balancedness, intra-class compactness, and regularization.
Our experiments on three benchmark datasets demonstrate the richness of the learnt embeddings and the increased top-1 balanced accuracy that RCL provides to the BCL framework.
arXiv Detail & Related papers (2023-12-04T09:27:03Z)
- Orthogonal Uncertainty Representation of Data Manifold for Robust Long-Tailed Learning [52.021899899683675]
In scenarios with long-tailed distributions, the model's ability to identify tail classes is limited due to the under-representation of tail samples.
We propose an Orthogonal Uncertainty Representation (OUR) of feature embedding and an end-to-end training strategy to improve model robustness under long-tailed distributions.
arXiv Detail & Related papers (2023-10-16T05:50:34Z)
- Dual Compensation Residual Networks for Class Imbalanced Learning [98.35401757647749]
We propose Dual Compensation Residual Networks to better fit both tail and head classes.
An important factor causing overfitting is that there is severe feature drift between training and test data on tail classes.
We also propose a Residual Balanced Multi-Proxies classifier to alleviate the under-fitting issue.
arXiv Detail & Related papers (2023-08-25T04:06:30Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely biases data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and partitions the head and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Balanced Contrastive Learning for Long-Tailed Visual Recognition [32.789465918318925]
Real-world data typically follow a long-tailed distribution, where a few majority categories occupy most of the data.
In this paper, we focus on representation learning for imbalanced data.
We propose a novel loss for balanced contrastive learning (BCL).
arXiv Detail & Related papers (2022-07-19T03:48:59Z)
- Improving Tail-Class Representation with Centroid Contrastive Learning [145.73991900239017]
We propose interpolative centroid contrastive learning (ICCL) to improve long-tailed representation learning.
ICCL interpolates two images from a class-agnostic sampler and a class-aware sampler, and trains the model such that the representation of the interpolated image can be used to retrieve the centroids of both source classes.
Our result shows a significant accuracy gain of 2.8% on the iNaturalist 2018 dataset with a real-world long-tailed distribution.
arXiv Detail & Related papers (2021-10-19T15:24:48Z)
- Balanced Knowledge Distillation for Long-tailed Learning [9.732397447057318]
Deep models trained on long-tailed datasets exhibit unsatisfactory performance on tail classes.
Existing methods usually modify the classification loss to increase the learning focus on tail classes.
We propose Balanced Knowledge Distillation to disentangle the contradiction between the two goals and achieve both simultaneously.
arXiv Detail & Related papers (2021-04-21T13:07:35Z)
- Distributional Robustness Loss for Long-tail Learning [20.800627115140465]
Real-world data is often unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
We show that the feature extractor part of deep networks suffers greatly from this bias.
We propose a new loss based on robustness theory, which encourages the model to learn high-quality representations for both head and tail classes.
arXiv Detail & Related papers (2021-04-07T11:34:04Z)
- Long-tailed Recognition by Routing Diverse Distribution-Aware Experts [64.71102030006422]
We propose a new long-tailed classifier called RoutIng Diverse Experts (RIDE).
It reduces the model variance with multiple experts, reduces the model bias with a distribution-aware diversity loss, and reduces the computational cost with a dynamic expert routing module.
RIDE outperforms the state-of-the-art by 5% to 7% on CIFAR100-LT, ImageNet-LT and iNaturalist 2018 benchmarks.
arXiv Detail & Related papers (2020-10-05T06:53:44Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strengths of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to the state of the art, and an extended ensemble establishes a new state of the art on two benchmarks for long-tailed recognition; a generic sketch of the ensemble idea follows below.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
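For the ensemble approach in the last entry above, here is a generic, hypothetical sketch of combining class-balanced experts: each expert is trained with its own class-balanced sampler, and their logits are averaged at inference. This illustrates the general technique only, not the specific expert construction or fusion rule of that paper; class_balanced_loader and ExpertEnsemble are made-up names.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler


def class_balanced_loader(dataset, labels, batch_size=128):
    """DataLoader whose sampler draws every class with equal probability."""
    labels = torch.as_tensor(labels)
    class_counts = torch.bincount(labels).float()
    sample_weights = (1.0 / class_counts)[labels]      # inverse-frequency per-sample weight
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)


class ExpertEnsemble(nn.Module):
    """Averages the logits of several independently trained experts."""

    def __init__(self, experts):
        super().__init__()
        self.experts = nn.ModuleList(experts)

    def forward(self, x):
        # Other fusion rules (confidence weighting, routing) are equally possible.
        return torch.stack([expert(x) for expert in self.experts], dim=0).mean(dim=0)
```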
This list is automatically generated from the titles and abstracts of the papers on this site.