Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition
from a Domain Adaptation Perspective
- URL: http://arxiv.org/abs/2003.10780v1
- Date: Tue, 24 Mar 2020 11:28:42 GMT
- Title: Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition
from a Domain Adaptation Perspective
- Authors: Muhammad Abdullah Jamal and Matthew Brown and Ming-Hsuan Yang and
Liqiang Wang and Boqing Gong
- Abstract summary: Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions seen by a machine learning model and our expectation that it perform well on all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
- Score: 98.70226503904402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object frequency in the real world often follows a power law, leading to a
mismatch between datasets with long-tailed class distributions seen by a
machine learning model and our expectation of the model to perform well on all
classes. We analyze this mismatch from a domain adaptation point of view. First
of all, we connect existing class-balanced methods for long-tailed
classification to target shift, a well-studied scenario in domain adaptation.
The connection reveals that these methods implicitly assume that the training
data and test data share the same class-conditioned distribution, which does
not hold in general and especially for the tail classes. While a head class
could contain abundant and diverse training examples that well represent the
expected data at inference time, the tail classes are often short of
representative training data. To this end, we propose to augment the classic
class-balanced learning by explicitly estimating the differences between the
class-conditioned distributions with a meta-learning approach. We validate our
approach with six benchmark datasets and three loss functions.
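The classic class-balanced learning that the abstract proposes to augment can be sketched as a re-weighted cross-entropy, where tail classes receive larger per-example weights than head classes. A minimal illustration follows; the effective-number weighting scheme and all function names are illustrative, not the paper's meta-learned conditional weights.

```python
import numpy as np

def class_balanced_weights(labels, beta=0.999):
    # Effective-number re-weighting: w_c proportional to (1 - beta) / (1 - beta**n_c),
    # so rare (tail) classes receive larger weights than frequent (head) classes.
    counts = np.bincount(labels)
    weights = (1.0 - beta) / (1.0 - np.power(beta, counts))
    return weights * len(weights) / weights.sum()  # normalize: mean weight = 1

def class_balanced_cross_entropy(logits, labels, weights):
    # Softmax cross-entropy where each example is scaled by the weight of its true class.
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    per_example = -log_probs[np.arange(len(labels)), labels]
    return float((weights[labels] * per_example).mean())
```

With a 90:10 head/tail split, the tail class ends up weighted several times more heavily than the head class. The paper's contribution goes beyond such fixed weights: it additionally estimates the differences between train- and test-time class-conditioned distributions via meta-learning.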
Related papers
- FedLF: Adaptive Logit Adjustment and Feature Optimization in Federated Long-Tailed Learning [5.23984567704876]
Federated learning offers a paradigm for privacy-preserving distributed machine learning.
Traditional approaches fail to address the class-wise bias that arises in globally long-tailed data.
The proposed method, FedLF, introduces three modifications in the local training phase: adaptive logit adjustment, continuous class-centred optimization, and feature decorrelation.
arXiv Detail & Related papers (2024-09-18T16:25:29Z)
- Subclass-balancing Contrastive Learning for Long-tailed Recognition [38.31221755013738]
Long-tailed recognition with imbalanced class distribution naturally emerges in practical machine learning applications.
We propose a novel "subclass-balancing contrastive learning" (SBCL) approach that clusters each head class into multiple subclasses of sizes similar to those of the tail classes.
We evaluate SBCL over a list of long-tailed benchmark datasets and it achieves the state-of-the-art performance.
arXiv Detail & Related papers (2023-06-28T05:08:43Z)
- Revisiting Long-tailed Image Classification: Survey and Benchmarks with New Evaluation Metrics [88.39382177059747]
A corpus of metrics is designed for measuring the accuracy, robustness, and bounds of algorithms for learning with long-tailed distribution.
Based on our benchmarks, we re-evaluate the performance of existing methods on CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2023-02-03T02:40:54Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
The data samples usually follow a long-tailed distribution in the real world, and FL on the decentralized and long-tailed data yields a poorly-behaved global model.
In this work, we integrate the local real data with the global gradient prototypes to form the local balanced datasets.
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely biases data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- Learning Multi-expert Distribution Calibration for Long-tailed Video Classification [88.12433458277168]
We propose an end-to-end multi-expert distribution calibration method based on two-level distribution information.
By modeling this two-level distribution information, the model can consider the head classes and the tail classes.
Our method achieves state-of-the-art performance on the long-tailed video classification task.
arXiv Detail & Related papers (2022-05-22T09:52:34Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- The Devil is the Classifier: Investigating Long Tail Relation Classification with Decoupling Analysis [36.298869931803836]
Long-tailed relation classification is a challenging problem as the head classes may dominate the training phase.
We propose a robust classifier with attentive relation routing, which assigns soft weights by automatically aggregating the relations.
arXiv Detail & Related papers (2020-09-15T12:47:00Z)
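Several of the entries above (e.g., FedLF's adaptive logit adjustment and the classifier re-balancing work) build on the logit-adjustment idea: subtracting the log class prior from the logits so that the skewed training label marginal no longer biases predictions toward head classes. A minimal post-hoc sketch, assuming training class counts are known; the `tau` temperature and the function name are illustrative, not any single paper's exact formulation.

```python
import numpy as np

def logit_adjust(logits, class_counts, tau=1.0):
    # Subtract tau * log(prior): head classes lose the advantage they gained
    # purely from appearing more often in the training label marginal.
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)
```

On a tie (equal raw logits), the adjusted prediction falls to the rarest class, reflecting that raw scores over-credit the head classes.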
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.