Margin Calibration for Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2112.07225v1
- Date: Tue, 14 Dec 2021 08:25:29 GMT
- Title: Margin Calibration for Long-Tailed Visual Recognition
- Authors: Yidong Wang, Bowen Zhang, Wenxin Hou, Zhen Wu, Jindong Wang, Takahiro
Shinozaki
- Abstract summary: We study the relationship between the margins and logits (classification scores) and empirically observe that the biased margins and the biased logits are positively correlated.
We propose MARC, a simple yet effective MARgin Calibration function to dynamically calibrate the biased margins for unbiased logits.
- Score: 14.991077564590128
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The long-tailed class distribution in visual recognition tasks poses great
challenges for neural networks in handling the biased predictions between
head and tail classes, i.e., the model tends to classify tail classes as head
classes. While existing research has focused on data resampling and loss
function engineering, in this paper we take a different perspective: the
classification margins. We study the relationship between the margins and
logits (classification scores) and empirically observe that the biased
margins and the biased logits are positively correlated. We propose MARC, a simple yet
effective MARgin Calibration function to dynamically calibrate the biased
margins for unbiased logits. We validate MARC through extensive experiments on
common long-tailed benchmarks including CIFAR-LT, ImageNet-LT, Places-LT, and
iNaturalist-LT. Experimental results demonstrate that our MARC achieves
favorable results on these benchmarks. In addition, MARC is extremely easy to
implement with just three lines of code. We hope this simple method will
motivate people to rethink the biased margins and biased logits in long-tailed
visual recognition.
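The abstract highlights that MARC takes only three lines of code but does not spell out the calibration function itself. Below is a minimal PyTorch sketch of one plausible reading: a learnable per-class rescaling and shift of a frozen classifier's logits, with the shift scaled by each class weight norm. The class name MarginCalibration, the exact formula, and the two-stage training setup (backbone and classifier frozen while only the per-class parameters are learned) are illustrative assumptions, not the authors' confirmed implementation.

```python
import torch
import torch.nn as nn


class MarginCalibration(nn.Module):
    """Hypothetical sketch of a MARC-style margin calibration layer.

    Assumed form (not stated in the abstract): each class logit z_j from a
    frozen, normally trained classifier is recalibrated as
        z'_j = omega_j * z_j + beta_j * ||w_j||,
    where omega_j and beta_j are learnable per-class parameters and w_j is
    the frozen classifier weight vector for class j.
    """

    def __init__(self, classifier_weight: torch.Tensor):
        super().__init__()
        num_classes = classifier_weight.size(0)
        # Per-class scale and shift, learned in a short second stage while
        # the backbone and the classifier stay frozen.
        self.omega = nn.Parameter(torch.ones(num_classes))
        self.beta = nn.Parameter(torch.zeros(num_classes))
        # Norm of each frozen class weight vector, used to scale the shift.
        self.register_buffer("w_norm", classifier_weight.norm(dim=1))

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # logits: (batch, num_classes) produced by the frozen model.
        return self.omega * logits + self.beta * self.w_norm


# Example usage with a frozen linear classifier (shapes are illustrative).
if __name__ == "__main__":
    classifier = nn.Linear(512, 100)          # frozen classifier head
    features = torch.randn(8, 512)            # batch of extracted features
    marc = MarginCalibration(classifier.weight.detach())
    calibrated = marc(classifier(features))   # calibrated logits, (8, 100)
    print(calibrated.shape)
```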
Related papers
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized
Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z) - Marginal Debiased Network for Fair Visual Recognition [59.05212866862219]
We propose a novel marginal debiased network (MDN) to learn debiased representations.
Our MDN achieves remarkable performance on under-represented samples.
arXiv Detail & Related papers (2024-01-04T08:57:09Z) - Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm that progressively adjusts the label space and divides the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z) - Relieving Long-tailed Instance Segmentation via Pairwise Class Balance [85.53585498649252]
Long-tailed instance segmentation is a challenging task due to the extreme imbalance of training samples among classes.
This causes severe biases of the head classes (with the majority of samples) against the tail ones.
We propose a novel Pairwise Class Balance (PCB) method, built upon a confusion matrix which is updated during training to accumulate the ongoing prediction preferences.
arXiv Detail & Related papers (2022-01-08T07:48:36Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distributions.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z) - Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in such scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information (including all content) and is not responsible for any consequences.