AlphaNet: Improving Long-Tail Classification By Combining Classifiers
- URL: http://arxiv.org/abs/2008.07073v2
- Date: Wed, 26 Jul 2023 04:03:47 GMT
- Title: AlphaNet: Improving Long-Tail Classification By Combining Classifiers
- Authors: Nadine Chang, Jayanth Koushik, Aarti Singh, Martial Hebert, Yu-Xiong
Wang, Michael J. Tarr
- Abstract summary: Methods in long-tail learning focus on improving performance for data-poor (rare) classes.
A large number of errors are due to misclassification of rare items as visually similar frequent classes.
We introduce AlphaNet, a method that can be applied to existing models, performing post hoc correction on classifiers of rare classes.
- Score: 44.5124310374697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Methods in long-tail learning focus on improving performance for data-poor
(rare) classes; however, performance for such classes remains much lower than
performance for more data-rich (frequent) classes. Analyzing the predictions of
long-tail methods for rare classes reveals that a large number of errors are
due to misclassification of rare items as visually similar frequent classes. To
address this problem, we introduce AlphaNet, a method that can be applied to
existing models, performing post hoc correction on classifiers of rare classes.
Starting with a pre-trained model, we find frequent classes that are closest to
rare classes in the model's representation space and learn weights to update
rare class classifiers with a linear combination of frequent class classifiers.
AlphaNet, applied to several models, greatly improves test accuracy for rare
classes in multiple long-tailed datasets, with very little change to overall
accuracy. Our method also provides a way to control the trade-off between rare
class and overall accuracy, making it practical for long-tail classification in
the wild.
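
The correction step described in the abstract can be pictured concretely. Below is a minimal sketch, not the authors' implementation: it assumes per-class classifier weight vectors are available as a tensor `W`, uses cosine similarity between classifier vectors as a stand-in for closeness in the model's representation space, and takes the combination coefficients `alpha` as given (in the paper they are learned); the neighbour count `k` and all names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def alphanet_correction(W, rare_idx, freq_idx, alpha, k=5):
    """Sketch of post hoc rare-classifier correction via linear combination.

    W        : (num_classes, d) tensor of per-class classifier weight vectors
    rare_idx : indices of rare ("few-shot") classes
    freq_idx : indices of frequent ("many-shot") classes
    alpha    : (num_rare, k) combination coefficients (learned in the paper,
               supplied as inputs in this sketch)
    k        : number of nearest frequent classifiers per rare class (assumed)
    """
    W_new = W.clone()
    W_freq = W[freq_idx]                                   # (num_freq, d)
    for i, c in enumerate(rare_idx):
        # Nearest frequent classes; cosine similarity between classifier
        # vectors stands in for distance in the representation space.
        sims = F.cosine_similarity(W[c].unsqueeze(0), W_freq, dim=1)
        nearest = sims.topk(k).indices                     # (k,)
        # The rare classifier is updated with a weighted sum of its neighbours.
        W_new[c] = W[c] + alpha[i] @ W_freq[nearest]
    return W_new
```

In the paper the coefficients are trained and the strength of the update is what mediates the rare-class versus overall accuracy trade-off mentioned above; in this sketch, shrinking `alpha` toward zero simply falls back to the original classifiers.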
Related papers
- Understanding the Detrimental Class-level Effects of Data Augmentation [63.1733767714073]
Achieving optimal average accuracy with data augmentation (DA) can come at the cost of significantly hurting individual class accuracy, by as much as 20% on ImageNet.
We present a framework for understanding how DA interacts with class-level learning dynamics.
We show that simple class-conditional augmentation strategies improve performance on the negatively affected classes.
arXiv Detail & Related papers (2023-12-07T18:37:43Z)
- Are Deep Sequence Classifiers Good at Non-Trivial Generalization? [4.941630596191806]
We study binary sequence classification problems and look at model calibration from a different perspective.
We focus on sparse sequence classification, that is, problems in which the target class is rare, and compare three deep learning sequence classification models.
Our results suggest that in this binary setting the deep-learning models are indeed able to learn the underlying class distribution in a non-trivial manner.
arXiv Detail & Related papers (2022-10-24T10:01:06Z)
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Improving Calibration for Long-Tailed Recognition [68.32848696795519]
We propose two methods to improve calibration and performance in long-tailed recognition scenarios.
For dataset bias due to different samplers, we propose shifted batch normalization.
Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets.
arXiv Detail & Related papers (2021-04-01T13:55:21Z)
- One vs Previous and Similar Classes Learning -- A Comparative Study [2.208242292882514]
This work proposes three learning paradigms which allow trained models to be updated without the need of retraining from scratch.
Results show that the proposed paradigms are faster than the baseline at updating, with two of them being faster at training from scratch as well, especially on larger datasets.
arXiv Detail & Related papers (2021-01-05T00:28:38Z)
- Predicting Classification Accuracy When Adding New Unobserved Classes [8.325327265120283]
We study how a classifier's performance can be used to extrapolate its expected accuracy on a larger, unobserved set of classes.
We formulate a robust neural-network-based algorithm, "CleaneX", which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes.
arXiv Detail & Related papers (2020-10-28T14:37:25Z)
- Feature Space Augmentation for Long-Tailed Data [74.65615132238291]
Real-world data often follow a long-tailed distribution as the frequency of each class is typically different.
Class-balanced loss and advanced methods on data re-sampling and augmentation are among the best practices to alleviate the data imbalance problem.
We present a novel approach to the long-tailed problem: augmenting the under-represented classes in feature space with features learned from the classes with ample samples (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-08-09T06:38:00Z)
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective [98.70226503904402]
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions seen during training and the expectation that models perform well on all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
arXiv Detail & Related papers (2020-03-24T11:28:42Z)
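
To make the feature-space idea from "Feature Space Augmentation for Long-Tailed Data" (above) concrete, here is a toy sketch under strong assumptions: it is not the authors' method, and it simply perturbs rare-class features with within-class variation borrowed from a visually similar frequent class; all names and parameters are illustrative.

```python
import numpy as np


def augment_rare_features(rare_feats, freq_feats, n_new, rng=None):
    """Toy feature-space augmentation for an under-represented class.

    rare_feats : (n_rare, d) features of a rare class
    freq_feats : (n_freq, d) features of a similar, data-rich class
    n_new      : number of synthetic rare-class features to generate
    """
    rng = np.random.default_rng() if rng is None else rng
    # Within-class variation of the frequent class around its own mean.
    freq_var = freq_feats - freq_feats.mean(axis=0, keepdims=True)
    base = rare_feats[rng.integers(0, len(rare_feats), size=n_new)]
    borrowed = freq_var[rng.integers(0, len(freq_var), size=n_new)]
    # Synthetic samples keep the rare-class identity but inherit richer variation.
    return base + borrowed
```

The synthetic features would then be used to retrain or fine-tune the classifier on a less imbalanced feature set.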