Learning Prototype Classifiers for Long-Tailed Recognition
- URL: http://arxiv.org/abs/2302.00491v3
- Date: Mon, 26 Jun 2023 05:42:36 GMT
- Title: Learning Prototype Classifiers for Long-Tailed Recognition
- Authors: Saurabh Sharma, Yongqin Xian, Ning Yu, Ambuj Singh
- Abstract summary: We show that learning prototype classifiers addresses the biased softmax problem in long-tailed recognition.
We propose to jointly learn prototypes by using distances to prototypes in representation space as the logit scores for classification.
Our analysis shows that prototypes learned by Prototype classifiers are better separated than empirical centroids.
- Score: 18.36167187657728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of long-tailed recognition (LTR) has received attention in recent years due to the fundamental power-law distribution of objects in the real world. Most recent works in LTR use softmax classifiers, which are biased in that they correlate classifier norm with the amount of training data for a given class. In this work, we show that learning prototype classifiers addresses the biased softmax problem in LTR. Prototype classifiers can deliver promising results simply using Nearest-Class-Mean (NCM), a special case where prototypes are empirical centroids. We go one step further and propose to jointly learn prototypes by using distances to prototypes in representation space as the logit scores for classification. Further, we theoretically analyze the properties of Euclidean-distance-based prototype classifiers that lead to stable gradient-based optimization which is robust to outliers. To enable independent distance scales along each channel, we enhance Prototype classifiers by learning channel-dependent temperature parameters. Our analysis shows that prototypes learned by Prototype classifiers are better separated than empirical centroids. Results on four LTR benchmarks show that the Prototype classifier outperforms or is comparable to state-of-the-art methods. Our code is made available at https://github.com/saurabhsharma1993/prototype-classifier-ltr.
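For intuition, the mechanism the abstract describes (negative distances to prototypes used as logit scores, scaled by learned channel-dependent temperatures) might look roughly like the following PyTorch sketch. The class name, the initialization, and the exact temperature parameterization are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Minimal sketch of a distance-based classifier: logits are negative
    squared Euclidean distances between features and learnable per-class
    prototypes, with one learned scale per channel."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # Learnable prototypes, one per class (could instead be initialized
        # from empirical class centroids, as in NCM).
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        # One temperature per channel so each feature dimension gets its own
        # distance scale; this exact parameterization is an assumption.
        self.log_temp = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        temp = self.log_temp.exp()                   # positive per-channel scales
        diff = feats.unsqueeze(1) - self.prototypes  # (batch, classes, feat_dim)
        sq_dist = (diff.pow(2) * temp).sum(dim=-1)   # (batch, classes)
        return -sq_dist                              # larger logit = closer prototype

# Usage: train with ordinary cross-entropy on these logits.
clf = PrototypeClassifier(num_classes=10, feat_dim=128)
logits = clf(torch.randn(8, 128))                    # (8, 10)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
```

Training then proceeds with standard cross-entropy on these logits, so softmax over negative distances replaces the usual dot-product logits whose weight norms correlate with class frequency.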
Related papers
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning [82.29761875805369]
One of the ultimate goals of representation learning is to achieve compactness within a class and well-separability between classes.
We propose a novel perspective that uses pre-defined class anchors, serving as feature centroids, to unidirectionally guide feature learning.
The proposed Semantic Anchor Regularization (SAR) can be used in a plug-and-play manner in the existing models.
arXiv Detail & Related papers (2023-12-19T05:52:38Z)
- FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
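As a rough illustration of that prototypical scheme, a minimal sketch with hypothetical helper names, assuming features come from a frozen extractor and prototypes are simple per-class averages:

```python
import torch

def class_prototypes(feats: torch.Tensor, labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Empirical centroids: the mean feature vector of each class."""
    return torch.stack([feats[labels == c].mean(dim=0)
                        for c in range(num_classes)])

def ncm_predict(feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each feature to the class with the nearest prototype
    under Euclidean distance."""
    return torch.cdist(feats, protos).argmin(dim=1)
```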
arXiv Detail & Related papers (2023-09-25T11:54:33Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline, ProNet, which innovatively retains the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Fantastic DNN Classifiers and How to Identify them without Data [0.685316573653194]
We show that the quality of a trained DNN classifier can be assessed without any example data.
We have developed two metrics: one using the features of the prototypes and the other using adversarial examples corresponding to each prototype.
Empirical evaluations show that accuracy obtained from test examples is directly proportional to quality measures obtained from the proposed metrics.
arXiv Detail & Related papers (2023-05-24T20:54:48Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- A Closer Look at Prototype Classifier for Few-shot Image Classification [28.821731837776593]
We show that a prototype classifier works equally well without fine-tuning and meta-learning.
We derive a novel generalization bound for the prototypical network and show that focusing on the variance of the norm of a feature vector can improve performance.
arXiv Detail & Related papers (2021-10-11T08:28:43Z)
- Minimum Variance Embedded Auto-associative Kernel Extreme Learning Machine for One-class Classification [1.4146420810689422]
VAAKELM is a novel extension of an auto-associative kernel extreme learning machine.
It embeds minimum variance information within its architecture and reduces the intra-class variance.
It follows a reconstruction-based approach to one-class classification and minimizes the reconstruction error.
arXiv Detail & Related papers (2020-11-24T17:00:30Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Predicting Classification Accuracy When Adding New Unobserved Classes [8.325327265120283]
We study how a classifier's performance can be used to extrapolate its expected accuracy on a larger, unobserved set of classes.
We formulate a robust neural-network-based algorithm, "CleaneX", which learns to estimate the accuracy of such classifiers on arbitrarily large sets of classes.
arXiv Detail & Related papers (2020-10-28T14:37:25Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)