Intra-Class Uncertainty Loss Function for Classification
- URL: http://arxiv.org/abs/2104.05298v1
- Date: Mon, 12 Apr 2021 09:02:41 GMT
- Title: Intra-Class Uncertainty Loss Function for Classification
- Authors: He Zhu, Shan Yu
- Abstract summary: When intra-class uncertainty/variability is not considered, especially for datasets containing unbalanced classes, classification errors may result.
In our framework, the features of each class extracted by the deep network are characterized by an independent Gaussian distribution.
The proposed approach shows improved classification performance by learning a better class representation.
- Score: 6.523198497365588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most classification models can be viewed as a process of template
matching. However, when intra-class uncertainty/variability is not considered,
especially for datasets containing unbalanced classes, this may lead to
classification errors. To address this issue, we propose a loss function with
intra-class uncertainty following a Gaussian distribution. Specifically, in our
framework, the features of each class extracted by the deep network are
characterized by an independent Gaussian distribution. The parameters of the
distributions are learned with a likelihood regularization along with the other
network parameters. The means of the Gaussians play a role similar to the
center anchors in existing methods, while the variances describe the
uncertainty of the different classes. In addition, analogous to the inter-class
margin in traditional loss functions, we introduce a margin on the intra-class
uncertainty to make each cluster more compact and to reduce the imbalance of
feature distributions across categories. On MNIST, CIFAR, ImageNet, and
long-tailed CIFAR, the proposed approach shows improved classification
performance by learning a better class representation.
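The abstract names the ingredients (per-class Gaussians over features, a likelihood regularizer, a margin on the intra-class uncertainty) but not the exact formulation. The PyTorch sketch below is one plausible reading of that recipe; the class name, the hyperparameters `lambda_reg` and `var_margin`, and the precise form of the likelihood and margin terms are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntraClassUncertaintyLoss(nn.Module):
    """Sketch: model each class c as an independent Gaussian N(mu_c, sigma_c^2)
    over the feature space. mu_c plays the role of a center anchor; sigma_c
    captures the class's intra-class uncertainty."""

    def __init__(self, num_classes, feat_dim, lambda_reg=0.1, var_margin=1.0):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.log_vars = nn.Parameter(torch.zeros(num_classes, feat_dim))
        self.lambda_reg = lambda_reg    # weight of the likelihood regularizer
        self.var_margin = var_margin    # margin on the per-class variance

    def forward(self, features, logits, labels):
        # Standard classification term.
        ce = F.cross_entropy(logits, labels)
        mu = self.means[labels]          # (B, D) class means per sample
        log_var = self.log_vars[labels]  # (B, D) class log-variances
        # Gaussian negative log-likelihood of each feature under its own
        # class distribution (constant term dropped).
        nll = 0.5 * (log_var + (features - mu) ** 2 / log_var.exp()).sum(dim=1)
        # Margin on the intra-class uncertainty: penalize variances above
        # var_margin so every cluster stays compact.
        var_penalty = F.relu(log_var.exp() - self.var_margin).sum(dim=1)
        return ce + self.lambda_reg * (nll + var_penalty).mean()
```

In use, `features` would come from the backbone and `logits` from its classification head; the regularizer then pulls each feature toward its class mean while the learned variances let easy and hard classes occupy clusters of different sizes.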
Related papers
- Class Uncertainty: A Measure to Mitigate Class Imbalance [0.0]
We show that considering solely the cardinality of classes does not cover all issues causing class imbalance.
We propose "Class Uncertainty" as the average predictive uncertainty of the training examples.
We also curate SVCI-20 as a novel dataset in which the classes have equal number of training examples but they differ in terms of their hardness.
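As a loose illustration of the idea (this summary does not give the paper's exact uncertainty measure, so the use of entropy and all names below are assumptions), the per-class score could be computed like this:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_uncertainty(logits, labels, num_classes):
    """Illustrative per-class uncertainty: average the predictive entropy
    of the training examples belonging to each class."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (N,)
    scores = torch.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            scores[c] = entropy[mask].mean()
    return scores  # a class can be "hard" regardless of its cardinality
```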
arXiv Detail & Related papers (2023-11-23T16:36:03Z)
- PatchMix Augmentation to Identify Causal Features in Few-shot Learning [55.64873998196191]
Few-shot learning aims to transfer knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information.
We propose a novel data augmentation strategy dubbed PatchMix that can break this spurious dependency.
We show that such an augmentation mechanism, different from existing ones, is able to identify the causal features.
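The summary does not spell out the mixing rule, so the sketch below is only an assumed form of a PatchMix-style operation: pasting a random patch from one batch of images into another so the model cannot latch onto patch-local spurious features. The function name and `patch_frac` parameter are hypothetical.

```python
import torch

def patchmix(x_a, x_b, patch_frac=0.3):
    """Assumed PatchMix-style augmentation: copy a random rectangular
    patch from batch x_b into batch x_a (both shaped (B, C, H, W))."""
    _, _, h, w = x_a.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    top = torch.randint(0, h - ph + 1, (1,)).item()
    left = torch.randint(0, w - pw + 1, (1,)).item()
    mixed = x_a.clone()
    mixed[:, :, top:top + ph, left:left + pw] = \
        x_b[:, :, top:top + ph, left:left + pw]
    return mixed
```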
arXiv Detail & Related papers (2022-11-29T08:41:29Z)
- Generalized Inter-class Loss for Gait Recognition [11.15855312510806]
Gait recognition is a unique biometric technique that can be performed non-cooperatively at a long distance.
Previous gait works focus more on minimizing intra-class variance while ignoring the significance of constraining inter-class variance.
We propose a generalized inter-class loss which resolves the inter-class variance from both sample-level feature distribution and class-level feature distribution.
arXiv Detail & Related papers (2022-10-13T06:44:53Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic and stochastic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
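One way to read "randomly eliminating certain class information" in a segmentation setting is to drop a random subset of classes from the supervision in each iteration; the sketch below (names and mechanism assumed, not the paper's exact scheme) maps pixels of the dropped classes to the ignore index.

```python
import torch

def drop_random_classes(target, num_classes, drop_prob=0.2, ignore_index=255):
    """Assumed illustration: per iteration, exclude a random subset of
    classes from supervision by mapping their pixels to ignore_index.
    target is a (B, H, W) LongTensor of class indices."""
    dropped = torch.rand(num_classes) < drop_prob     # classes to eliminate
    mask = dropped[target.clamp(0, num_classes - 1)]  # pixels of dropped classes
    mask &= target != ignore_index                    # keep existing ignores
    out = target.clone()
    out[mask] = ignore_index
    return out
```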
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- When in Doubt: Improving Classification Performance with Alternating Normalization [57.39356691967766]
We introduce Classification with Alternating Normalization (CAN), a non-parametric post-processing step for classification.
CAN improves classification accuracy for challenging examples by re-adjusting their predicted class probability distribution.
We empirically demonstrate its effectiveness across a diverse set of classification tasks.
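The core of alternating normalization can be pictured as Sinkhorn-style renormalization of the batch's prediction matrix; the published CAN additionally uses high-confidence anchor predictions and a scaling hyperparameter, which this simplified sketch omits.

```python
import torch

def alternating_normalization(probs, num_iters=3, eps=1e-12):
    """Simplified CAN-style post-processing (details assumed): treat the
    batch of predicted distributions as an N x K matrix and alternately
    normalize columns (balancing per-class mass) and rows (restoring
    valid probability distributions)."""
    q = probs.clone()
    for _ in range(num_iters):
        q = q / (q.sum(dim=0, keepdim=True) + eps)  # column normalization
        q = q / (q.sum(dim=1, keepdim=True) + eps)  # row normalization
    return q
```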
arXiv Detail & Related papers (2021-09-28T02:55:42Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose Partial Label Masking (PLM), which rebalances the ratio between positive and negative labels for each class by stochastically masking labels during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
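A loose sketch of the masking idea follows; the exact rule PLM uses to derive masking probabilities from the observed positive/negative ratio is not reproduced here, and all names are hypothetical.

```python
import torch

def partial_label_mask(targets, mask_prob):
    """Assumed form: given multi-hot targets (B, K) and a per-class
    probability (K,) of masking a positive label (higher for
    over-represented classes), return a 0/1 mask that drops some
    positives from the loss to rebalance each class."""
    positives = targets > 0.5
    drop = torch.rand_like(targets) < mask_prob  # broadcast (K,) -> (B, K)
    keep = ~(positives & drop)
    return keep.float()  # multiply element-wise into a per-label BCE loss
```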
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Entropy-Based Uncertainty Calibration for Generalized Zero-Shot Learning [49.04790688256481]
The goal of generalized zero-shot learning (GZSL) is to recognise both seen and unseen classes.
Most GZSL methods typically learn to synthesise visual representations from semantic information on the unseen classes.
We propose a novel framework that leverages dual variational autoencoders with a triplet loss to learn discriminative latent features.
arXiv Detail & Related papers (2021-01-09T05:21:27Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Beyond cross-entropy: learning highly separable feature distributions for robust and accurate classification [22.806324361016863]
We propose a novel approach for training deep multiclass classifiers that provides adversarial robustness.
We show that the regularization of the latent space based on our approach yields excellent classification accuracy.
arXiv Detail & Related papers (2020-10-29T11:15:17Z)
- Variational Feature Disentangling for Fine-Grained Few-Shot Classification [30.350307891161865]
Fine-grained few-shot recognition often suffers from the problem of training data scarcity for novel categories.
In this paper, we focus on enlarging the intra-class variance of the unseen class to improve few-shot classification performance.
arXiv Detail & Related papers (2020-10-07T08:13:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.