PML: Progressive Margin Loss for Long-tailed Age Classification
- URL: http://arxiv.org/abs/2103.02140v1
- Date: Wed, 3 Mar 2021 02:47:09 GMT
- Title: PML: Progressive Margin Loss for Long-tailed Age Classification
- Authors: Zongyong Deng, Hao Liu, Yaoxing Wang, Chenyang Wang, Zekuan Yu,
Xuehong Sun
- Abstract summary: We propose a progressive margin loss (PML) approach for unconstrained facial age classification.
Our PML adaptively refines the age label pattern by enforcing a pair of margins.
- Score: 9.020103398777653
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we propose a progressive margin loss (PML) approach for
unconstrained facial age classification. Conventional methods make the strong
assumption that each class owns enough instances to outline its data
distribution, which likely leads to biased predictions when training samples
are sparse across age classes. Instead, our PML adaptively refines the age
label pattern by enforcing a pair of margins that jointly account for the
intra-class variance, the inter-class variance, and the class centers. PML
combines an ordinal margin with a variational margin, both plugged into a
globally tuned deep neural network. More specifically, the ordinal margin
learns to exploit the ordered relationship among real-world age labels, while
the variational margin is leveraged to minimize the influence of head classes
that would otherwise mislead the prediction of tail samples. Moreover, our
optimization schedules a series of indicator curricula to achieve robust and
efficient model training. Extensive experimental results on three face aging
datasets demonstrate that our PML achieves compelling performance compared to
the state of the art. Code will be made publicly available.
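The abstract names the ingredients (a variational margin against head-class dominance, an ordinal margin over the ordered age labels, and a curriculum) but not their formulas, which the paper defers to its method section. As a rough PyTorch sketch only, the block below pairs an LDAM-style frequency-based margin with an expected-age-distance penalty; the n_c^{-1/4} rule, the penalty form, and all hyper-parameters are assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

class ProgressiveMarginSketch(torch.nn.Module):
    """Illustrative stand-in for PML's two margins (not the authors' code)."""

    def __init__(self, class_counts, scale=30.0, max_margin=0.5, ordinal_weight=0.1):
        super().__init__()
        counts = torch.as_tensor(class_counts, dtype=torch.float)
        # 'Variational' margin stand-in: rarer classes get larger margins
        # (LDAM-style n_c^{-1/4} rule, normalised so the largest is max_margin).
        m = counts.pow(-0.25)
        self.register_buffer("margins", m * (max_margin / m.max()))
        # 'Ordinal' margin stand-in: |y_i - y_j| distance between age labels.
        ages = torch.arange(len(class_counts), dtype=torch.float)
        self.register_buffer("age_dist", (ages[None, :] - ages[:, None]).abs())
        self.scale = scale
        self.ordinal_weight = ordinal_weight

    def forward(self, logits, target):
        # Subtract the class-dependent margin from the target logit only,
        # forcing a larger decision gap for tail classes.
        margin = self.margins[target]                               # (B,)
        adjusted = logits.scatter_add(1, target[:, None], -margin[:, None])
        ce = F.cross_entropy(self.scale * adjusted, target)
        # Expected age distance under the predicted distribution penalises
        # predictions far from the true age more than near misses.
        probs = F.softmax(self.scale * adjusted, dim=1)
        ordinal = (probs * self.age_dist[target]).sum(dim=1).mean()
        return ce + self.ordinal_weight * ordinal
```

A progressive schedule in the spirit of the "indicator curricula" could then ramp ordinal_weight or the margin scale over training, though the abstract does not specify the actual schedule.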
Related papers
- Flexible Distribution Alignment: Towards Long-tailed Semi-supervised Learning with Proper Calibration [18.376601653387315]
Long-tailed semi-supervised learning (LTSSL) represents a practical scenario for semi-supervised applications.
This problem is often aggravated by discrepancies between labeled and unlabeled class distributions.
We introduce Flexible Distribution Alignment (FlexDA), a novel adaptive logit-adjusted loss framework (a generic logit-adjustment sketch appears after this list).
arXiv Detail & Related papers (2023-06-07T17:50:59Z)
- Learning in Imperfect Environment: Multi-Label Classification with Long-Tailed Distribution and Partial Labels [53.68653940062605]
We introduce a novel task: Partial Labeling and Long-Tailed Multi-Label Classification (PLT-MLC).
We find that most LT-MLC and PL-MLC approaches fail to solve the degradation-MLC.
We propose an end-to-end learning framework: COrrection → ModificatIon → balanCe.
arXiv Detail & Related papers (2023-04-20T20:05:08Z) - CLIPood: Generalizing CLIP to Out-of-Distributions [73.86353105017076]
Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performances.
We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on unseen test data.
Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
arXiv Detail & Related papers (2023-02-02T04:27:54Z)
- Adaptive Distribution Calibration for Few-Shot Learning with Hierarchical Optimal Transport [78.9167477093745]
We propose a novel distribution calibration method that learns an adaptive weight matrix between novel samples and base classes.
Experimental results on standard benchmarks demonstrate that our proposed plug-and-play model outperforms competing approaches.
arXiv Detail & Related papers (2022-10-09T02:32:57Z)
- PLM: Partial Label Masking for Imbalanced Multi-label Classification [59.68444804243782]
Neural networks trained on real-world datasets with long-tailed label distributions are biased towards frequent classes and perform poorly on infrequent classes.
We propose a method, Partial Label Masking (PLM), which utilizes the per-class ratio of positive to negative labels during training.
Our method achieves strong performance when compared to existing methods on both multi-label (MultiMNIST and MSCOCO) and single-label (imbalanced CIFAR-10 and CIFAR-100) image classification datasets.
arXiv Detail & Related papers (2021-05-22T18:07:56Z)
- Two-stage Training for Learning from Label Proportions [18.78148397471913]
Learning from label proportions (LLP) aims to learn an instance-level classifier from the label proportions of grouped training data.
We introduce the mixup strategy and symmetric cross-entropy to further reduce the label noise (symmetric cross-entropy is sketched after this list).
Our framework is model-agnostic, and demonstrates compelling performance improvement in extensive experiments.
arXiv Detail & Related papers (2021-05-22T03:55:35Z)
- Margin-Based Transfer Bounds for Meta Learning with Deep Feature Embedding [67.09827634481712]
We leverage margin theory and statistical learning theory to establish three margin-based transfer bounds for meta-learning based multiclass classification (MLMC).
These bounds reveal that the expected error of a given classification algorithm for a future task can be estimated with the average empirical error on a finite number of previous tasks.
Experiments on three benchmarks show that these margin-based models still achieve competitive performance.
arXiv Detail & Related papers (2020-12-02T23:50:51Z)
- Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems (see the margin sketch after this list).
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z)
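The sketches referenced in the entries above follow. First, FlexDA is described only as an "adaptive logit-adjusted loss framework"; the block below shows plain logit adjustment (Menon et al., 2021), the base technique that family of losses builds on, not FlexDA's actual objective. The class_priors argument (empirical label frequencies) and tau are caller-supplied assumptions.

```python
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, target, class_priors, tau=1.0):
    """Plain logit adjustment: shifting each logit by tau * log(prior)
    stops cross-entropy from favouring head classes under a long-tailed
    label distribution. class_priors is a length-C frequency vector."""
    priors = torch.as_tensor(class_priors, dtype=logits.dtype, device=logits.device)
    return F.cross_entropy(logits + tau * torch.log(priors + 1e-12), target)
```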
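Second, the two-stage LLP paper names mixup and symmetric cross-entropy as its noise-reduction tools. Below is the standard symmetric cross-entropy of Wang et al. (2019), given only to make that term concrete; the paper's grouped-data pipeline around it is not reproduced.

```python
import torch
import torch.nn.functional as F

def symmetric_cross_entropy(logits, target, alpha=0.1, beta=1.0):
    """CE fits the (possibly noisy) labels; the bounded reverse term RCE
    treats the model's prediction as the reference, which damps the pull
    of mislabelled instances."""
    ce = F.cross_entropy(logits, target)
    pred = F.softmax(logits, dim=1)
    # Clamp the one-hot labels so log(0) stays finite, as in the original.
    label = F.one_hot(target, num_classes=logits.size(1)).float().clamp(min=1e-4)
    rce = -(pred * label.log()).sum(dim=1).mean()
    return alpha * ce + beta * rce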
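Finally, the adaptive-margin entry reduces to enlarging the required gap between easily confused classes in a metric-based few-shot classifier. In this sketch, margin_matrix (how the per-class-pair margins are derived, e.g. adaptively from label semantics) is left as an input assumption.

```python
import torch
import torch.nn.functional as F

def adaptive_margin_loss(query_emb, prototypes, target, margin_matrix, scale=10.0):
    """Episode loss with class-pair-dependent margins: margin_matrix[i, j]
    is the extra gap demanded between classes i and j, so confusable pairs
    get sharper decision boundaries."""
    # Cosine similarities between queries and class prototypes: (B, N).
    sims = F.normalize(query_emb, dim=1) @ F.normalize(prototypes, dim=1).T
    # Raise every non-target logit by its margin; zero margin vs. itself.
    margins = margin_matrix[target].scatter(1, target[:, None], 0.0)
    return F.cross_entropy(scale * (sims + margins), target)
```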