General vs. Long-Tailed Age Estimation: An Approach to Kill Two Birds with One Stone
- URL: http://arxiv.org/abs/2307.10129v1
- Date: Wed, 19 Jul 2023 16:51:59 GMT
- Title: General vs. Long-Tailed Age Estimation: An Approach to Kill Two Birds with One Stone
- Authors: Zenghao Bao, Zichang Tan, Jun Li, Jun Wan, Xibo Ma, Zhen Lei
- Abstract summary: We propose a simple, effective, and flexible training paradigm named GLAE, which is two-fold.
Our GLAE provides a surprising improvement on Morph II, reaching the lowest MAE and CMAE of 1.14 and 1.27 years, respectively.
- Score: 48.849311629912734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial age estimation has received a lot of attention for its diverse
application scenarios. Most existing studies treat each sample equally and aim
to reduce the average estimation error for the entire dataset, which can be
summarized as General Age Estimation. However, due to the long-tailed
distribution prevalent in the dataset, treating all samples equally will
inevitably bias the model toward the head classes (usually adults, who account for the majority of samples). Driven by this, some works suggest that each class should
be treated equally to improve performance in tail classes (with a minority of
samples), which can be summarized as Long-tailed Age Estimation. However,
Long-tailed Age Estimation usually faces a performance trade-off, i.e.,
achieving improvement in tail classes by sacrificing the head classes. In this
paper, our goal is to design a unified framework to perform well on both tasks,
killing two birds with one stone. To this end, we propose a simple, effective,
and flexible training paradigm named GLAE, which is two-fold. Our GLAE provides
a surprising improvement on Morph II, reaching the lowest MAE and CMAE of 1.14
and 1.27 years, respectively. Compared to the previous best method, MAE dropped by up to 34%, an unprecedented improvement that brings MAE close to 1 year for the first time. Extensive experiments on other age benchmark
datasets, including CACD, MIVIA, and Chalearn LAP 2015, also indicate that GLAE
outperforms the state-of-the-art approaches significantly.
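The abstract's two metrics make the general vs. long-tailed distinction concrete: MAE averages the error over samples, so the adult head classes dominate it, while CMAE averages over classes. A minimal sketch of both, assuming CMAE is the mean of per-class MAEs (the usual definition; the paper's exact formulation may differ):

```python
import numpy as np

def mae(preds, labels):
    """General Age Estimation metric: average absolute error over all
    samples, so head classes (with more samples) dominate the score."""
    preds, labels = np.asarray(preds, float), np.asarray(labels, float)
    return np.abs(preds - labels).mean()

def cmae(preds, labels):
    """Long-tailed Age Estimation metric: average the per-class MAEs,
    so every age class counts equally regardless of its sample count."""
    preds, labels = np.asarray(preds, float), np.asarray(labels, float)
    per_class = [np.abs(preds[labels == c] - labels[labels == c]).mean()
                 for c in np.unique(labels)]
    return float(np.mean(per_class))

# Toy example: accurate on the (large) adult head class, poor on the tail.
labels = np.array([30] * 8 + [80] * 2)
preds = np.array([30.5] * 8 + [85.0] * 2)
print(mae(preds, labels))   # 1.4  -> the tail error is mostly hidden
print(cmae(preds, labels))  # 2.75 -> the tail error is exposed
```

A model that improves only the head lowers MAE but barely moves CMAE; doing well on both at once is exactly the "two birds" the paper aims for.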
Related papers
- Gradient-Aware Logit Adjustment Loss for Long-tailed Classifier [30.931850375858573]
In the real-world setting, data often follows a long-tailed distribution, where head classes contain significantly more training samples than tail classes.
We propose the Gradient-Aware Logit Adjustment (GALA) loss, which adjusts the logits based on accumulated gradients to balance the optimization process.
Our approach achieves top-1 accuracies of 48.5%, 41.4%, and 73.3% on three popular long-tailed recognition benchmarks.
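The summary says GALA adjusts logits with accumulated gradients, but the accumulation rule itself is paper-specific. A sketch of the underlying logit-adjustment idea only, using the classic log-prior offset as a stand-in for GALA's gradient-derived one:

```python
import torch
import torch.nn.functional as F

def logit_adjusted_ce(logits, targets, class_offset, tau=1.0):
    """Cross-entropy on adjusted logits: adding a per-class offset that is
    larger for head classes makes tail samples harder during training and
    enlarges their margins. GALA derives the offset from accumulated
    positive/negative gradients; here it is just the log class prior."""
    return F.cross_entropy(logits + tau * class_offset, targets)

counts = torch.tensor([1000.0, 100.0, 10.0])   # long-tailed class counts
log_prior = torch.log(counts / counts.sum())   # head classes get larger offsets
logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 2])
loss = logit_adjusted_ce(logits, targets, log_prior)
```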
arXiv Detail & Related papers (2024-03-14T02:21:01Z)
- Boosting Long-tailed Object Detection via Step-wise Learning on Smooth-tail Data [60.64535309016623]
We build smooth-tail data where the long-tailed distribution of categories decays smoothly to correct the bias towards head classes.
We fine-tune the class-agnostic modules of the pre-trained model on the head class dominant replay data.
We train a unified model on the tail class dominant replay data while transferring knowledge from the head class expert model to ensure accurate detection of all categories.
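The summary does not spell out how the smooth-tail distribution is constructed. As one illustration only (not the paper's recipe, which builds head- and tail-dominant replay sets), power-law tempering of class frequencies gives a smoothly decaying sampling distribution:

```python
import numpy as np

def smoothed_sampling_probs(class_counts, power=0.5):
    """Temper a long-tailed class distribution: raising frequencies to a
    power < 1 shrinks the head/tail gap so the distribution decays smoothly."""
    freqs = np.asarray(class_counts, dtype=float) ** power
    return freqs / freqs.sum()

counts = [1000, 100, 10, 1]                   # head-to-tail imbalance of 1000:1
print(smoothed_sampling_probs(counts, 0.5))   # imbalance tempered to ~32:1
```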
arXiv Detail & Related papers (2023-05-22T08:53:50Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
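A toy sketch of the two ingredients being bridged, assuming plain softmax confidence as the uncertainty signal (ASPEST itself uses checkpoint ensembles and self-training, which this omits):

```python
import numpy as np

def select_and_query(probs, abstain_thresh=0.8, query_budget=2):
    """One step of a toy active selective prediction loop: abstain on
    low-confidence points (selective prediction) and send the least
    confident ones to a human oracle for labels (active learning)."""
    conf = probs.max(axis=1)
    predict_mask = conf >= abstain_thresh        # answer only when confident
    query_idx = np.argsort(conf)[:query_budget]  # least confident = most informative
    return predict_mask, query_idx

probs = np.array([[0.90, 0.10], [0.55, 0.45], [0.70, 0.30], [0.95, 0.05]])
mask, to_label = select_and_query(probs)
print(mask, to_label)  # abstains on rows 1 and 2, and queries those same rows
```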
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Evaluating Parameter Efficient Learning for Generation [32.52577462253145]
We present comparisons between PERMs (parameter-efficient learning methods) and finetuning from three new perspectives.
Our results show that for in-domain settings (a) there is a cross point of sample size for which PERMs will perform better than finetuning when training with fewer samples, and (b) larger PLMs have larger cross points.
We also compare the faithfulness of generations and show that PERMs can achieve better faithfulness scores than finetuning, especially for small training sets, by as much as 6%.
arXiv Detail & Related papers (2022-10-25T00:14:48Z)
- Strict baselines for Covid-19 forecasting and ML perspective for USA and Russia [105.54048699217668]
The Covid-19 pandemic has let researchers gather datasets accumulated over two years and use them in predictive analysis.
We present the results of a consistent comparative study of different types of methods for predicting the dynamics of the spread of Covid-19 based on regional data for two countries: the United States and Russia.
arXiv Detail & Related papers (2022-07-15T18:21:36Z)
- LAE: Long-tailed Age Estimation [52.5745217752147]
We first formulate a simple standard baseline and build a much stronger one by collecting tricks in pre-training, data augmentation, model architecture, and so on.
Compared with the standard baseline, the proposed one significantly decreases the estimation errors.
We propose a two-stage training method named Long-tailed Age Estimation (LAE), which decouples the learning procedure into representation learning and classification.
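A minimal sketch of the decouple-then-rebalance skeleton on a hypothetical toy dataset; LAE's pre-training, augmentation, and architecture tricks are all omitted:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy long-tailed data: 3 age groups with 100/10/2 samples.
x = torch.randn(112, 2)
y = torch.tensor([0] * 100 + [1] * 10 + [2] * 2)
ds = TensorDataset(x, y)

backbone = nn.Sequential(nn.Linear(2, 16), nn.ReLU())
head = nn.Linear(16, 3)

def run_stage(loader, params, epochs=3):
    opt = torch.optim.SGD(params, lr=0.1)
    for _ in range(epochs):
        for xb, yb in loader:
            loss = nn.functional.cross_entropy(head(backbone(xb)), yb)
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 1 (representation learning): natural long-tailed sampling, train all.
run_stage(DataLoader(ds, batch_size=16, shuffle=True),
          list(backbone.parameters()) + list(head.parameters()))

# Stage 2 (classification): freeze the backbone, retrain only the head
# with a class-balanced sampler so each age group is drawn equally often.
for p in backbone.parameters():
    p.requires_grad_(False)
w = (1.0 / torch.bincount(y).float())[y]   # per-sample weights, higher for tail
balanced = DataLoader(ds, batch_size=16,
                      sampler=WeightedRandomSampler(w, num_samples=len(y)))
run_stage(balanced, head.parameters())
```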
arXiv Detail & Related papers (2021-10-25T09:05:44Z)
- Using Multiple Losses for Accurate Facial Age Estimation [6.851375622634309]
We propose a simple yet effective approach for age estimation, which improves the performance compared to classification-based methods.
We validate the Age-Granularity-Net framework on the CVPR Chalearn 2016 dataset, and extensive experiments show that the proposed approach can reduce the prediction error compared to any individual loss.
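The exact loss set of the Age-Granularity-Net is not given in this summary; as an illustration, a common multi-loss pairing for age estimation combines per-year cross-entropy with an L1 term on the softmax-expected age:

```python
import torch
import torch.nn.functional as F

def multi_loss_age(logits, true_age, lambda_reg=1.0):
    """Hypothetical two-loss objective: classification over discrete ages
    plus regression on the expected age (DEX-style expectation), so the
    model is penalized both per-class and by distance in years."""
    ce = F.cross_entropy(logits, true_age)
    ages = torch.arange(logits.size(1), dtype=torch.float32)
    expected = (logits.softmax(dim=1) * ages).sum(dim=1)
    reg = F.l1_loss(expected, true_age.float())
    return ce + lambda_reg * reg

logits = torch.randn(4, 101)              # one logit per age 0..100
true_age = torch.tensor([23, 35, 67, 80])
loss = multi_loss_age(logits, true_age)
```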
arXiv Detail & Related papers (2021-06-17T11:18:16Z)
- PML: Progressive Margin Loss for Long-tailed Age Classification [9.020103398777653]
We propose a progressive margin loss (PML) approach for unconstrained facial age classification.
Our PML aims to adaptively refine the age label pattern by enforcing a couple of margins.
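A sketch of the margin idea only: an additive per-class margin on the target logit, here fixed by the LDAM rule rather than refined progressively with ordinal structure as PML does:

```python
import torch
import torch.nn.functional as F

def margin_softmax_loss(logits, targets, margins):
    """Additive-margin cross-entropy: subtracting a per-class margin from
    the target logit forces a larger decision margin for that class."""
    m = torch.zeros_like(logits)
    m[torch.arange(len(targets)), targets] = margins[targets]
    return F.cross_entropy(logits - m, targets)

counts = torch.tensor([1000.0, 100.0, 10.0])
margins = 0.5 / counts.pow(0.25)           # LDAM rule: rarer class, larger margin
logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 2])
loss = margin_softmax_loss(logits, targets, margins)
```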
arXiv Detail & Related papers (2021-03-03T02:47:09Z)
- The Devil is in Classification: A Simple Framework for Long-tail Object Detection and Instance Segmentation [93.17367076148348]
We investigate the performance drop of the state-of-the-art two-stage instance segmentation model Mask R-CNN on the recent long-tail LVIS dataset.
We unveil that a major cause is the inaccurate classification of object proposals.
We propose a simple calibration framework to more effectively alleviate classification head bias with a bi-level class balanced sampling approach.
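A minimal sketch of bi-level class-balanced sampling, assuming plain image-level labels for simplicity (the paper applies the idea to object proposals when calibrating Mask R-CNN's classification head):

```python
import random
from collections import defaultdict

def bi_level_sample(labels, batch_size):
    """Level 1: pick a class uniformly at random. Level 2: pick an instance
    uniformly within that class. Head classes then no longer dominate the
    batches used to train the classification head."""
    by_class = defaultdict(list)
    for idx, c in enumerate(labels):
        by_class[c].append(idx)
    classes = list(by_class)
    return [random.choice(by_class[random.choice(classes)])
            for _ in range(batch_size)]

labels = [0] * 1000 + [1] * 10 + [2] * 2   # long-tailed label list
batch = bi_level_sample(labels, 9)         # roughly 3 draws per class
```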
arXiv Detail & Related papers (2020-07-23T12:49:07Z)