MetaAge: Meta-Learning Personalized Age Estimators
- URL: http://arxiv.org/abs/2207.05288v1
- Date: Tue, 12 Jul 2022 03:53:42 GMT
- Title: MetaAge: Meta-Learning Personalized Age Estimators
- Authors: Wanhua Li, Jiwen Lu, Abudukelimu Wuerkaixi, Jianjiang Feng, Jie Zhou
- Abstract summary: We propose a meta-learning method named MetaAge for age estimation.
Specifically, we introduce a personalized estimator meta-learner, which takes identity features as the input and outputs the parameters of customized estimators.
In this way, our method learns the meta knowledge without the above requirements and seamlessly transfers the learned meta knowledge to the test set.
- Score: 94.73054410570037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Different people age in different ways. Learning a personalized age estimator
for each person is a promising direction for age estimation given that it
better models the personalization of aging processes. However, most existing
personalized methods suffer from the lack of large-scale datasets due to the
high-level requirements: identity labels and enough samples for each person to
form a long-term aging pattern. In this paper, we aim to learn personalized age
estimators without the above requirements and propose a meta-learning method
named MetaAge for age estimation. Unlike most existing personalized methods
that learn the parameters of a personalized estimator for each person in the
training set, our method learns the mapping from identity information to age
estimator parameters. Specifically, we introduce a personalized estimator
meta-learner, which takes identity features as the input and outputs the
parameters of customized estimators. In this way, our method learns the meta
knowledge without the above requirements and seamlessly transfers the learned
meta knowledge to the test set, which enables us to leverage the existing
large-scale age datasets without any additional annotations. Extensive
experimental results on three benchmark datasets including MORPH II, ChaLearn
LAP 2015 and ChaLearn LAP 2016 databases demonstrate that our MetaAge
significantly boosts the performance of existing personalized methods and
outperforms the state-of-the-art approaches.
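The mechanism described in the abstract lends itself to a hypernetwork reading: a meta-learner maps identity features to the parameters of a per-person age estimator. Below is a minimal PyTorch sketch of that reading; the module name `EstimatorMetaLearner`, the two-layer MLP, the feature dimensions, and the linear form of the personalized estimator are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EstimatorMetaLearner(nn.Module):
    """Maps an identity embedding to the parameters of a personalized
    linear age estimator (a weight matrix and bias over age classes)."""

    def __init__(self, id_dim=512, feat_dim=512, num_ages=101, hidden=1024):
        super().__init__()
        self.num_ages = num_ages
        self.feat_dim = feat_dim
        # Hypernetwork: identity feature -> flattened (W, b) of the estimator.
        self.hyper = nn.Sequential(
            nn.Linear(id_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_ages * feat_dim + num_ages),
        )

    def forward(self, id_feat, age_feat):
        # id_feat:  (B, id_dim)   identity features from a face-recognition net
        # age_feat: (B, feat_dim) age-related features from an age backbone
        params = self.hyper(id_feat)
        W = params[:, : self.num_ages * self.feat_dim]
        W = W.view(-1, self.num_ages, self.feat_dim)        # (B, num_ages, feat_dim)
        b = params[:, self.num_ages * self.feat_dim :]      # (B, num_ages)
        logits = torch.bmm(W, age_feat.unsqueeze(-1)).squeeze(-1) + b
        return logits                                       # per-person age logits

# Usage with random features standing in for real backbone outputs.
meta = EstimatorMetaLearner()
logits = meta(torch.randn(4, 512), torch.randn(4, 512))
expected_age = (torch.softmax(logits, dim=-1) * torch.arange(101.0)).sum(dim=-1)
```

The expected-age readout at the end is only one common way of turning age logits into a prediction; the paper itself may use a different decision rule.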
Related papers
- PersonalLLM: Tailoring LLMs to Individual Preferences [11.717169516971856]
We present a public benchmark, PersonalLLM, focusing on adapting LLMs to provide maximal benefits for a particular user.
We curate open-ended prompts paired with many high-quality answers over which users would be expected to display heterogeneous latent preferences.
Our dataset and generated personalities offer an innovative testbed for developing personalization algorithms.
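As a rough illustration of the "heterogeneous latent preferences" idea (an assumption about the setup, not the benchmark's actual API), synthetic users can be simulated as different weightings over a pool of base reward scores, so the same prompt's candidate answers are ranked differently per user:

```python
import numpy as np

rng = np.random.default_rng(0)

num_prompts, num_answers, num_reward_models = 5, 8, 10
# Stand-in scores: one reward-model score per (prompt, answer) pair.
base_scores = rng.normal(size=(num_prompts, num_answers, num_reward_models))

def sample_user(alpha=0.3):
    """A synthetic 'personality': a sparse-ish mixture over base reward models."""
    return rng.dirichlet(alpha * np.ones(num_reward_models))

def preferred_answers(user_weights):
    """Index of the answer a given user would pick for every prompt."""
    user_scores = base_scores @ user_weights          # (num_prompts, num_answers)
    return user_scores.argmax(axis=1)

user_a, user_b = sample_user(), sample_user()
print(preferred_answers(user_a), preferred_answers(user_b))  # usually disagree
```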
arXiv Detail & Related papers (2024-09-30T13:55:42Z)
- Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [119.70303730341938]
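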
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner.
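A loose sketch of the feedback-driven curriculum described above; the inversion and adversarial components are omitted, and the thresholds, step size, and update rule are assumptions for illustration only.

```python
def update_difficulty(difficulty, meta_loss, easy_thresh=0.5, hard_thresh=2.0,
                      step=0.1, max_difficulty=1.0):
    """Adapt pseudo-episode difficulty from the meta model's current loss."""
    if meta_loss < easy_thresh:        # episode was easy -> make the next one harder
        difficulty = min(max_difficulty, difficulty + step)
    elif meta_loss > hard_thresh:      # episode was too hard -> back off
        difficulty = max(0.0, difficulty - step)
    return difficulty

# Toy feedback loop with fabricated meta-losses standing in for real training.
difficulty = 0.1
for meta_loss in [0.3, 0.4, 2.5, 0.2, 0.1]:
    difficulty = update_difficulty(difficulty, meta_loss)
    print(f"meta_loss={meta_loss:.1f} -> difficulty={difficulty:.1f}")
```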
arXiv Detail & Related papers (2023-03-20T15:10:41Z) - FedPC: Federated Learning for Language Generation with Personal and
Context Preference Embeddings [10.235620939242505]
Federated learning is a training paradigm that learns from multiple distributed users without aggregating data on a centralized server.
We propose a new direction for personalization research within federated learning, leveraging both personal embeddings and shared context embeddings.
We present an approach to predict these "preference" embeddings, enabling personalization without backpropagation.
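A hedged sketch of the general idea, not the paper's implementation: a personal embedding and a shared context embedding condition a small decoder, and a new user's personal embedding is predicted by pooling features of their history rather than learned by backpropagation. All module names and dimensions are placeholders.

```python
import torch
import torch.nn as nn

class PreferenceConditionedDecoder(nn.Module):
    """Toy decoder conditioned on [personal ; context] preference embeddings."""

    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim + 2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, tokens, personal, context):
        # tokens: (B, T); personal, context: (B, dim)
        x = self.embed(tokens)                                   # (B, T, dim)
        cond = torch.cat([personal, context], dim=-1)            # (B, 2*dim)
        cond = cond.unsqueeze(1).expand(-1, x.size(1), -1)       # (B, T, 2*dim)
        h, _ = self.rnn(torch.cat([x, cond], dim=-1))
        return self.out(h)                                       # next-token logits

def predict_personal_embedding(history_feats):
    """Backprop-free stand-in: pool features of a user's past utterances."""
    return history_feats.mean(dim=0, keepdim=True)               # (1, dim)

dec = PreferenceConditionedDecoder()
personal = predict_personal_embedding(torch.randn(10, 64))       # new user, no gradient steps
logits = dec(torch.randint(0, 1000, (1, 12)), personal, torch.randn(1, 64))
```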
arXiv Detail & Related papers (2022-10-07T18:01:19Z)
- LAE : Long-tailed Age Estimation [52.5745217752147]
We first formulate a simple standard baseline and build a much stronger one by collecting tricks in pre-training, data augmentation, model architecture, and so on.
Compared with the standard baseline, the proposed one significantly decreases the estimation errors.
We propose a two-stage training method named Long-tailed Age Estimation (LAE), which decouples the learning procedure into representation learning and classification.
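The decoupling described above follows the familiar two-stage recipe; the sketch below shows that generic recipe (instance-balanced representation learning, then class-balanced classifier re-training), under assumed hyperparameters, and omits the paper's specific pre-training and augmentation tricks.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def train_decoupled(backbone, classifier, dataset, labels, num_ages=101, epochs=(1, 1)):
    """Generic two-stage decoupling: representation learning, then
    classifier re-training with class-balanced sampling."""
    criterion = nn.CrossEntropyLoss()

    # Stage 1: learn representations with the usual instance-balanced sampling.
    opt = torch.optim.SGD(list(backbone.parameters()) + list(classifier.parameters()), lr=0.01)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs[0]):
        for x, y in loader:
            opt.zero_grad()
            criterion(classifier(backbone(x)), y).backward()
            opt.step()

    # Stage 2: freeze the backbone and re-train only the classifier with
    # class-balanced sampling, so rare ages are seen as often as common ones.
    for p in backbone.parameters():
        p.requires_grad_(False)
    counts = torch.bincount(labels, minlength=num_ages).clamp(min=1).float()
    sampler = WeightedRandomSampler((1.0 / counts)[labels],
                                    num_samples=len(labels), replacement=True)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)
    opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
    for _ in range(epochs[1]):
        for x, y in loader:
            opt.zero_grad()
            criterion(classifier(backbone(x)), y).backward()
            opt.step()

# Toy usage with random features standing in for face images.
feats, labels = torch.randn(256, 128), torch.randint(0, 101, (256,))
train_decoupled(nn.Identity(), nn.Linear(128, 101), TensorDataset(feats, labels), labels)
```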
arXiv Detail & Related papers (2021-10-25T09:05:44Z)
- FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild [50.8865921538953]
We propose a method to explicitly incorporate facial semantics into age estimation.
We design a face parsing-based network to learn semantic information at different scales.
We show that our method consistently outperforms all existing age estimation methods.
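A toy sketch of parsing-guided attention under assumed shapes: a face-parsing map is resized to the age-feature resolution and squeezed into a spatial attention map that re-weights the age features. The residual formulation and the 11-part parsing are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParsingAttention(nn.Module):
    """Toy parsing-guided attention: a face-parsing probability map is squeezed
    into a single spatial attention map and applied to age features at one scale."""

    def __init__(self, num_parts=11):
        super().__init__()
        self.to_attn = nn.Conv2d(num_parts, 1, kernel_size=1)

    def forward(self, age_feat, parsing_logits):
        # age_feat: (B, C, H, W); parsing_logits: (B, num_parts, H', W')
        parsing = F.interpolate(parsing_logits, size=age_feat.shape[-2:],
                                mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.to_attn(parsing.softmax(dim=1)))   # (B, 1, H, W)
        return age_feat * attn + age_feat                            # residual re-weighting

# Would be applied at several scales of an age backbone (features faked here).
attn = ParsingAttention()
out = attn(torch.randn(2, 64, 28, 28), torch.randn(2, 11, 112, 112))
```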
arXiv Detail & Related papers (2021-06-21T14:31:32Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
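A minimal sketch of the gradient-reshaping idea, with the meta-training loop omitted: a set of learnable per-parameter scales modulates task gradients before the optimizer step. In the actual method these modulation parameters would be meta-learned across incremental tasks; here only the forward modulation is shown, and the element-wise form is an assumption.

```python
import torch
import torch.nn as nn

class GradientReshaper(nn.Module):
    """Toy meta-learned gradient modulation: an element-wise scale is applied
    to each parameter's gradient before the task update."""

    def __init__(self, model):
        super().__init__()
        self.scales = nn.ParameterList(
            [nn.Parameter(torch.ones_like(p)) for p in model.parameters()]
        )

    @torch.no_grad()
    def reshape(self, model):
        # Modulate gradients in place after loss.backward() on the current task.
        for p, s in zip(model.parameters(), self.scales):
            if p.grad is not None:
                p.grad.mul_(s)

# Toy usage: the reshaper itself would be meta-trained across incremental tasks.
model = nn.Linear(10, 4)
reshaper = GradientReshaper(model)
loss = model(torch.randn(8, 10)).pow(2).mean()
loss.backward()
reshaper.reshape(model)          # gradients reshaped before the optimizer step
torch.optim.SGD(model.parameters(), lr=0.1).step()
```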
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
- Personalized Federated Learning: A Meta-Learning Approach [28.281166755509886]
In Federated Learning, we aim to train models across multiple computing units (users).
In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
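A toy federated round in the spirit of this formulation: plain FedAvg plus a few local gradient steps for personalization. The paper's MAML-style meta-objective for choosing the shared initialization is not reproduced here; all names, losses, and hyperparameters are placeholders.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_adapt(global_model, data, lr=0.05, steps=1):
    """One-or-few-step personalization: start from the shared initialization
    and take a few gradient steps on the user's own data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = data
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(model(x), y).backward()
        opt.step()
    return model

def fedavg(models):
    """Average user models back into a shared model (a plain FedAvg step;
    the meta-learning objective refines how this shared model is chosen)."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, p in avg.named_parameters():
            p.copy_(torch.stack([dict(m.named_parameters())[name] for m in models]).mean(0))
    return avg

# Toy round with three users holding tiny regression datasets.
global_model = nn.Linear(5, 1)
users = [(torch.randn(16, 5), torch.randn(16, 1)) for _ in range(3)]
personalized = [local_adapt(global_model, d) for d in users]
global_model = fedavg(personalized)
```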
arXiv Detail & Related papers (2020-02-19T01:08:46Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
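A distillation-style stand-in for the alignment idea (an assumption about the mechanism, not the paper's exact loss): the new model's class posteriors over a set of anchor classes are matched to those of the frozen old model on the new task's data.

```python
import torch
import torch.nn.functional as F

def indirect_alignment_loss(new_logits_anchor, old_logits_anchor, T=1.0):
    """Toy alignment term: match the new model's posteriors over 'anchor'
    classes to those of the frozen old model (a distillation-style stand-in
    for the paper's indirect discriminant alignment)."""
    p_old = F.softmax(old_logits_anchor / T, dim=-1)
    log_p_new = F.log_softmax(new_logits_anchor / T, dim=-1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

# New-task images scored against anchor classes by both models (faked here).
loss_align = indirect_alignment_loss(torch.randn(8, 5), torch.randn(8, 5))
```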
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.