P-Age: Pexels Dataset for Robust Spatio-Temporal Apparent Age
Classification
- URL: http://arxiv.org/abs/2311.02432v1
- Date: Sat, 4 Nov 2023 15:23:21 GMT
- Title: P-Age: Pexels Dataset for Robust Spatio-Temporal Apparent Age
Classification
- Authors: Abid Ali and Ashish Marisetty and Francois Bremond
- Abstract summary: AgeFormer utilizes spatio-temporal information on the dynamics of the entire body, outperforming face-based methods for age classification.
To fill the gap in predicting age in real-world situations from videos, we construct a video dataset called Pexels Age.
The proposed method achieves superior results compared to existing face-based age estimation methods.
- Score: 0.7234862895932991
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Age estimation is a challenging task that has numerous applications. In this
paper, we propose a new direction for age classification that utilizes a
video-based model to address challenges such as occlusions, low-resolution, and
lighting conditions. To address these challenges, we propose AgeFormer, which
utilizes spatio-temporal information on the dynamics of the entire body,
outperforming face-based methods for age classification. Our novel two-stream
architecture uses TimeSformer and EfficientNet as backbones to effectively
capture both facial and body dynamics information for efficient and accurate
age estimation in videos. Furthermore, to fill the gap in predicting age in
real-world situations from videos, we construct a video dataset called Pexels
Age (P-Age) for age classification. The proposed method achieves superior
results compared to existing face-based age estimation methods and is evaluated
in situations where the face is highly occluded, blurred, or masked. The method
is also cross-tested on a variety of challenging video datasets such as
Charades, Smarthome, and Thumos-14.
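The abstract does not spell out how the facial and body streams are combined. As a rough illustration of one common choice for two-stream models, score-level late fusion, here is a minimal sketch; all logit values, the number of age groups, and the mixing weight are invented for illustration and are not taken from the paper:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of class logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_streams(face_logits, body_logits, w_face=0.5):
    """Score-level late fusion: convert each stream's logits to
    per-class probabilities, then mix them with weight w_face."""
    p_face = softmax(face_logits)
    p_body = softmax(body_logits)
    return [w_face * f + (1.0 - w_face) * b for f, b in zip(p_face, p_body)]

# Hypothetical logits over four age groups (child, young, adult, senior)
face_logits = [0.2, 2.0, 1.0, -1.0]   # facial branch (TimeSformer-style backbone)
body_logits = [0.5, 0.8, 2.2, -0.5]   # body branch (EfficientNet-style backbone)

probs = fuse_streams(face_logits, body_logits)
pred = probs.index(max(probs))        # index of the most likely age group
```

With these invented logits, the face branch alone favors the second class and the body branch the third; the fused distribution lets the body cue dominate when the face signal is weak, which is the intuition the paper's body-dynamics argument relies on.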
Related papers
- Video Face Re-Aging: Toward Temporally Consistent Face Re-Aging [5.252268654349522]
Video face re-aging deals with altering the apparent age of a person to the target age in videos.
Most re-aging methods process each image individually without considering the temporal consistency of videos.
We propose a novel synthetic video dataset that features subjects across a diverse range of age groups.
arXiv Detail & Related papers (2023-11-20T10:01:13Z)
- A Demographic Attribute Guided Approach to Age Estimation [4.215251065887862]
This research makes use of auxiliary information of face attributes and proposes a new age estimation approach with an attribute guidance module.
Experimental results on three public datasets of UTKFace, LAP2016 and Morph show that our proposed approach achieves superior performance compared to other state-of-the-art methods.
arXiv Detail & Related papers (2022-05-20T15:34:47Z)
- LAE: Long-tailed Age Estimation [52.5745217752147]
We first formulate a simple standard baseline and build a much stronger one by collecting tricks in pre-training, data augmentation, model architecture, and so on.
Compared with the standard baseline, the proposed one significantly decreases the estimation errors.
We propose a two-stage training method named Long-tailed Age Estimation (LAE), which decouples the learning procedure into representation learning and classification.
arXiv Detail & Related papers (2021-10-25T09:05:44Z)
- FP-Age: Leveraging Face Parsing Attention for Facial Age Estimation in the Wild [50.8865921538953]
We propose a method to explicitly incorporate facial semantics into age estimation.
We design a face parsing-based network to learn semantic information at different scales.
We show that our method consistently outperforms all existing age estimation methods.
arXiv Detail & Related papers (2021-06-21T14:31:32Z)
- PFA-GAN: Progressive Face Aging with Generative Adversarial Network [19.45760984401544]
This paper proposes a novel progressive face aging framework based on a generative adversarial network (PFA-GAN).
The framework can be trained in an end-to-end manner to eliminate accumulative artifacts and blurriness.
Extensive experimental results demonstrate superior performance over existing (c)GAN-based methods.
arXiv Detail & Related papers (2020-12-07T05:45:13Z)
- Age Gap Reducer-GAN for Recognizing Age-Separated Faces [72.26969872180841]
We propose a novel algorithm for matching faces with temporal variations caused by age progression.
The proposed generative adversarial network algorithm is a unified framework that combines facial age estimation and age-separated face verification.
arXiv Detail & Related papers (2020-11-11T16:43:32Z)
- Toward Accurate Person-level Action Recognition in Videos of Crowded Scenes [131.9067467127761]
We focus on improving the action recognition by fully-utilizing the information of scenes and collecting new data.
Specifically, we adopt a strong human detector to detect the spatial location of each person in each frame.
We then apply action recognition models to learn the temporal information from video frames on both the HIE dataset and new data with diverse scenes from the internet.
arXiv Detail & Related papers (2020-10-16T13:08:50Z)
- Enhancing Facial Data Diversity with Style-based Face Aging [59.984134070735934]
In particular, face datasets are typically biased in terms of attributes such as gender, age, and race.
We propose a novel, generative style-based architecture for data augmentation that captures fine-grained aging patterns.
We show that the proposed method outperforms state-of-the-art algorithms for age transfer.
arXiv Detail & Related papers (2020-06-06T21:53:44Z)
- Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.