Lightweight Facial Attractiveness Prediction Using Dual Label Distribution
- URL: http://arxiv.org/abs/2212.01742v2
- Date: Wed, 24 Apr 2024 05:30:46 GMT
- Title: Lightweight Facial Attractiveness Prediction Using Dual Label Distribution
- Authors: Shu Liu, Enquan Huang, Ziyu Zhou, Yan Xu, Xiaoyan Kui, Tao Lei, Hongying Meng
- Abstract summary: Facial attractiveness prediction (FAP) aims to assess facial attractiveness automatically based on human aesthetic perception.
We present a novel end-to-end FAP approach that integrates dual label distribution and lightweight design.
Our approach achieves promising results and succeeds in balancing performance and efficiency.
- Score: 16.60169799392108
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Facial attractiveness prediction (FAP) aims to assess facial attractiveness automatically based on human aesthetic perception. Previous methods using deep convolutional neural networks have improved the performance, but their large-scale models have led to a deficiency in flexibility. In addition, most methods fail to take full advantage of the dataset. In this paper, we present a novel end-to-end FAP approach that integrates dual label distribution and lightweight design. The manual ratings, attractiveness score, and standard deviation are aggregated explicitly to construct a dual-label distribution to make the best use of the dataset, including the attractiveness distribution and the rating distribution. Such distributions, as well as the attractiveness score, are optimized under a joint learning framework based on the label distribution learning (LDL) paradigm. The data processing is simplified to a minimum for a lightweight design, and MobileNetV2 is selected as our backbone. Extensive experiments are conducted on two benchmark datasets, where our approach achieves promising results and succeeds in balancing performance and efficiency. Ablation studies demonstrate that our delicately designed learning modules are indispensable and correlated. Additionally, the visualization indicates that our approach can perceive facial attractiveness and capture attractive facial regions to facilitate semantic predictions. The code is available at https://github.com/enquan/2D_FAP.
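The abstract describes aggregating each face's mean attractiveness score and rating standard deviation into a label distribution, then recovering the scalar score as that distribution's expectation. A minimal sketch of this idea, assuming a discretized Gaussian over a 1-5 rating scale (the grid, the Gaussian form, and the function names are illustrative, not the paper's exact construction; the paper's dual design also includes a rating distribution not shown here):

```python
import numpy as np

# Discretization grid over the rating scale (illustrative: 1.0 to 5.0 in steps of 0.1).
BINS = np.arange(1.0, 5.01, 0.1)

def attractiveness_distribution(mean, std, bins=BINS):
    """Build a label distribution by discretizing N(mean, std^2) over the rating scale."""
    p = np.exp(-0.5 * ((bins - mean) / std) ** 2)
    return p / p.sum()  # normalize to a valid probability distribution

def expected_score(dist, bins=BINS):
    """Recover a scalar attractiveness score as the distribution's expectation."""
    return float(np.sum(dist * bins))

dist = attractiveness_distribution(mean=3.2, std=0.6)
score = expected_score(dist)  # close to 3.2 when the mass lies well inside the scale
```

In an LDL training loop, the predicted distribution would typically be fit to such a target with a divergence loss (e.g. KL), while the expectation supplies the score for joint regression.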
Related papers
- Harnessing Diffusion Models for Visual Perception with Meta Prompts [68.78938846041767]
We propose a simple yet effective scheme to harness a diffusion model for visual perception tasks.
We introduce learnable embeddings (meta prompts) to the pre-trained diffusion models to extract proper features for perception.
Our approach achieves new performance records in depth estimation on NYU Depth V2 and KITTI, and in semantic segmentation on CityScapes.
arXiv Detail & Related papers (2023-12-22T14:40:55Z)
- Probabilistic Self-supervised Learning via Scoring Rules Minimization [19.347097627898876]
We propose a novel probabilistic self-supervised learning via Scoring Rule Minimization (ProSMIN) to enhance representation quality and mitigate collapsing representations.
Our method achieves superior accuracy and calibration, surpassing the self-supervised baseline in a wide range of experiments on large-scale datasets.
arXiv Detail & Related papers (2023-09-05T08:48:25Z)
- Toward Fair Facial Expression Recognition with Improved Distribution Alignment [19.442685015494316]
We present a novel approach to mitigate bias in facial expression recognition (FER) models.
Our method aims to reduce sensitive attribute information such as gender, age, or race, in the embeddings produced by FER models.
For the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces.
arXiv Detail & Related papers (2023-06-11T14:59:20Z)
- Model Debiasing via Gradient-based Explanation on Representation [14.673988027271388]
We propose a novel fairness framework that performs debiasing with regard to sensitive attributes and proxy attributes.
Our framework achieves better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-20T11:57:57Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z)
- Towards Unbiased Label Distribution Learning for Facial Pose Estimation Using Anisotropic Spherical Gaussian [8.597165738132617]
We propose an Anisotropic Spherical Gaussian (ASG)-based LDL approach for facial pose estimation.
In particular, our approach adopts the spherical Gaussian distribution on the unit sphere, which consistently yields an unbiased expectation.
Our method sets new state-of-the-art records on AFLW2000 and BIWI datasets.
arXiv Detail & Related papers (2022-08-19T02:12:36Z)
- Imposing Consistency for Optical Flow Estimation [73.53204596544472]
Imposing consistency through proxy tasks has been shown to enhance data-driven learning.
This paper introduces novel and effective consistency strategies for optical flow estimation.
arXiv Detail & Related papers (2022-04-14T22:58:30Z)
- Learning Expectation of Label Distribution for Facial Age and Attractiveness Estimation [65.5880700862751]
We analyze the essential relationship between two state-of-the-art methods (Ranking-CNN and DLDL) and show that the Ranking method is in fact learning label distribution implicitly.
We propose a lightweight network architecture and a unified framework that jointly learns the facial attribute distribution and regresses the attribute value.
Our method achieves new state-of-the-art results on facial age/attractiveness estimation using a single model with 36$\times$ fewer parameters and 3$\times$ faster inference speed.
arXiv Detail & Related papers (2020-07-03T15:46:53Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To reduce the cost of training on the enlarged dataset, we propose a dataset distillation strategy that compresses it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all of its content) and is not responsible for any consequences arising from its use.