Multi-Task Learning Using Uncertainty to Weigh Losses for Heterogeneous Face Attribute Estimation
- URL: http://arxiv.org/abs/2403.00561v1
- Date: Fri, 1 Mar 2024 14:39:15 GMT
- Title: Multi-Task Learning Using Uncertainty to Weigh Losses for Heterogeneous Face Attribute Estimation
- Authors: Huaqing Yuan and Yi He and Peng Du and Lu Song
- Abstract summary: We propose a framework for joint estimation of ordinal and nominal attributes based on information sharing.
Experimental results on benchmarks with multiple face attributes show that the proposed approach has superior performance compared to the state of the art.
- Score: 9.466352272999698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face images contain a wide variety of attribute information. In this paper,
we propose a generalized framework for joint estimation of ordinal and nominal
attributes based on information sharing. We tackle the correlation problem
between heterogeneous attributes using hard parameter sharing of shallow
features, and trade off multiple loss functions by considering homoskedastic
uncertainty for each attribute estimation task. This leads to optimal
estimation of multiple attributes of the face and reduces the training cost of
multitask learning. Experimental results on benchmarks with multiple face
attributes show that the proposed approach has superior performance compared to
the state of the art. Finally, we discuss the bias issues arising from the proposed
approach in face attribute estimation and validate its feasibility on edge
systems.
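The approach described in the abstract pairs a hard-shared shallow feature extractor with task-specific heads and balances the per-task losses with learnable homoscedastic-uncertainty weights. The sketch below illustrates that pattern in PyTorch, assuming the commonly used Kendall-Gal-Cipolla log-variance formulation; the backbone, the two attribute heads (one ordinal, one nominal), and all layer sizes are illustrative placeholders, not details taken from the paper.

```python
import torch
import torch.nn as nn

class SharedTrunkMultiTask(nn.Module):
    """Hard parameter sharing: one shallow trunk, one head per face-attribute task."""

    def __init__(self, feat_dim=128, num_age_bins=8, num_genders=2):
        super().__init__()
        # Shared shallow feature extractor (illustrative; the paper's backbone may differ).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Task-specific heads: an ordinal attribute (binned age) and a nominal one (gender).
        self.age_head = nn.Linear(feat_dim, num_age_bins)
        self.gender_head = nn.Linear(feat_dim, num_genders)
        # One learnable log-variance per task (homoscedastic uncertainty).
        self.log_vars = nn.Parameter(torch.zeros(2))

    def forward(self, x):
        shared = self.trunk(x)
        return self.age_head(shared), self.gender_head(shared)

    def uncertainty_weighted_loss(self, task_losses):
        # Simplified Kendall et al. weighting: sum_i exp(-s_i) * L_i + s_i,
        # where s_i = log(sigma_i^2) is learned jointly with the network.
        total = torch.zeros((), device=self.log_vars.device)
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s
        return total

if __name__ == "__main__":
    model = SharedTrunkMultiTask()
    images = torch.randn(4, 3, 112, 112)   # toy batch of face crops
    age_bins = torch.randint(0, 8, (4,))   # ordinal targets (binned ages)
    genders = torch.randint(0, 2, (4,))    # nominal targets
    ce = nn.CrossEntropyLoss()
    age_logits, gender_logits = model(images)
    loss = model.uncertainty_weighted_loss(
        [ce(age_logits, age_bins), ce(gender_logits, genders)]
    )
    loss.backward()  # gradients flow to the trunk, the heads, and the log-variances
```

Because each log-variance s_i enters the loss as exp(-s_i) * L_i + s_i, a noisier task is automatically down-weighted during training, while the additive s_i term keeps its weight from collapsing to zero.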
Related papers
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z) - SwinFace: A Multi-task Transformer for Face Recognition, Expression Recognition, Age Estimation and Attribute Estimation [60.94239810407917]
This paper presents a multi-purpose algorithm for simultaneous face recognition, facial expression recognition, age estimation, and face attribute estimation based on a single Swin Transformer.
To address the conflicts among multiple tasks, a Multi-Level Channel Attention (MLCA) module is integrated into each task-specific analysis.
Experiments show that the proposed model has a better understanding of the face and achieves excellent performance for all tasks.
arXiv Detail & Related papers (2023-08-22T15:38:39Z) - A Solution to Co-occurrence Bias: Attributes Disentanglement via Mutual Information Minimization for Pedestrian Attribute Recognition [10.821982414387525]
We show that current methods can struggle to generalize such fitted attribute interdependencies to scenes or identities outside the dataset distribution.
To make models robust in realistic scenes, we propose attributes-disentangled feature learning, which ensures that recognizing one attribute does not rely on inferring the existence of others.
arXiv Detail & Related papers (2023-07-28T01:34:55Z) - Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z) - Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition [6.015556590955814]
We propose to mitigate bias by guiding the model's focus towards task-relevant features using domain knowledge.
We show that incorporating task-relevant features via our method can improve model fairness at minimal cost to classification performance.
arXiv Detail & Related papers (2023-03-08T21:28:02Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for attribute evaluation method (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z) - Deep Collaborative Multi-Modal Learning for Unsupervised Kinship Estimation [53.62256887837659]
Kinship verification is a long-standing research challenge in computer vision.
We propose a novel deep collaborative multi-modal learning (DCML) to integrate the underlying information presented in facial properties.
Our DCML method consistently outperforms several state-of-the-art kinship verification methods.
arXiv Detail & Related papers (2021-09-07T01:34:51Z) - Face Image Quality Assessment: A Literature Survey [16.647739693192236]
This survey provides an overview of the face image quality assessment literature, which predominantly focuses on visible wavelength face image input.
A trend towards deep learning based methods is observed, including notable conceptual differences among the recent approaches.
arXiv Detail & Related papers (2020-09-02T14:26:12Z) - SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness [15.431761867166]
We propose a novel concept to measure face quality based on an arbitrary face recognition model.
We compare our proposed solution on two face embeddings against six state-of-the-art approaches from academia and industry.
arXiv Detail & Related papers (2020-03-20T16:50:30Z)