Learning Fair Face Representation With Progressive Cross Transformer
- URL: http://arxiv.org/abs/2108.04983v1
- Date: Wed, 11 Aug 2021 01:31:14 GMT
- Title: Learning Fair Face Representation With Progressive Cross Transformer
- Authors: Yong Li, Yufei Sun, Zhen Cui, Shiguang Shan, Jian Yang
- Abstract summary: We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
- Score: 79.73754444296213
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Face recognition (FR) has made extraordinary progress owing to the
advancement of deep convolutional neural networks. However, demographic bias
across racial cohorts still challenges practical face recognition systems. Race
poses a dilemma for fair FR (FFR): subject-specific racial attributes induce
classification bias while also carrying useful cues for FR. To mitigate racial
bias while preserving robust FR, we cast identity-related face representation
learning as a signal denoising problem and propose a progressive cross
transformer (PCT) method for
fair face recognition. Drawing on signal decomposition theory, we
attempt to decouple face representation into i) identity-related components and
ii) noisy/identity-unrelated components induced by race. As an extension of
signal subspace decomposition, we formulate face decoupling as a generalized
functional expression model to cross-predict face identity and race
information. The face expression model is further concretized by designing dual
cross-transformers to distill identity-related components and suppress racial
noise. To refine the face representation, we adopt a progressive face
decoupling scheme that learns identity-/race-specific transformations, so that
race-induced, identity-unrelated components can be better disentangled. We
evaluate the proposed PCT on the public fair face recognition benchmarks (BFW,
RFW) and verify that PCT is capable of mitigating bias in face recognition
while achieving state-of-the-art FR performance. Moreover, visualizations show
that the attention maps in PCT clearly reveal race-related/biased facial
regions.
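To make the decoupling mechanism concrete, below is a minimal PyTorch sketch of dual cross-transformers over two token streams: an identity stream and a race stream that cross-attend to each other across progressive stages, with separate heads cross-predicting identity and race. Every name here (DualCrossBlock, PCTSketch, the stage count, the pooling) is an illustrative assumption, not the authors' released implementation.
```python
# Minimal sketch of the dual cross-transformer decoupling idea, in PyTorch.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class DualCrossBlock(nn.Module):
    """One stage: identity tokens attend to race tokens and vice versa,
    so each stream can model and suppress what the other carries."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.id_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.race_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.id_norm = nn.LayerNorm(dim)
        self.race_norm = nn.LayerNorm(dim)

    def forward(self, id_tok, race_tok):
        # Identity stream queries the race stream, and vice versa.
        id_upd, _ = self.id_attn(id_tok, race_tok, race_tok)
        race_upd, _ = self.race_attn(race_tok, id_tok, id_tok)
        return self.id_norm(id_tok + id_upd), self.race_norm(race_tok + race_upd)

class PCTSketch(nn.Module):
    def __init__(self, dim: int = 256, num_stages: int = 3,
                 num_ids: int = 10000, num_races: int = 4):
        super().__init__()
        self.stages = nn.ModuleList(DualCrossBlock(dim) for _ in range(num_stages))
        self.id_head = nn.Linear(dim, num_ids)      # identity classification
        self.race_head = nn.Linear(dim, num_races)  # race prediction (for decoupling)

    def forward(self, feats):                  # feats: (B, N, dim) backbone tokens
        id_tok, race_tok = feats, feats        # both streams start from the same features
        for stage in self.stages:              # progressive refinement
            id_tok, race_tok = stage(id_tok, race_tok)
        id_emb = id_tok.mean(dim=1)            # pooled identity representation
        race_emb = race_tok.mean(dim=1)
        return self.id_head(id_emb), self.race_head(race_emb), id_emb
```
In training, one would presumably pair a standard FR loss on the identity head with a decoupling objective that drives race-related information out of the identity stream; the paper's exact losses are not reproduced here.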
Related papers
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
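The ID-preserving loss mentioned in the ID$^3$ summary above is not spelled out there; a minimal sketch of the generic form of such a loss, assuming a frozen face-recognition embedder `fr_encoder` (a hypothetical name), is:
```python
# Sketch of an ID-preserving loss of the kind the summary describes:
# penalize embedding drift between a generated face and its identity reference.
import torch
import torch.nn.functional as F

def id_preserving_loss(fr_encoder, generated, reference):
    with torch.no_grad():                    # the FR embedder stays frozen
        ref_emb = F.normalize(fr_encoder(reference), dim=-1)
    gen_emb = F.normalize(fr_encoder(generated), dim=-1)
    # 1 - cosine similarity: zero when identity embeddings match exactly.
    return (1.0 - (gen_emb * ref_emb).sum(dim=-1)).mean()
```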
- Text-Guided Face Recognition using Multi-Granularity Cross-Modal Contrastive Learning [0.0]
We introduce text-guided face recognition (TGFR) to analyze the impact of integrating facial attributes in the form of natural language descriptions.
TGFR demonstrates remarkable improvements, particularly on low-quality images, over existing face recognition models.
arXiv Detail & Related papers (2023-12-14T22:04:22Z)
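As a generic illustration of the cross-modal contrastive mechanism behind TGFR (the multi-granularity design itself is not reproduced), a symmetric image-text InfoNCE loss can be sketched as follows; the temperature value is an assumption:
```python
# CLIP-style symmetric contrastive loss between face and description embeddings.
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)   # (B, D) face embeddings
    txt_emb = F.normalize(txt_emb, dim=-1)   # (B, D) description embeddings
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # Matched (face, description) pairs sit on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```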
- Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer (PF-ViT), to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z)
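The masked-autoencoder pretraining that PF-ViT's summary mentions can be sketched generically: hide most patch tokens, encode the visible ones, and reconstruct the hidden ones. The mask ratio and helper names below are assumptions, not the paper's settings.
```python
# Generic masked-autoencoder pretraining step: random patch masking plus
# a reconstruction loss restricted to the masked positions.
import torch
import torch.nn.functional as F

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens; also return a 0/1 mask of
    the hidden positions so the loss can be restricted to them."""
    b, n, d = tokens.shape
    keep = int(n * (1 - mask_ratio))
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)
    visible_idx = idx[:, :keep]
    visible = torch.gather(tokens, 1, visible_idx.unsqueeze(-1).expand(-1, -1, d))
    hidden_mask = torch.ones(b, n, device=tokens.device)
    hidden_mask.scatter_(1, visible_idx, 0.0)   # 1 = masked-out patch
    return visible, hidden_mask

def mae_reconstruction_loss(pred, target, hidden_mask):
    """Mean-squared reconstruction error over the masked patches only."""
    per_patch = F.mse_loss(pred, target, reduction="none").mean(dim=-1)
    return (per_patch * hidden_mask).sum() / hidden_mask.sum()
```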
- Heterogeneous Visible-Thermal and Visible-Infrared Face Recognition using Unit-Class Loss and Cross-Modality Discriminator [0.43748379918040853]
We propose an end-to-end framework for cross-modal face recognition.
A novel Unit-Class Loss is proposed for preserving identity information while discarding modality information.
The proposed network can be used to extract modality-independent vector representations or a matching-pair classification for test images.
arXiv Detail & Related papers (2021-11-29T06:14:00Z)
- Learning Facial Representations from the Cycle-consistency of Face [23.23272327438177]
We introduce cycle-consistency in facial characteristics as a free supervisory signal to learn facial representations from unlabeled facial images.
The learning is realized by jointly imposing facial motion cycle-consistency and identity cycle-consistency constraints.
Our approach is competitive with those of existing methods, demonstrating the rich and unique information embedded in the disentangled representations.
arXiv Detail & Related papers (2021-08-07T11:30:35Z)
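A cycle-consistency constraint of the kind the summary above describes can be sketched generically: map features through a forward transformation and back, then penalize the round-trip error. `forward_map` and `backward_map` are hypothetical modules, not the paper's components.
```python
# Generic cycle-consistency loss used as a free supervisory signal.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(forward_map, backward_map, feats):
    reconstructed = backward_map(forward_map(feats))
    # A perfect cycle reproduces the input features exactly.
    return F.l1_loss(reconstructed, feats)
```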
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining the competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- Exploring Racial Bias within Face Recognition via per-subject Adversarially-Enabled Data Augmentation [15.924281804465252]
We propose a novel adversarially derived data augmentation methodology that aims to enable dataset balance at a per-subject level.
Our aim is to automatically construct a synthesised dataset by transforming facial images across varying racial domains.
In a side-by-side comparison, we show the positive impact our proposed technique can have on the recognition performance for (racial) minority groups.
arXiv Detail & Related papers (2020-04-19T19:46:32Z)
- DotFAN: A Domain-transferred Face Augmentation Network for Pose and Illumination Invariant Face Recognition [94.96686189033869]
We propose a 3D model-assisted domain-transferred face augmentation network (DotFAN).
DotFAN can generate a series of variants of an input face based on the knowledge distilled from existing rich face datasets collected from other domains.
Experiments show that DotFAN is beneficial for augmenting small face datasets to improve their within-class diversity.
arXiv Detail & Related papers (2020-02-23T08:16:34Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
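The self-attention generator block that DA-GAN's summary describes follows a well-known pattern: each spatial location attends to all others, so local features are integrated with their long-range dependencies. A minimal sketch (the channel-reduction factor of 8 follows common SAGAN-style practice, not necessarily this paper) is:
```python
# Minimal self-attention block for a convolutional GAN generator.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual scale

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection
```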
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.