Frequency Matters: Explaining Biases of Face Recognition in the Frequency Domain
- URL: http://arxiv.org/abs/2501.16896v1
- Date: Tue, 28 Jan 2025 12:27:25 GMT
- Title: Frequency Matters: Explaining Biases of Face Recognition in the Frequency Domain
- Authors: Marco Huber, Fadi Boutros, Naser Damer
- Abstract summary: Face recognition models are vulnerable to performance variations across demographic groups.
Several works aimed at exploring possible roots of gender and ethnicity bias.
We explain bias in face recognition using state-of-the-art frequency-based explanations.
- Score: 8.291083684227576
- License:
- Abstract: Face recognition (FR) models are vulnerable to performance variations across demographic groups. The causes for these performance differences are unclear due to the highly complex deep learning-based structure of face recognition models. Several works aimed at exploring possible roots of gender and ethnicity bias, identifying semantic reasons such as hairstyle, make-up, or facial hair as possible sources. Motivated by recent discoveries of the importance of frequency patterns in convolutional neural networks, we explain bias in face recognition using state-of-the-art frequency-based explanations. Our extensive results show that different frequencies are important to FR models depending on the ethnicity of the samples.
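The abstract describes explaining bias via frequency-based analysis of face recognition models. As a rough illustration of the kind of frequency-band probing such explanations typically rely on (the paper's own method and code are not reproduced here; the function name and band radii below are hypothetical), the following NumPy sketch zeroes a radial band of an image's 2D Fourier spectrum. Comparing a model's embeddings before and after such a perturbation is one way to estimate how much the model depends on that frequency band.

```python
import numpy as np

def mask_frequency_band(image, low, high):
    """Zero out spatial frequencies whose radial distance from the
    spectrum center lies in [low, high), then reconstruct the image.
    This is an illustrative probe, not the paper's exact method."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    band = (radius >= low) & (radius < high)
    spectrum[band] = 0
    # imaginary residue from numerical error is discarded
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Example: suppress high frequencies in a random 112x112 "face" image
# (112x112 is a common FR input size).
img = np.random.rand(112, 112)
no_high = mask_frequency_band(img, 28, 1000)
```

Feeding `img` and `no_high` to a face recognition model and measuring the drop in embedding similarity would indicate the importance of the removed band for that sample.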
Related papers
- Beyond Spatial Explanations: Explainable Face Recognition in the Frequency Domain [6.69421628320396]
We take a step forward and investigate explainable face recognition in the unexplored frequency domain.
This makes this work the first to propose explainability of verification-based decisions in the frequency domain.
arXiv Detail & Related papers (2024-07-16T17:29:24Z)
- Explanation of Face Recognition via Saliency Maps [13.334500258498798]
This paper proposes a rigorous definition of explainable face recognition (XFR)
It then introduces a similarity-based RISE algorithm (S-RISE) to produce high-quality visual saliency maps.
An evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
arXiv Detail & Related papers (2023-04-12T19:04:21Z)
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- Towards Explaining Demographic Bias through the Eyes of Face Recognition Models [6.889667606945215]
Biases inherent in both data and algorithms make the fairness of machine learning (ML)-based decision-making systems less than optimal.
We aim to provide a set of explainability tools that analyze the differences in face recognition models' behavior when processing different demographic groups.
We do that by leveraging higher-order statistical information based on activation maps to build explainability tools that link the FR models' behavior differences to certain facial regions.
arXiv Detail & Related papers (2022-08-29T07:23:06Z)
- Beyond the Visible: A Survey on Cross-spectral Face Recognition [15.469814029453893]
Cross-spectral face recognition (CFR) refers to recognizing individuals using face images stemming from different spectral bands.
Recent advances in deep neural networks (DNNs) have resulted in significant improvement in the performance of CFR systems.
arXiv Detail & Related papers (2022-01-12T12:09:24Z)
- End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions correlate with the model's performance gap across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Learning Fair Face Representation With Progressive Cross Transformer [79.73754444296213]
We propose a progressive cross transformer (PCT) method for fair face recognition.
We show that PCT is capable of mitigating bias in face recognition while achieving state-of-the-art FR performance.
arXiv Detail & Related papers (2021-08-11T01:31:14Z)
- The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition [71.24825724518847]
Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train.
We formalize the FaceChannel, a lightweight neural network with far fewer parameters than common deep neural networks.
We demonstrate how our model achieves a comparable, if not better, performance to the current state-of-the-art in FER.
arXiv Detail & Related papers (2020-09-15T09:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.