ArcFace Knows the Gender, Too!
- URL: http://arxiv.org/abs/2112.10101v1
- Date: Sun, 19 Dec 2021 10:00:36 GMT
- Title: ArcFace Knows the Gender, Too!
- Authors: Majid Farzaneh
- Abstract summary: Instead of defining a new model for gender classification, this paper uses ArcFace features to determine gender.
Discriminative methods such as Support Vector Machine (SVM), Linear Discriminant Analysis, and Logistic Regression demonstrate that the features extracted by ArcFace create a remarkable distinction between the gender classes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The main idea of this paper is that if a model can recognize a person,
it must, of course, also be able to determine that person's gender. Therefore,
instead of defining a new model for gender classification, this paper uses
ArcFace features to determine gender from the facial features. A face image is
given to ArcFace, and 512 features are obtained for the face. Then, gender is
determined with the help of traditional machine learning models. Discriminative
methods such as Support Vector Machine (SVM), Linear Discriminant Analysis, and
Logistic Regression demonstrate that the features extracted by ArcFace create a
remarkable distinction between the gender classes. Experiments on the Gender
Classification Dataset show that an SVM with a Gaussian kernel is able to
classify gender with an accuracy of 96.4% using ArcFace features.
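The pipeline the abstract describes (face image, then 512-D ArcFace embedding, then a classical classifier) is straightforward to sketch. Below is a minimal, non-authoritative illustration: it uses the insightface package as one readily available ArcFace implementation (the paper does not name a library) and scikit-learn's SVC, where the paper's "Gaussian kernel" corresponds to the RBF kernel. The load_faces helper is hypothetical, standing in for whatever dataset loader is used.

```python
# Sketch only: ArcFace embeddings + RBF-kernel SVM, assuming insightface
# as the ArcFace implementation (not specified by the paper).
import numpy as np
from insightface.app import FaceAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# ArcFace-based detection + embedding pipeline.
app = FaceAnalysis()
app.prepare(ctx_id=0, det_size=(640, 640))

def arcface_embedding(image_bgr):
    """Return the 512-D ArcFace embedding of the largest detected face, or None."""
    faces = app.get(image_bgr)
    if not faces:
        return None
    face = max(faces, key=lambda f: (f.bbox[2] - f.bbox[0]) * (f.bbox[3] - f.bbox[1]))
    return face.embedding  # shape: (512,)

# Hypothetical loader: BGR face images plus binary gender labels.
images, labels = load_faces()  # assumed helper, not part of the paper
X = np.stack([arcface_embedding(img) for img in images])
y = np.array(labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# "Gaussian kernel" in the paper corresponds to scikit-learn's RBF kernel.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```

The design point the paper makes is that no new network is trained: the identity embedding already separates the gender classes well enough that a shallow discriminative model on top of it suffices.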
Related papers
- On the "Illusion" of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes [7.602456562464879]
Face recognition systems exhibit significant accuracy differences based on the user's gender.
We propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis.
Experiments show that the gender gap vanishes when images of male and female subjects share specific attributes.
arXiv Detail & Related papers (2025-01-21T10:21:19Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracies across gender and skin-tone, and interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z)
- The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z)
- VoxCeleb Enrichment for Age and Gender Recognition [12.520037579004883]
We provide speaker age labels and (an alternative) annotation of speaker gender in VoxCeleb datasets.
We demonstrate the use of this metadata by constructing age and gender recognition models.
We also compare the original VoxCeleb gender labels with our labels to identify records that might be mislabeled in the original VoxCeleb data.
arXiv Detail & Related papers (2021-09-28T06:18:57Z)
- Does Face Recognition Error Echo Gender Classification Error? [9.176056742068813]
We analyze results from three different gender classification algorithms, and two face recognition algorithms.
For impostor image pairs, our results show that pairs in which one image has a gender classification error have a better impostor distribution.
For genuine image pairs, our results show that individuals whose images have a mix of correct and incorrect gender classification have a worse genuine distribution.
arXiv Detail & Related papers (2021-04-28T14:43:31Z)
- Mitigating Gender Bias in Captioning Systems [56.25457065032423]
Most captioning models learn gender bias, leading to high gender prediction errors, especially for women.
We propose a new Guided Attention Image Captioning model (GAIC) which provides self-guidance on visual attention to encourage the model to capture correct gender visual evidence.
arXiv Detail & Related papers (2020-06-15T12:16:19Z)
- Towards Gender-Neutral Face Descriptors for Mitigating Bias in Face Recognition [51.856693288834975]
State-of-the-art deep networks implicitly encode gender information while being trained for face recognition.
Gender is often viewed as an important attribute with respect to identifying faces.
We present a novel Adversarial Gender De-biasing algorithm (AGENDA) to reduce the gender information present in face descriptors.
arXiv Detail & Related papers (2020-06-14T08:54:03Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)