Impact of Blur and Resolution on Demographic Disparities in 1-to-Many
Facial Identification
- URL: http://arxiv.org/abs/2309.04447v3
- Date: Tue, 23 Jan 2024 20:34:05 GMT
- Title: Impact of Blur and Resolution on Demographic Disparities in 1-to-Many
Facial Identification
- Authors: Aman Bhatta, Gabriella Pangelinan, Michael C. King, and Kevin W.
Bowyer
- Abstract summary: This paper analyzes the accuracy of 1-to-many facial identification across demographic groups.
We show that increased blur in the probe image, or reduced resolution of the face in the probe image, can significantly increase the false positive identification rate.
- Score: 6.818318933838661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most studies to date that have examined demographic variations in face
recognition accuracy have analyzed 1-to-1 matching accuracy, using images that
could be described as "government ID quality". This paper analyzes the accuracy
of 1-to-many facial identification across demographic groups, and in the
presence of blur and reduced resolution in the probe image as might occur in
"surveillance camera quality" images. Cumulative match characteristic (CMC)
curves are not appropriate for comparing the propensity for rank-one
recognition errors across demographics, so we use three metrics for our
analysis: (1) the well-known d' metric between the mated and non-mated score
distributions; (2), introduced in this work, the absolute score difference
between thresholds in the high-similarity tail of the non-mated distribution
and the low-similarity tail of the mated distribution; and (3) the
distribution of (mated - non-mated) rank-one scores across the set of probe
images. We find that demographic variation in 1-to-many
accuracy does not entirely follow what has been observed in 1-to-1 matching
accuracy. Also, different from 1-to-1 accuracy, demographic comparison of
1-to-many accuracy can be affected by different numbers of identities and
images across demographics. More importantly, we show that increased blur in
the probe image, or reduced resolution of the face in the probe image, can
significantly increase the false positive identification rate. And we show that
the demographic variation in these high blur or low resolution conditions is
much larger for male / female than for African-American / Caucasian. The point
that 1-to-many accuracy can potentially collapse in the context of processing
"surveillance camera quality" probe images against a "government ID quality"
gallery is an important one.
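The first and third metrics above can be sketched in a few lines. This is a minimal illustration using synthetic similarity scores, not the authors' implementation; the function names, score scale, and distributions here are assumptions for demonstration only.

```python
import math
import random
import statistics

def d_prime(mated, non_mated):
    # d' separation between the mated (genuine) and non-mated (impostor)
    # score distributions: |mean difference| over pooled standard deviation.
    mu_m, mu_n = statistics.mean(mated), statistics.mean(non_mated)
    var_m, var_n = statistics.variance(mated), statistics.variance(non_mated)
    return abs(mu_m - mu_n) / math.sqrt((var_m + var_n) / 2.0)

def rank_one_margin(mated_score, non_mated_scores):
    # Metric (3) for a single probe: mated score minus the highest
    # non-mated (rank-one impostor) score; a negative margin signals
    # a rank-one identification error for that probe.
    return mated_score - max(non_mated_scores)

# Synthetic scores: mated pairs score higher on average than non-mated pairs.
random.seed(0)
mated = [random.gauss(0.7, 0.1) for _ in range(1000)]
non_mated = [random.gauss(0.3, 0.1) for _ in range(1000)]
print(round(d_prime(mated, non_mated), 1))         # roughly 4 for these synthetic scores
print(rank_one_margin(0.8, [0.5, 0.6, 0.4]) > 0)   # positive margin: correct rank-one match
```

A larger d' means the genuine and impostor distributions are better separated; collecting `rank_one_margin` over all probes gives the per-probe distribution the paper examines per demographic group.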
Related papers
- What Should Be Balanced in a "Balanced" Face Recognition Dataset? [8.820019122897154]
Various face image datasets have been proposed as 'fair' or 'balanced' to assess the accuracy of face recognition algorithms across demographics.
It is important to note that the number of identities and images in an evaluation dataset are *not* driving factors for 1-to-1 face matching accuracy.
We propose a bias-aware toolkit that facilitates creation of cross-demographic evaluation datasets balanced on factors mentioned in this paper.
arXiv Detail & Related papers (2023-04-17T22:02:03Z) - Exploring Causes of Demographic Variations In Face Recognition Accuracy [10.534382915377025]
We consider accuracy differences as represented by variations in non-mated (impostor) and / or mated (genuine) distributions for 1-to-1 face matching.
Possible causes explored include differences in skin tone, face size and shape, imbalance in number of identities and images in the training data, and amount of face visible in the test data.
arXiv Detail & Related papers (2023-04-14T14:50:59Z) - The Gender Gap in Face Recognition Accuracy Is a Hairy Problem [8.768049933358968]
We first demonstrate that female and male hairstyles have important differences that impact face recognition accuracy.
We then demonstrate that when the data used to estimate recognition accuracy is balanced across gender for how hairstyles occlude the face, the initially observed gender gap in accuracy largely disappears.
arXiv Detail & Related papers (2022-06-10T04:32:47Z) - Few-shot Forgery Detection via Guided Adversarial Interpolation [56.59499187594308]
Existing forgery detection methods suffer from significant performance drops when applied to unseen novel forgery approaches.
We propose Guided Adversarial Interpolation (GAI) to overcome the few-shot forgery detection problem.
Our method is validated to be robust to choices of majority and minority forgery approaches.
arXiv Detail & Related papers (2022-04-12T16:05:10Z) - A Deep Dive into Dataset Imbalance and Bias in Face Identification [49.210042420757894]
Media portrayals often present imbalance as the main source of bias in automated face recognition systems.
Previous studies of data imbalance in FR have focused exclusively on the face verification setting.
This work thoroughly explores the effects of each kind of imbalance possible in face identification, and discusses other factors that may impact bias in this setting.
arXiv Detail & Related papers (2022-03-15T20:23:13Z) - Texture Characterization of Histopathologic Images Using Ecological
Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - Gendered Differences in Face Recognition Accuracy Explained by
Hairstyles, Makeup, and Facial Morphology [11.50297186426025]
There is consensus in the research literature that face recognition accuracy is lower for females.
Controlling for equal amount of visible face in the test images mitigates the apparent higher false non-match rate for females.
Additional analysis shows that makeup-balanced datasets further reduce the false non-match rate for females.
arXiv Detail & Related papers (2021-12-29T17:07:33Z) - Comparing Human and Machine Bias in Face Recognition [46.170389064229354]
We release improvements to the LFW and CelebA datasets which will enable future researchers to obtain measurements of algorithmic bias.
We also use these new data to develop a series of challenging facial identification and verification questions.
We find that both computer models and human survey participants perform significantly better at the verification task.
arXiv Detail & Related papers (2021-10-15T22:26:20Z) - Unravelling the Effect of Image Distortions for Biased Prediction of
Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We have observed that image distortions have a relationship with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z) - Identity and Attribute Preserving Thumbnail Upscaling [93.38607559281601]
We consider the task of upscaling a low resolution thumbnail image of a person, to a higher resolution image, which preserves the person's identity and other attributes.
Our results indicate an improvement in face similarity recognition and lookalike generation as well as in the ability to generate higher resolution images which preserve an input thumbnail identity and whose race and attributes are maintained.
arXiv Detail & Related papers (2021-05-30T19:32:27Z) - Is Gender "In-the-Wild" Inference Really a Solved Problem? [0.0]
We report an extensive analysis of the feasibility of gender inference with respect to image-based features (resolution, luminosity, and blurriness) and subject-based features.
Using three state-of-the-art datasets, we correlate feature analysis with gender inference accuracy.
We analyze face-based gender inference and assess the pose effect on it.
arXiv Detail & Related papers (2021-05-12T17:05:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.