Fairness Index Measures to Evaluate Bias in Biometric Recognition
- URL: http://arxiv.org/abs/2306.10919v1
- Date: Mon, 19 Jun 2023 13:28:37 GMT
- Title: Fairness Index Measures to Evaluate Bias in Biometric Recognition
- Authors: Ketan Kotwal and Sebastien Marcel
- Abstract summary: A quantitative evaluation of demographic fairness is an important step towards understanding, assessment, and mitigation of demographic bias in biometric applications.
We introduce multiple measures, based on the statistical characteristics of score distributions, for the evaluation of demographic fairness of a generic biometric verification system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The demographic disparity of biometric systems has led to serious concerns
regarding their societal impact as well as applicability of such systems in
private and public domains. A quantitative evaluation of demographic fairness
is an important step towards understanding, assessment, and mitigation of
demographic bias in biometric applications. While the few existing fairness
measures are based on post-decision data (such as verification accuracy) of
biometric systems, we discuss how pre-decision data (score distributions)
provide useful insights into demographic fairness. In this paper, we
introduce multiple measures, based on the statistical characteristics of score
distributions, for the evaluation of demographic fairness of a generic
biometric verification system. We also propose different variants for each
fairness measure depending on how the contribution from constituent demographic
groups needs to be combined towards the final measure. In each case, the
behavior of the measure has been illustrated numerically and graphically on
synthetic data. The demographic imbalance in benchmarking datasets is often
overlooked during fairness assessment. We provide a novel weighting strategy to
reduce the effect of such imbalance through a non-linear function of sample
sizes of demographic groups. The proposed measures are independent of the
biometric modality and are thus applicable across commonly used modalities
(e.g., face, fingerprint).
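The paper's concrete measures are not reproduced here; as a rough, hypothetical sketch of the idea the abstract describes, a score-distribution-based fairness index could compare per-group separability between genuine and impostor scores (d-prime), with each group's contribution weighted by a non-linear (power-law) function of its sample size so that over-represented groups in an imbalanced benchmark do not dominate. The function names, the choice of d-prime, and the exponent `gamma` below are illustrative assumptions, not the authors' definitions:

```python
import numpy as np

def dprime(genuine, impostor):
    """Separation (d') between genuine and impostor score distributions."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    return abs(genuine.mean() - impostor.mean()) / np.sqrt(
        0.5 * (genuine.var() + impostor.var()))

def fairness_index(groups, gamma=0.5):
    """Hypothetical pre-decision (score-based) fairness index.

    groups: dict mapping group name -> (genuine_scores, impostor_scores).
    Per-group d' values are compared; each group is weighted by a
    non-linear (power) function of its sample size, damping the effect
    of demographic imbalance in the benchmark. Returns a value in
    (-inf, 1]; 1 means identical separability across groups.
    """
    seps = np.array([dprime(g, i) for g, i in groups.values()])
    sizes = np.array([len(g) + len(i) for g, i in groups.values()])
    w = sizes ** gamma          # non-linear sample-size weighting
    w = w / w.sum()
    mean_sep = np.sum(w * seps)
    if mean_sep <= 0:
        return 0.0
    # Weighted mean absolute deviation of separability across groups.
    spread = np.sum(w * np.abs(seps - mean_sep))
    return 1.0 - spread / mean_sep
```

On synthetic scores (as in the paper's numerical illustrations), groups drawn from identical score distributions yield an index near 1, while groups with unequal genuine/impostor separation pull it down.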
Related papers
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- Synthetic Data for the Mitigation of Demographic Biases in Face Recognition [10.16490522214987]
This study investigates the possibility of mitigating the demographic biases that affect face recognition technologies through the use of synthetic data.
We use synthetic datasets generated with GANDiffFace, a novel framework able to synthesize datasets for face recognition with controllable demographic distribution and realistic intra-class variations.
Our results support the proposed approach and the use of synthetic data to mitigate demographic biases in face recognition.
arXiv Detail & Related papers (2024-02-02T14:57:42Z)
- Identifying Reasons for Bias: An Argumentation-Based Approach [2.9465623430708905]
We propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals.
We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.
arXiv Detail & Related papers (2023-10-25T09:47:15Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition [4.336779198334903]
One of the most prominent types of demographic bias are statistical imbalances in the representation of demographic groups in the datasets.
We develop a taxonomy for the classification of these metrics, providing a practical guide for the selection of appropriate metrics.
The paper provides valuable insights for researchers in AI and related fields to mitigate dataset bias and improve the fairness and accuracy of AI models.
arXiv Detail & Related papers (2023-03-28T11:04:18Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fairness in Biometrics: a figure of merit to assess biometric verification systems [1.218340575383456]
We introduce the first figure of merit that is able to evaluate and compare fairness aspects between multiple biometric verification systems.
A use case with two synthetic biometric systems is introduced and demonstrates the potential of this figure of merit.
Second, a use case using face biometrics is presented where several systems are evaluated compared with this new figure of merit.
arXiv Detail & Related papers (2020-11-04T16:46:37Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier [53.15616844833305]
This work aims to learn a fair face representation, where faces of every group could be more equally represented.
Our work is able to mitigate face recognition bias across demographic groups while maintaining the competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.