Fairness Index Measures to Evaluate Bias in Biometric Recognition
- URL: http://arxiv.org/abs/2306.10919v1
- Date: Mon, 19 Jun 2023 13:28:37 GMT
- Title: Fairness Index Measures to Evaluate Bias in Biometric Recognition
- Authors: Ketan Kotwal and Sebastien Marcel
- Abstract summary: A quantitative evaluation of demographic fairness is an important step towards understanding, assessment, and mitigation of demographic bias in biometric applications.
We introduce multiple measures, based on the statistical characteristics of score distributions, for the evaluation of demographic fairness of a generic biometric verification system.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The demographic disparity of biometric systems has led to serious concerns
regarding their societal impact as well as applicability of such systems in
private and public domains. A quantitative evaluation of demographic fairness
is an important step towards understanding, assessment, and mitigation of
demographic bias in biometric applications. While the few existing fairness
measures are based on post-decision data (such as verification accuracy) of
biometric systems, we discuss how pre-decision data (score distributions)
provide useful insights towards demographic fairness. In this paper, we
introduce multiple measures, based on the statistical characteristics of score
distributions, for the evaluation of demographic fairness of a generic
biometric verification system. We also propose different variants for each
fairness measure, depending on how the contributions from the constituent
demographic groups are combined into the final measure. In each case, the
behavior of the measure has been illustrated numerically and graphically on
synthetic data. The demographic imbalance in benchmarking datasets is often
overlooked during fairness assessment. We provide a novel weighting strategy to
reduce the effect of such imbalance through a non-linear function of sample
sizes of demographic groups. The proposed measures are independent of the
biometric modality and are thus applicable across commonly used biometric
modalities (e.g., face and fingerprint).
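The abstract does not spell out the measures themselves, so the following is a minimal sketch of the general idea only: a pre-decision fairness index computed from per-group genuine/impostor score distributions, with two variants for combining group contributions and a square-root sample-size weighting. The d-prime-style separation statistic, the variant names, and the choice of square root are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def separation(genuine, impostor):
    # d-prime-style separation between the genuine and impostor score
    # distributions of one demographic group (higher = better separated)
    mu_g, mu_i = np.mean(genuine), np.mean(impostor)
    var_g, var_i = np.var(genuine), np.var(impostor)
    return abs(mu_g - mu_i) / np.sqrt(0.5 * (var_g + var_i))

def fairness_index(groups, variant="minmax"):
    # groups: dict of name -> (genuine_scores, impostor_scores)
    # returns a value in (0, 1]; 1.0 = all groups separated equally well
    vals = np.array([separation(np.asarray(gen), np.asarray(imp))
                     for gen, imp in groups.values()])
    if variant == "minmax":  # worst-case combination of group contributions
        return vals.min() / vals.max()
    # "weighted" variant: average deviation from a weighted mean, where a
    # group's weight is a non-linear (sqrt) function of its sample size,
    # damping the effect of demographic imbalance in the benchmark
    sizes = np.array([len(gen) + len(imp) for gen, imp in groups.values()])
    w = np.sqrt(sizes)
    wmean = np.average(vals, weights=w)
    return 1.0 - np.average(np.abs(vals - wmean), weights=w) / wmean

# numeric illustration on synthetic scores for two imbalanced groups
rng = np.random.default_rng(0)
groups = {
    "group_A": (rng.normal(2.0, 1.0, 1000), rng.normal(0.0, 1.0, 1000)),
    "group_B": (rng.normal(1.5, 1.0, 200), rng.normal(0.0, 1.0, 200)),
}
print(fairness_index(groups, "minmax"))    # worst-case variant
print(fairness_index(groups, "weighted"))  # imbalance-aware variant
```

Because each variant aggregates the same per-group statistics differently, the choice of variant encodes a policy decision (worst-case versus population-weighted fairness), mirroring the abstract's point about how group contributions are combined.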
Related papers
- Comprehensive Equity Index (CEI): Definition and Application to Bias Evaluation in Biometrics [47.762333925222926]
We present a novel metric to quantify biased behaviors of machine learning models.
We focus on and apply it to the operational evaluation of face recognition systems.
arXiv Detail & Related papers (2024-09-03T14:19:38Z)
- Using Backbone Foundation Model for Evaluating Fairness in Chest Radiography Without Demographic Data [2.7436483977171333]
This study aims to investigate the effectiveness of using the backbone of Foundation Models as an embedding extractor.
We propose utilizing these groups in different stages of bias mitigation, including pre-processing, in-processing, and evaluation.
arXiv Detail & Related papers (2024-08-28T20:35:38Z)
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Metrics for Dataset Demographic Bias: A Case Study on Facial Expression Recognition [4.336779198334903]
Among the most prominent types of demographic bias are statistical imbalances in the representation of demographic groups in datasets; a simple balance measure is sketched after this entry.
We develop a taxonomy for the classification of these metrics, providing a practical guide for the selection of appropriate metrics.
The paper provides valuable insights for researchers in AI and related fields to mitigate dataset bias and improve the fairness and accuracy of AI models.
arXiv Detail & Related papers (2023-03-28T11:04:18Z)
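As a hedged aside, not a metric defined in the cited paper: one simple way to score the representation imbalance mentioned above is the normalized Shannon entropy of demographic group counts.

```python
import numpy as np

def representation_balance(counts):
    # normalized Shannon entropy of demographic group counts:
    # 1.0 = perfectly balanced dataset; near 0.0 = one group dominates
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # treat 0 * log(0) as 0
    return float(-(p * np.log(p)).sum() / np.log(len(counts)))

print(representation_balance([500, 500]))  # 1.0  (balanced)
print(representation_balance([900, 100]))  # ~0.47 (imbalanced)
```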
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features; a toy illustration follows this entry.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
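A minimal sketch of the idea named above, under stated assumptions: a finite-difference prediction-sensitivity score for a generic classifier. The function name, the central-difference scheme, and the accumulation rule are illustrative assumptions, not the paper's exact ACCUMULATED PREDICTION SENSITIVITY definition.

```python
import numpy as np

def prediction_sensitivity(predict_proba, x, eps=1e-3):
    # accumulate |dp/dx_i| over all input features via central
    # finite differences (a gradient-free approximation)
    x = np.asarray(x, dtype=float)
    total = 0.0
    for i in range(x.size):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        total += abs(predict_proba(hi) - predict_proba(lo)) / (2 * eps)
    return total

# toy logistic model: the score accumulates the absolute per-feature
# slopes of the predicted probability around the input point
w = np.array([0.5, -1.2, 2.0])
model = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
print(prediction_sensitivity(model, np.zeros(3)))  # 0.25 * sum(|w|) ~= 0.925
```

Comparing such scores across demographic groups would then expose whether a model is systematically more sensitive for some groups than for others.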
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fairness in Biometrics: a figure of merit to assess biometric verification systems [1.218340575383456]
We introduce the first figure of merit that is able to evaluate and compare fairness aspects between multiple biometric verification systems.
First, a use case with two synthetic biometric systems demonstrates the potential of this figure of merit.
Second, a use case using face biometrics is presented, where several systems are evaluated and compared with this new figure of merit.
arXiv Detail & Related papers (2020-11-04T16:46:37Z)