Evaluating Proposed Fairness Models for Face Recognition Algorithms
- URL: http://arxiv.org/abs/2203.05051v1
- Date: Wed, 9 Mar 2022 21:16:43 GMT
- Title: Evaluating Proposed Fairness Models for Face Recognition Algorithms
- Authors: John J. Howard, Eli J. Laird, Yevgeniy B. Sirotin, Rebecca E. Rubin,
Jerry L. Tipton, and Arun R. Vemury
- Abstract summary: This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of properties desirable in a face recognition algorithm fairness measure.
We believe this is currently the largest open-source dataset of its kind.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of face recognition algorithms by academic and commercial
organizations is growing rapidly due to the onset of deep learning and the
widespread availability of training data. Though tests of face recognition
algorithm performance indicate yearly performance gains, error rates for many
of these systems differ based on the demographic composition of the test set.
These "demographic differentials" in algorithm performance can contribute to
unequal or unfair outcomes for certain groups of people, raising concerns with
increased worldwide adoption of face recognition systems. Consequently,
regulatory bodies in both the United States and Europe have proposed new rules
requiring audits of biometric systems for "discriminatory impacts" (European
Union Artificial Intelligence Act) and "fairness" (U.S. Federal Trade
Commission). However, no standard for measuring fairness in biometric systems
yet exists. This paper characterizes two proposed measures of face recognition
algorithm fairness (fairness measures) from scientists in the U.S. and Europe.
We find that both proposed methods are challenging to interpret when applied to
disaggregated face recognition error rates as they are commonly experienced in
practice. To address this, we propose a set of interpretability criteria,
termed the Functional Fairness Measure Criteria (FFMC), that outlines a set of
properties desirable in a face recognition algorithm fairness measure. We
further develop a new fairness measure, the Gini Aggregation Rate for Biometric
Equitability (GARBE), and show how, in conjunction with Pareto
optimization, this measure can be used to select among alternative algorithms
based on the accuracy/fairness trade-space. Finally, we have open-sourced our
dataset of machine-readable, demographically disaggregated error rates. We
believe this is currently the largest open-source dataset of its kind.
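To make the measure concrete, here is a minimal Python sketch, under our own assumptions rather than the paper's reference implementation: a Gini coefficient with an n/(n-1) normalization is computed over per-group false match rates (FMR) and false non-match rates (FNMR), the two Ginis are blended with a weight alpha (the 0.5 default here is an arbitrary neutral choice), and a simple Pareto filter keeps only the algorithms not dominated on the accuracy/fairness axes.

```python
import numpy as np

def gini(rates):
    """Gini coefficient of per-group error rates, with an n/(n-1)
    correction so the maximum attainable value is 1 (our assumed
    normalization)."""
    x = np.asarray(rates, dtype=float)
    n = len(x)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).sum() / (n * n)
    return (mean_abs_diff / (2 * x.mean())) * n / (n - 1)

def garbe(fmr_by_group, fnmr_by_group, alpha=0.5):
    """Weighted blend of the FMR and FNMR Gini coefficients.
    alpha=0.5 is our neutral default, not a value from the paper."""
    return alpha * gini(fmr_by_group) + (1 - alpha) * gini(fnmr_by_group)

def pareto_front(candidates):
    """Keep (name, error, unfairness) tuples that no other candidate
    beats on both axes; lower is better for each."""
    return [c for c in candidates
            if not any(e <= c[1] and u <= c[2] and (e < c[1] or u < c[2])
                       for _, e, u in candidates)]

# Hypothetical disaggregated rates for one algorithm across four groups:
fmr = [0.0010, 0.0012, 0.0031, 0.0009]
fnmr = [0.020, 0.025, 0.022, 0.041]
print(garbe(fmr, fnmr))  # higher value -> less equitable
```

In this framing, each candidate algorithm contributes one (overall error, GARBE) point to the trade-space, and any algorithm that another candidate beats on both axes drops out of consideration.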
Related papers
- Comprehensive Equity Index (CEI): Definition and Application to Bias Evaluation in Biometrics
We present a novel metric to quantify biased behaviors of machine learning models.
We focus on and apply it to the operational evaluation of face recognition systems.
arXiv Detail & Related papers (2024-09-03T14:19:38Z)
- Individual Fairness under Uncertainty
Algorithmic fairness is an established area of machine learning (ML) research.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is an emerging consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Fairness in Matching under Uncertainty
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
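To make the linear programming idea concrete, here is a minimal sketch, built on our own illustrative reading rather than the paper's exact axioms: match probabilities are the LP variables, expected utility is the objective, and an individual-fairness constraint forces two candidates with similar estimated merit to receive similar total match probability. The utilities, the similar pair, and the tolerance are all hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical setup: 3 candidates, 2 positions; u[i, j] is the
# platform's expected utility of matching candidate i to position j.
u = np.array([[0.9, 0.4],
              [0.8, 0.5],
              [0.3, 0.7]])
n, m = u.shape

# Variables x[i, j] = probability that candidate i fills position j,
# flattened row-major; maximizing utility = minimizing -u.
c = -u.ravel()

A_ub, b_ub = [], []
for i in range(n):                       # each candidate matched w.p. <= 1
    row = np.zeros(n * m)
    row[i * m:(i + 1) * m] = 1
    A_ub.append(row)
    b_ub.append(1.0)
for j in range(m):                       # each position filled w.p. <= 1
    row = np.zeros(n * m)
    row[j::m] = 1
    A_ub.append(row)
    b_ub.append(1.0)

# Individual fairness on the merits (our illustrative constraint):
# candidates 0 and 1 have similar estimated merit, so their total
# match probabilities may differ by at most `tol`.
tol = 0.1
row = np.zeros(n * m)
row[0:m] = 1                             # total probability, candidate 0
row[m:2 * m] = -1                        # minus that of candidate 1
A_ub.append(row)
b_ub.append(tol)
A_ub.append(-row)
b_ub.append(tol)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * (n * m))
print(res.x.reshape(n, m))               # fair utility-maximizing distribution
```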
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Meta Balanced Network for Fair Face Recognition
We systematically study bias from both the data and the algorithm perspectives.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
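A rough sketch of the underlying idea, under our assumptions: a CosFace-style large margin loss in which the additive margin is a learnable per-group parameter rather than a global constant. MBN meta-learns these margins; the plain learnable parameter below is a simplified stand-in for that meta-learning.

```python
import torch
import torch.nn.functional as F

class AdaptiveMarginCosineLoss(torch.nn.Module):
    """CosFace-style additive-margin loss with one learnable margin
    per demographic group (a simplified stand-in for MBN's
    meta-learned margins)."""

    def __init__(self, num_classes, emb_dim, num_groups, scale=30.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, emb_dim))
        self.margins = torch.nn.Parameter(torch.full((num_groups,), 0.35))
        self.scale = scale

    def forward(self, embeddings, labels, groups):
        # Cosine similarity between normalized embeddings and class weights.
        logits = F.normalize(embeddings) @ F.normalize(self.weight).t()
        # Subtract each sample's group-specific margin from its
        # true-class logit, making the decision boundary group-adaptive.
        idx = torch.arange(len(labels))
        logits = logits.clone()
        logits[idx, labels] = logits[idx, labels] - self.margins[groups]
        return F.cross_entropy(self.scale * logits, labels)
```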
arXiv Detail & Related papers (2022-05-13T10:25:44Z)
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these approaches largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
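The sketch below illustrates one plausible reading of such a prediction-sensitivity probe: average the input-gradient magnitude of the model's score, weighted toward protected (or protected-correlated) features. The weighting scheme and function signature are our assumptions, not the paper's exact definition.

```python
import torch

def accumulated_prediction_sensitivity(model, x, feature_weights):
    """Average weighted input-gradient magnitude of the model's score.
    `feature_weights` up-weights protected or protected-correlated
    input features (our assumed weighting). Assumes `model` returns
    one scalar score per sample."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()        # summing gives per-sample gradients
    sens = x.grad.abs()              # (batch, n_features)
    return (sens * feature_weights).sum(dim=1).mean()
```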
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Domain-Incremental Continual Learning for Mitigating Bias in Facial Expression and Action Unit Recognition
We propose the novel use of Continual Learning (CL) as a potent bias mitigation method to enhance the fairness of facial expression recognition (FER) systems.
We compare different non-CL-based and CL-based methods for their classification accuracy and fairness scores on expression recognition and Action Unit (AU) detection tasks.
Our experimental results show that CL-based methods, on average, outperform other popular bias mitigation techniques on both accuracy and fairness metrics.
arXiv Detail & Related papers (2021-03-15T18:22:17Z)
- Mitigating Face Recognition Bias via Group Adaptive Classifier
This work aims to learn a fair face representation in which faces from every demographic group are more equally represented.
Our method mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
arXiv Detail & Related papers (2020-06-13T06:43:37Z)
- SensitiveLoss: Improving Accuracy and Fairness of Face Representations with Discrimination-Aware Deep Learning
We propose a discrimination-aware learning method to improve accuracy and fairness of biased face recognition algorithms.
We experimentally show that learning processes based on the most widely used face databases have led to popular pre-trained deep face models that exhibit strong algorithmic discrimination.
Our approach works as an add-on to pre-trained networks and is used to improve their performance in terms of average accuracy and fairness.
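As a hedged sketch of what such an add-on penalty could look like, the function below augments the average task loss with a term that grows with the spread of mean loss across demographic groups; the exact form of the penalty and the `lam` weight are illustrative assumptions, not the paper's formulation.

```python
import torch

def sensitive_loss(task_losses, groups, lam=0.5):
    """Add-on penalty: mean task loss plus `lam` times the spread of
    per-group mean losses. The spread penalty and `lam` default are
    illustrative assumptions, not the paper's exact definition."""
    group_means = torch.stack(
        [task_losses[groups == g].mean() for g in torch.unique(groups)])
    return task_losses.mean() + lam * (group_means.max() - group_means.min())
```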
arXiv Detail & Related papers (2020-04-22T10:32:16Z)