Review of Demographic Bias in Face Recognition
- URL: http://arxiv.org/abs/2502.02309v1
- Date: Tue, 04 Feb 2025 13:28:49 GMT
- Title: Review of Demographic Bias in Face Recognition
- Authors: Ketan Kotwal, Sebastien Marcel
- Abstract summary: This review consolidates research efforts, providing a comprehensive overview of the multifaceted aspects of demographic bias in FR.
We examine the primary causes, datasets, assessment metrics, and mitigation approaches associated with demographic disparities in FR.
This article aims to provide researchers with a unified perspective on the state-of-the-art while emphasizing the critical need for equitable and trustworthy FR systems.
- Score: 2.7624021966289605
- Abstract: Demographic bias in face recognition (FR) has emerged as a critical area of research, given its impact on fairness, equity, and reliability across diverse applications. As FR technologies are increasingly deployed globally, disparities in performance across demographic groups -- such as race, ethnicity, and gender -- have garnered significant attention. These biases not only compromise the credibility of FR systems but also raise ethical concerns, especially when these technologies are employed in sensitive domains. This review consolidates extensive research efforts, providing a comprehensive overview of the multifaceted aspects of demographic bias in FR. We systematically examine the primary causes, datasets, assessment metrics, and mitigation approaches associated with demographic disparities in FR. By categorizing key contributions in these areas, this work provides a structured approach to understanding and addressing the complexity of this issue. Finally, we highlight current advancements and identify emerging challenges that need further investigation. This article aims to provide researchers with a unified perspective on the state-of-the-art while emphasizing the critical need for equitable and trustworthy FR systems.
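As a rough illustration of the kind of assessment metric the abstract refers to, the sketch below computes per-group false match rates (FMR) and a simple max/min disparity ratio on synthetic scores. This is a generic fairness diagnostic, not the paper's own methodology; all function names, thresholds, and data are illustrative assumptions.

```python
import numpy as np

def false_match_rate(scores, labels, threshold):
    """FMR: fraction of impostor pairs (label 0) scoring at or above threshold."""
    impostor = scores[labels == 0]
    return float((impostor >= threshold).mean())

def fmr_disparity(groups, threshold=0.5):
    """Per-group FMRs and their max/min ratio; 1.0 means perfectly balanced."""
    fmrs = {g: false_match_rate(s, y, threshold) for g, (s, y) in groups.items()}
    vals = list(fmrs.values())
    return fmrs, max(vals) / min(vals)

rng = np.random.default_rng(0)
# Synthetic comparison scores and impostor/genuine labels for two
# hypothetical demographic groups (illustrative only).
groups = {
    "group_a": (rng.uniform(0, 1, 1000), rng.integers(0, 2, 1000)),
    "group_b": (rng.uniform(0, 1, 1000), rng.integers(0, 2, 1000)),
}
fmrs, ratio = fmr_disparity(groups)
print(fmrs, ratio)
```

In practice, surveyed metrics range from such simple ratio or gap statistics over per-group error rates to more elaborate aggregate measures; the max/min ratio here is just the most direct way to expose a disparity.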
Related papers
- Optimisation Strategies for Ensuring Fairness in Machine Learning: With and Without Demographics [4.662958544712181]
This paper introduces two formal frameworks to tackle open questions in machine learning fairness.
In one framework, operator-valued optimisation and min-max objectives are employed to address unfairness in time-series problems.
In the second framework, the challenge of lacking sensitive attributes, such as gender and race, in commonly used datasets is addressed.
arXiv Detail & Related papers (2024-11-13T22:29:23Z)
- A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research [5.08731160761218]
Group fairness in machine learning is a critical area of research focused on achieving equitable outcomes across different groups.
Federated learning amplifies the need for fairness due to the heterogeneous data distributions across clients.
No dedicated survey has focused comprehensively on group fairness in federated learning.
We create a novel taxonomy of these approaches based on key criteria such as data partitioning, location, and applied strategies.
arXiv Detail & Related papers (2024-10-04T18:39:28Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
The "Leave No One Behind" initiative urges us to address multiple and intersecting forms of inequality in accessing services, resources, and opportunities.
An increasing number of AI tools are applied to decision-making processes in various sectors such as health, energy, and housing.
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- Towards Fair Face Verification: An In-depth Analysis of Demographic Biases [11.191375513738361]
Deep learning-based person identification and verification systems have remarkably improved in terms of accuracy in recent years.
However, such systems have been found to exhibit significant biases related to race, age, and gender.
This paper presents an in-depth analysis, with a particular emphasis on the intersectionality of these demographic factors.
arXiv Detail & Related papers (2023-07-19T14:49:14Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Deep Dive into Dataset Imbalance and Bias in Face Identification [49.210042420757894]
Media portrayals often present data imbalance as the main source of bias in automated face recognition systems.
Previous studies of data imbalance in FR have focused exclusively on the face verification setting.
This work thoroughly explores the effects of each kind of imbalance possible in face identification, and discusses other factors that may impact bias in this setting.
arXiv Detail & Related papers (2022-03-15T20:23:13Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Improving Fairness of AI Systems with Lossless De-biasing [15.039284892391565]
Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge.
We present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group.
arXiv Detail & Related papers (2021-05-10T17:38:38Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information (including all listed papers) and is not responsible for any consequences of its use.