Trade-offs Between Individual and Group Fairness in Machine Learning: A Comprehensive Review
- URL: http://arxiv.org/abs/2602.00094v1
- Date: Fri, 23 Jan 2026 19:25:38 GMT
- Title: Trade-offs Between Individual and Group Fairness in Machine Learning: A Comprehensive Review
- Authors: Sandra Benítez-Peña, Blas Kolic, Victoria Menendez, Belén Pulido,
- Abstract summary: Group Fairness (GF) focuses on mitigating disparities across demographic subpopulations, and Individual Fairness (IF) emphasizes consistent treatment of similar individuals. This survey examines methods that jointly address GF and IF, integrating both perspectives within unified frameworks. We provide a systematic and critical review of hybrid fairness approaches, organizing existing methods according to the fairness mechanisms they employ and the algorithmic and mathematical strategies used to reconcile multiple fairness criteria.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness has become a central concern in computational decision-making systems, where ensuring equitable outcomes is essential for both ethical and legal reasons. Two dominant notions of fairness have emerged in the literature: Group Fairness (GF), which focuses on mitigating disparities across demographic subpopulations, and Individual Fairness (IF), which emphasizes consistent treatment of similar individuals. These notions have traditionally been studied in isolation. In contrast, this survey examines methods that jointly address GF and IF, integrating both perspectives within unified frameworks and explicitly characterizing the trade-offs between them. We provide a systematic and critical review of hybrid fairness approaches, organizing existing methods according to the fairness mechanisms they employ and the algorithmic and mathematical strategies used to reconcile multiple fairness criteria. For each class of methods, we examine their theoretical foundations, optimization mechanisms, and empirical evaluation practices, and discuss their limitations. Additionally, we discuss the challenges and identify open research directions for developing principled, context-aware hybrid fairness methods. By synthesizing insights across the literature, this survey aims to serve as a comprehensive resource for researchers and practitioners seeking to design hybrid algorithms that provide reliable fairness guarantees at both the individual and group levels.
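The two notions in the abstract can be made concrete with standard metrics from the fairness literature. The sketch below is not from the survey itself; all data and variable names are illustrative. It contrasts a common group-fairness measure, the demographic-parity gap, with a common individual-fairness proxy, the k-NN consistency score of Zemel et al. (2013), on a toy classifier:

```python
# Minimal sketch (illustrative data, not from the survey) contrasting
# group fairness and individual fairness on a toy binary classifier.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))          # non-protected features
a = rng.integers(0, 2, size=n)       # protected attribute (two groups)
y_hat = (X[:, 0] + 0.3 * a + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Group fairness: demographic-parity gap |P(y_hat=1 | a=0) - P(y_hat=1 | a=1)|
dp_gap = abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

# Individual fairness proxy: similar individuals (nearest neighbors in X)
# should receive similar predictions (k-NN consistency, Zemel et al. 2013).
k = 10
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)            # idx[:, 0] is each point itself
consistency = 1 - np.abs(y_hat - y_hat[idx[:, 1:]].mean(axis=1)).mean()

print(f"demographic-parity gap: {dp_gap:.3f}")
print(f"k-NN consistency:       {consistency:.3f}")
```

A model can score well on one of these metrics while scoring poorly on the other, which is precisely the tension the surveyed hybrid methods try to manage.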
Related papers
- Toward Unifying Group Fairness Evaluation from a Sparsity Perspective [14.456880823997757]
We propose a unified sparsity-based framework for evaluating algorithmic fairness. The framework aligns with existing fairness criteria and demonstrates broad applicability to a wide range of machine learning tasks.
arXiv Detail & Related papers (2025-11-01T02:02:11Z) - A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research [3.521369597288705]
Group fairness in machine learning is an important area of research focused on achieving equitable outcomes across different groups. Federated Learning amplifies the need for fairness methodologies due to its inherent heterogeneous data distributions. No comprehensive survey has specifically focused on group fairness in Federated Learning.
arXiv Detail & Related papers (2024-10-04T18:39:28Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
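The abstract leaves the mechanics implicit; one way to read the "counterpart" idea is to compare each individual not against a raw group average but against a matched individual from the other group with similar non-protected features. The following is a hypothetical sketch of that general matching idea, not the paper's actual procedure:

```python
# Hedged sketch of matching-based fairness evaluation: pair each individual
# in group 1 with its nearest neighbor in group 0 on the *non-protected*
# features, then compare predictions within matched pairs. Illustration of
# the general idea only; not the paper's method.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def counterpart_gap(X_nonprot, a, y_hat):
    """Mean prediction difference between matched cross-group pairs."""
    X0, X1 = X_nonprot[a == 0], X_nonprot[a == 1]
    y0, y1 = y_hat[a == 0], y_hat[a == 1]
    nn = NearestNeighbors(n_neighbors=1).fit(X0)
    _, match = nn.kneighbors(X1)      # each group-1 member's counterpart
    return np.mean(y1 - y0[match[:, 0]])
```

Because matched pairs agree (approximately) on the non-protected covariates, any remaining prediction gap is easier to attribute to the protected attribute itself rather than to systematic between-group differences.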
arXiv Detail & Related papers (2023-05-29T15:41:12Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z) - Increasing Fairness via Combination with Learning Guarantees [8.116369626226815]
We propose a fairness quality measure named discriminative risk (DR) to reflect both individual and group fairness aspects. We investigate its properties and establish first- and second-order oracle bounds to show that fairness can be boosted via ensemble combination with theoretical learning guarantees.
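The abstract does not define DR, so the toy harness below substitutes the demographic-parity gap as a stand-in disparity measure. It only shows how one might compare the disparity of individual ensemble members against their majority-vote combination; it makes no claim about the paper's measure or its oracle bounds:

```python
# Illustrative only: DR is not defined in the abstract, so the
# demographic-parity gap stands in as the disparity measure for
# comparing ensemble members against their majority-vote combination.
import numpy as np

def dp_gap(y_hat, a):
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

rng = np.random.default_rng(1)
n, n_models = 2000, 7
a = rng.integers(0, 2, size=n)
score = rng.normal(size=n) + 0.2 * a               # shared biased signal
members = [(score + rng.normal(scale=1.0, size=n) > 0).astype(int)
           for _ in range(n_models)]
ensemble = (np.mean(members, axis=0) > 0.5).astype(int)  # majority vote

print("member gaps: ", [round(dp_gap(m, a), 3) for m in members])
print("ensemble gap:", round(dp_gap(ensemble, a), 3))
```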
arXiv Detail & Related papers (2023-01-25T20:31:06Z) - Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, even ostensibly objective machine learning (ML) algorithms are prone to bias, which can still result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them, as well as with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)