A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research
- URL: http://arxiv.org/abs/2410.03855v1
- Date: Fri, 4 Oct 2024 18:39:28 GMT
- Title: A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research
- Authors: Teresa Salazar, Helder Araújo, Alberto Cano, Pedro Henriques Abreu
- Abstract summary: Group fairness in machine learning is a critical area of research focused on achieving equitable outcomes across different groups.
Federated learning amplifies the need for fairness due to the heterogeneous data distributions across clients.
No dedicated survey has focused comprehensively on group fairness in federated learning.
We create a novel taxonomy of these approaches based on key criteria such as data partitioning, location, and applied strategies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Group fairness in machine learning is a critical area of research focused on achieving equitable outcomes across different groups defined by sensitive attributes such as race or gender. Federated learning, a decentralized approach to training machine learning models across multiple devices or organizations without sharing raw data, amplifies the need for fairness due to the heterogeneous data distributions across clients, which can exacerbate biases. The intersection of federated learning and group fairness has attracted significant interest, with 47 research works specifically dedicated to addressing this issue. However, no dedicated survey has focused comprehensively on group fairness in federated learning. In this work, we present an in-depth survey on this topic, addressing the critical challenges and reviewing related works in the field. We create a novel taxonomy of these approaches based on key criteria such as data partitioning, location, and applied strategies. Additionally, we explore broader concerns related to this problem and investigate how different approaches handle the complexities of various sensitive groups and their intersections. Finally, we review the datasets and applications commonly used in current research. We conclude by highlighting key areas for future research, emphasizing the need for more methods to address the complexities of achieving group fairness in federated systems.
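The abstract defines group fairness as equitable outcomes across groups defined by sensitive attributes such as race or gender. A common way to make this concrete is the demographic parity difference: the gap in positive-prediction rates between sensitive groups. The sketch below is illustrative only; the survey itself covers many fairness metrics, and the function name and toy data here are our own.

```python
import numpy as np

def demographic_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between two sensitive groups.

    y_pred    : binary predictions (0/1)
    sensitive : binary sensitive attribute (0 = group A, 1 = group B)
    """
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_a = y_pred[sensitive == 0].mean()  # positive rate in group A
    rate_b = y_pred[sensitive == 1].mean()  # positive rate in group B
    return abs(rate_a - rate_b)

# Toy example: group A receives positives at 0.75, group B at 0.25.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A perfectly fair predictor under this metric scores 0.0; in federated settings the difficulty is that each client sees only its own (often skewed) slice of each group.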
Related papers
- Fair Clustering: Critique, Caveats, and Future Directions [11.077625489695922]
Clustering is a fundamental problem in machine learning and operations research.
We take a critical view of fair clustering, identifying a collection of ignored issues.
arXiv Detail & Related papers (2024-06-22T23:34:53Z) - Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
arXiv Detail & Related papers (2024-05-24T08:10:31Z) - Advances in Robust Federated Learning: Heterogeneity Considerations [25.261572089655264]
The key challenge is to efficiently train models across multiple clients that differ in data distributions, model structures, task objectives, computational capabilities, and communication resources.
In this paper, we first outline the basic concepts of heterogeneous federated learning.
We then summarize the research challenges in federated learning in terms of five aspects: data, model, task, device, and communication.
arXiv Detail & Related papers (2024-05-16T06:35:42Z) - Supervised Algorithmic Fairness in Distribution Shifts: A Survey [17.826312801085052]
In real-world applications, machine learning models are often trained on a specific dataset but deployed in environments where the data distribution may shift.
This shift can lead to unfair predictions, disproportionately affecting certain groups characterized by sensitive attributes, such as race and gender.
arXiv Detail & Related papers (2024-02-02T11:26:18Z) - Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark [55.898771405172155]
Federated learning has emerged as a promising paradigm for privacy-preserving collaboration among different parties.
We provide a systematic overview of the important and recent developments of research on federated learning.
arXiv Detail & Related papers (2023-11-12T06:32:30Z) - Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z) - Deep Clustering: A Comprehensive Survey [53.387957674512585]
Clustering analysis plays an indispensable role in machine learning and data mining.
Deep clustering, which can learn clustering-friendly representations using deep neural networks, has been broadly applied in a wide range of clustering tasks.
Existing surveys for deep clustering mainly focus on the single-view fields and the network architectures, ignoring the complex application scenarios of clustering.
arXiv Detail & Related papers (2022-10-09T02:31:32Z) - FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for training machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive-attribute distributions.
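FairFed is summarized above only as "a fairness-aware aggregation method". As a minimal sketch of the general idea (not FairFed's actual update rule, whose details are in the paper), a server can down-weight updates from clients whose local models exhibit larger fairness gaps. The function name, exponential weighting, and parameters below are our own assumptions for illustration.

```python
import numpy as np

def fairness_aware_aggregate(client_params, local_gaps, beta=1.0):
    """Illustrative fairness-aware aggregation sketch (not FairFed's exact rule).

    client_params : list of 1-D parameter vectors, one per client
    local_gaps    : per-client fairness gap (e.g., demographic parity difference)
    beta          : how strongly to down-weight clients with larger gaps
    """
    gaps = np.asarray(local_gaps, dtype=float)
    # Clients with larger local fairness gaps receive exponentially smaller weight.
    weights = np.exp(-beta * gaps)
    weights /= weights.sum()
    stacked = np.stack([np.asarray(p, dtype=float) for p in client_params])
    # Weighted average of client parameters -> new global model.
    return weights @ stacked

# Two clients: client 0 is locally fair (gap 0.0), client 1 is not (gap 0.5).
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_params = fairness_aware_aggregate(params, local_gaps=[0.0, 0.5], beta=2.0)
```

With equal gaps this reduces to plain federated averaging; with `beta=0` the fairness signal is ignored entirely, so `beta` trades off fairness against each client's contribution.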
arXiv Detail & Related papers (2021-10-02T17:55:20Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z) - Heterogeneous Representation Learning: A Review [66.12816399765296]
Heterogeneous Representation Learning (HRL) brings some unique challenges.
We present a unified learning framework which is able to model most existing learning settings with the heterogeneous inputs.
We highlight the challenges that are less-touched in HRL and present future research directions.
arXiv Detail & Related papers (2020-04-28T05:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.