FairFed: Enabling Group Fairness in Federated Learning
- URL: http://arxiv.org/abs/2110.00857v1
- Date: Sat, 2 Oct 2021 17:55:20 GMT
- Title: FairFed: Enabling Group Fairness in Federated Learning
- Authors: Yahya H. Ezzeldin, Shen Yan, Chaoyang He, Emilio Ferrara, Salman
Avestimehr
- Abstract summary: Federated learning has been viewed as a promising solution for collaboratively learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
- Score: 22.913999279079878
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As machine learning becomes increasingly incorporated in crucial
decision-making scenarios such as healthcare, recruitment, and loan assessment,
there have been increasing concerns about the privacy and fairness of such
systems. Federated learning has been viewed as a promising solution for
collaboratively learning machine learning models among multiple parties while
maintaining the privacy of their local data. However, federated learning also
poses new challenges in mitigating the potential bias against certain
populations (e.g., demographic groups), which typically requires centralized
access to the sensitive information (e.g., race, gender) of each data point.
Motivated by the importance and challenges of group fairness in federated
learning, in this work, we propose FairFed, a novel algorithm to enhance group
fairness via a fairness-aware aggregation method, aiming to provide fair model
performance across different sensitive groups (e.g., racial, gender groups)
while maintaining high utility. The formulation can potentially provide greater
flexibility in customizing local debiasing strategies for each client. When
running federated training on two widely investigated fairness datasets, Adult
and COMPAS, our proposed method outperforms the state-of-the-art fair federated
learning frameworks under highly heterogeneous sensitive attribute
distributions.
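The paper specifies the exact FairFed update rule; as a rough illustration of the kind of fairness-aware aggregation the abstract describes, the hypothetical sketch below reweights FedAvg contributions by each client's deviation from a global fairness gap. The metric helper, the exponential penalty, and the `beta` parameter are assumptions for illustration, not the authors' formulation:

```python
import math

def equal_opportunity_gap(y_true, y_pred, group):
    """Equal opportunity difference: absolute gap in true positive rate
    between two sensitive groups (labeled 0 and 1). A common group-fairness
    metric a client could compute locally."""
    def tpr(g):
        pos = [(yt, yp) for yt, yp, gr in zip(y_true, y_pred, group)
               if gr == g and yt == 1]
        return sum(yp for _, yp in pos) / len(pos) if pos else 0.0
    return abs(tpr(0) - tpr(1))

def fairness_aware_weights(data_sizes, local_gaps, beta=1.0):
    """Adjust FedAvg aggregation weights using per-client fairness gaps.

    data_sizes: samples per client (the usual FedAvg base weights).
    local_gaps: each client's locally computed fairness metric.
    beta: fairness/utility trade-off; beta=0 recovers plain FedAvg.
    """
    n = sum(data_sizes)
    base = [s / n for s in data_sizes]                     # FedAvg weights
    global_gap = sum(b * g for b, g in zip(base, local_gaps))
    # Shrink the weight of clients whose local gap deviates most from
    # the global gap (an assumed exponential penalty, for illustration).
    adjusted = [b * math.exp(-beta * abs(g - global_gap))
                for b, g in zip(base, local_gaps)]
    total = sum(adjusted)
    return [a / total for a in adjusted]                   # renormalize
```

With two clients holding 100 and 50 samples and local gaps of 0.05 and 0.30, the fairer client's relative weight rises above its plain FedAvg share, while the weights still sum to one.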
Related papers
- From Optimization to Generalization: Fair Federated Learning against Quality Shift via Inter-Client Sharpness Matching [10.736121438623003]
Federated learning has been recognized as a vital approach for training deep neural networks with decentralized medical data.
In practice, it is challenging to ensure consistent imaging quality across various institutions.
This imbalance in image quality can cause the federated model to develop an inherent bias towards higher-quality images.
arXiv Detail & Related papers (2024-04-27T07:05:41Z)
- Distribution-Free Fair Federated Learning with Small Samples [54.63321245634712]
FedFaiREE is a post-processing algorithm developed specifically for distribution-free fair learning in decentralized settings with small samples.
We provide rigorous theoretical guarantees for both fairness and accuracy, and our experimental results further provide robust empirical validation for our proposed method.
arXiv Detail & Related papers (2024-02-25T17:37:53Z)
- Dynamic Fair Federated Learning Based on Reinforcement Learning [19.033986978896074]
Federated learning enables a collaborative training and optimization of global models among a group of devices without sharing local data samples.
We propose a dynamic q fairness federated learning algorithm with reinforcement learning, called DQFFL.
Our DQFFL outperforms the state-of-the-art methods in terms of overall performance, fairness and convergence speed.
arXiv Detail & Related papers (2023-11-02T03:05:40Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FAIR-FATE: Fair Federated Learning with Momentum [0.41998444721319217]
We propose a novel FAIR FederATEd Learning algorithm that aims to achieve group fairness while maintaining high utility.
To the best of our knowledge, this is the first approach in machine learning that aims to achieve fairness using a fair Momentum estimate.
Experimental results on real-world datasets demonstrate that FAIR-FATE outperforms state-of-the-art fair Federated Learning algorithms.
arXiv Detail & Related papers (2022-09-27T20:33:38Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Unified Group Fairness on Federated Learning [22.143427873780404]
Federated learning (FL) has emerged as an important machine learning paradigm where a global model is trained based on private data from distributed clients.
Recent research focuses on achieving fairness among clients but ignores fairness towards different groups formed by sensitive attribute(s) (e.g., gender and/or race).
We propose a novel FL algorithm, named Group Distributionally Robust Federated Averaging (G-DRFA), which mitigates the distribution shift across groups with theoretical analysis of convergence rate.
arXiv Detail & Related papers (2021-11-09T08:21:38Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair) and propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.