Achieving Fairness Across Local and Global Models in Federated Learning
- URL: http://arxiv.org/abs/2406.17102v1
- Date: Mon, 24 Jun 2024 19:42:16 GMT
- Title: Achieving Fairness Across Local and Global Models in Federated Learning
- Authors: Disha Makhija, Xing Han, Joydeep Ghosh, Yejin Kim
- Abstract summary: This study introduces EquiFL, a novel approach designed to enhance both local and global fairness in Federated Learning environments.
EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness.
We demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness.
- Score: 9.902848777262918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Achieving fairness across diverse clients in Federated Learning (FL) remains a significant challenge due to the heterogeneity of the data and the inaccessibility of sensitive attributes from clients' private datasets. This study addresses this issue by introducing EquiFL, a novel approach designed to enhance both local and global fairness in federated learning environments. EquiFL incorporates a fairness term into the local optimization objective, effectively balancing local performance and fairness. The proposed coordination mechanism also prevents bias from propagating across clients during the collaboration phase. Through extensive experiments across multiple benchmarks, we demonstrate that EquiFL not only strikes a better balance between accuracy and fairness locally at each client but also achieves global fairness. The results also indicate that EquiFL ensures uniform performance distribution among clients, thus contributing to performance fairness. Furthermore, we showcase the benefits of EquiFL in a real-world distributed dataset from a healthcare application, specifically in predicting the effects of treatments on patients across various hospital locations.
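The abstract's core mechanism, a fairness term added to each client's local optimization objective, can be illustrated with a minimal sketch. The differentiable demographic-parity surrogate and the logistic model below are assumptions for illustration only; the abstract does not specify the paper's exact fairness term.

```python
import torch

def local_fairness_objective(model, x, y, group, lam=1.0):
    # Task loss: standard binary cross-entropy on this client's batch.
    logits = model(x).squeeze(-1)
    task = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    # Fairness term: squared gap between the groups' mean predicted rates,
    # a differentiable demographic-parity surrogate (an assumption here,
    # not necessarily the term used by the paper).
    p = torch.sigmoid(logits)
    gap = p[group == 0].mean() - p[group == 1].mean()
    return task + lam * gap ** 2

# One local round on a client (both groups assumed present in the batch).
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,)).float()
group = torch.randint(0, 2, (64,))
for _ in range(5):
    opt.zero_grad()
    local_fairness_objective(model, x, y, group, lam=0.5).backward()
    opt.step()
```

Raising `lam` trades local accuracy for a smaller between-group gap, which is the balance the abstract describes.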
Related papers
- WassFFed: Wasserstein Fair Federated Learning [31.135784690264888]
Federated Learning (FL) trains models collaboratively in scenarios where users' data cannot be shared across clients.
We propose a Wasserstein Fair Federated Learning framework, namely WassFFed.
arXiv Detail & Related papers (2024-11-11T11:26:22Z)
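The summary names the framework but not its mechanics. The Wasserstein-fairness notion it builds on is commonly quantified as the distance between the model's score distributions for different protected groups; below is a minimal sketch of that measurement (illustrative only, not WassFFed's training algorithm).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def groupwise_wasserstein_unfairness(scores: np.ndarray, group: np.ndarray) -> float:
    """1-D Wasserstein distance between the score distributions of two
    protected groups; 0 means the groups receive identical score
    distributions. Illustrates the 'Wasserstein fairness' notion only."""
    return wasserstein_distance(scores[group == 0], scores[group == 1])

scores = np.random.rand(200)           # model outputs in [0, 1]
group = np.random.randint(0, 2, 200)   # binary sensitive attribute
print(groupwise_wasserstein_unfairness(scores, group))
```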
- Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization [81.32266996009575]
In federated learning (FL), the multi-step update and data heterogeneity among clients often lead to a loss landscape with sharper minima.
We propose FedLESAM, a novel algorithm that locally estimates the direction of global perturbation on the client side.
arXiv Detail & Related papers (2024-05-29T08:46:21Z)
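One way to read "locally estimates the direction of global perturbation" is to take the SAM ascent step along the change between consecutively received global models rather than along the local gradient. The sketch below follows that reading; the sign convention, scaling, and `rho` are assumptions, not the paper's exact recipe.

```python
import torch

def lesam_style_step(model, loss_fn, x, y, prev_global, curr_global, rho=0.05, lr=0.1):
    """SAM-style update whose ascent perturbation is estimated from the
    difference between the two most recently received global models
    (a sketch of the summary's idea, with assumed sign and scaling)."""
    direction = [pg - cg for pg, cg in zip(prev_global, curr_global)]
    norm = torch.sqrt(sum((d ** 2).sum() for d in direction)) + 1e-12
    with torch.no_grad():
        for p, d in zip(model.parameters(), direction):
            p.add_(rho * d / norm)              # ascend along estimated direction
    loss_fn(model(x), y).backward()             # gradient at the perturbed point
    with torch.no_grad():
        for p, d in zip(model.parameters(), direction):
            p.sub_(rho * d / norm)              # undo the perturbation
            p.sub_(lr * p.grad); p.grad = None  # descend with the SAM gradient

model = torch.nn.Linear(4, 1)
curr = [p.detach().clone() for p in model.parameters()]
prev = [c + 0.01 * torch.randn_like(c) for c in curr]
lesam_style_step(model, torch.nn.functional.mse_loss,
                 torch.randn(32, 4), torch.randn(32, 1), prev, curr)
```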
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
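To make the contrast concrete, here is a minimal sketch of the aggregate-then-adapt baseline that FedAF departs from: a FedAvg-style round in which clients adapt the latest global model locally and the server averages the results. The regression task and hyperparameters are placeholders.

```python
import copy
import torch

def fedavg_round(global_model, clients, lr=0.1, local_steps=5):
    """One 'aggregate-then-adapt' round: each client fine-tunes a copy of
    the global model on its own data, then the server averages weights."""
    states = []
    for x, y in clients:                        # each client holds (x, y)
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_steps):
            opt.zero_grad()
            torch.nn.functional.mse_loss(local(x), y).backward()
            opt.step()
        states.append(local.state_dict())
    avg = {k: torch.stack([s[k] for s in states]).mean(0) for k in states[0]}
    global_model.load_state_dict(avg)           # server-side aggregation

clients = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]
model = torch.nn.Linear(4, 1)
fedavg_round(model, clients)
```

FedAF removes the server averaging step above; the paper's replacement mechanism is not described in this summary.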
- FedCRL: Personalized Federated Learning with Contrastive Shared Representations for Label Heterogeneity in Non-IID Data [13.146806294562474]
This paper proposes a novel personalized federated learning algorithm, named Federated Contrastive Shareable Representations (FedCoSR).
Parameters of local models' shallow layers and typical local representations are both treated as shareable information for the server.
To address poor performance caused by label distribution skew among clients, contrastive learning is adopted between local and global representations.
arXiv Detail & Related papers (2024-04-27T14:05:18Z)
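The contrastive coupling between local and global representations described above can be sketched as an InfoNCE-style loss: each sample's local representation is pulled toward its global counterpart and pushed away from other samples'. This formulation is an assumption; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def local_global_contrastive_loss(z_local, z_global, tau=0.5):
    """InfoNCE-style loss over a batch of paired local/global
    representations; the positive for row i is the global row i."""
    z_l = F.normalize(z_local, dim=1)
    z_g = F.normalize(z_global, dim=1)
    logits = z_l @ z_g.t() / tau            # cosine similarity / temperature
    targets = torch.arange(z_l.size(0))     # positive pair = same row index
    return F.cross_entropy(logits, targets)

loss = local_global_contrastive_loss(torch.randn(16, 32), torch.randn(16, 32))
```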
- FedLoGe: Joint Local and Generic Federated Learning under Long-tailed Data [46.29190753993415]
Federated Long-Tailed Learning (Fed-LT) is a paradigm wherein data collected from decentralized local clients manifests a globally prevalent long-tailed distribution.
This paper introduces an approach termed Federated Local and Generic Model Training in Fed-LT (FedLoGe), which enhances both local and generic model performance.
arXiv Detail & Related papers (2024-01-17T05:04:33Z)
- GLOCALFAIR: Jointly Improving Global and Local Group Fairness in Federated Learning [8.033939709734451]
Federated learning (FL) has emerged as a promising solution for collaboratively learning a shared model across clients without sacrificing their data privacy.
FL tends to be biased against certain demographic groups due to inherent FL properties, such as data heterogeneity and party selection.
We propose GFAIR, a client-server co-design that can improve global and local group fairness without needing sensitive statistics about clients' private datasets.
arXiv Detail & Related papers (2024-01-07T18:10:14Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
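The summary mentions a measure of class-imbalance used to guide client sampling. One plausible instantiation is a quadratic measure of how far the pooled label distribution is from uniform, combined with a greedy selection rule; the homomorphic-encryption step mentioned in the summary is omitted in this sketch.

```python
import numpy as np

def class_imbalance_degree(label_counts: np.ndarray) -> float:
    """Quadratic measure of imbalance: sum_c p_c^2, which is minimized
    (= 1/C) when the pooled labels are perfectly balanced. One plausible
    instantiation of the summary's measure, not necessarily Fed-CBS's."""
    p = label_counts / label_counts.sum()
    return float((p ** 2).sum())

def greedy_balanced_sampling(client_counts, k):
    """Greedily pick k clients whose pooled labels minimize the measure."""
    chosen, pool = [], np.zeros_like(client_counts[0], dtype=float)
    remaining = list(range(len(client_counts)))
    for _ in range(k):
        best = min(remaining,
                   key=lambda i: class_imbalance_degree(pool + client_counts[i]))
        chosen.append(best); remaining.remove(best)
        pool = pool + client_counts[best]
    return chosen

counts = np.random.randint(1, 50, size=(10, 5))  # 10 clients, 5 classes
print(greedy_balanced_sampling(counts, k=3))
```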
- FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution [7.924081556869144]
We present a new algorithm, FLIS, which groups the client population into clusters with jointly trainable data distributions.
We present experimental results demonstrating the benefits of FLIS over state-of-the-art benchmarks on the CIFAR-100/10, SVHN, and FMNIST datasets.
arXiv Detail & Related papers (2022-08-20T22:10:48Z)
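Grouping clients by inference similarity can be sketched by comparing client models' outputs on a shared auxiliary batch and clustering the resulting similarity structure. The cosine distance and average-linkage clustering below are assumptions, not necessarily FLIS's exact choices.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_by_inference_similarity(client_logits, n_clusters=2):
    """Group clients whose models predict similarly on a shared auxiliary
    batch. client_logits: (n_clients, n_samples, n_classes)."""
    flat = client_logits.reshape(len(client_logits), -1)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    dist = 1.0 - flat @ flat.T                  # cosine distance between clients
    cond = dist[np.triu_indices(len(dist), k=1)]
    return fcluster(linkage(cond, method="average"),
                    n_clusters, criterion="maxclust")

logits = np.random.randn(6, 20, 10)             # 6 clients, 20 samples, 10 classes
print(cluster_by_inference_similarity(logits))  # cluster label per client
```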
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local and global model parameters.
Experimental results and analysis demonstrate that FedDC yields faster convergence and better performance on various image classification tasks.
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
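The auxiliary drift variable described above can be sketched as follows: each client accumulates the gap between its local and global parameters, and a quadratic penalty keeps the drift-corrected local model near the global one. FedDC's full objective also contains a gradient-correction term that this sketch omits.

```python
import copy
import torch

def feddc_style_local_round(model, drift, global_state, x, y,
                            alpha=0.1, lr=0.05, steps=5):
    """Local training with a per-client drift variable (a sketch of the
    decoupling idea; the exact FedDC objective has an extra correction)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        for name, p in model.named_parameters():   # ||w + h - w_global||^2 penalty
            loss = loss + (alpha / 2) * ((p + drift[name] - global_state[name]) ** 2).sum()
        loss.backward()
        opt.step()
    with torch.no_grad():                           # accumulate the new gap
        for name, p in model.named_parameters():
            drift[name] += p.detach() - global_state[name]

model = torch.nn.Linear(4, 1)
global_state = copy.deepcopy(model.state_dict())
drift = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
feddc_style_local_round(model, drift, global_state,
                        torch.randn(32, 4), torch.randn(32, 1))
```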
- Fair and Consistent Federated Learning [48.19977689926562]
Federated learning (FL) has gained growing interest for its capability of learning from distributed data sources collectively.
We propose an FL framework to jointly consider performance consistency and algorithmic fairness across different local clients.
arXiv Detail & Related papers (2021-08-19T01:56:08Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
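Zero-shot data augmentation here means synthesizing samples for under-represented data without access to real examples. Below is a hedged sketch of one such generator, which optimizes noise until the current model confidently predicts the target class; this is illustrative only, and the paper's procedure may differ.

```python
import torch

def synthesize_underrepresented(model, n, target_class, steps=50, lr=0.1):
    """Optimize random noise so the current classifier assigns it to an
    under-represented class, yielding synthetic training pairs with no
    real data (one plausible 'zero-shot augmentation', an assumption)."""
    x = torch.randn(n, model.in_features, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    y = torch.full((n,), target_class)
    for _ in range(steps):
        opt.zero_grad()
        torch.nn.functional.cross_entropy(model(x), y).backward()
        opt.step()
    return x.detach(), y

model = torch.nn.Linear(8, 3)   # stand-in classifier
x_aug, y_aug = synthesize_underrepresented(model, n=16, target_class=2)
```

In Fed-ZDAC this generation would run at the clients, in Fed-ZDAS at the server, per the variant names in the summary.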