Proportional Fairness in Federated Learning
- URL: http://arxiv.org/abs/2202.01666v5
- Date: Tue, 9 May 2023 15:16:24 GMT
- Title: Proportional Fairness in Federated Learning
- Authors: Guojun Zhang, Saber Malekmohammadi, Xi Chen and Yaoliang Yu
- Abstract summary: PropFair is a novel and easy-to-implement algorithm for finding proportionally fair solutions in federated learning.
We demonstrate that PropFair can approximately find PF solutions, and that it achieves a good balance between the average performance of all clients and that of the worst 10% of clients.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasingly broad deployment of federated learning (FL) systems in the real world, it is critical but challenging to ensure fairness in FL, i.e., reasonably satisfactory performance for each of the numerous diverse clients. In this work, we introduce and study a new fairness notion in FL, called proportional fairness (PF), which is based on the relative change of each client's performance. From its connection with bargaining games, we propose PropFair, a novel and easy-to-implement algorithm for finding proportionally fair solutions in FL, and study its convergence properties. Through extensive experiments on vision and language datasets, we demonstrate that PropFair can approximately find PF solutions, and that it achieves a good balance between the average performance of all clients and that of the worst 10% of clients. Our code is available at https://github.com/huawei-noah/Federated-Learning/tree/main/FairFL.
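Read through the bargaining-game lens, proportional fairness has a concrete objective: a PF solution maximizes the product of client utilities, or equivalently the sum of their logarithms. The PyTorch snippet below is a minimal sketch of such a log-utility loss under stated assumptions, not the paper's exact implementation: the baseline constant M, the clamp epsilon, and the utility definition utility_i = M - loss_i are all illustrative choices.

import torch

def propfair_style_loss(client_losses: torch.Tensor,
                        M: float = 5.0,
                        eps: float = 1e-6) -> torch.Tensor:
    # Model client i's utility as (M - loss_i); clamping keeps the
    # logarithm well-defined if a loss approaches the baseline M.
    utilities = torch.clamp(M - client_losses, min=eps)
    # Negative mean log-utility: minimizing this maximizes the product of
    # utilities, the Nash-bargaining notion of proportional fairness.
    return -torch.log(utilities).mean()

# Toy usage: three clients with heterogeneous losses. The client with the
# largest loss (smallest utility) dominates the gradient, so the worst
# clients are prioritized without discarding average performance.
losses = torch.tensor([0.4, 1.2, 3.9])
print(propfair_style_loss(losses))

In a federated round, a server could aggregate per-client losses and descend this objective in place of the usual average loss; taking logarithms turns the Nash product into a sum, so standard FedAvg-style aggregation machinery still applies.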
Related papers
- Federated Fairness Analytics: Quantifying Fairness in Federated Learning (2024-08-15)
Federated Learning (FL) is a privacy-enhancing technology for distributed ML.
FL inherits fairness challenges from classical ML and introduces new ones.
We propose Federated Fairness Analytics - a methodology for measuring fairness.
arXiv Detail & Related papers (2024-08-15T15:23:32Z) - Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves high accuracy, as if all clients were strong, outperforming state-of-the-art width-reduction methods.
arXiv Detail & Related papers (2024-06-21T13:19:29Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., achieving a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Fairness-Aware Client Selection for Federated Learning [13.781019191483864]
Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients) to train machine learning models collaboratively without revealing private data.
Since the FL server can only engage a limited number of clients in each training round, FL client selection has become an important research problem.
We propose the Fairness-aware Federated Client Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically adjusts FL clients' selection probabilities by jointly considering their reputations, how often they have participated in FL tasks, and their contributions to the resulting model performance.
arXiv Detail & Related papers (2023-07-20T10:04:55Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose FedSampling, a novel uniform data sampling strategy for federated learning.
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - A Fair Federated Learning Framework With Reinforcement Learning [23.675056844328]
Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
arXiv Detail & Related papers (2022-05-26T15:10:16Z) - FL Games: A federated learning framework for distribution shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning for learning causal features that are invariant across clients.
arXiv Detail & Related papers (2022-05-23T07:51:45Z) - E2FL: Equal and Equitable Federated Learning [26.5268278194427]
Federated Learning (FL) enables data owners to train a shared global model without sharing their private data.
We present Equal and Equitable Federated Learning (E2FL) to produce fair federated learning models by preserving two main fairness properties, equity and equality, concurrently.
We validate the efficiency and fairness of E2FL on different real-world FL applications and show that it outperforms existing baselines in efficiency, fairness across different groups, and fairness among individual clients.
arXiv Detail & Related papers (2022-05-20T22:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.