A Fair Federated Learning Framework With Reinforcement Learning
- URL: http://arxiv.org/abs/2205.13415v1
- Date: Thu, 26 May 2022 15:10:16 GMT
- Title: A Fair Federated Learning Framework With Reinforcement Learning
- Authors: Yaqi Sun, Shijing Si, Jianzong Wang, Yuhan Dong, Zhitao Zhu, Jing Xiao
- Abstract summary: Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
- Score: 23.675056844328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a paradigm where many clients collaboratively
train a model under the coordination of a central server, while keeping the
training data locally stored. However, heterogeneous data distributions over
different clients remain a challenge to mainstream FL algorithms, which may
cause slow convergence, overall performance degradation and unfairness of
performance across clients. To address these problems, in this study we propose
a reinforcement learning framework, called PG-FFL, which automatically learns a
policy to assign aggregation weights to clients. Additionally, we propose to
utilize the Gini coefficient as a measure of fairness for FL. More importantly,
we use the Gini coefficient and the validation accuracy of clients in each
communication round to construct a reward function for the reinforcement
learning. Our PG-FFL is also compatible with many existing FL algorithms. We
conduct extensive experiments over diverse datasets to verify the effectiveness
of our framework. The experimental results show that our framework can
outperform baseline methods in terms of overall performance, fairness and
convergence speed.
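The reward construction described in the abstract — mean client validation accuracy penalized by the Gini coefficient of per-client accuracies — can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the exact combination of the two terms and the trade-off weight `lambda_gini` are hypothetical.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values.

    0 means perfect equality across clients; values near 1 mean
    performance is concentrated in a few clients.
    """
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Closed-form expression using the order statistics of the sorted values.
    index = np.arange(1, n + 1)
    return (2.0 * np.sum(index * v) / (n * v.sum())) - (n + 1) / n

def round_reward(client_val_accs, lambda_gini=1.0):
    """Hypothetical per-round reward: reward high mean validation
    accuracy, penalize inequality across clients via the Gini term.

    lambda_gini is an assumed trade-off weight, not taken from the paper.
    """
    accs = np.asarray(client_val_accs, dtype=float)
    return accs.mean() - lambda_gini * gini(accs)

# Two rounds with the same mean accuracy (0.8): the equal one earns a
# higher reward than the one where a single client lags far behind.
fair = round_reward([0.8, 0.8, 0.8, 0.8])
unfair = round_reward([1.0, 1.0, 1.0, 0.2])
assert fair > unfair
```

A policy-gradient agent would then use such a reward signal to adjust the aggregation weights it assigns to clients each round, trading off overall accuracy against fairness.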
Related papers
- Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves accuracy as high as if all clients were strong, outperforming state-of-the-art width-reduction methods.
arXiv Detail & Related papers (2024-06-21T13:19:29Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Achieving Linear Speedup in Asynchronous Federated Learning with
Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z) - FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under diverse sources of heterogeneity.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z) - Federated Learning Can Find Friends That Are Advantageous [14.993730469216546]
In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
arXiv Detail & Related papers (2024-02-07T17:46:37Z) - Dynamic Fair Federated Learning Based on Reinforcement Learning [19.033986978896074]
Federated learning enables a collaborative training and optimization of global models among a group of devices without sharing local data samples.
We propose a dynamic q-fairness federated learning algorithm with reinforcement learning, called DQFFL.
Our DQFFL outperforms the state-of-the-art methods in terms of overall performance, fairness and convergence speed.
arXiv Detail & Related papers (2023-11-02T03:05:40Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - FL Games: A federated learning framework for distribution shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features invariant across clients.
arXiv Detail & Related papers (2022-05-23T07:51:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.