A Fair Federated Learning Framework With Reinforcement Learning
- URL: http://arxiv.org/abs/2205.13415v1
- Date: Thu, 26 May 2022 15:10:16 GMT
- Title: A Fair Federated Learning Framework With Reinforcement Learning
- Authors: Yaqi Sun, Shijing Si, Jianzong Wang, Yuhan Dong, Zhitao Zhu, Jing Xiao
- Abstract summary: Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
- Score: 23.675056844328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a paradigm where many clients collaboratively
train a model under the coordination of a central server, while keeping the
training data locally stored. However, heterogeneous data distributions over
different clients remain a challenge to mainstream FL algorithms, which may
cause slow convergence, overall performance degradation, and unfair
performance across clients. To address these problems, in this study we propose
a reinforcement learning framework, called PG-FFL, which automatically learns a
policy to assign aggregation weights to clients. Additionally, we propose to
utilize the Gini coefficient as a measure of fairness for FL. More importantly,
we apply the Gini coefficient and the validation accuracy of clients in each
communication round to construct a reward function for reinforcement
learning. Our PG-FFL is also compatible with many existing FL algorithms. We
conduct extensive experiments over diverse datasets to verify the effectiveness
of our framework. The experimental results show that our framework can
outperform baseline methods in terms of overall performance, fairness and
convergence speed.
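
The abstract names two components concrete enough to sketch: the Gini coefficient computed over per-client validation accuracies as the fairness measure, and a per-round reward built from that coefficient together with accuracy. The following is a minimal Python sketch under stated assumptions; the reward combination (mean accuracy minus a weighted Gini term) and the fairness_weight parameter are hypothetical, since the abstract does not give the exact functional form used in PG-FFL.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a 1-D array of non-negative values.

    0 means perfectly equal (fair across clients); values near 1
    mean highly unequal. Uses the standard closed form over sorted
    values: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n.
    """
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)  # 1-based ranks of sorted values
    return (2.0 * np.sum(index * x)) / (n * np.sum(x)) - (n + 1.0) / n

def round_reward(client_accuracies, fairness_weight=1.0):
    """Hypothetical per-round reward: reward high mean validation
    accuracy while penalizing dispersion across clients via the Gini
    coefficient. The exact PG-FFL reward is not given in the abstract.
    """
    accs = np.asarray(client_accuracies, dtype=float)
    return accs.mean() - fairness_weight * gini(accs)
```

For example, accuracies [0.80, 0.82, 0.79] give a Gini near 0.008 and a reward of about 0.795, while [0.95, 0.60, 0.85] have a similar mean but a Gini near 0.097 and a reward of about 0.703, so a policy maximizing this reward is pushed toward uniform per-client performance.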