PAGE: Equilibrate Personalization and Generalization in Federated
Learning
- URL: http://arxiv.org/abs/2310.08961v1
- Date: Fri, 13 Oct 2023 09:11:35 GMT
- Title: PAGE: Equilibrate Personalization and Generalization in Federated
Learning
- Authors: Qian Chen, Zilong Wang, Jiaqi Hu, Haonan Yan, Jianying Zhou, Xiaodong
Lin
- Abstract summary: Federated learning (FL) is becoming a major driving force behind machine learning as a service.
We propose the first algorithm to balance personalization and generalization via game theory, dubbed PAGE.
Experiments show that PAGE outperforms state-of-the-art FL baselines in terms of global and local prediction accuracy simultaneously.
- Score: 13.187836371243385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is becoming a major driving force behind machine
learning as a service, where customers (clients) collaboratively benefit from
shared local updates under the orchestration of the service provider (server).
Local model personalization and global model generalization, which represent clients'
current demands and the server's future demand respectively, have so far been
investigated separately, as the ill effects of data heterogeneity force the community
to prioritize one over the other. However, these two seemingly competing goals are
equally important rather than an either-or choice, and should be achieved
simultaneously. In this paper, we propose the first algorithm to balance
personalization and generalization via game theory, dubbed PAGE, which reshapes FL as
a co-opetition game between clients and the server. To find the equilibrium, PAGE
further formulates the game as Markov decision processes and leverages reinforcement
learning, which reduces the complexity of solving for the equilibrium. Extensive
experiments on four widely used datasets show that PAGE outperforms state-of-the-art
FL baselines in terms of global and local prediction accuracy simultaneously,
improving accuracy by up to 35.20% and 39.91%, respectively. In addition, biased
variants of PAGE demonstrate promising adaptiveness to demand shifts in practice.
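To make the recipe concrete, here is a minimal, hypothetical sketch of the control loop the abstract describes (not the authors' implementation): a bandit-style reinforcement-learning agent picks a mixing weight between global aggregation and local personalization each round, and updates its value estimates from a payoff that combines proxies for global and local accuracy. The toy environment, the reward weighting, and all names (toy_round, reward, lam) are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the idea behind PAGE (not the authors' code).
# An epsilon-greedy agent over a discretized action space picks a mixing
# weight `lam` trading off global aggregation (generalization) against local
# updates (personalization). The reward combines proxies for global and local
# accuracy; the environment and all names are assumptions for illustration.
import random

ACTIONS = [0.0, 0.25, 0.5, 0.75, 1.0]   # candidate mixing weights lambda
Q = {a: 0.0 for a in ACTIONS}           # action-value estimates
EPS, LR = 0.1, 0.2                      # exploration rate, learning rate

def toy_round(lam):
    """Stand-in for one FL round: returns (global_acc, mean_local_acc).

    In this toy, heavily weighting the global model helps generalization but
    hurts personalization, and vice versa; noise mimics data heterogeneity.
    """
    global_acc = 0.6 + 0.3 * lam + random.gauss(0, 0.02)
    local_acc = 0.9 - 0.3 * lam + random.gauss(0, 0.02)
    return global_acc, local_acc

def reward(global_acc, local_acc, w=0.5):
    # Co-opetition payoff: the server cares about global accuracy, clients
    # about local accuracy; w balances the two demands.
    return w * global_acc + (1 - w) * local_acc

for t in range(500):
    lam = random.choice(ACTIONS) if random.random() < EPS else max(Q, key=Q.get)
    g, l = toy_round(lam)
    Q[lam] += LR * (reward(g, l) - Q[lam])   # bandit-style value update

print("learned preference:", max(Q, key=Q.get), {a: round(v, 3) for a, v in Q.items()})
```

In the full algorithm, the Markov-decision-process formulation lets both clients and the server adjust their strategies over the course of training; the toy above only captures the server-side trade-off.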
Related papers
- FedReMa: Improving Personalized Federated Learning via Leveraging the Most Relevant Clients [13.98392319567057]
Federated Learning (FL) is a distributed machine learning paradigm that achieves a globally robust model through decentralized computation and periodic model synthesis.
Despite their wide adoption, existing FL and personalized FL (PFL) works have yet to comprehensively address the class-imbalance issue.
We propose FedReMa, an efficient PFL algorithm that can tackle class-imbalance by utilizing an adaptive inter-client co-learning approach.
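As a toy illustration of the "most relevant clients" idea above (an assumed similarity criterion, not FedReMa's actual one), a client could rank peers by the cosine similarity of their label distributions and co-learn only with the top matches:

```python
# Toy illustration (assumed criterion, not FedReMa's actual one): pick the
# most relevant peers for co-learning by cosine similarity of label
# distributions, one natural proxy for relevance under class imbalance.
import math

def cosine(p, q):
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

def most_relevant(my_dist, peer_dists, k=2):
    """Return indices of the k peers whose label distributions best match ours."""
    scored = sorted(enumerate(peer_dists), key=lambda iv: -cosine(my_dist, iv[1]))
    return [i for i, _ in scored[:k]]

peers = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.7, 0.2, 0.1]]
print(most_relevant([0.9, 0.05, 0.05], peers, k=2))   # -> [0, 2]
```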
arXiv Detail & Related papers (2024-11-04T05:44:28Z)
- Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors [21.931436901703634]
Conventional Federated Learning (FL) involves collaborative training of a global model while maintaining user data privacy.
One of its branches, decentralized FL, is a serverless network that allows clients to own and optimize different local models separately.
We propose a novel decentralized FL technique by introducing Synthetic Anchors, dubbed DeSA.
arXiv Detail & Related papers (2024-05-19T11:36:45Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
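For context, here is a minimal sketch of the aggregate-then-adapt pattern that aggregation-free methods such as FedAF depart from (an illustrative toy with assumed names, not the paper's code):

```python
# Hypothetical sketch of the conventional "aggregate-then-adapt" loop that
# aggregation-free methods depart from (illustrative only; not FedAF itself).
# Models are plain lists of floats for simplicity.
def server_aggregate(client_models, client_sizes):
    """FedAvg-style weighted average of client model parameters."""
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [sum(m[i] * n for m, n in zip(client_models, client_sizes)) / total
            for i in range(dim)]

def client_adapt(global_model, local_grad, lr=0.1):
    """Each client starts from the aggregated global model, then adapts locally."""
    return [w - lr * g for w, g in zip(global_model, local_grad)]

# One toy round with two clients:
global_model = server_aggregate([[1.0, 2.0], [3.0, 4.0]], client_sizes=[10, 30])
print(global_model)                        # [2.5, 3.5]
print(client_adapt(global_model, [0.5, -0.5]))
```

FedAF dispenses with this server-side model-aggregation step; the sketch only pins down the conventional baseline it contrasts with.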
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
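As a hedged toy of the analog over-the-air idea (an assumption-based sketch, not the paper's system): when clients transmit their updates simultaneously over the same wireless channel, the channel superimposes the analog signals, so the server receives a noisy sum of all updates in a single slot instead of one uplink per client.

```python
# Illustrative toy of analog over-the-air aggregation (assumed setup, not the
# paper's system): simultaneous analog transmissions are summed by the
# channel, so the server observes the noisy aggregate, never individual
# updates, and aggregation costs one slot instead of K uplinks.
import random

def over_the_air_sum(client_updates, noise_std=0.01):
    """The channel superimposes simultaneous transmissions; the server only
    observes the noisy sum of all client updates."""
    dim = len(client_updates[0])
    return [sum(u[i] for u in client_updates) + random.gauss(0, noise_std)
            for i in range(dim)]

updates = [[0.1, -0.2], [0.3, 0.4], [-0.1, 0.2]]
received = over_the_air_sum(updates)
avg = [x / len(updates) for x in received]   # server forms the average update
print(avg)
```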
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
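One simple way to make such an influence notion concrete is a brute-force leave-one-client-out proxy (a hypothetical baseline; the paper proposes an effective and efficient estimator rather than this direct recomputation):

```python
# Hypothetical leave-one-out proxy for a client's influence on the aggregated
# parameters (illustrative; the paper's estimator avoids this brute force).
def fedavg(models, sizes):
    """Weighted average of client parameter vectors."""
    total = sum(sizes)
    return [sum(m[i] * n for m, n in zip(models, sizes)) / total
            for i in range(len(models[0]))]

def influence(models, sizes, k):
    """L1 distance between the aggregate with and without client k."""
    full = fedavg(models, sizes)
    rest_models = models[:k] + models[k + 1:]
    rest_sizes = sizes[:k] + sizes[k + 1:]
    without = fedavg(rest_models, rest_sizes)
    return sum(abs(a - b) for a, b in zip(full, without))

models, sizes = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], [10, 10, 20]
print([round(influence(models, sizes, k), 3) for k in range(len(models))])
```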
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.