FL Games: A federated learning framework for distribution shifts
- URL: http://arxiv.org/abs/2205.11101v1
- Date: Mon, 23 May 2022 07:51:45 GMT
- Title: FL Games: A federated learning framework for distribution shifts
- Authors: Sharut Gupta and Kartik Ahuja and Mohammad Havaei and Niladri
Chatterjee and Yoshua Bengio
- Abstract summary: Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
- Score: 71.98708418753786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning aims to train predictive models for data that is
distributed across clients, under the orchestration of a server. However,
participating clients typically each hold data from a different distribution,
whereby predictive models with strong in-distribution generalization can fail
catastrophically on unseen domains. In this work, we argue that in order to
generalize better across non-i.i.d. clients, it is imperative to only learn
correlations that are stable and invariant across domains. We propose FL Games,
a game-theoretic framework for federated learning that learns causal features
that are invariant across clients. While training to achieve the Nash
equilibrium, the traditional best response strategy suffers from high-frequency
oscillations. We demonstrate that FL Games effectively resolves this challenge
and exhibits smooth performance curves. Further, FL Games scales well in the
number of clients, requires significantly fewer communication rounds, and is
agnostic to device heterogeneity. Through empirical evaluation, we demonstrate
that FL Games achieves high out-of-distribution performance on various
benchmarks.
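To make the game-theoretic setup concrete, below is a minimal sketch (not the authors' released code) of the vanilla sequential best-response dynamics the abstract refers to: each client owns one linear head, the server's predictor is the average (ensemble) of all heads, and clients take turns refitting their own head on local data while the others stay frozen. The names (`Client`, `best_response`, `train`) and the linear binary-classification setup are illustrative assumptions; per the abstract, FL Games modifies this baseline so that performance curves are smooth rather than oscillatory.

```python
# Illustrative sketch of sequential best-response dynamics for learning an
# ensemble predictor across clients. NOT the FL Games implementation; all
# names and the linear model are assumptions for demonstration only.
import torch

class Client:
    def __init__(self, x, y, dim):
        self.x, self.y = x, y                      # local data, never shared
        self.head = torch.zeros(dim, requires_grad=True)
        self.opt = torch.optim.SGD([self.head], lr=0.1)

    def best_response(self, frozen_sum, n_clients, steps=5):
        # Fit the *ensemble* (average of all heads) to local data while the
        # other clients' heads stay fixed -- one player's best response.
        for _ in range(steps):
            w = (frozen_sum + self.head) / n_clients
            loss = torch.nn.functional.binary_cross_entropy_with_logits(
                self.x @ w, self.y)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()

def train(clients, rounds=50):
    n = len(clients)
    for _ in range(rounds):
        # Clients take turns best-responding; this vanilla loop is the one
        # that can oscillate at high frequency near the Nash equilibrium.
        for c in clients:
            frozen = sum(o.head.detach() for o in clients if o is not c)
            c.best_response(frozen, n)
    return sum(c.head.detach() for c in clients) / n  # final ensemble

# Toy usage: two clients whose features differ in scale (non-i.i.d. data).
torch.manual_seed(0)
dim = 5
def make_client(scale):
    x = torch.randn(64, dim) * scale
    y = (x[:, 0] > 0).float()                      # label uses a stable feature
    return Client(x, y, dim)

clients = [make_client(s) for s in (1.0, 3.0)]
print("ensemble weights:", train(clients))
```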
Related papers
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round (a minimal sketch of this loop follows this list).
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity.
It achieves substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- PAGE: Equilibrate Personalization and Generalization in Federated Learning [13.187836371243385]
Federated learning (FL) is becoming a major driving force behind machine learning as a service.
We propose PAGE, the first game-theoretic algorithm to balance personalization and generalization.
Experiments show that PAGE outperforms state-of-the-art FL baselines in terms of global and local prediction accuracy simultaneously.
arXiv Detail & Related papers (2023-10-13T09:11:35Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- A Fair Federated Learning Framework With Reinforcement Learning [23.675056844328]
Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
arXiv Detail & Related papers (2022-05-26T15:10:16Z)
- Test-Time Robust Personalization for Federated Learning [5.553167334488855]
Federated Learning (FL) is a machine learning paradigm where many clients collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients, achieving promising results on consistent local training and test distributions.
We propose Federated Test-time Head Ensemble plus tuning (FedTHE+), which personalizes FL models with robustness to various test-time distribution shifts.
arXiv Detail & Related papers (2022-05-22T20:08:14Z)
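As referenced in the FedAF entry above, here is a minimal sketch of the traditional aggregate-then-adapt loop (FedAvg-style) that aggregation-free methods depart from: each round, clients adapt the broadcast global model on local data and the server averages the results. The helper names (`local_update`, `federated_round`) and the least-squares objective are illustrative assumptions, not FedAF's algorithm.

```python
# Illustrative aggregate-then-adapt (FedAvg-style) loop. All names and the
# least-squares setup are hypothetical, for demonstration only.
import numpy as np

def local_update(w_global, x, y, lr=0.05, steps=10):
    """Client-side adaptation: a few gradient steps on the local loss."""
    w = w_global.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Server-side aggregation: average the clients' adapted models."""
    local_models = [local_update(w_global, x, y) for x, y in clients]
    return np.mean(local_models, axis=0)

# Toy usage: three clients with heterogeneous (non-i.i.d.) feature scales.
rng = np.random.default_rng(0)
clients = [(rng.normal(0, s, (32, 4)), rng.normal(0, 1, 32)) for s in (1, 2, 4)]
w = np.zeros(4)
for _ in range(20):
    w = federated_round(w, clients)
print("global model:", w)
```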
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.